Carbohydrate
A carbohydrate is a biomolecule consisting of carbon (C), hydrogen (H) and oxygen (O) atoms, usually with a hydrogen–oxygen atom ratio of 2:1 (as in water) and thus with the empirical formula Cm(H2O)n (where m may or may not be different from n), which does not mean the H has covalent bonds with O (for example, in CH2O, H has a covalent bond with C but not with O). However, not all carbohydrates conform to this precise stoichiometric definition (e.g., uronic acids, deoxy-sugars such as fucose), nor are all chemicals that do conform to this definition automatically classified as carbohydrates (e.g., formaldehyde and acetic acid).
The term is most common in biochemistry, where it is a synonym of saccharide, a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. Monosaccharides and disaccharides, the smallest (lower molecular weight) carbohydrates, are commonly referred to as sugars. While the scientific nomenclature of carbohydrates is complex, the names of the monosaccharides and disaccharides very often end in the suffix -ose, which was originally taken from the word glucose, and is used for almost all sugars (e.g., fructose (fruit sugar), sucrose (cane or beet sugar), ribose, lactose (milk sugar)).
Carbohydrates perform numerous roles in living organisms. Polysaccharides serve as an energy store (e.g., starch and glycogen) and as structural components (e.g., cellulose in plants and chitin in arthropods and fungi). The 5-carbon monosaccharide ribose is an important component of coenzymes (e.g., ATP, FAD and NAD) and the backbone of the genetic molecule known as RNA. The related deoxyribose is a component of DNA. Saccharides and their derivatives include many other important biomolecules that play key roles in the immune system, fertilization, preventing pathogenesis, blood clotting, and development.
Carbohydrates are central to nutrition and are found in a wide variety of natural and processed foods. Starch is a polysaccharide and is abundant in cereals (wheat, maize, rice), potatoes, and processed food based on cereal flour, such as bread, pizza or pasta. Sugars appear in human diet mainly as table sugar (sucrose, extracted from sugarcane or sugar beets), lactose (abundant in milk), glucose and fructose, both of which occur naturally in honey, many fruits, and some vegetables. Table sugar, milk, or honey is often added to drinks and many prepared foods such as jam, biscuits and cakes.
Cellulose, a polysaccharide found in the cell walls of all plants, is one of the main components of insoluble dietary fiber. Although it is not digestible by humans, cellulose and insoluble dietary fiber generally help maintain a healthy digestive system by facilitating bowel movements. Other polysaccharides contained in dietary fiber include resistant starch and inulin, which feed some bacteria in the microbiota of the large intestine, and are metabolized by these bacteria to yield short-chain fatty acids.
Terminology
In scientific literature, the term "carbohydrate" has many synonyms, like "sugar" (in the broad sense), "saccharide", "ose", "glucide", "hydrate of carbon" or "polyhydroxy compounds with aldehyde or ketone". Some of these terms, especially "carbohydrate" and "sugar", are also used with other meanings.
In food science and in many informal contexts, the term "carbohydrate" often means any food that is particularly rich in the complex carbohydrate starch (such as cereals, bread and pasta) or simple carbohydrates, such as sugar (found in candy, jams, and desserts). This informality is sometimes confusing since it confounds chemical structure and digestibility in humans.
Often in lists of nutritional information, such as the USDA National Nutrient Database, the term "carbohydrate" (or "carbohydrate by difference") is used for everything other than water, protein, fat, ash, and ethanol. This includes chemical compounds such as acetic or lactic acid, which are not normally considered carbohydrates. It also includes dietary fiber which is a carbohydrate but which does not contribute food energy in humans, even though it is often included in the calculation of total food energy just as though it did (i.e., as if it were a digestible and absorbable carbohydrate such as a sugar).
In the strict sense, "sugar" is applied for sweet, soluble carbohydrates, many of which are used in human food.
History
The history of the discovery of carbohydrates dates back around 10,000 years, to the cultivation of sugarcane in Papua New Guinea during the Neolithic agricultural revolution. The term "carbohydrate" was first proposed by German chemist Carl Schmidt in 1844. In 1856, glycogen, a form of carbohydrate storage in animal livers, was discovered by French physiologist Claude Bernard.
Structure
Formerly the name "carbohydrate" was used in chemistry for any compound with the formula Cm(H2O)n. Following this definition, some chemists considered formaldehyde (CH2O) to be the simplest carbohydrate, while others claimed that title for glycolaldehyde. Today, the term is generally understood in the biochemical sense, which excludes compounds with only one or two carbons and includes many biological carbohydrates that deviate from this formula. While the representative formula would seem to capture the commonly known carbohydrates, ubiquitous and abundant carbohydrates often deviate from it: they frequently carry chemical groups such as N-acetyl (e.g., chitin), sulfate (e.g., glycosaminoglycans), carboxylic acid and deoxy modifications (e.g., fucose and sialic acid).
Natural saccharides are generally built of simple carbohydrates called monosaccharides, with general formula (CH2O)n where n is three or more. A typical monosaccharide has the structure H–(CHOH)x(C=O)–(CHOH)y–H, that is, an aldehyde or ketone with many hydroxyl groups added, usually one on each carbon atom that is not part of the aldehyde or ketone functional group. Examples of monosaccharides are glucose, fructose, and glyceraldehyde. However, some biological substances commonly called "monosaccharides" do not conform to this formula (e.g., uronic acids and deoxy-sugars such as fucose) and there are many chemicals that do conform to this formula but are not considered to be monosaccharides (e.g., formaldehyde CH2O and inositol (CH2O)6).
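For readers who want to see why the open-chain structure above reduces to the (CH2O)n formula, the atoms can simply be counted; the bookkeeping below uses only the structure given in the text.

```latex
% Atom count for the open-chain form H-(CHOH)_x-(C=O)-(CHOH)_y-H,
% writing n = x + y + 1 for the total number of carbons:
%   C:  x + y + 1                    = n
%   O:  (x + y) hydroxyls + 1 carbonyl = n
%   H:  2 terminal + 2(x + y)        = 2 + 2(n - 1) = 2n
% hence the molecular formula
\mathrm{C}_n\mathrm{H}_{2n}\mathrm{O}_n \;=\; (\mathrm{CH_2O})_n
```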
The open-chain form of a monosaccharide often coexists with a closed ring form where the aldehyde/ketone carbonyl group carbon (C=O) and hydroxyl group (–OH) react forming a hemiacetal with a new C–O–C bridge.
Monosaccharides can be linked together into what are called polysaccharides (or oligosaccharides) in a large variety of ways. Many carbohydrates contain one or more modified monosaccharide units that have had one or more groups replaced or removed. For example, deoxyribose, a component of DNA, is a modified version of ribose; chitin is composed of repeating units of N-acetyl glucosamine, a nitrogen-containing form of glucose.
Division
Carbohydrates are polyhydroxy aldehydes, ketones, alcohols, acids, their simple derivatives and their polymers having linkages of the acetal type. They may be classified according to their degree of polymerization, and may be divided initially into three principal groups, namely sugars, oligosaccharides and polysaccharides.
Monosaccharides
Monosaccharides are the simplest carbohydrates in that they cannot be hydrolyzed to smaller carbohydrates. They are aldehydes or ketones with two or more hydroxyl groups. The general chemical formula of an unmodified monosaccharide is (C·H2O)n, literally a "carbon hydrate". Monosaccharides are important fuel molecules as well as building blocks for nucleic acids. The smallest monosaccharides, for which n=3, are dihydroxyacetone and D- and L-glyceraldehyde.
Classification of monosaccharides
The α and β anomers of glucose. Note the position of the hydroxyl group (red or green) on the anomeric carbon relative to the CH2OH group bound to carbon 5: they either have identical absolute configurations (R,R or S,S) (α), or opposite absolute configurations (R,S or S,R) (β).
Monosaccharides are classified according to three different characteristics: the placement of its carbonyl group, the number of carbon atoms it contains, and its chiral handedness. If the carbonyl group is an aldehyde, the monosaccharide is an aldose; if the carbonyl group is a ketone, the monosaccharide is a ketose. Monosaccharides with three carbon atoms are called trioses, those with four are called tetroses, five are called pentoses, six are hexoses, and so on. These two systems of classification are often combined. For example, glucose is an aldohexose (a six-carbon aldehyde), ribose is an aldopentose (a five-carbon aldehyde), and fructose is a ketohexose (a six-carbon ketone).
Each carbon atom bearing a hydroxyl group (-OH), with the exception of the first and last carbons, is asymmetric, making it a stereocenter with two possible configurations (R or S). Because of this asymmetry, a number of isomers may exist for any given monosaccharide formula. By the Le Bel–van't Hoff rule, the aldohexose D-glucose, for example, has the formula (C·H2O)6, of which four of its six carbon atoms are stereogenic, making D-glucose one of 2^4 = 16 possible stereoisomers. In the case of glyceraldehyde, an aldotriose, there is one pair of possible stereoisomers, which are enantiomers and epimers. 1,3-Dihydroxyacetone, the ketose corresponding to the aldose glyceraldehyde, is a symmetric molecule with no stereocenters. The assignment of D or L is made according to the orientation of the asymmetric carbon furthest from the carbonyl group: in a standard Fischer projection, if the hydroxyl group is on the right the molecule is a D sugar, otherwise it is an L sugar. The "D-" and "L-" prefixes should not be confused with "d-" or "l-", which indicate the direction that the sugar rotates plane-polarized light. This usage of "d-" and "l-" is no longer followed in carbohydrate chemistry.
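As a worked example of the Le Bel–van't Hoff counting described above, using only the numbers already given in the text:

```latex
% Le Bel-van't Hoff rule: a molecule with k stereocenters has at most 2^k stereoisomers.
N_{\text{stereoisomers}} = 2^{k}
% Aldohexoses such as glucose have k = 4 stereogenic carbons:
2^{4} = 16 \quad\text{(8 D-sugars and 8 L-sugars)}
% Aldotrioses (glyceraldehyde) have k = 1:
2^{1} = 2 \quad\text{(the D and L enantiomers)}
```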
Ring-straight chain isomerism
The aldehyde or ketone group of a straight-chain monosaccharide will react reversibly with a hydroxyl group on a different carbon atom to form a hemiacetal or hemiketal, forming a heterocyclic ring with an oxygen bridge between two carbon atoms. Rings with five and six atoms are called furanose and pyranose forms, respectively, and exist in equilibrium with the straight-chain form.
During the conversion from straight-chain form to the cyclic form, the carbon atom containing the carbonyl oxygen, called the anomeric carbon, becomes a stereogenic center with two possible configurations: the oxygen atom may take a position either above or below the plane of the ring. The resulting pair of possible stereoisomers are called anomers. In the α anomer, the -OH substituent on the anomeric carbon rests on the opposite side (trans) of the ring from the CH2OH side branch. The alternative form, in which the CH2OH substituent and the anomeric hydroxyl are on the same side (cis) of the plane of the ring, is called the β anomer.
Use in living organisms
Monosaccharides are the major fuel source for metabolism, being used both as an energy source (glucose being the most important in nature as it is the product of photosynthesis in plants) and in biosynthesis. When monosaccharides are not immediately needed, they are often converted to more space-efficient (i.e., less water-soluble) forms, often polysaccharides. In many animals, including humans, this storage form is glycogen, especially in liver and muscle cells. In plants, starch is used for the same purpose. The most abundant carbohydrate, cellulose, is a structural component of the cell wall of plants and many forms of algae. Ribose is a component of RNA. Deoxyribose is a component of DNA. Lyxose is a component of lyxoflavin found in the human heart. Ribulose and xylulose occur in the pentose phosphate pathway. Galactose, a component of milk sugar lactose, is found in galactolipids in plant cell membranes and in glycoproteins in many tissues. Mannose occurs in human metabolism, especially in the glycosylation of certain proteins. Fructose, or fruit sugar, is found in many plants and in humans; it is absorbed directly from the intestine during digestion, metabolized in the liver, and found in semen. Trehalose, a major sugar of insects, is rapidly hydrolyzed into two glucose molecules to support continuous flight.
Disaccharides
Two joined monosaccharides are called a disaccharide, the simplest kind of polysaccharide. Examples include sucrose and lactose. They are composed of two monosaccharide units bound together by a covalent bond known as a glycosidic linkage formed via a dehydration reaction, resulting in the loss of a hydrogen atom from one monosaccharide and a hydroxyl group from the other. The formula of unmodified disaccharides is C12H22O11. Although there are numerous kinds of disaccharides, a handful of disaccharides are particularly notable.
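The C12H22O11 formula follows directly from the dehydration reaction described above: two hexose units condense and one molecule of water is eliminated.

```latex
% Condensation of two hexoses (e.g., glucose + fructose -> sucrose + water):
\mathrm{C_6H_{12}O_6} + \mathrm{C_6H_{12}O_6} \;\longrightarrow\; \mathrm{C_{12}H_{22}O_{11}} + \mathrm{H_2O}
```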
Sucrose is the most abundant disaccharide, and the main form in which carbohydrates are transported in plants. It is composed of one D-glucose molecule and one D-fructose molecule. The systematic name for sucrose, O-α-D-glucopyranosyl-(1→2)-D-fructofuranoside, indicates four things:
Its monosaccharides: glucose and fructose
Their ring types: glucose is a pyranose and fructose is a furanose
How they are linked together: the oxygen on carbon number 1 (C1) of α-D-glucose is linked to the C2 of D-fructose.
The -oside suffix indicates that the anomeric carbon of both monosaccharides participates in the glycosidic bond.
Lactose, a disaccharide composed of one D-galactose molecule and one D-glucose molecule, occurs naturally in mammalian milk. The systematic name for lactose is O-β-D-galactopyranosyl-(1→4)-D-glucopyranose. Other notable disaccharides include maltose (two D-glucoses linked α-1,4) and cellobiose (two D-glucoses linked β-1,4). Disaccharides can be classified into two types: reducing and non-reducing. If the anomeric carbon of one unit remains free (not engaged in the glycosidic bond), the disaccharide retains a reducing group and is called a reducing disaccharide, or biose; if both anomeric carbons participate in the bond, as in sucrose, the disaccharide is non-reducing.
Oligosaccharides and polysaccharides
Oligosaccharides
Oligosaccharides are saccharide polymers composed of three to ten monosaccharide units connected via glycosidic linkages, similar to disaccharides. They are usually linked to lipids or amino acids through glycosidic linkages to oxygen or nitrogen, forming glycolipids and glycoproteins, though some, like the raffinose series and the fructooligosaccharides, are not. They have roles in cell recognition and cell adhesion.
Polysaccharides
Nutrition
Carbohydrate consumed in food yields 3.87 kilocalories of energy per gram for simple sugars, and 3.57 to 4.12 kilocalories per gram for complex carbohydrate in most other foods. Relatively high levels of carbohydrate are associated with processed foods or refined foods made from plants, including sweets, cookies and candy, table sugar, honey, soft drinks, breads and crackers, jams and fruit products, pastas and breakfast cereals. Lower amounts of digestible carbohydrate are usually associated with unrefined foods as these foods have more fiber, including beans, tubers, rice, and unrefined fruit. Animal-based foods generally have the lowest carbohydrate levels, although milk does contain a high proportion of lactose.
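As a rough worked example using the per-gram values just quoted (rounded; actual values vary by food):

```latex
% 30 g of simple sugars:
30\ \mathrm{g} \times 3.87\ \mathrm{kcal/g} \approx 116\ \mathrm{kcal}
% 30 g of complex carbohydrate:
30\ \mathrm{g} \times (3.57\text{--}4.12)\ \mathrm{kcal/g} \approx 107\text{--}124\ \mathrm{kcal}
```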
Organisms typically cannot metabolize all types of carbohydrate to yield energy. Glucose is a nearly universal and accessible source of energy. Many organisms also have the ability to metabolize other monosaccharides and disaccharides but glucose is often metabolized first. In Escherichia coli, for example, the lac operon will express enzymes for the digestion of lactose when it is present, but if both lactose and glucose are present the lac operon is repressed, resulting in the glucose being used first (see: Diauxie). Polysaccharides are also common sources of energy. Many organisms can easily break down starches into glucose; most organisms, however, cannot metabolize cellulose or other polysaccharides like chitin and arabinoxylans. These carbohydrate types can be metabolized by some bacteria and protists. Ruminants and termites, for example, use microorganisms to process cellulose. Even though these complex carbohydrates are not very digestible, they represent an important dietary element for humans, called dietary fiber. Fiber enhances digestion, among other benefits.
The Institute of Medicine recommends that American and Canadian adults get between 45 and 65% of dietary energy from whole-grain carbohydrates. The Food and Agriculture Organization and World Health Organization jointly recommend that national dietary guidelines set a goal of 55–75% of total energy from carbohydrates, but only 10% directly from sugars (their term for simple carbohydrates). A 2017 Cochrane Systematic Review concluded that there was insufficient evidence to support the claim that whole grain diets can affect cardiovascular disease.
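The percentage guidelines above translate into gram targets once a daily energy intake is assumed. The sketch below is illustrative only: it assumes a round figure of 4 kcal per gram of carbohydrate (close to the values quoted in the Nutrition and Metabolism sections) and a hypothetical 2,000 kcal/day intake.

```python
def carb_gram_range(daily_kcal, low_pct, high_pct, kcal_per_gram=4.0):
    """Convert a percent-of-energy guideline into a grams-per-day range."""
    low_g = daily_kcal * low_pct / 100 / kcal_per_gram
    high_g = daily_kcal * high_pct / 100 / kcal_per_gram
    return low_g, high_g

# Institute of Medicine guideline: 45-65% of energy from carbohydrate.
print(carb_gram_range(2000, 45, 65))   # -> (225.0, 325.0) grams/day

# FAO/WHO goal: 55-75% of total energy, at most 10% directly from sugars.
print(carb_gram_range(2000, 55, 75))   # -> (275.0, 375.0) grams/day
print(carb_gram_range(2000, 0, 10))    # sugars cap -> (0.0, 50.0) grams/day
```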
Classification
Nutritionists often refer to carbohydrates as either simple or complex. However, the exact distinction between these groups can be ambiguous. The term complex carbohydrate was first used in the U.S. Senate Select Committee on Nutrition and Human Needs publication Dietary Goals for the United States (1977) where it was intended to distinguish sugars from other carbohydrates (which were perceived to be nutritionally superior). However, the report put "fruit, vegetables and whole-grains" in the complex carbohydrate column, despite the fact that these may contain sugars as well as polysaccharides. This confusion persists as today some nutritionists use the term complex carbohydrate to refer to any sort of digestible saccharide present in a whole food, where fiber, vitamins and minerals are also found (as opposed to processed carbohydrates, which provide energy but few other nutrients). The standard usage, however, is to classify carbohydrates chemically: simple if they are sugars (monosaccharides and disaccharides) and complex if they are polysaccharides (or oligosaccharides).
In any case, the simple vs. complex chemical distinction has little value for determining the nutritional quality of carbohydrates. Some simple carbohydrates (e.g., fructose) raise blood glucose rapidly, while some complex carbohydrates (starches), raise blood sugar slowly. The speed of digestion is determined by a variety of factors including which other nutrients are consumed with the carbohydrate, how the food is prepared, individual differences in metabolism, and the chemistry of the carbohydrate. Carbohydrates are sometimes divided into "available carbohydrates", which are absorbed in the small intestine and "unavailable carbohydrates", which pass to the large intestine, where they are subject to fermentation by the gastrointestinal microbiota.
The USDA's Dietary Guidelines for Americans 2010 call for moderate- to high-carbohydrate consumption from a balanced diet that includes six one-ounce servings of grain foods each day, at least half from whole-grain sources and the rest from enriched grains.
The glycemic index (GI) and glycemic load concepts have been developed to characterize food behavior during human digestion. They rank carbohydrate-rich foods based on the rapidity and magnitude of their effect on blood glucose levels. Glycemic index is a measure of how quickly food glucose is absorbed, while glycemic load is a measure of the total absorbable glucose in foods. The insulin index is a similar, more recent classification method that ranks foods based on their effects on blood insulin levels, which are caused by glucose (or starch) and some amino acids in food.
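The article does not spell out the glycemic load formula, but it is conventionally computed as the glycemic index multiplied by the grams of available carbohydrate in a serving, divided by 100. The sketch below illustrates that convention with hypothetical numbers.

```python
def glycemic_load(glycemic_index, available_carb_g):
    """Glycemic load per serving: GI x available carbohydrate (g) / 100."""
    return glycemic_index * available_carb_g / 100

# Hypothetical example: a serving with GI 50 and 30 g of available carbohydrate
print(glycemic_load(50, 30))   # -> 15.0
```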
Health effects of dietary carbohydrate restriction
Low-carbohydrate diets may miss the health advantages – such as increased intake of dietary fiber – afforded by high-quality carbohydrates found in legumes and pulses, whole grains, fruits, and vegetables. A meta-analysis of moderate quality listed halitosis, headache and constipation as adverse effects of the diet.
Carbohydrate-restricted diets can be as effective as low-fat diets in helping achieve weight loss over the short term when overall calorie intake is reduced. An Endocrine Society scientific statement said that "when calorie intake is held constant [...] body-fat accumulation does not appear to be affected by even very pronounced changes in the amount of fat vs carbohydrate in the diet." In the long term, effective weight loss or maintenance depends on calorie restriction, not the ratio of macronutrients in a diet. The reasoning of diet advocates that carbohydrates cause undue fat accumulation by increasing blood insulin levels, and that low-carbohydrate diets have a "metabolic advantage", is not supported by clinical evidence. Further, it is not clear how low-carbohydrate dieting affects cardiovascular health, although two reviews showed that carbohydrate restriction may improve lipid markers of cardiovascular disease risk.
Carbohydrate-restricted diets are no more effective than a conventional healthy diet in preventing the onset of type 2 diabetes, but for people with type 2 diabetes, they are a viable option for losing weight or helping with glycemic control. There is limited evidence to support routine use of low-carbohydrate dieting in managing type 1 diabetes. The American Diabetes Association recommends that people with diabetes should adopt a generally healthy diet, rather than a diet focused on carbohydrate or other macronutrients.
An extreme form of low-carbohydrate diet – the ketogenic diet – is established as a medical diet for treating epilepsy. Through celebrity endorsement during the early 21st century, it became a fad diet as a means of weight loss, but with risks of undesirable side effects, such as low energy levels and increased hunger, insomnia, nausea, and gastrointestinal discomfort. The British Dietetic Association named it one of the "top 5 worst celeb diets to avoid in 2018".
Sources
Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the disaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey. Glucose is extremely abundant and has been isolated from a variety of natural sources across the world, including male cones of the coniferous tree Wollemia nobilis in Rome, the roots of Ilex asprella plants in China, and rice straw in California.
The carbohydrate value is calculated in the USDA database and does not always correspond to the sum of the sugars, the starch, and the "dietary fiber".
Metabolism
Carbohydrate metabolism is the series of biochemical processes responsible for the formation, breakdown and interconversion of carbohydrates in living organisms.
The most important carbohydrate is glucose, a simple sugar (monosaccharide) that is metabolized by nearly all known organisms. Glucose and other carbohydrates are part of a wide variety of metabolic pathways across species: plants synthesize carbohydrates from carbon dioxide and water by photosynthesis, storing the absorbed energy internally, often in the form of starch or lipids. Plant components are consumed by animals and fungi, and used as fuel for cellular respiration. Oxidation of one gram of carbohydrate yields approximately 16 kJ (4 kcal) of energy, while the oxidation of one gram of lipids yields about 38 kJ (9 kcal). The human body stores between 300 and 500 g of carbohydrates depending on body weight, with skeletal muscle accounting for a large portion of this storage. Energy obtained from metabolism (e.g., oxidation of glucose) is usually stored temporarily within cells in the form of ATP. Organisms capable of aerobic or anaerobic respiration metabolize glucose to release energy; in aerobic respiration, glucose is oxidized with oxygen, yielding carbon dioxide and water as byproducts.
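A quick back-of-the-envelope calculation using the figures in the paragraph above gives the energy held in the body's carbohydrate store:

```latex
% Stored carbohydrate (300-500 g) at ~16 kJ (4 kcal) per gram:
300\ \mathrm{g} \times 16\ \mathrm{kJ/g} \approx 4{,}800\ \mathrm{kJ} \;(\approx 1{,}200\ \mathrm{kcal})
500\ \mathrm{g} \times 16\ \mathrm{kJ/g} \approx 8{,}000\ \mathrm{kJ} \;(\approx 2{,}000\ \mathrm{kcal})
```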
Catabolism
Catabolism is the metabolic reaction which cells undergo to break down larger molecules, extracting energy. There are two major metabolic pathways of monosaccharide catabolism: glycolysis and the citric acid cycle.
In glycolysis, oligo- and polysaccharides are cleaved first to smaller monosaccharides by enzymes called glycoside hydrolases. The monosaccharide units can then enter into monosaccharide catabolism. An investment of 2 ATP is required in the early steps of glycolysis to phosphorylate glucose to glucose 6-phosphate (G6P) and fructose 6-phosphate (F6P) to fructose 1,6-bisphosphate (FBP), thereby pushing the reaction forward irreversibly. In some cases, as with humans, not all carbohydrate types are usable, as the digestive and metabolic enzymes necessary are not present.
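For context on that 2 ATP "investment", the standard textbook bookkeeping for glycolysis (not detailed in the paragraph above) is that the later payoff phase generates 4 ATP per glucose, for a net gain of 2:

```latex
% Per molecule of glucose (standard glycolysis stoichiometry):
\text{ATP invested (hexokinase and phosphofructokinase steps)} = 2
\text{ATP produced in the payoff phase} = 4
\text{Net ATP from glycolysis} = 4 - 2 = 2
```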
Carbohydrate chemistry
Carbohydrate chemistry is a large and economically important branch of organic chemistry. Some of the main organic reactions that involve carbohydrates are:
Amadori rearrangement
Carbohydrate acetalisation
Carbohydrate digestion
Cyanohydrin reaction
Koenigs–Knorr reaction
Lobry de Bruyn–Van Ekenstein transformation
Nef reaction
Wohl degradation
Tipson-Cohen reaction
Ferrier rearrangement
Ferrier II reaction
Chemical synthesis
Carbohydrate synthesis is a sub-field of organic chemistry concerned specifically with the generation of natural and unnatural carbohydrate structures. This can include the synthesis of monosaccharide residues or structures containing more than one monosaccharide, known as oligosaccharides. Selective formation of glycosidic linkages and selective reactions of hydroxyl groups are very important, and the usage of protecting groups is extensive.
Common reactions for glycosidic bond formation are as follows:
Chemical glycosylation
Fischer glycosidation
Koenigs-Knorr reaction
Crich beta-mannosylation
While some common protection methods are as below:
Carbohydrate acetalisation
Trimethylsilyl
Benzyl ether
p-Methoxybenzyl ether
See also
Bioplastic
Carbohydrate NMR
Gluconeogenesis – A process by which glucose is synthesized from non-carbohydrate sources.
Glycobiology
Glycogen
Glycoinformatics
Glycolipid
Glycome
Glycomics
Glycosyl
Macromolecule
Saccharic acid
References
Further reading
External links
Carbohydrates, including interactive models and animations (Requires MDL Chime)
IUPAC-IUBMB Joint Commission on Biochemical Nomenclature (JCBN): Carbohydrate Nomenclature
Carbohydrates detailed
Carbohydrates and Glycosylation – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Functional Glycomics Gateway, a collaboration between the Consortium for Functional Glycomics and Nature Publishing Group
Nutrition
Skin
Skin is the layer of usually soft, flexible outer tissue covering the body of a vertebrate animal, with three main functions: protection, regulation, and sensation.
Other animal coverings, such as the arthropod exoskeleton, have different developmental origin, structure and chemical composition. The adjective cutaneous means "of the skin" (from Latin cutis 'skin'). In mammals, the skin is an organ of the integumentary system made up of multiple layers of ectodermal tissue and guards the underlying muscles, bones, ligaments, and internal organs. Skin of a different nature exists in amphibians, reptiles, and birds. Skin (including cutaneous and subcutaneous tissues) plays crucial roles in formation, structure, and function of extraskeletal apparatus such as horns of bovids (e.g., cattle) and rhinos, cervids' antlers, giraffids' ossicones, armadillos' osteoderm, and os penis/os clitoris.
All mammals have some hair on their skin, even marine mammals like whales, dolphins, and porpoises that appear to be hairless.
The skin interfaces with the environment and is the first line of defense from external factors. For example, the skin plays a key role in protecting the body against pathogens and excessive water loss. Its other functions are insulation, temperature regulation, sensation, the synthesis of vitamin D and the protection of vitamin B folates. Severely damaged skin may heal by forming scar tissue. This is sometimes discoloured and depigmented. The thickness of skin also varies from location to location on an organism. In humans, for example, the skin located under the eyes and around the eyelids is the thinnest skin on the body at 0.5 mm thick and is one of the first areas to show signs of aging such as "crow's feet" and wrinkles. The skin on the palms and the soles of the feet is the thickest skin on the body at 4 mm thick. The speed and quality of wound healing in skin is promoted by estrogen.
Fur is dense hair. Primarily, fur augments the insulation the skin provides but can also serve as a secondary sexual characteristic or as camouflage. On some animals, the skin is very hard and thick and can be processed to create leather. Reptiles and most fish have hard protective scales on their skin, and birds have hard feathers, all made of tough beta-keratins. Amphibian skin is not a strong barrier, especially regarding the passage of chemicals via skin, and is often subject to osmosis and diffusive forces. For example, a frog sitting in an anesthetic solution would be sedated quickly as the chemical diffuses through its skin. Amphibian skin plays key roles in everyday survival and in amphibians' ability to exploit a wide range of habitats and ecological conditions.
On 11 January 2024, biologists reported the discovery of the oldest known skin, fossilized about 289 million years ago, and possibly the skin from an ancient reptile.
Etymology
The word skin originally only referred to dressed and tanned animal hide and the usual word for human skin was hide.
Skin is a borrowing from Old Norse skinn "animal hide, fur", ultimately from the Proto-Indo-European root *sek-, meaning "to cut" (probably a reference to the fact that in those times animal hide was commonly cut off to be used as garment).
Structure in mammals
Mammalian skin is composed of two primary layers:
The epidermis, which provides waterproofing and serves as a barrier to infection.
The dermis, which serves as a location for the appendages of skin.
Epidermis
The epidermis is composed of the outermost layers of the skin. It forms a protective barrier over the body's surface, responsible for keeping water in the body and preventing pathogens from entering, and is a stratified squamous epithelium, composed of proliferating basal and differentiated suprabasal keratinocytes.
Keratinocytes are the major cells, constituting 95% of the epidermis, while Merkel cells, melanocytes and Langerhans cells are also present. The epidermis can be further subdivided into the following strata or layers (beginning with the outermost layer):
Stratum corneum
Stratum lucidum (only in palms and soles)
Stratum granulosum
Stratum spinosum
Stratum basale (also called the stratum germinativum)
Keratinocytes in the stratum basale proliferate through mitosis and the daughter cells move up the strata changing shape and composition as they undergo multiple stages of cell differentiation to eventually become anucleated. During that process, keratinocytes will become highly organized, forming cellular junctions (desmosomes) between each other and secreting keratin proteins and lipids which contribute to the formation of an extracellular matrix and provide mechanical strength to the skin. Keratinocytes from the stratum corneum are eventually shed from the surface (desquamation).
The epidermis contains no blood vessels, and cells in the deepest layers are nourished by diffusion from blood capillaries extending to the upper layers of the dermis.
Basement membrane
The epidermis and dermis are separated by a thin sheet of fibers called the basement membrane, which is made through the action of both tissues.
The basement membrane controls the traffic of the cells and molecules between the dermis and epidermis but also serves, through the binding of a variety of cytokines and growth factors, as a reservoir for their controlled release during physiological remodeling or repair processes.
Dermis
The dermis is the layer of skin beneath the epidermis that consists of connective tissue and cushions the body from stress and strain. The dermis provides tensile strength and elasticity to the skin through an extracellular matrix composed of collagen fibrils, microfibrils, and elastic fibers, embedded in hyaluronan and proteoglycans. Skin proteoglycans are varied and have very specific locations. For example, hyaluronan, versican and decorin are present throughout the dermis and epidermis extracellular matrix, whereas biglycan and perlecan are only found in the epidermis.
It harbors many mechanoreceptors (nerve endings) that provide the sense of touch and heat through nociceptors and thermoreceptors. It also contains the hair follicles, sweat glands, sebaceous glands, apocrine glands, lymphatic vessels and blood vessels. The blood vessels in the dermis provide nourishment and waste removal from its own cells as well as for the epidermis.
Dermis and subcutaneous tissues are thought to contain germinative cells involved in formation of horns, osteoderm, and other extra-skeletal apparatus in mammals.
The dermis is tightly connected to the epidermis through a basement membrane and is structurally divided into two areas: a superficial area adjacent to the epidermis, called the papillary region, and a deep thicker area known as the reticular region.
Papillary region
The papillary region is composed of loose areolar connective tissue. This is named for its fingerlike projections called papillae that extend toward the epidermis. The papillae provide the dermis with a "bumpy" surface that interdigitates with the epidermis, strengthening the connection between the two layers of skin.
Reticular region
The reticular region lies deep in the papillary region and is usually much thicker. It is composed of dense irregular connective tissue and receives its name from the dense concentration of collagenous, elastic, and reticular fibers that weave throughout it. These protein fibers give the dermis its properties of strength, extensibility, and elasticity.
Also located within the reticular region are the roots of the hair, sweat glands, sebaceous glands, receptors, nails, and blood vessels.
Subcutaneous tissue
The subcutaneous tissue (also hypodermis) is not part of the skin, and lies below the dermis. Its purpose is to attach the skin to underlying bone and muscle as well as supplying it with blood vessels and nerves. It consists of loose connective tissue and elastin. The main cell types are fibroblasts, macrophages and adipocytes (the subcutaneous tissue contains 50% of body fat). Fat serves as padding and insulation for the body.
Microorganisms like Staphylococcus epidermidis colonize the skin surface. The density of skin flora depends on region of the skin. The disinfected skin surface gets recolonized from bacteria residing in the deeper areas of the hair follicle, gut and urogenital openings.
Detailed cross section
Structure in fish, amphibians, birds, and reptiles
Fish
The epidermis of fish and of most amphibians consists entirely of live cells, with only minimal quantities of keratin in the cells of the superficial layer. It is generally permeable, and in the case of many amphibians, may actually be a major respiratory organ. The dermis of bony fish typically contains relatively little of the connective tissue found in tetrapods. Instead, in most species, it is largely replaced by solid, protective bony scales. Apart from some particularly large dermal bones that form parts of the skull, these scales are lost in tetrapods, although many reptiles do have scales of a different kind, as do pangolins. Cartilaginous fish have numerous tooth-like denticles embedded in their skin, in place of true scales.
Sweat glands and sebaceous glands are both unique to mammals, but other types of skin gland are found in other vertebrates. Fish typically have numerous individual mucus-secreting skin cells that aid in insulation and protection, but may also have poison glands, photophores, or cells that produce a more watery, serous fluid. In amphibians, the mucous cells are gathered together to form sac-like glands. Most living amphibians also possess granular glands in the skin, which secrete irritating or toxic compounds.
Although melanin is found in the skin of many species, in the reptiles, the amphibians, and fish, the epidermis is often relatively colorless. Instead, the color of the skin is largely due to chromatophores in the dermis, which, in addition to melanin, may contain guanine or carotenoid pigments. Many species, such as chameleons and flounders may be able to change the color of their skin by adjusting the relative size of their chromatophores.
Amphibians
Overview
Amphibians possess two types of glands, mucous and granular (serous). Both of these glands are part of the integument and thus considered cutaneous. Mucous and granular glands are each divided into three different sections which connect to form the gland as a whole: the duct, the intercalary region, and the alveolar gland (sac). Structurally, the duct is derived from keratinocytes and passes through to the surface of the epidermal or outer skin layer, thus allowing external secretions of the body. The gland alveolus is a sac-shaped structure found at the bottom or base region of the granular gland. The cells in this sac specialize in secretion. Between the alveolar gland and the duct is the intercalary region, a transitional region connecting the duct to the gland alveolus beneath the epidermal skin layer. In general, granular glands are larger in size than the mucous glands, which are greater in number.
Granular glands
Granular glands can be identified as venomous and often differ in the type of toxin as well as the concentrations of secretions across various orders and species within the amphibians. They are located in clusters differing in concentration depending on amphibian taxa. The toxins can be fatal to most vertebrates or have no effect against others. These glands are alveolar meaning they structurally have little sacs in which venom is produced and held before it is secreted upon defensive behaviors.
Structurally, the ducts of the granular gland initially maintain a cylindrical shape. When the ducts mature and fill with fluid, the base of the ducts becomes swollen due to the pressure from the inside. This causes the epidermal layer to form a pit-like opening on the surface of the duct through which the inner fluid is secreted in an upwards fashion.
The intercalary region of granular glands is more developed and mature in comparison with mucous glands. This region resides as a ring of cells surrounding the basal portion of the duct which are argued to have an ectodermal muscular nature due to their influence over the lumen (space inside the tube) of the duct with dilation and constriction functions during secretions. The cells are found radially around the duct and provide a distinct attachment site for muscle fibers around the gland's body.
The gland alveolus is a sac that is divided into three specific regions/layers. The outer layer or tunica fibrosa is composed of densely packed connective-tissue which connects with fibers from the spongy intermediate layer where elastic fibers, as well as nerves, reside. The nerves send signals to the muscles as well as the epithelial layers. Lastly, the epithelium or tunica propria encloses the gland.
Mucous glands
Mucous glands are non-venomous and offer a different functionality for amphibians than granular. Mucous glands cover the entire surface area of the amphibian body and specialize in keeping the body lubricated. There are many other functions of the mucous glands such as controlling the pH, thermoregulation, adhesive properties to the environment, anti-predator behaviors (slimy to the grasp), chemical communication, even anti-bacterial/viral properties for protection against pathogens.
The ducts of the mucous gland appear as cylindrical vertical tubes that break through the epidermal layer to the surface of the skin. The cells lining the inside of the ducts are oriented with their longitudinal axis forming 90-degree angles surrounding the duct in a helical fashion.
Intercalary cells react identically to those of granular glands but on a smaller scale. Among the amphibians, there are taxa which contain a modified intercalary region (depending on the function of the glands), yet the majority share the same structure.
The alveoli of mucous glands are much simpler and consist only of an epithelium layer as well as connective tissue which forms a cover over the gland. This gland lacks a tunica propria and appears to have delicate and intricate fibers which pass over the gland's muscle and epithelial layers.
Birds and reptiles
The epidermis of birds and reptiles is closer to that of mammals, with a layer of dead keratin-filled cells at the surface, to help reduce water loss. A similar pattern is also seen in some of the more terrestrial amphibians such as toads. In these animals, there is no clear differentiation of the epidermis into distinct layers, as occurs in humans, with the change in cell type being relatively gradual. The mammalian epidermis always possesses at least a stratum germinativum and stratum corneum, but the other intermediate layers found in humans are not always distinguishable.
Hair is a distinctive feature of mammalian skin, while feathers are (at least among living species) similarly unique to birds.
Birds and reptiles have relatively few skin glands, although there may be a few structures for specific purposes, such as pheromone-secreting cells in some reptiles, or the uropygial gland of most birds.
Development
Cutaneous structures arise from the epidermis and include a variety of features such as hair, feathers, claws and nails. During embryogenesis, the epidermis splits into two layers: the periderm (which is lost) and the basal layer. The basal layer is a stem cell layer and through asymmetrical divisions, becomes the source of skin cells throughout life. It is maintained as a stem cell layer through an autocrine signal, TGF alpha, and through paracrine signaling from FGF7 (keratinocyte growth factor) produced by the dermis below the basal cells. In mice, over-expression of these factors leads to an overproduction of granular cells and thick skin.
It is believed that the mesoderm defines the pattern. The epidermis instructs the mesodermal cells to condense and then the mesoderm instructs the epidermis of what structure to make through a series of reciprocal inductions. Transplantation experiments involving frog and newt epidermis indicated that the mesodermal signals are conserved between species but the epidermal response is species-specific meaning that the mesoderm instructs the epidermis of its position and the epidermis uses this information to make a specific structure.
Functions
Skin performs the following functions:
Protection: an anatomical barrier from pathogens and damage between the internal and external environment in bodily defense. (See Skin absorption.) Langerhans cells in the skin are part of the adaptive immune system.
Sensation: contains a variety of nerve endings that react to heat and cold, touch, pressure, vibration, and tissue injury (see somatosensory system and haptic perception).
Thermoregulation: Eccrine (sweat) glands and dilated blood vessels (increased superficial perfusion) aid heat loss, while constricted vessels greatly reduce cutaneous blood flow and conserve heat. Erector pili muscles in mammals adjust the angle of hair shafts to change the degree of insulation provided by hair or fur.
Control of evaporation: the skin provides a relatively dry and semi-impermeable barrier to reduce fluid loss.
Storage and synthesis: acts as a storage center for lipids and water
Absorption through the skin: Oxygen, nitrogen and carbon dioxide can diffuse into the epidermis in small amounts; some animals use their skin as their sole respiration organ (in humans, the cells comprising the outermost 0.25–0.40 mm of the skin are "almost exclusively supplied by external oxygen", although the "contribution to total respiration is negligible"). Some medications are absorbed through the skin.
Water resistance: The skin acts as a water-resistant barrier so essential nutrients are not washed out of the body. The nutrients and oils that help hydrate the skin are covered by the outermost skin layer, the epidermis. This is helped in part by the sebaceous glands that release sebum, an oily liquid. Water itself does not wash away the oils on the skin, because the oils reside in the dermis and the epidermis shields them from the water.
Camouflage: whether the skin is naked or covered in fur, scales, or feathers, skin structures provide protective coloration and patterns that help to conceal animals from predators or prey.
Mechanics
Skin is a soft tissue and exhibits key mechanical behaviors of these tissues. The most pronounced feature is the J-curve stress–strain response, in which a region of large strain and minimal stress exists and corresponds to the microstructural straightening and reorientation of collagen fibrils. In some cases the intact skin is prestretched, like a wetsuit around a diver's body, and in other cases the intact skin is under compression. Small circular holes punched in the skin may widen or close into ellipses, or shrink and remain circular, depending on preexisting stresses.
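The J-shaped stress–strain response mentioned above is often approximated with a Fung-type exponential model for soft tissue. The sketch below is purely illustrative: the parameter values are made up for demonstration and are not measurements of real skin.

```python
import math

def fung_stress(strain, a=0.1, b=12.0):
    """Fung-type exponential model for soft tissue: stress = a * (exp(b * strain) - 1).

    Low stress at small strain (the 'toe' of the J-curve, where collagen
    fibrils straighten and reorient) followed by rapid stiffening once the
    fibrils are recruited. Parameters a (kPa) and b (dimensionless) are
    illustrative assumptions, not fitted values.
    """
    return a * (math.exp(b * strain) - 1.0)

for strain in (0.0, 0.05, 0.10, 0.20, 0.30):
    print(f"strain {strain:.2f} -> stress {fung_stress(strain):8.2f} kPa")
# The printed stress grows slowly at first and then steeply, tracing the J-curve.
```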
Aging
Tissue homeostasis generally declines with age, in part because stem/progenitor cells fail to self-renew or differentiate. Skin aging is caused in part by TGF-β, which blocks the conversion of dermal fibroblasts into the fat cells that provide support. Common changes in the skin as a result of aging include wrinkles, discoloration, and skin laxity, but can manifest in more severe forms such as skin malignancies. Moreover, these factors may be worsened by sun exposure in a process known as photoaging.
See also
Cutaneous reflex in human locomotion
Cutaneous respiration – gas exchange conducted through skin
Moult
Role of skin in locomotion
Skinning
References
External links
Soft tissue
Leathermaking
Organs (anatomy)
Animal anatomy
Skin physiology
Dysautonomia
Dysautonomia, autonomic failure, or autonomic dysfunction is a condition in which the autonomic nervous system (ANS) does not work properly. This may affect the functioning of the heart, bladder, intestines, sweat glands, pupils, and blood vessels. Dysautonomia has many causes, not all of which may be classified as neuropathic. A number of conditions can feature dysautonomia, such as Parkinson's disease, multiple system atrophy, dementia with Lewy bodies, Ehlers–Danlos syndromes, autoimmune autonomic ganglionopathy and autonomic neuropathy, HIV/AIDS, mitochondrial cytopathy, pure autonomic failure, autism, and postural orthostatic tachycardia syndrome.
Diagnosis is made by functional testing of the ANS, focusing on the affected organ system. Investigations may be performed to identify underlying disease processes that may have led to the development of symptoms or autonomic neuropathy. Symptomatic treatment is available for many symptoms associated with dysautonomia, and some disease processes can be directly treated. Depending on the severity of the dysfunction, dysautonomia can range from being nearly symptomless and transient to disabling and/or life-threatening.
Signs and symptoms
Dysautonomia, a complex set of conditions characterized by autonomic nervous system (ANS) dysfunction, manifests clinically with a diverse array of symptoms, of which postural orthostatic tachycardia syndrome (POTS) stands out as the most common.
The symptoms of dysautonomia, which are numerous and vary widely for each person, are due to inefficient or unbalanced efferent signals sent via the sympathetic and parasympathetic branches of the ANS. Symptoms in people with dysautonomia include:
Anhidrosis or hyperhidrosis
Blurry or double vision
Bowel incontinence
Brain fog
Constipation
Dizziness
Difficulty swallowing
Exercise intolerance
Low blood pressure
Orthostatic hypotension
Syncope
Tachycardia
Tunnel vision
Urinary incontinence or urinary retention
Sleep apnea
Causes
Dysautonomia may be due to inherited or degenerative neurologic diseases (primary dysautonomia) or injury of the autonomic nervous system from an acquired disorder (secondary dysautonomia). Its most common causes include:
In the sympathetic nervous system (SNS), predominant dysautonomia is common along with fibromyalgia, chronic fatigue syndrome, irritable bowel syndrome, and interstitial cystitis, raising the possibility that such dysautonomia could be their common clustering underlying pathogenesis.
In addition to sometimes being a symptom of dysautonomia, anxiety can sometimes physically manifest symptoms resembling autonomic dysfunction. A thorough investigation ruling out physiological causes is crucial, but in cases where relevant tests are performed and no causes are found or symptoms do not match any known disorders, a primary anxiety disorder is possible but should not be presumed. For such patients, the anxiety sensitivity index may have better predictivity for anxiety disorders, while the Beck Anxiety Inventory may misleadingly suggest anxiety for patients with dysautonomia.
Mitochondrial cytopathies can have autonomic dysfunction manifesting as orthostatic intolerance, sleep-related hypoventilation and arrhythmias.
Mechanism
The autonomic nervous system is a component of the peripheral nervous system and comprises two branches: the sympathetic nervous system (SNS) and the parasympathetic nervous system (PSNS). The SNS controls the more active responses, such as increasing heart rate and blood pressure. The PSNS slows down the heart rate and aids digestion, for example. Symptoms typically arise from abnormal responses of either the sympathetic or parasympathetic systems based on situation or environment.
Diagnosis
Diagnosis of dysautonomia depends on the overall function of three autonomic functions—cardiovagal, adrenergic, and sudomotor. A diagnosis should at a minimum include measurements of blood pressure and heart rate while lying flat and after at least three minutes of standing. The best way to make a diagnosis includes a range of testing, notably an autonomic reflex screen, tilt table test, and testing of the sudomotor response (ESC, QSART or thermoregulatory sweat test).
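As an illustration of the lying-then-standing measurement described above, the sketch below applies commonly cited clinical cut-offs (a sustained systolic drop of at least 20 mmHg or diastolic drop of at least 10 mmHg within about three minutes of standing for orthostatic hypotension, and a heart-rate rise of at least 30 bpm without such a drop for a POTS-like response in adults). These thresholds are assumptions taken from general clinical convention, not from this article, and such a screen is no substitute for formal autonomic testing.

```python
def orthostatic_screen(supine_sbp, supine_dbp, supine_hr,
                       standing_sbp, standing_dbp, standing_hr):
    """Flag an orthostatic hypotension (OH) or POTS-like pattern from bedside vitals.

    Thresholds are commonly cited consensus values (assumed here, not stated in
    the article): OH = sustained drop >= 20 mmHg systolic or >= 10 mmHg
    diastolic within ~3 minutes of standing; POTS-like = heart-rate rise
    >= 30 bpm in adults without such a blood-pressure drop.
    """
    sbp_drop = supine_sbp - standing_sbp
    dbp_drop = supine_dbp - standing_dbp
    hr_rise = standing_hr - supine_hr

    has_oh = sbp_drop >= 20 or dbp_drop >= 10
    pots_like = (not has_oh) and hr_rise >= 30

    if has_oh:
        return "orthostatic hypotension pattern"
    if pots_like:
        return "POTS-like heart-rate response"
    return "no orthostatic abnormality by these criteria"

# Hypothetical readings: BP 118/76, HR 72 supine -> BP 115/74, HR 110 standing
print(orthostatic_screen(118, 76, 72, 115, 74, 110))  # -> POTS-like heart-rate response
```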
Additional tests and examinations to diagnose dysautonomia include:
Tests to elucidate the cause of dysautonomia can include:
Evaluation for acute (intermittent) porphyria
Evaluation of cerebrospinal fluid by lumbar puncture for infectious/ inflammatory diseases
Evaluation of nerve conduction study for autonomic neuropathy
Evaluation of brain and spinal magnetic resonance imaging for myelopathy, stroke and multiple system atrophy
Evaluation of MIBG myocardial scintigraphy and DaT scan for Parkinson's disease, dementia with Lewy bodies and pure autonomic failure
Vegetative-vascular dystonia
Particularly in the Russian literature, a subtype of dysautonomia that particularly affects the vascular system has been called vegetative-vascular dystonia. The term "vegetative" reflects an older name for the autonomic nervous system: the vegetative nervous system.
A similar form of this disorder has been historically noticed in various wars, including the Crimean War and American Civil War, and among British troops who colonized India. This disorder was called "irritable heart syndrome" (Da Costa's syndrome) in 1871 by American physician Jacob Da Costa.
Management
Treatment of dysautonomia can be difficult; since it is made up of many different symptoms, a combination of drug therapies is often required to manage individual symptomatic complaints. In the case of autoimmune neuropathy, treatment with immunomodulatory therapies is done. If diabetes mellitus is the cause, control of blood glucose is important. Treatment can include proton-pump inhibitors and H2 receptor antagonists used for digestive symptoms such as acid reflux.
To treat genitourinary autonomic neuropathy, medications may include sildenafil (a guanine monophosphate type-5 phosphodiesterase inhibitor). To treat hyperhidrosis, anticholinergic agents such as trihexyphenidyl or scopolamine can be used. Intracutaneous injection of botulinum toxin type A can also be used in some cases.
Balloon angioplasty, a procedure called transvascular autonomic modulation, is specifically not approved in the United States to treat autonomic dysfunction.
In contrast to orthostatic hypotension (OH), which may be underlain by neurodegenerative disease, postural orthostatic tachycardia syndrome (POTS), which may be underlain by psychiatric disease, can respond to psychiatric intervention or medication, or may remit spontaneously.
Prognosis
The prognosis of dysautonomia depends on several factors; people with chronic, progressive, generalized dysautonomia in the setting of central nervous system degeneration such as Parkinson's disease or multiple system atrophy generally have poorer long-term prognoses. Dysautonomia can be fatal due to pneumonia, acute respiratory failure, or sudden cardiopulmonary arrest. Autonomic dysfunction symptoms such as orthostatic hypotension, gastroparesis, and gustatory sweating are identified more frequently in fatal cases.
See also
Autonomic neuropathy
Dopamine beta hydroxylase deficiency
Familial dysautonomia
Reflex syncope
Postural orthostatic tachycardia syndrome
Orthostatic intolerance
References
Further reading
Autonomic nervous system
Peripheral nervous system disorders
Lymphedema
Lymphedema, also known as lymphoedema and lymphatic edema, is a condition of localized swelling caused by a compromised lymphatic system. The lymphatic system functions as a critical portion of the body's immune system and returns interstitial fluid to the bloodstream.
Lymphedema is most frequently a complication of cancer treatment or parasitic infections, but it can also be seen in a number of genetic disorders. Tissues with lymphedema are at high risk of infection because the lymphatic system has been compromised.
Though incurable and progressive, a number of treatments may improve symptoms. This commonly includes compression therapy, good skin care, exercise, and manual lymphatic drainage (MLD), which together are known as combined decongestive therapy. Diuretics are not useful.
Signs and symptoms
The most common manifestation of lymphedema is soft tissue swelling (edema). As the disorder progresses, worsening edema and skin changes including discoloration, verrucous (wart-like) hyperplasia, hyperkeratosis, papillomatosis, dermal thickening, and ulcers may be seen. Additionally, there is increased risk of infection of the skin, known as erysipelas.
Complications
When lymphatic impairment becomes so great that the collected lymph fluid exceeds the lymphatic system's ability to transport it, an abnormal amount of protein-rich fluid collects in the tissues. Left untreated, this stagnant, protein-rich fluid causes tissue channels to increase in size and number, reducing oxygen availability. This interferes with wound healing and provides a rich medium for bacterial growth which can result in skin infections, lymphangitis, lymphadenitis, and, in severe cases, skin ulcers. It is vital for lymphedema patients to be aware of the symptoms of infection and to seek immediate treatment, since recurrent infections or cellulitis, in addition to their inherent danger, further damage the lymphatic system and set up a vicious circle.
In rare cases, lymphedema may lead to a form of cancer called lymphangiosarcoma, although the mechanism of carcinogenesis is not understood. Lymphedema-associated lymphangiosarcoma is called Stewart–Treves syndrome. Lymphangiosarcoma most frequently occurs in cases of long-standing lymphedema. The incidence of angiosarcoma five years after radical mastectomy is estimated to be 0.45% in surviving patients. Lymphedema is also associated with a low grade form of cancer called retiform hemangioendothelioma (a low grade angiosarcoma).
Lymphedema can be disfiguring, and may result in a poor body image and psychological distress. Complications of lymphedema can cause difficulties in activities of daily living.
Causes and risk factors
Lymphedema may be inherited (primary) or caused by injury to the lymphatic vessels (secondary). There are also risk factors that may increase one's risk of developing lymphedema such as old age, being overweight or obese, and having rheumatic or psoriatic arthritis.
Lymph node damage
Lymphedema is most commonly seen after lymph node dissection, surgery or radiation therapy for the treatment of cancer, most notably breast cancer. In many patients the condition does not develop until months or even years after therapy has concluded. Lymphedema may also be associated with accidents or certain diseases or conditions that may inhibit the lymphatic system from functioning properly. It can also be caused by damage to the lymphatic system from infections such as cellulitis. In tropical areas of the world where parasitic filarial worms are endemic, a common cause of secondary lymphedema is filariasis.
Primary lymphedema may be congenital or may arise sporadically. Multiple syndromes are associated with primary lymphedema, including Turner syndrome, Milroy's disease, and Klippel–Trénaunay syndrome. In these syndromes it may occur as a result of absent or malformed lymph nodes or lymphatic channels. Lymphedema can be present at birth, develop at the onset of puberty (praecox), or not become apparent for many years into adulthood (tarda). In men, lower-limb primary lymphedema is most common, occurring in one or both legs. Some cases of lymphedema may be associated with other vascular abnormalities.
Secondary lymphedema affects both men and women, and, in Western countries, is most commonly due to cancer treatment. In women, it is most prevalent in an upper limb after breast cancer surgery, especially axillary lymph node dissection, and occurs on the same side of the body as the surgery. Breast and trunk lymphedema can also occur but may go unrecognized, as there is swelling in the area after surgery and its symptoms (peau d'orange and an inverted nipple) can be confused with post-surgery fat necrosis. Between 38 and 89% of breast cancer patients have lymphedema due to axillary lymph node dissection or radiation. Unilateral lymphedema of a lower limb occurs in up to 41% of patients after gynecologic cancer. For men treated for prostate cancer, an incidence of 5–66% has been reported, with the rate depending on whether staging or radical removal of lymph glands was done in addition to radiotherapy.
Head and neck lymphedema can be caused by surgery or radiation therapy for tongue or throat cancer. It may also occur in the lower limbs or groin after surgery for colon, ovarian or uterine cancer, if removal of lymph nodes or radiation therapy is required. Surgery or treatment for prostate, colon and testicular cancers may result in secondary lymphedema, particularly when lymph nodes have been removed or damaged.
The onset of secondary lymphedema in patients who have had cancer surgery has also been linked to aircraft flight (likely due to decreased cabin pressure or relative immobility). For cancer survivors, wearing a prescribed and properly fitted compression garment may help decrease swelling during air travel.
Some cases of lower-limb lymphedema have been associated with the use of tamoxifen, due to blood clots and deep vein thrombosis (DVT) associated with this medication. Resolution of the blood clots or DVT is needed before lymphedema treatment can be initiated.
At birth
Hereditary lymphedema is a primary lymphedema – swelling that results from abnormalities in the lymphatic system that are present from birth. Swelling may be present in a single limb, several limbs, genitalia, or the face. It is sometimes diagnosed prenatally by a nuchal scan or postnatally by lymphoscintigraphy.
The most common cause is Meige disease which usually presents at puberty. Another form of hereditary lymphedema is Milroy's disease, caused by mutations in the VEGFR3 gene. Hereditary lymphedema is frequently syndromic and is associated with Turner syndrome, lymphedema–distichiasis syndrome, yellow nail syndrome, and Klippel–Trénaunay syndrome.
One defined genetic cause for hereditary lymphedema is GATA2 deficiency. This deficiency is a grouping of several disorders caused by a single defect: familial or sporadic inactivating mutations in one of the two parental GATA2 genes. These autosomal dominant mutations cause a reduction, i.e. a haploinsufficiency, in the cellular levels of the gene's product, GATA2. The GATA2 protein is a transcription factor critical for the development, maintenance, and functionality of blood-forming, lymphatic-forming, and other tissue-forming stem cells. Due to these mutations, cellular levels of GATA2 are deficient and over time individuals develop hematological, immunological, lymphatic, and other disorders. GATA2 deficiency-induced defects in the lymphatic vessels and valves underlie the development of lymphedema, primarily in the lower extremities, but lymphedema may also occur in places such as the face or testes. This form of the deficiency, when coupled with sensorineural hearing loss, which may also be due to faulty development of the lymphatic system, is sometimes termed Emberger syndrome.
Primary lymphedema occurs in approximately one to three of every 10,000 births, with a female-to-male ratio of 3.5:1. In North America, the incidence of primary lymphedema is approximately 1.15 per 100,000 births. Compared to secondary lymphedema, primary lymphedema is relatively rare.
Inflammatory lymphedema
Bilateral lower extremity inflammatory lymphedema (BLEIL) is a distinct type of lymphedema occurring in a setting of acute and prolonged standing, such as in new recruits during basic training. Possible underlying mechanisms may include venous congestion and inflammatory vasculitis.
Physiology
Lymph is formed from the fluid that filters out of blood and contains proteins, cellular debris, bacteria, etc. This fluid is collected by the initial lymph collectors that are blind-ended endothelial-lined vessels with fenestrated openings that allow fluids and particles as large as cells to enter. Once inside the lumen of the lymphatic vessels, the fluid is guided along increasingly larger vessels, first with rudimentary valves to prevent backflow, later with complete valves similar to the venous valve. Once the lymph enters the fully valved lymphatic vessels, it is pumped by a rhythmic peristaltic-like action by smooth muscle cells within the lymphatic vessel walls. This peristaltic action is the primary driving force moving lymph within its vessel walls. The sympathetic nervous system regulates the frequency and power of the contractions. Lymph movement can be influenced by the pressure of nearby muscle contraction, arterial pulse pressure and the vacuum created in the chest cavity during respiration, but these passive forces contribute only a minor percentage of lymph transport. The fluids collected are pumped into continually larger vessels and through lymph nodes, which remove debris and police the fluid for dangerous microbes. The lymph ends its journey in the thoracic duct or right lymphatic duct, which drain into the blood circulation.
Several research groups have hypothesized that chronic inflammation is a key regulator in the development of lymphedema. Th cells, particularly Th2 differentiation, play a crucial role in the pathophysiology of lymphedema. Research has shown increased expression of Th2-inducing cytokines in the epidermal cells of the lymphedematous limb. Treatment with QBX258 has been found to decrease hyperkeratosis and fibrosis, reduce the number of CD4+ cells, and normalize the expression of Th2-inducing cytokines and IL13R by keratinocytes. These findings suggest that epidermal cells may initiate or coordinate chronic Th2 responses in lymphedema.
Role of T-Cell inflammation and Th2 response
Lymphedema involves a complex interplay of inflammatory processes. Recent research has shed light on the role of T-cell inflammation and the Th2 immune response in the initiation of lymphedema.
T-Cell inflammation and fibrosis
Studies have revealed that sustained lymphatic stasis results in the infiltration of CD4+ T-cells, leading to inflammation and fibrosis within affected tissues.
Diagnosis
Diagnosis is generally based on signs and symptoms, with testing used to rule out other potential causes. An accurate diagnosis and staging may help with management. A swollen limb can result from different conditions that require different treatments. Diagnosis of lymphedema is currently based on history, physical exam, and limb measurements. Imaging studies such as lymphoscintigraphy and indocyanine green lymphography are only required when surgery is being considered. However, the ideal method of staging to guide treatment is controversial because of several different proposed protocols.
Lymphedema can occur in both the upper and lower extremities, and in some cases, the head and neck. Assessment of the extremities first begins with a visual inspection; color, presence of hair, visible veins, size and any sores or ulcerations are noted. Lack of hair may indicate an arterial circulation problem. In cases of swelling, the extremities' circumference is measured over time for reference. In early stages of lymphedema, elevating the limb may reduce or eliminate the swelling. Palpation of the wrist or ankle can determine the degree of swelling; assessment includes a check of the pulses. The axillary or inguinal lymph nodes may be enlarged due to the swelling. Enlargement of the nodes lasting more than three weeks may indicate infection or other illnesses (such as sequela from breast cancer surgery) requiring further medical attention.
Diagnosis or early detection of lymphedema is difficult. The first signs may be subjective observations such as a feeling of heaviness in the affected extremity. These may be symptomatic of early-stage lymphedema where accumulation of lymph is mild and not detectable by changes in volume or circumference. As lymphedema progresses, definitive diagnosis is commonly based upon an objective measurement of differences between the affected or at-risk limb and the opposite unaffected limb, e.g. in volume or circumference. No generally accepted criterion is definitively diagnostic, although a volume difference of 200 ml between limbs or a difference (at a single measurement site or set intervals along the limb) is often used. Bioimpedance measurement (which measures the amount of fluid in a limb) offers greater sensitivity than other methods. Devices like SOZO utilize bioimpedance analysis (BIA) by sending a current through the body and measuring the resultant impedance. Another approach involves tissue dielectric constant (TDC) measurement, used by devices such as Delfin Technology's MoistureMeterD and LymphScanner, which employ microwaves to detect changes in the dielectric properties of tissue. These techniques have been incorporated into protocols for lymphedema detection.
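As a rough illustration of how such limb measurements can be combined, the sketch below estimates limb volume from circumferences taken at fixed intervals, modelling each segment as a truncated cone, and applies the 200 ml criterion mentioned above. The 4 cm measurement interval, the truncated-cone model, and the example measurements are illustrative assumptions, not part of any particular protocol.

```python
import math

def segment_volume(c1: float, c2: float, h: float) -> float:
    """Volume (mL) of a limb segment modeled as a truncated cone,
    given circumferences c1 and c2 (cm) at its ends and segment length h (cm).
    1 cubic centimetre equals 1 mL."""
    return h * (c1**2 + c1 * c2 + c2**2) / (12 * math.pi)

def limb_volume(circumferences_cm, interval_cm=4.0):
    """Total limb volume from circumferences measured at fixed intervals."""
    return sum(
        segment_volume(a, b, interval_cm)
        for a, b in zip(circumferences_cm, circumferences_cm[1:])
    )

# Hypothetical circumference measurements (cm) taken every 4 cm along each arm.
affected   = [17.0, 18.5, 20.0, 22.5, 25.0, 27.5, 29.5]
unaffected = [15.5, 16.5, 17.5, 19.0, 21.0, 23.0, 25.0]

diff_ml = limb_volume(affected) - limb_volume(unaffected)
print(f"Inter-limb volume difference: {diff_ml:.0f} mL")
# A difference of 200 mL or more is one commonly used criterion (see text above).
print("Meets 200 mL criterion:", diff_ml >= 200)
```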
Chronic venous stasis changes can mimic early lymphedema, but are more often bilateral and symmetric. Lipedema can also mimic lymphedema, however lipedema characteristically spares the feet beginning abruptly at the malleolus (ankle). As a part of the initial work-up before diagnosing lymphedema, it may be necessary to exclude other potential causes of lower extremity swelling such as kidney failure, hypoalbuminemia, congestive heart-failure, protein-losing kidney disease, pulmonary hypertension, obesity, pregnancy and drug-induced edema.
Classification
The International Society of Lymphology (ISL) Staging System is based solely on subjective symptoms, making it prone to substantial observer bias. Imaging modalities have been suggested as useful adjuncts to the ISL staging to clarify the diagnosis, such as Cheng's Lymphedema Grading tool, which assesses the severity of extremity lymphedema based on objective limb measurements and provides appropriate options for management.
I. Grading
Grade 1: Spontaneously reversible on elevation. Mostly pitting edema.
Grade 2: Non-spontaneously reversible on elevation. Mostly non-pitting edema.
Grade 3: Gross increase in volume and circumference of Grade 2 lymphedema, with eight stages of severity given below based on clinical assessments.
II. Staging
As described by the Fifth WHO Expert Committee on Filariasis, and endorsed by the American Society of Lymphology, the staging system helps to identify the severity of lymphedema. With the assistance of medical imaging, such as MRI or CT, staging can be established by the physician, and therapeutic or medical interventions may be applied:
Stage 0: The lymphatic vessels have sustained some damage that is not yet apparent. Transport capacity is sufficient for the amount of lymph being removed. Lymphedema is not present.
Stage 1: Swelling increases during the day and disappears overnight as the patient lies flat in bed. Tissue is still at the pitting stage: when pressed by the fingertips, the affected area indents and reverses with elevation. Usually, upon waking in the morning, the limb or affected area is normal or almost normal in size. Treatment is not necessarily required at this point.
Stage 2: Swelling is not reversible overnight, and does not disappear without proper management. The tissue now has a spongy consistency and is considered non-pitting: when pressed by the fingertips, the affected area bounces back without indentation. Fibrosis found in Stage 2 lymphedema marks the beginning of the hardening of the limbs and increasing size.
Stage 3: Swelling is irreversible and usually the limb(s) or affected area becomes increasingly large. The tissue is hard (fibrotic) and unresponsive; some patients consider undergoing reconstructive surgery, called "debulking". This remains controversial, however, since the risks may outweigh the benefits and further damage done to the lymphatic system may make the lymphedema worse.
Stage 4: The size and circumference of the affected limb(s) become noticeably larger. Bumps, lumps, or protrusions (also called knobs) on the skin begin to appear.
Stage 5: The affected limb(s) become grossly large; one or more deep skin folds is present.
Stage 6: Knobs of small elongated or rounded sizes cluster together, giving mossy-like shapes on the limb. Mobility of the patient becomes increasingly impaired.
Stage 7: The person becomes "handicapped", and is unable to independently perform daily routine activities such as walking, bathing and cooking. Assistance from the family and health care system is needed.
Grades
Lymphedema can also be categorized by its severity (usually compared to a healthy extremity):
Grade 1 (mild edema): Involves the distal parts such as a forearm and hand or a lower leg and foot. The difference in circumference is less than and no other tissue changes are present.
Grade 2 (moderate edema): Involves an entire limb or corresponding quadrant of the trunk. Difference in circumference is . Tissue changes, such as pitting, are apparent. The patient may experience erysipelas.
Grade 3a (severe edema): Lymphedema is present in one limb and its associated trunk quadrant. Circumferential difference is greater than . Significant skin alterations, such as cornification, keratosis, cysts or fistulae, are present. Additionally, the patient may experience repeated attacks of erysipelas.
Grade 3b (massive edema): The same symptoms as grade 3a, except that two or more extremities are affected.
Grade 4 (gigantic edema): In this stage of lymphedema, the affected extremities are huge, due to almost complete blockage of the lymph channels.
Differential
Lymphedema should not be confused with edema arising from chronic venous insufficiency, which is caused by compromise of venous drainage rather than lymphatic drainage. However, untreated venous insufficiency can progress into a combined venous/lymphatic disorder known as phlebetic lymphedema (or phlebolymphedema).
Treatment
While there is no cure, treatment may improve outcomes. This commonly includes compression therapy, good skin care, exercise, manual lymphatic drainage (MLD) and the use of an intermittent pneumatic compression pump, which together are known as combined decongestive therapy. MLD is most effective in mild to moderate disease. In breast cancer-related lymphedema, MLD is safe and may offer added benefit to compression bandages for reducing swelling. Most people with lymphedema can be medically managed with conservative treatment. Diuretics are not useful. Surgery is generally only used if symptoms are not improved by other measures.
Compression
Garments
Once a person is diagnosed with lymphedema, compression becomes imperative in the management of the condition. Garments are often intended to be worn all day but may be taken off for sleep, unless otherwise prescribed. Elastic compression garments are worn on the affected limb following complete de-congestive therapy to maintain edema reduction. Inelastic garments provide containment and reduction. Available styles, options, and prices vary widely. A professional garment fitter or certified lymphedema therapist can help determine the best option for the patient.
Bandaging
Compression bandaging, also called wrapping, is the application of layers of padding and short-stretch bandages to the involved areas. Short-stretch bandages are preferred over long-stretch bandages (such as those normally used to treat sprains), as the long-stretch bandages cannot produce the proper therapeutic tension necessary to safely reduce lymphedema and may produce a tourniquet effect. Compression bandages provide resistance that assists in pumping fluid out of the affected area during exercise. This counter-force results in increased lymphatic drainage and therefore a decrease in size of the swollen area.
Intermittent pneumatic compression therapy
Intermittent pneumatic compression therapy (IPC) utilizes a multi-chambered pneumatic sleeve with overlapping cells to promote movement of lymph fluid. Pump therapy should only be used in addition to other treatments such as compression bandaging and manual lymph drainage. Pump therapy has been used in the past to help with controlling lymphedema. In some cases, pump therapy helps soften fibrotic tissue, potentially enabling more efficient lymphatic drainage. However, reports link pump therapy to an increased incidence of edema proximal to the affected limb, such as genital edema arising after pump therapy in the lower limb. Current literature suggests that IPC treatment in conjunction with an elastic therapeutic tape is more effective in the overall reduction of lymphedema, as well as in increasing shoulder range of motion, than the traditional pairing of IPC with complete decongestive therapy. The tape is an elastic cotton strip with an acrylic adhesive that is commonly used to relieve the discomfort and disability associated with sports injuries; in the context of lymphedema, it increases the space between the dermis and the muscle, which increases the opportunity for lymphatic fluid to flow out naturally. The use of IPC treatments with tape, as well as subsequent lymphatic drainage, has been shown to significantly reduce the circumference of affected limbs in patients with lymphedema secondary to breast cancer post-mastectomy.
Exercise and massage
In those with lymphedema or at risk of developing lymphedema, such as following breast cancer treatment, resistance training did not increase swelling and led to decreased swelling in some, in addition to other potential beneficial effects on cardiovascular health. Moreover, resistance training and other forms of exercise were not associated with an increased risk of developing lymphedema in people who previously received breast cancer-related treatment. Compression garments should be worn during exercise.
Physical therapy for patients with lymphedema may include trigger point release, soft tissue massage, postural improvement, patient education on condition management, strengthening, and stretching exercises. Exercises may increase in intensity and difficulty over time, beginning with passive movements to increase range of motion and progressing towards using external weights and resistance in various postures.
Surgery
The treatment of lymphedema is usually conservative, however the use of surgery is proposed for some cases.
Suction assisted lipectomy (SAL), also known as liposuction for lymphedema, may help improve chronic non-pitting edema. The procedure removes fat and protein and is done alongside continued compression therapy.
Vascularized lymph node transfers (VLNT) and lymphovenous bypass are supported by tentative evidence but are associated with a number of complications.
Laser therapy
Low-level laser therapy (LLLT) was cleared by the US Food and Drug Administration (FDA) for the treatment of lymphedema in November 2006. According to the US National Cancer Institute, LLLT may be effective in reducing lymphedema in some women. Two cycles of laser treatment were found to reduce the volume of the affected arm in approximately one-third of people with post-mastectomy lymphedema at three months post-treatment.
A new therapeutic approach involving the drug QBX258 has shown promising results in the treatment of lymphedema. Although it did not reach statistical significance, QBX258 treatment modestly decreased periostin expression and the number of CD4+ and CD4+IL4+ cells in lymphoedematous skin. Notably, QBX258 significantly reduced the expression of Th2-inducing cytokines, improving physical and social quality-of-life measures for patients. However, psychological improvements were not observed.
Epidemiology
Lymphedema affects approximately 200 million people worldwide.
References
External links
Diseases of veins, lymphatic vessels and lymph nodes
Lymphatic vessel diseases | 0.764264 | 0.999389 | 0.763797 |
Dyscrasia | In medicine, both ancient and modern, a dyscrasia is any of various disorders. The word has ancient Greek roots meaning "bad mixture". The concept of dyscrasia was developed by the Greek physician Galen (129–216 AD), who elaborated a model of health and disease as a structure of elements, qualities, humors, organs, and temperaments (based on earlier humorism). Health was understood in this perspective to be a condition of harmony or balance among these basic components, called eucrasia. Disease was interpreted as the disproportion of bodily fluids or four humours: phlegm, blood, yellow bile, and black bile. The imbalance was called dyscrasia. In modern medicine, the term is still occasionally used in medical context for an unspecified disorder of the blood, such as a plasma cell dyscrasia.
Ancient use
To the Greeks, it meant an imbalance of the four humors: blood, black bile, yellow bile, and water (phlegm). These humors were believed to exist in the body, and any change in the balance among the four of them was the direct cause of all disease.
This is similar to the concepts of bodily humors in the Tibetan medical tradition and the Indian Ayurvedic system, which both relate health and disease to the balance and imbalance of the three bodily humors, generally translated as wind, bile, and phlegm. This is also similar to the Chinese concept of yin and yang that an imbalance of the two polarities caused ailment.
Modern use
The term is still occasionally used in medical contexts for an unspecified disorder of the blood. Specifically, it is defined in current medicine as a morbid general state resulting from the presence of abnormal material in the blood, usually applied to diseases affecting blood cells or platelets. Evidence of dyscrasia can be present with a WBC (white blood cell) count of over 1,000,000.
"Plasma cell dyscrasia" is sometimes considered synonymous with paraproteinemia or monoclonal gammopathy.
H2 receptor antagonists, such as famotidine and nizatidine, in use for treatment of peptic ulcers, are known for causing blood dyscrasia – leading to bone marrow failure in 1 out of 50,000 patients.
See also
Dysthymia and Euthymia (medicine), similar concepts applied to mood
References
External links
"Blood Dyscrasia" at eCureMe
Ancient Greek medicine
Medical terminology | 0.77326 | 0.987755 | 0.763791 |
Methemoglobinemia | Methemoglobinemia, or methaemoglobinaemia, is a condition of elevated methemoglobin in the blood. Symptoms may include headache, dizziness, shortness of breath, nausea, poor muscle coordination, and blue-colored skin (cyanosis). Complications may include seizures and heart arrhythmias.
Methemoglobinemia can be due to certain medications, chemicals, or food or it can be inherited. Substances involved may include benzocaine, nitrites, or dapsone. The underlying mechanism involves some of the iron in hemoglobin being converted from the ferrous [Fe2+] to the ferric [Fe3+] form. The diagnosis is often suspected based on symptoms and a low blood oxygen that does not improve with oxygen therapy. Diagnosis is confirmed by a blood gas.
Treatment is generally with oxygen therapy and methylene blue. Other treatments may include vitamin C, exchange transfusion, and hyperbaric oxygen therapy. Outcomes are generally good with treatment. Methemoglobinemia is relatively uncommon, with most cases being acquired rather than genetic.
Signs and symptoms
Signs and symptoms of methemoglobinemia (methemoglobin level above 10%) include shortness of breath, cyanosis, mental status changes (~50%), headache, fatigue, exercise intolerance, dizziness, and loss of consciousness.
People with severe methemoglobinemia (methemoglobin level above 50%) may exhibit seizures, coma, and death (level above 70%). Healthy people may not have many symptoms with methemoglobin levels below 15%. However, people with co-morbidities such as anemia, cardiovascular disease, lung disease, sepsis, or who have abnormal hemoglobin species (e.g. carboxyhemoglobin, sulfhemoglobinemia, or sickle hemoglobin) may experience moderate to severe symptoms at much lower levels (as low as 5–8%).
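The level bands quoted above can be collected into a simple lookup, sketched below; the boundaries are taken from the figures in this section (the overlapping 10–15% range is flattened into one band), and it is an illustration rather than a clinical tool.

```python
def methemoglobin_severity(met_hb_percent: float, comorbidities: bool = False) -> str:
    """Rough severity band for a methemoglobin fraction (% of total hemoglobin),
    using the thresholds quoted in the text above. Illustrative only."""
    if met_hb_percent > 70:
        return "life-threatening (risk of death)"
    if met_hb_percent > 50:
        return "severe (seizures, coma possible)"
    if met_hb_percent > 10:
        return "symptomatic (cyanosis, dyspnea, headache, fatigue)"
    if comorbidities and met_hb_percent >= 5:
        return "may be moderately to severely symptomatic due to co-morbidities"
    return "often minimally symptomatic in otherwise healthy people"

for level in (3, 7, 20, 60, 75):
    print(level, "% ->", methemoglobin_severity(level, comorbidities=(level == 7)))
```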
Cause
Acquired
Methemoglobinemia may be acquired. Classical drug causes of methemoglobinemia include various antibiotics (trimethoprim, sulfonamides, and dapsone), local anesthetics (especially articaine, benzocaine, prilocaine, and lidocaine), aniline dyes, metoclopramide, rasburicase, umbellulone, chlorates, bromates, and nitrites. Nitrates are suspected to cause methemoglobinemia.
In otherwise healthy individuals, the protective enzyme systems normally present in red blood cells rapidly reduce the methemoglobin back to hemoglobin and hence maintain methemoglobin levels at less than one percent of the total hemoglobin concentration. Exposure to exogenous oxidizing drugs and their metabolites (such as benzocaine, dapsone, and nitrates) may lead to an increase of up to a thousandfold of the methemoglobin formation rate, overwhelming the protective enzyme systems and acutely increasing methemoglobin levels.
Infants under 6 months of age have lower levels of a key methemoglobin reduction enzyme (NADH-cytochrome b5 reductase) in their red blood cells. This results in a major risk of methemoglobinemia caused by nitrates ingested in drinking water, dehydration (usually caused by gastroenteritis with diarrhea), sepsis, or topical anesthetics containing benzocaine or prilocaine resulting in blue baby syndrome. Nitrates used in agricultural fertilizers may leak into the ground and may contaminate well water. The current EPA standard of 10 ppm nitrate-nitrogen for drinking water is specifically set to protect infants. Benzocaine applied to the gums or throat (as commonly used in baby teething gels, or sore throat lozenges) can cause methemoglobinemia.
Genetic
Due to a deficiency of the enzyme diaphorase I (cytochrome b5 reductase), methemoglobin levels rise and the blood of met-Hb patients has reduced oxygen-carrying capacity. Instead of being red in color, the arterial blood of met-Hb patients is brown. This results in the skin of white patients gaining a bluish hue. Hereditary met-Hb is caused by a recessive gene. If only one parent has this gene, offspring will have normal-hued skin, but if both parents carry the gene, there is a chance (one in four when both parents are carriers) that the offspring will have blue-hued skin.
Another cause of congenital methemoglobinemia is seen in patients with abnormal hemoglobin variants such as hemoglobin M (HbM), or hemoglobin H (HbH), which are not amenable to reduction despite intact enzyme systems.
Methemoglobinemia can also arise in patients with pyruvate kinase deficiency due to impaired production of NADH – the essential cofactor for diaphorase I. Similarly, patients with glucose-6-phosphate dehydrogenase deficiency may have impaired production of another co-factor, NADPH.
Pathophysiology
Ferric iron has an impaired affinity for oxygen. The binding of oxygen to methemoglobin results in an increased affinity for oxygen in the remaining heme sites that are in the ferrous state within the same tetrameric hemoglobin unit. This leads to an overall reduced ability of the red blood cell to release oxygen to tissues, with the associated oxygen–hemoglobin dissociation curve therefore shifted to the left. When methemoglobin concentration is elevated in red blood cells, tissue hypoxia may occur.
Normally, methemoglobin levels are <1%, as measured by the CO-oximetry test. Elevated levels of methemoglobin in the blood are caused when the mechanisms that defend against oxidative stress within the red blood cell are overwhelmed and the oxygen carrying ferrous ion (Fe2+) of the heme group of the hemoglobin molecule is oxidized to the ferric state (Fe3+). This converts hemoglobin to methemoglobin, resulting in a reduced ability to release oxygen to tissues and thereby hypoxia. This can give the blood a bluish or chocolate-brown color. Spontaneously formed methemoglobin is normally reduced (regenerating normal hemoglobin) by protective enzyme systems, e.g., NADH methemoglobin reductase (cytochrome-b5 reductase) (major pathway), NADPH methemoglobin reductase (minor pathway) and to a lesser extent the ascorbic acid and glutathione enzyme systems. Disruptions with these enzyme systems lead to methemoglobinemia. Hypoxia occurs due to the decreased oxygen-binding capacity of methemoglobin, as well as the increased oxygen-binding affinity of other subunits in the same hemoglobin molecule, which prevents them from releasing oxygen at normal tissue oxygen levels.
Diagnosis
The diagnosis of methemoglobinemia is made from the typical symptoms, a suggestive history, low oxygen saturation on pulse oximetry measurements (SpO2), and failure of these symptoms (cyanosis and hypoxia) to improve on oxygen treatment. The definitive test is obtaining either a CO-oximetry reading or a methemoglobin level on an arterial blood gas test.
Arterial blood with an elevated methemoglobin level has a characteristic chocolate-brown color as compared to normal bright red oxygen-containing arterial blood; the color can be compared with reference charts.
The SaO2 calculation in the arterial blood gas analysis is falsely normal, as it is calculated under the premise that hemoglobin is either oxyhemoglobin or deoxyhemoglobin. However, co-oximetry can distinguish the methemoglobin concentration and percentage of hemoglobin.
At the same time, the SpO2 as measured by pulse oximetry is falsely high, because methemoglobin absorbs light at both of the two wavelengths the pulse oximeter uses to calculate the ratio of oxyhemoglobin to deoxyhemoglobin. For example, with a methemoglobin level of 30–35%, this ratio of light absorbance approaches 1.0, which translates into a falsely high SpO2 of about 85%.
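One way to see why the reading settles near 85% is through the commonly cited approximate calibration SpO2 ≈ 110 − 25R, where R is the red/infrared absorbance ratio; real oximeters use device-specific calibration curves, so the sketch below is only meant to reproduce the example above.

```python
def approx_spo2_from_ratio(r: float) -> float:
    """Very rough empirical pulse-oximeter calibration, SpO2 ~= 110 - 25*R,
    where R is the red/infrared absorbance ratio. Real devices use
    device-specific lookup tables; this only illustrates the example in the text."""
    return max(0.0, min(100.0, 110.0 - 25.0 * r))

# Methemoglobin absorbs strongly at both wavelengths, pushing R toward 1.0
# regardless of the true oxygen saturation.
print(approx_spo2_from_ratio(1.0))  # ~85, the falsely reassuring reading described above
```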
Differential diagnosis
Other conditions that can cause bluish skin include argyria, sulfhemoglobinemia, heart failure, amiodarone-induced bluish skin pigmentation and acrodermatitis enteropathica.
Treatment
Methemoglobinemia can be treated with supplemental oxygen and methylene blue. Methylene blue is given as a 1% solution (10 mg/ml) 1 to 2 mg/kg administered intravenously slowly over five minutes. Although the response is usually rapid, the dose may be repeated in one hour if the level of methemoglobin is still high one hour after the initial infusion. Methylene blue inhibits monoamine oxidase, and serotonin toxicity can occur if taken with an SSRI (selective serotonin reuptake inhibitor) medicine.
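The dose arithmetic implied by these figures is sketched below for a hypothetical 70 kg patient; the weight and the chosen 1.5 mg/kg are assumptions for illustration, and this is not dosing guidance.

```python
def methylene_blue_dose(weight_kg, mg_per_kg=1.0):
    """Dose in mg and volume in mL of a 1% (10 mg/mL) methylene blue solution,
    per the figures quoted in the text. Illustration only, not dosing guidance."""
    assert 1.0 <= mg_per_kg <= 2.0, "text quotes 1 to 2 mg/kg"
    dose_mg = weight_kg * mg_per_kg
    volume_ml = dose_mg / 10.0  # a 1% solution contains 10 mg per mL
    return dose_mg, volume_ml

dose_mg, volume_ml = methylene_blue_dose(70.0, 1.5)  # hypothetical 70 kg patient
print(f"{dose_mg:.0f} mg of methylene blue = {volume_ml:.1f} mL of 1% solution over 5 min")
```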
Methylene blue restores the iron in hemoglobin to its normal (reduced) oxygen-carrying state. This is achieved by providing an artificial electron acceptor (such as methylene blue or flavin) for NADPH methemoglobin reductase (red blood cells usually lack an effective electron acceptor for this enzyme; the presence of methylene blue allows it to function at five times its normal level). The NADPH is generated via the hexose monophosphate shunt.
Genetically induced chronic low-level methemoglobinemia may be treated with oral methylene blue daily. Also, vitamin C can occasionally reduce cyanosis associated with chronic methemoglobinemia, and may be helpful in settings in which methylene blue is unavailable or contraindicated (e.g., in an individual with G6PD deficiency). Diaphorase (cytochrome b5 reductase) normally contributes only a small percentage of the red blood cell's reducing capacity, but can be pharmacologically activated by exogenous cofactors (such as methylene blue) to five times its normal level of activity.
Epidemiology
Methemoglobinemia mostly affects infants under 6 months of age (particularly those under 4 months) due to low hepatic production of methemoglobin reductase. The most at-risk populations are those with water sources high in nitrates, such as wells and other water that is not monitored or treated by a water treatment facility. The nitrates can be hazardous to the infants. The link between blue baby syndrome in infants and high nitrate levels is well established for waters exceeding the normal limit of 10 mg/L. However, there is also evidence that breastfeeding is protective in exposed populations.
Society and culture
Blue Fugates
The Fugates, a family that lived in the hills of Kentucky in the US, had the hereditary form. They are known as the "Blue Fugates". Martin Fugate and Elizabeth Smith, who had married and settled near Hazard, Kentucky, around 1800, were both carriers of the recessive methemoglobinemia (met-H) gene, as was a nearby clan with whom the Fugates descendants intermarried. As a result, many descendants of the Fugates were born with met-H.
Blue Men of Lurgan
The "blue men of Lurgan" were a pair of Lurgan men suffering from what was described as "familial idiopathic methemoglobinemia" who were treated by James Deeny in 1942. Deeny, who would later become the Chief Medical Officer of the Republic of Ireland, prescribed a course of ascorbic acid and sodium bicarbonate. In case one, by the eighth day of treatments, there was a marked change in appearance, and by the twelfth day of treatment, the patient's complexion was normal. In case two, the patient's complexion reached normality over a month-long duration of treatment.
See also
Carbon monoxide poisoning
Hemoglobinemia
References
External links
Cleveland Clinic
Red blood cell disorders
Autosomal recessive disorders
Hyponatremia | Hyponatremia or hyponatraemia is a low concentration of sodium in the blood. It is generally defined as a sodium concentration of less than 135 mmol/L (135 mEq/L), with severe hyponatremia being below 120 mEq/L. Symptoms can be absent, mild or severe. Mild symptoms include a decreased ability to think, headaches, nausea, and poor balance. Severe symptoms include confusion, seizures, and coma; death can ensue.
The causes of hyponatremia are typically classified by a person's body fluid status into low volume, normal volume, or high volume. Low volume hyponatremia can occur from diarrhea, vomiting, diuretics, and sweating. Normal volume hyponatremia is divided into cases with dilute urine and concentrated urine. Cases in which the urine is dilute include adrenal insufficiency, hypothyroidism, and drinking too much water or too much beer. Cases in which the urine is concentrated include syndrome of inappropriate antidiuretic hormone secretion (SIADH). High volume hyponatremia can occur from heart failure, liver failure, and kidney failure. Conditions that can lead to falsely low sodium measurements include high blood protein levels such as in multiple myeloma, high blood fat levels, and high blood sugar.
Treatment is based on the underlying cause. Correcting hyponatremia too quickly can lead to complications. Rapid partial correction with hypertonic (3%) saline is only recommended in those with significant symptoms and occasionally those in whom the condition was of rapid onset. Low volume hyponatremia is typically treated with intravenous normal saline. SIADH is typically treated by correcting the underlying cause and with fluid restriction, while high volume hyponatremia is typically treated with both fluid restriction and a diet low in salt. Correction should generally be gradual in those in whom the low levels have been present for more than two days.
Hyponatremia is the most common type of electrolyte imbalance, and is often found in older adults. It occurs in about 20% of those admitted to hospital and 10% of people during or after an endurance sporting event. Among those in hospital, hyponatremia is associated with an increased risk of death. The economic costs of hyponatremia are estimated at $2.6 billion per annum in the United States.
Signs and symptoms
Signs and symptoms of hyponatremia include nausea and vomiting, headache, short-term memory loss, confusion, lethargy, fatigue, loss of appetite, irritability, muscle weakness, spasms or cramps, seizures, and decreased consciousness or coma. Lower levels of plasma sodium are associated with more severe symptoms. However, mild hyponatremia (plasma sodium levels at 131–135 mmol/L) may be associated with complications and subtle symptoms (for example, increased falls, altered posture and gait, reduced attention, impaired cognition, and possibly higher rates of death).
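The thresholds quoted in this article (lower limit of normal 135 mmol/L, "mild" at 131–135 mmol/L, "severe" below 120 mmol/L) can be collected into a simple banding function, sketched below; the "moderate" label for the 120–130 mmol/L gap is an added assumption, and cut-offs vary between guidelines.

```python
def hyponatremia_band(na_mmol_l: float) -> str:
    """Bands a serum sodium value using the figures quoted in this article
    (normal lower limit 135, 'mild' 131-135, 'severe' below 120 mmol/L).
    The 'moderate' band fills the gap between quoted figures; illustration only."""
    if na_mmol_l >= 135:
        return "not hyponatremic"
    if na_mmol_l >= 131:
        return "mild hyponatremia"
    if na_mmol_l >= 120:
        return "moderate hyponatremia"
    return "severe hyponatremia"

for na in (140, 133, 126, 118):
    print(na, "mmol/L ->", hyponatremia_band(na))
```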
Neurological symptoms typically occur with very low levels of plasma sodium (usually <115 mmol/L). When sodium levels in the blood become very low, water enters the brain cells and causes them to swell (cerebral edema). This results in increased pressure in the skull and causes hyponatremic encephalopathy. As pressure increases in the skull, herniation of the brain can occur, which is a squeezing of the brain across the internal structures of the skull. This can lead to headache, nausea, vomiting, confusion, seizures, brain stem compression and respiratory arrest, and non-cardiogenic accumulation of fluid in the lungs. This is usually fatal if not immediately treated.
Symptom severity depends on how fast and how severe the drop in blood sodium level is. A gradual drop, even to very low levels, may be tolerated well if it occurs over several days or weeks, because of neuronal adaptation. The presence of underlying neurological disease such as a seizure disorder or non-neurological metabolic abnormalities, also affects the severity of neurologic symptoms.
Hyponatremia, by interfering with bone metabolism, has been linked with a doubled risk of osteoporosis and an increased risk of bone fracture.
Causes
The specific causes of hyponatremia are generally divided into those with low tonicity (lower than normal concentration of solutes), without low tonicity, and falsely low sodiums. Those with low tonicity are then grouped by whether the person has high fluid volume, normal fluid volume, or low fluid volume.
High volume
Both sodium and water content increase: the rise in sodium content leads to hypervolemia, while the proportionally greater rise in water content leads to hyponatremia.
Cirrhosis of the liver
Congestive heart failure
Nephrotic syndrome in the kidneys
Excessive water consumption (Water intoxication)
Normal volume
There is volume expansion in the body without edema, but hyponatremia occurs.
SIADH (and its many causes)
Hypothyroidism
Insufficient adrenocorticotropic hormone (ACTH)
Beer potomania
Normal physiologic change of pregnancy
Reset osmostat
Low volume
Hypovolemia (extracellular volume loss) is due to total body sodium loss. Hyponatremia results because the accompanying loss of total body water is proportionally smaller than the sodium loss.
Any cause of hypovolemia such as prolonged vomiting, decreased oral intake, severe diarrhea
Diuretic use (due to the diuretic causing a volume depleted state and thence ADH release, and not a direct result of diuretic-induced urine sodium loss)
Addison's disease and congenital adrenal hyperplasia in which the adrenal glands do not produce enough steroid hormones (combined glucocorticoid and mineralocorticoid deficiency)
Isolated hyperchlorhidrosis (Carbonic anhydrase XII deficiency), a rare genetic disorder which results in a lifelong tendency to lose excessive amounts of sodium by sweating.
Pancreatitis
Prolonged exercise and sweating, combined with drinking water without electrolytes is the cause of exercise-associated hyponatremia (EAH). It is common in marathon runners and participants of other endurance events.
The use of MDMA (ecstasy) can result in hyponatremia.
Medication
Antipsychotics have been reported to cause hyponatremia in a review of medical articles from 1946 to 2016.
Available evidence suggests that all classes of psychotropics, i.e., antidepressants, antipsychotics, mood stabilizers, and sedative/hypnotics can lead to hyponatremia. Age is a significant factor for drug induced hyponatremia.
Other causes
Miscellaneous causes that are not included under the above classification scheme include the following:
False or pseudo hyponatremia is caused by a false lab measurement of sodium due to massive increases in blood triglyceride levels or extreme elevation of immunoglobulins as may occur in multiple myeloma.
Hyponatremia with elevated tonicity can occur with high blood sugar, causing a shift of excess free water into the serum.
Pathophysiology
The causes of and treatments for hyponatremia can only be understood with a grasp of the size of the body fluid compartments and subcompartments and their regulation; how, under normal circumstances, the body is able to maintain the sodium concentration within a narrow range (homeostasis of body fluid osmolality); how certain conditions can cause that feedback system to malfunction (pathophysiology); and the consequences of the malfunction of that system on the size and solute concentration of the fluid compartments.
Normal homeostasis
There is a hypothalamic-kidney feedback system which normally maintains the concentration of the serum sodium within a narrow range. This system operates as follows: in some of the cells of the hypothalamus, there are osmoreceptors which respond to an elevated serum sodium in body fluids by signalling the posterior pituitary gland to secrete antidiuretic hormone (ADH) (vasopressin). ADH then enters the bloodstream and signals the kidney to bring back sufficient solute-free water from the fluid in the kidney tubules to dilute the serum sodium back to normal, and this turns off the osmoreceptors in the hypothalamus. Also, thirst is stimulated. Normally, when mild hyponatremia begins to occur, that is, the serum sodium begins to fall below 135 mEq/L, there is no secretion of ADH, and the kidney stops returning water to the body from the kidney tubule. Also, no thirst is experienced. These two act in concert to raise the serum sodium to the normal range.
Hyponatremia
Hyponatremia occurs when 1) the hypothalamic-kidney feedback loop is overwhelmed by increased fluid intake, 2) the feedback loop malfunctions such that ADH is always "turned on", 3) the receptors in the kidney are always "open" despite there being no signal from ADH to be open, or 4) ADH is increased even though there is no normal stimulus (elevated serum sodium) for it to be increased.
Hyponatremia occurs in one of two ways: either the osmoreceptor-aquaporin feedback loop is overwhelmed, or it is interrupted. If it is interrupted, it is either related or not related to ADH. If the feedback system is overwhelmed, this is water intoxication with maximally dilute urine and is caused by 1) pathological water drinking (psychogenic polydipsia), 2) beer potomania, 3) overzealous intravenous solute free water infusion, or 4) infantile water intoxication. "Impairment of urine diluting ability related to ADH" occurs in nine situations: 1) arterial volume depletion 2) hemodynamically mediated, 3) congestive heart failure, 4) cirrhosis, 5) nephrosis, 6) spinal cord disease, 7) Addison's disease, 8) cerebral salt wasting, and 9) syndrome of inappropriate antidiuretic hormone secretion (SIADH).
If the feed-back system is normal, but an impairment of urine diluting ability unrelated to ADH occurs, this is 1) oliguric kidney failure, 2) tubular interstitial kidney disease, 3) diuretics, or 4) nephrogenic syndrome of antidiuresis.
Sodium is the primary positively charged ion outside of the cell and cannot cross from the interstitial space into the cell. This is because charged sodium ions attract around them up to 25 water molecules, thereby creating a large polar structure too large to pass through the cell membrane: "channels" or "pumps" are required.
Cell swelling also produces activation of volume-regulated anion channels which is related to the release of taurine and glutamate from astrocytes.
Diagnosis
The history, physical exam, and laboratory testing are required to determine the underlying cause of hyponatremia. A blood test demonstrating a serum sodium less than 135 mmol/L is diagnostic for hyponatremia. The history and physical exam are necessary to help determine if the person is hypovolemic, euvolemic, or hypervolemic, which has important implications in determining the underlying cause. An assessment is also made to determine if the person is experiencing symptoms from their hyponatremia. These include assessments of alertness, concentration, and orientation.
False hyponatremia
False hyponatremia, also known as spurious, pseudo, hypertonic, or artifactual hyponatremia, is when the lab tests read low sodium levels but there is no hypotonicity. In hypertonic hyponatremia, osmotically active molecules such as glucose (hyperglycemia or diabetes) or mannitol (hypertonic infusion) draw water into the plasma, diluting the sodium. In isotonic hyponatremia, a measurement error occurs due to a high blood triglyceride level (most common) or paraproteinemia. It occurs when using techniques that measure the amount of sodium in a specified volume of serum/plasma, or that dilute the sample before analysis.
True hyponatremia
True hyponatremia, also known as hypotonic hyponatremia, is the most common type. It is often simply referred to as "hyponatremia." Hypotonic hyponatremia is categorized in 3 ways based on the person's blood volume status. Each category represents a different underlying reason for the increase in ADH that led to the water retention and thence hyponatremia:
High volume hyponatremia, wherein there is decreased effective circulating volume (less blood flowing in the body) even though total body volume is increased (by the presence of edema or swelling, especially in the ankles). The decreased effective circulating volume stimulates the release of anti-diuretic hormone (ADH), which in turn leads to water retention. Hypervolemic hyponatremia is most commonly the result of congestive heart failure, liver failure, or kidney disease.
Normal volume hyponatremia, wherein the increase in ADH is secondary to either physiologic but excessive ADH release (as occurs with nausea or severe pain) or inappropriate and non-physiologic secretion of ADH, that is, syndrome of inappropriate antidiuretic hormone hypersecretion (SIADH). Often categorized under euvolemic is hyponatremia due to inadequate urine solute (not enough chemicals or electrolytes to produce urine) as occurs in beer potomania or "tea and toast" hyponatremia, hyponatremia due to hypothyroidism or central adrenal insufficiency, and those rare instances of hyponatremia that are truly secondary to excess water intake.
Low volume hyponatremia, wherein ADH secretion is stimulated by or associated with volume depletion (not enough water in the body) due to decreased effective circulating volume.
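The categorization described above can be summarized schematically as follows; the two inputs (measured serum osmolality and clinically assessed volume status) are simplifications, and the returned strings are just example causes quoted in this section.

```python
def classify_hyponatremia(serum_osmolality: str, volume_status: str) -> str:
    """Schematic of the categorization described above.
    serum_osmolality: 'high', 'normal', or 'low'; volume_status: 'hypovolemic',
    'euvolemic', or 'hypervolemic'. Returns example causes quoted in the text."""
    if serum_osmolality == "high":
        return "hypertonic (false) hyponatremia - e.g. hyperglycemia, mannitol"
    if serum_osmolality == "normal":
        return "isotonic (pseudo) hyponatremia - e.g. high triglycerides, paraproteinemia"
    # Hypotonic ('true') hyponatremia: sub-classify by volume status.
    return {
        "hypervolemic": "e.g. congestive heart failure, liver failure, kidney disease",
        "euvolemic": "e.g. SIADH, beer potomania, hypothyroidism, adrenal insufficiency",
        "hypovolemic": "e.g. vomiting, diarrhea, diuretics (volume depletion with ADH release)",
    }[volume_status]

print(classify_hyponatremia("low", "euvolemic"))
```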
Acute versus chronic
Chronic hyponatremia is when sodium levels drop gradually over several days or weeks and symptoms and complications are typically moderate. Chronic hyponatremia is often called asymptomatic hyponatremia in clinical settings because it is thought to have no symptoms; however, emerging data suggests that "asymptomatic" hyponatremia is not actually asymptomatic.
Acute hyponatremia is when sodium levels drop rapidly, resulting in potentially dangerous effects, such as rapid brain swelling, which can result in coma and death.
Treatment
The treatment of hyponatremia depends on the underlying cause. How quickly treatment is required depends on a person's symptoms. Fluids are typically the cornerstone of initial management. In those with severe disease an increase in sodium of about 5 mmol/L over one to four hours is recommended. A rapid rise in serum sodium is anticipated in certain groups when the cause of the hyponatremia is addressed thus warranting closer monitoring in order to avoid overly rapid correction of the blood sodium concentration. These groups include persons who have hypovolemic hyponatremia and receive intravenous fluids (thus correcting their hypovolemia), persons with adrenal insufficiency who receive hydrocortisone, persons in whom a medication causing increased ADH release has been stopped, and persons who have hyponatremia due to decreased salt and/or solute intake in their diet who are treated with a higher solute diet. If large volumes of dilute urine are seen, this can be a warning sign that overcorrection is imminent in these individuals.
Sodium deficit = (140 – serum sodium) × total body water
Total body water = kilograms of body weight × 0.6
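A worked example of these two formulas, for a hypothetical 70 kg patient with a serum sodium of 120 mmol/L, is sketched below; it is bookkeeping only, not treatment guidance, and the full calculated deficit is never replaced at once (see the correction rates and precautions later in this section).

```python
def total_body_water_l(weight_kg: float) -> float:
    """Total body water (liters) using the factor given above (0.6 x body weight)."""
    return 0.6 * weight_kg

def sodium_deficit_mmol(weight_kg: float, serum_na: float, target_na: float = 140.0) -> float:
    """Sodium deficit = (target - measured serum sodium) x total body water,
    using the formulas given above. Illustration only, not treatment guidance."""
    return (target_na - serum_na) * total_body_water_l(weight_kg)

# Hypothetical 70 kg patient with a serum sodium of 120 mmol/L.
deficit = sodium_deficit_mmol(70.0, 120.0)
print(f"Estimated sodium deficit: {deficit:.0f} mmol")
# The full deficit is never replaced at once; correction is deliberately gradual
# (see the rates and precautions discussed in this section).
```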
Fluids
Options include:
Mild and asymptomatic hyponatremia is treated with adequate solute intake (including salt and protein) and fluid restriction starting at 500 millilitres per day (mL/d) of water with adjustments based on serum sodium levels. Long-term fluid restriction of 1,200–1,800 mL/d may maintain the person in a symptom-free state.
Moderate and/or symptomatic hyponatremia is treated by raising the serum sodium level by 0.5 to 1 mmol per liter per hour for a total of 8 mmol per liter during the first day with the use of furosemide and replacing sodium and potassium losses with 0.9% saline.
Severe hyponatremia or severe symptoms (confusion, convulsions, or coma): consider hypertonic saline (3%) 1–2 mL/kg IV in 3–4 h. Hypertonic saline may lead to a rapid dilute diuresis and fall in the serum sodium. It should not be used in those with an expanded extracellular fluid volume.
Electrolyte abnormalities
In persons with hyponatremia due to low blood volume (hypovolemia) from diuretics with simultaneous low blood potassium levels, correction of the low potassium level can assist with correction of hyponatremia.
Medications
American and European guidelines come to different conclusions regarding the use of medications. In the United States they are recommended in those with SIADH, cirrhosis, or heart failure who fail limiting fluid intake. In Europe they are not generally recommended.
There is tentative evidence that vasopressin receptor antagonists (vaptans), such as conivaptan, may be slightly more effective than fluid restriction in those with high volume or normal volume hyponatremia. They should not be used in people with low volume. They may also be used in people with chronic hyponatremia due to SIADH that is insufficiently responsive to fluid restriction and/or sodium tablets.
Demeclocycline, while sometimes used for SIADH, has significant side effects including potential kidney problems and sun sensitivity. In many people it has no benefit while in others it can result in overcorrection and high blood sodium levels.
Daily use of urea by mouth, while not commonly used due to the taste, has tentative evidence in SIADH. However, it is not available in many areas of the world.
Precautions
Raising the serum sodium concentration too rapidly may cause osmotic demyelination syndrome. Rapid correction of sodium levels can also lead to central pontine myelinolysis (CPM). It is recommended not to raise the serum sodium by more than 10 mEq/L/day.
Epidemiology
Hyponatremia is the most commonly seen water–electrolyte imbalance. The disorder is more frequent in females, the elderly, and in people who are hospitalized. The number of cases of hyponatremia depends largely on the population. In hospital it affects about 15–20% of people; however, only 3–5% of people who are hospitalized have a sodium level less than 130 mmol/L. Hyponatremia has been reported in up to 30% of the elderly in nursing homes and is also present in approximately 30% of people who are depressed on selective serotonin reuptake inhibitors.
People who have hyponatremia who require hospitalisation have a longer length of stay (with associated increased costs) and also have a higher likelihood of requiring readmission. This is particularly the case in men and in the elderly.
References
Further reading
External links
Hyponatremia at the Mayo Clinic
Sodium at Lab Tests Online
ICD-10 code for Hyponatremia - Diagnosis Code
Electrolyte disturbances
Mineral deficiencies
Sodium
Wilderness medical emergencies | 0.764717 | 0.998733 | 0.763748 |
Carbohydrate metabolism | Carbohydrate metabolism is the whole of the biochemical processes responsible for the metabolic formation, breakdown, and interconversion of carbohydrates in living organisms.
Carbohydrates are central to many essential metabolic pathways. Plants synthesize carbohydrates from carbon dioxide and water through photosynthesis, allowing them to store energy absorbed from sunlight internally. When animals and fungi consume plants, they use cellular respiration to break down these stored carbohydrates to make energy available to cells. Both animals and plants temporarily store the released energy in the form of high-energy molecules, such as adenosine triphosphate (ATP), for use in various cellular processes.
Humans can consume a variety of carbohydrates; digestion breaks down complex carbohydrates into simple monomers (monosaccharides): glucose, fructose, mannose and galactose. After absorption in the gut, the monosaccharides are transported, through the portal vein, to the liver, where the non-glucose monosaccharides (fructose, galactose) are also transformed into glucose. Glucose (blood sugar) is distributed to cells in the tissues, where it is broken down via cellular respiration, or stored as glycogen. In cellular (aerobic) respiration, glucose and oxygen are metabolized to release energy, with carbon dioxide and water as end products.
Metabolic pathways
Glycolysis
Glycolysis is the process of breaking down a glucose molecule into two pyruvate molecules, while storing energy released during this process as adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide (NADH). Nearly all organisms that break down glucose utilize glycolysis. Glucose regulation and product use are the primary categories in which these pathways differ between organisms. In some tissues and organisms, glycolysis is the sole method of energy production. This pathway is common to both anaerobic and aerobic respiration.
Glycolysis consists of ten steps, split into two phases. During the first phase, it requires the breakdown of two ATP molecules. During the second phase, chemical energy from the intermediates is transferred into ATP and NADH. The breakdown of one molecule of glucose results in two molecules of pyruvate, which can be further oxidized to access more energy in later processes.
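The net energy yield implied by this two-phase structure can be written out explicitly; the 2-ATP investment is stated above, while the 4-ATP and 2-NADH payoff figures are standard textbook stoichiometry rather than something given in this section.

```python
# Standard glycolysis stoichiometry per glucose molecule (the 2-ATP investment is
# stated above; the 4-ATP / 2-NADH payoff is standard textbook stoichiometry).
ATP_INVESTED = 2      # phase 1: the two ATP-consuming phosphorylation steps
ATP_PRODUCED = 4      # phase 2: 2 ATP per triose phosphate, 2 triose phosphates per glucose
NADH_PRODUCED = 2

net_atp = ATP_PRODUCED - ATP_INVESTED
print(f"Net yield per glucose: {net_atp} ATP and {NADH_PRODUCED} NADH")  # 2 ATP, 2 NADH
```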
Glycolysis can be regulated at different steps of the process through feedback regulation. The step that is regulated the most is the third step. This regulation ensures that the body does not over-produce pyruvate molecules. It also allows glucose molecules to be diverted into storage as fatty acids. There are various enzymes used throughout glycolysis; these enzymes upregulate, downregulate, and feedback-regulate the process.
Gluconeogenesis
Gluconeogenesis (GNG) is a metabolic pathway that results in the generation of glucose from certain non-carbohydrate carbon substrates. It is a ubiquitous process, present in plants, animals, fungi, bacteria, and other microorganisms. In vertebrates, gluconeogenesis occurs mainly in the liver and, to a lesser extent, in the cortex of the kidneys. It is one of two primary mechanisms – the other being degradation of glycogen (glycogenolysis) – used by humans and many other animals to maintain blood sugar levels, avoiding low levels (hypoglycemia). In ruminants, because dietary carbohydrates tend to be metabolized by rumen organisms, gluconeogenesis occurs regardless of fasting, low-carbohydrate diets, exercise, etc. In many other animals, the process occurs during periods of fasting, starvation, low-carbohydrate diets, or intense exercise.
In humans, substrates for gluconeogenesis may come from any non-carbohydrate sources that can be converted to pyruvate or intermediates of glycolysis (see figure). For the breakdown of proteins, these substrates include glucogenic amino acids (although not ketogenic amino acids); from breakdown of lipids (such as triglycerides), they include glycerol, odd-chain fatty acids (although not even-chain fatty acids, see below); and from other parts of metabolism they include lactate from the Cori cycle. Under conditions of prolonged fasting, acetone derived from ketone bodies can also serve as a substrate, providing a pathway from fatty acids to glucose. Although most gluconeogenesis occurs in the liver, the relative contribution of gluconeogenesis by the kidney is increased in diabetes and prolonged fasting.
The gluconeogenesis pathway is highly endergonic until it is coupled to the hydrolysis of ATP or guanosine triphosphate (GTP), effectively making the process exergonic. For example, the pathway leading from pyruvate to glucose-6-phosphate requires 4 molecules of ATP and 2 molecules of GTP to proceed spontaneously. These ATPs are supplied from fatty acid catabolism via beta oxidation.
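For reference, the overall stoichiometry from pyruvate to glucose is commonly written as shown below; the exact bookkeeping of water and protons varies slightly between textbooks, so the equation should be read as a conventional summary rather than an exact balance for every cell type.

```latex
% Overall stoichiometry of gluconeogenesis from pyruvate
% (water/proton bookkeeping varies slightly between textbooks)
\[
2\,\text{Pyruvate} + 4\,\text{ATP} + 2\,\text{GTP} + 2\,\text{NADH} + 2\,\text{H}^{+} + 4\,\text{H}_{2}\text{O}
\;\rightarrow\;
\text{Glucose} + 4\,\text{ADP} + 2\,\text{GDP} + 6\,\text{P}_{i} + 2\,\text{NAD}^{+}
\]
```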
Glycogenolysis
Glycogenolysis refers to the breakdown of glycogen. In the liver, muscles, and the kidney, this process occurs to provide glucose when necessary. A single glucose molecule is cleaved from a branch of glycogen, and is transformed into glucose-1-phosphate during this process. This molecule can then be converted to glucose-6-phosphate, an intermediate in the glycolysis pathway.
Glucose-6-phosphate can then progress through glycolysis. Glycolysis only requires the input of one molecule of ATP when the glucose originates in glycogen. Alternatively, glucose-6-phosphate can be converted back into glucose in the liver and the kidneys, allowing it to raise blood glucose levels if necessary.
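The ATP saving arises because glycogen phosphorylase releases glucose units as glucose-1-phosphate using inorganic phosphate rather than ATP, so the hexokinase step is bypassed. A minimal accounting, following the usual textbook tally of substrate-level phosphorylation, is sketched below.

```latex
% Net ATP from glycolysis per glucose unit (substrate-level phosphorylation only)
\[
\underbrace{4}_{\text{ATP produced}} - \underbrace{1}_{\text{ATP invested (PFK-1)}} = 3
\quad \text{(glucose from glycogen)}
\qquad\text{vs.}\qquad
4 - 2 = 2 \quad \text{(free glucose)}
\]
```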
Glucagon stimulates glycogenolysis in the liver when blood glucose falls (hypoglycemia). The glycogen in the liver can function as a backup source of glucose between meals. Liver glycogen mainly serves the central nervous system. Adrenaline stimulates the breakdown of glycogen in skeletal muscle during exercise. In the muscles, glycogen ensures a rapidly accessible energy source for movement.
Glycogenesis
Glycogenesis refers to the process of synthesizing glycogen. In humans, glucose can be converted to glycogen via this process. Glycogen is a highly branched structure, consisting of the core protein glycogenin surrounded by branches of linked glucose units. The branching of glycogen increases its solubility and allows a larger number of glucose molecules to be accessible for breakdown at the same time. Glycogenesis occurs primarily in the liver, skeletal muscles, and kidney. The glycogenesis pathway consumes energy, like most synthetic pathways, because one ATP and one UTP are consumed for each glucose molecule introduced.
Pentose phosphate pathway
The pentose phosphate pathway is an alternative method of oxidizing glucose. It occurs in the liver, adipose tissue, adrenal cortex, testis, mammary glands, phagocytes, and red blood cells. It produces products that are used in other cell processes, while reducing NADP+ to NADPH. This pathway is regulated through changes in the activity of glucose-6-phosphate dehydrogenase.
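The oxidative phase of the pathway is commonly summarized by the overall reaction below, showing the two NADPH generated per molecule of glucose-6-phosphate; this is the standard textbook summary.

```latex
% Oxidative phase of the pentose phosphate pathway (overall reaction)
\[
\text{Glucose-6-phosphate} + 2\,\text{NADP}^{+} + \text{H}_{2}\text{O}
\;\rightarrow\;
\text{Ribulose-5-phosphate} + 2\,\text{NADPH} + 2\,\text{H}^{+} + \text{CO}_{2}
\]
```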
Fructose metabolism
Fructose must undergo certain extra steps in order to enter the glycolysis pathway. Enzymes located in certain tissues can add a phosphate group to fructose. This phosphorylation creates fructose-6-phosphate, an intermediate in the glycolysis pathway that can be broken down directly in those tissues. This pathway occurs in the muscles, adipose tissue, and kidney. In the liver, enzymes produce fructose-1-phosphate, which enters the glycolysis pathway and is later cleaved into glyceraldehyde and dihydroxyacetone phosphate.
Galactose metabolism
Lactose, or milk sugar, consists of one molecule of glucose and one molecule of galactose. After separation from glucose, galactose travels to the liver for conversion to glucose. Galactokinase uses one molecule of ATP to phosphorylate galactose. The phosphorylated galactose is then converted to glucose-1-phosphate, and then eventually glucose-6-phosphate, which can be broken down in glycolysis.
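These conversions make up what is known as the Leloir pathway; a conventional summary of its reaction sequence is shown below for reference (the enzyme names follow standard usage and are included for illustration).

```latex
% Leloir pathway of galactose metabolism (standard enzymology, shown for reference)
\begin{align*}
\text{Galactose} + \text{ATP} &\rightarrow \text{Galactose-1-phosphate} + \text{ADP} && \text{(galactokinase)}\\
\text{Galactose-1-phosphate} + \text{UDP-glucose} &\rightarrow \text{Glucose-1-phosphate} + \text{UDP-galactose} && \text{(uridylyltransferase)}\\
\text{UDP-galactose} &\rightarrow \text{UDP-glucose} && \text{(UDP-galactose 4'-epimerase)}\\
\text{Glucose-1-phosphate} &\rightarrow \text{Glucose-6-phosphate} && \text{(phosphoglucomutase)}
\end{align*}
```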
Energy production
Many steps of carbohydrate metabolism allow the cells to access energy and store it more transiently in ATP. The cofactors NAD+ and FAD are sometimes reduced during this process to form NADH and FADH2, which drive the creation of ATP in other processes. A molecule of NADH can produce 1.5–2.5 molecules of ATP, whereas a molecule of FADH2 yields 1.5 molecules of ATP.
The complete breakdown of one molecule of glucose by aerobic respiration (i.e. involving glycolysis, the citric acid cycle and oxidative phosphorylation, the last providing the most energy) typically yields about 30–32 molecules of ATP. Oxidation of one gram of carbohydrate yields approximately 4 kcal of energy.
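The 30–32 figure can be reproduced with a simple tally using the approximate conversion factors quoted above (about 2.5 ATP per NADH and 1.5 per FADH2). The sketch below is illustrative arithmetic only; the lower bound corresponds to cytosolic NADH entering the mitochondrion via the less efficient glycerol phosphate shuttle, and the constants are rounded textbook values, not measurements.

```python
# Illustrative tally of ATP yield from one molecule of glucose, using rounded
# textbook conversion factors (approximations, not measured values).
ATP_PER_NADH_MITO = 2.5   # mitochondrial NADH, oxidative phosphorylation
ATP_PER_NADH_CYTO = 1.5   # cytosolic NADH entering via the glycerol phosphate shuttle
ATP_PER_FADH2 = 1.5

substrate_level_atp = 2 + 2        # glycolysis (2) + citric acid cycle GTP/ATP (2)
cytosolic_nadh = 2                 # from glycolysis
mitochondrial_nadh = 2 + 6         # pyruvate oxidation (2) + citric acid cycle (6)
fadh2 = 2                          # from the citric acid cycle

high = (substrate_level_atp
        + (cytosolic_nadh + mitochondrial_nadh) * ATP_PER_NADH_MITO
        + fadh2 * ATP_PER_FADH2)
low = (substrate_level_atp
       + cytosolic_nadh * ATP_PER_NADH_CYTO
       + mitochondrial_nadh * ATP_PER_NADH_MITO
       + fadh2 * ATP_PER_FADH2)

print(f"Approximate ATP per glucose: {low:.0f}-{high:.0f}")  # prints 30-32
```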
Hormonal regulation
Glucoregulation is the maintenance of steady levels of glucose in the body.
Hormones released from the pancreas regulate the overall metabolism of glucose. Insulin and glucagon are the primary hormones involved in maintaining a steady level of glucose in the blood, and the release of each is controlled by the amount of nutrients currently available. The amount of insulin released in the blood and the sensitivity of the cells to the insulin both determine the amount of glucose that cells break down. Increased levels of glucagon activate the enzymes that catalyze glycogenolysis and inhibit the enzymes that catalyze glycogenesis. Conversely, glycogenesis is enhanced and glycogenolysis inhibited when there are high levels of insulin in the blood.
The level of circulating glucose (known informally as "blood sugar"), as well as the detection of nutrients in the duodenum, is the most important factor determining the amount of glucagon or insulin produced. The release of glucagon is precipitated by low levels of blood glucose, whereas high levels of blood glucose stimulate cells to produce insulin. Because the level of circulating glucose is largely determined by the intake of dietary carbohydrates, diet controls major aspects of metabolism via insulin. In humans, insulin is made by beta cells in the pancreas, fat is stored in adipose tissue cells, and glycogen is both stored and released as needed by liver cells. Regardless of insulin levels, muscle cells do not release glucose from their internal glycogen stores into the blood.
Carbohydrates as storage
Carbohydrates are typically stored as long polymers of glucose molecules with glycosidic bonds for structural support (e.g. chitin, cellulose) or for energy storage (e.g. glycogen, starch). However, the strong affinity of most carbohydrates for water makes storage of large quantities of carbohydrates inefficient due to the large molecular weight of the solvated water-carbohydrate complex. In most organisms, excess carbohydrates are regularly catabolised to form acetyl-CoA, which is a feedstock for the fatty acid synthesis pathway; fatty acids, triglycerides, and other lipids are commonly used for long-term energy storage. The hydrophobic character of lipids makes them a much more compact form of energy storage than hydrophilic carbohydrates. Gluconeogenesis permits glucose to be synthesized from various sources, including lipids.
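A rough back-of-the-envelope comparison illustrates why lipids are the more compact store. The sketch below uses approximate energy densities (about 4 kcal/g for carbohydrate and 9 kcal/g for fat) and the commonly cited approximation that each gram of glycogen is stored with roughly 3 g of water; all values are illustrative assumptions rather than precise constants.

```python
# Rough comparison of the energy density of glycogen versus fat storage
# (all figures are illustrative approximations).
KCAL_PER_G_CARB = 4.0          # approximate energy yield of carbohydrate
KCAL_PER_G_FAT = 9.0           # approximate energy yield of fat
WATER_PER_G_GLYCOGEN = 3.0     # commonly cited: ~3 g of water stored per g of glycogen

hydrated_glycogen = KCAL_PER_G_CARB / (1 + WATER_PER_G_GLYCOGEN)  # kcal per g of wet mass
fat = KCAL_PER_G_FAT                                              # fat is stored nearly anhydrous

print(f"Hydrated glycogen: ~{hydrated_glycogen:.1f} kcal per gram of stored mass")
print(f"Fat:               ~{fat:.1f} kcal per gram of stored mass")
print(f"Fat is roughly {fat / hydrated_glycogen:.0f}x more compact as an energy store")
```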
In some animals (such as termites) and some microorganisms (such as protists and bacteria), cellulose can be disassembled during digestion and absorbed as glucose.
Human diseases
Diabetes mellitus
Lactose intolerance
Fructose malabsorption
Galactosemia
Glycogen storage disease
See also
Inborn errors of carbohydrate metabolism
Hitting the wall (glycogen depletion)
Second wind (increased ATP from fatty acids after glycogen depletion)
References
External links
BBC - GCSE Bitesize - Biology | Humans | Glucoregulation
Sugar4Kids
Carbohydrate metabolism
Cartilage
Cartilage is a resilient and smooth type of connective tissue. Semi-transparent and non-porous, it is usually covered by a tough and fibrous membrane called perichondrium. In tetrapods, it covers and protects the ends of long bones at the joints as articular cartilage, and is a structural component of many body parts including the rib cage, the neck and the bronchial tubes, and the intervertebral discs. In other taxa, such as chondrichthyans and cyclostomes, it constitutes a much greater proportion of the skeleton. It is not as hard and rigid as bone, but it is much stiffer and much less flexible than muscle. The matrix of cartilage is made up of glycosaminoglycans, proteoglycans, collagen fibers and, sometimes, elastin. It usually grows quicker than bone.
Because of its rigidity, cartilage often serves the purpose of holding tubes open in the body. Examples include the rings of the trachea, such as the cricoid cartilage and carina.
Cartilage is composed of specialized cells called chondrocytes that produce a large amount of collagenous extracellular matrix and abundant ground substance that is rich in proteoglycan and elastin fibers. Cartilage is classified into three types: elastic cartilage, hyaline cartilage, and fibrocartilage, which differ in their relative amounts of collagen and proteoglycan.
As cartilage does not contain blood vessels or nerves, it is insensitive. However, some fibrocartilage such as the meniscus of the knee has partial blood supply. Nutrition is supplied to the chondrocytes by diffusion. The compression of the articular cartilage or flexion of the elastic cartilage generates fluid flow, which assists the diffusion of nutrients to the chondrocytes. Compared to other connective tissues, cartilage has a very slow turnover of its extracellular matrix and is documented to repair at only a very slow rate relative to other tissues.
Structure
Development
In embryogenesis, the skeletal system is derived from the mesoderm germ layer. Chondrification (also known as chondrogenesis) is the process by which cartilage is formed from condensed mesenchyme tissue, which differentiates into chondroblasts and begins secreting the molecules (aggrecan and collagen type II) that form the extracellular matrix. In all vertebrates, cartilage is the main skeletal tissue in early ontogenetic stages; in osteichthyans, many cartilaginous elements subsequently ossify through endochondral and perichondral ossification.
Following the initial chondrification that occurs during embryogenesis, cartilage growth consists mostly of the maturing of immature cartilage to a more mature state. The division of cells within cartilage occurs very slowly, and thus growth in cartilage is usually not based on an increase in the size or mass of the cartilage itself. Non-coding RNAs (e.g. miRNAs and long non-coding RNAs), acting as important epigenetic modulators, have been identified as factors that can affect chondrogenesis. This also accounts for the contribution of non-coding RNAs to various cartilage-related pathological conditions, such as arthritis.
Articular cartilage
The function of articular cartilage depends on the molecular composition of its extracellular matrix (ECM). The ECM consists mainly of proteoglycan and collagens. The main proteoglycan in cartilage is aggrecan, which, as its name suggests, forms large aggregates with hyaluronan and with itself. These aggregates are negatively charged and hold water in the tissue. The collagen, mostly collagen type II, constrains the proteoglycans. The ECM responds to tensile and compressive forces that are experienced by the cartilage. Cartilage growth thus refers to the matrix deposition, but can also refer to both the growth and remodeling of the extracellular matrix. Due to the great stress on the patellofemoral joint during resisted knee extension, the articular cartilage of the patella is among the thickest in the human body. The ECM of articular cartilage is classified into three regions: the pericellular matrix, the territorial matrix, and the interterritorial matrix.
Function
Mechanical properties
The mechanical properties of articular cartilage in load-bearing joints such as the knee and hip have been studied extensively at macro, micro, and nano-scales. These mechanical properties include the response of cartilage in frictional, compressive, shear and tensile loading. Cartilage is resilient and displays viscoelastic properties.
Since cartilage has interstitial fluid that is free-moving, it makes the material difficult to test. One of the tests commonly used to overcome this obstacle is a confined compression test, which can be used in either a 'creep' or 'relaxation' mode. In creep mode, the tissue displacement is measured as a function of time under a constant load, and in relaxation mode, the force is measured as a function of time under constant displacement. During creep, the deformation of the tissue has two main regions. In the first region, the displacement is rapid due to the initial flow of fluid out of the cartilage, and in the second region, the displacement slows down to an eventual constant equilibrium value. Under the commonly used loading conditions, the equilibrium displacement can take hours to reach.
In both the creep mode and the relaxation mode of a confined compression test, a disc of cartilage is placed in an impervious, fluid-filled container and covered with a porous plate that restricts the flow of interstitial fluid to the vertical direction. This test can be used to measure the aggregate modulus of cartilage, which is typically in the range of 0.5 to 0.9 MPa for articular cartilage, and the Young’s Modulus, which is typically 0.45 to 0.80 MPa. The aggregate modulus is “a measure of the stiffness of the tissue at equilibrium when all fluid flow has ceased”, and Young’s modulus is a measure of how much a material strains (changes length) under a given stress.
The confined compression test can also be used to measure permeability, which is defined as the resistance to fluid flow through a material. Higher permeability allows for fluid to flow out of a material’s matrix more rapidly, while lower permeability leads to an initial rapid fluid flow and a slow decrease to equilibrium. Typically, the permeability of articular cartilage is in the range of 10^-15 to 10^-16 m^4/Ns. However, permeability is sensitive to loading conditions and testing location. For example, permeability varies throughout articular cartilage and tends to be highest near the joint surface and lowest near the bone (or “deep zone”). Permeability also decreases under increased loading of the tissue.
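These figures also explain why confined compression tests take so long to equilibrate. The sketch below applies the characteristic consolidation time from biphasic cartilage theory, roughly h²/(HA·k), with an assumed sample thickness and mid-range values for the aggregate modulus and permeability quoted above; it is an order-of-magnitude estimate, not a test protocol.

```python
# Order-of-magnitude estimate of the time for a confined compression test to
# reach equilibrium, using the characteristic consolidation time of biphasic
# theory: tau ~ h^2 / (H_A * k). All input values are illustrative.
h = 2e-3      # sample thickness in metres (assumed ~2 mm)
H_A = 0.7e6   # aggregate modulus in Pa (middle of the 0.5-0.9 MPa range above)
k = 1e-15     # permeability in m^4/(N*s), from the range quoted above

tau = h**2 / (H_A * k)        # characteristic time in seconds
print(f"Characteristic consolidation time: ~{tau / 3600:.1f} hours")  # ~1.6 hours
```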
Indentation testing is an additional type of test commonly used to characterize cartilage. Indentation testing involves using an indentor (usually <0.8 mm) to measure the displacement of the tissue under constant load. Similar to confined compression testing, it may take hours to reach equilibrium displacement. This method of testing can be used to measure the aggregate modulus, Poisson's ratio, and permeability of the tissue. Initially, there was a misconception that due to its predominantly water-based composition, cartilage had a Poisson's ratio of 0.5 and should be modeled as an incompressible material. However, subsequent research has disproven this belief. The Poisson’s ratio of articular cartilage has been measured to be around 0.4 or lower in humans and ranges from 0.46–0.5 in bovine subjects.
The mechanical properties of articular cartilage are largely anisotropic, test-dependent, and can be age-dependent. These properties also depend on collagen-proteoglycan interactions and therefore can increase/decrease depending on the total content of water, collagen, glycoproteins, etc. For example, increased glucosaminoglycan content leads to an increase in compressive stiffness, and increased water content leads to a lower aggregate modulus.
Tendon-bone interface
In addition to its role in load-bearing joints, cartilage serves a crucial function as a gradient material between softer tissues and bone. Mechanical gradients are crucial for the body's function, and for complex artificial structures including joint implants. Interfaces with mismatched material properties lead to areas of high stress concentration which, over the millions of loading cycles experienced by human joints over a lifetime, would eventually lead to failure. For example, the elastic modulus of human bone is roughly 20 GPa while the softer regions of cartilage can be about 0.5 to 0.9 MPa. When there is a smooth gradient of material properties, however, stresses are distributed evenly across the interface, which puts less wear on each individual part.
The body solves this problem with stiffer, higher-modulus layers near bone, with high concentrations of mineral deposits such as hydroxyapatite. Collagen fibers (which provide mechanical stiffness in cartilage) in this region are anchored directly to bone, reducing the possible deformation. Moving closer to soft tissue, into the region known as the tidemark, the density of chondrocytes increases and collagen fibers are rearranged to optimize for stress dissipation and low friction. The outermost layer near the articular surface is known as the superficial zone, which primarily serves as a lubrication region. Here cartilage is characterized by a dense extracellular matrix and is rich in proteoglycans (which expel and reabsorb water to soften impacts) and thin collagen fibers oriented parallel to the joint surface, which have excellent shear-resistant properties.
Osteoarthritis and natural aging both have negative effects on cartilage as a whole as well as the proper function of the materials gradient within. The earliest changes are often in the superficial zone, the softest and most lubricating part of the tissue. Degradation of this layer can put additional stresses on deeper layers which are not designed to support the same deformations. Another common effect of aging is increased crosslinking of collagen fibers. This leads to stiffer cartilage as a whole, which again can lead to early failure as stiffer tissue is more susceptible to fatigue based failure. Aging in calcified regions also generally leads to a larger number of mineral deposits, which has a similarly undesired stiffening effect. Osteoarthritis has more extreme effects and can entirely wear down cartilage, causing direct bone-to-bone contact.
Frictional properties
Lubricin, a glycoprotein abundant in cartilage and synovial fluid, plays a major role in bio-lubrication and wear protection of cartilage.
Repair
Cartilage has limited repair capabilities: because chondrocytes are bound in lacunae, they cannot migrate to damaged areas. Therefore, cartilage damage is difficult to heal. Also, because hyaline cartilage does not have a blood supply, the deposition of new matrix is slow. In recent years, surgeons and scientists have elaborated a series of cartilage repair procedures that help to postpone the need for joint replacement. A tear of the meniscus of the knee cartilage can often be surgically trimmed to reduce problems. Complete healing of cartilage after injury or repair procedures is hindered by cartilage-specific inflammation caused by the involvement of M1/M2 macrophages, mast cells, and their intercellular interactions.
Biological engineering techniques are being developed to generate new cartilage, using a cellular "scaffolding" material and cultured cells to grow artificial cartilage. Extensive research has been conducted on freeze-thawed PVA hydrogels as a base material for this purpose. These gels have shown great promise in terms of biocompatibility, wear resistance, shock absorption, friction coefficient, flexibility, and lubrication, and are thus considered superior to polyethylene-based cartilage substitutes. A two-year implantation of PVA hydrogels as artificial menisci in rabbits showed that the gels remained intact without degradation, fracture, or loss of properties.
Clinical significance
Disease
Several diseases can affect cartilage. Chondrodystrophies are a group of diseases, characterized by the disturbance of growth and subsequent ossification of cartilage. Some common diseases that affect the cartilage are listed below.
Osteoarthritis: Osteoarthritis is a disease of the whole joint; however, one of the most affected tissues is the articular cartilage. The cartilage covering bones (articular cartilage, a subset of hyaline cartilage) is thinned, eventually wearing away completely, resulting in "bone against bone" contact within the joint, leading to reduced motion and pain. Osteoarthritis affects the joints exposed to high stress and is therefore considered the result of "wear and tear" rather than a true disease. It is treated by arthroplasty, the replacement of the joint by a synthetic joint, often made of a cobalt-chromium alloy and ultra-high-molecular-weight polyethylene. Chondroitin sulfate or glucosamine sulfate supplements have been claimed to reduce the symptoms of osteoarthritis, but there is little good evidence to support this claim. In osteoarthritis, increased expression of inflammatory cytokines and chemokines causes aberrant changes in differentiated chondrocyte function, which leads to an excess of chondrocyte catabolic activity mediated by factors including matrix metalloproteinases and aggrecanases.
Traumatic rupture or detachment: The cartilage in the knee is frequently damaged but can be partially repaired through knee cartilage replacement therapy. Often when athletes talk of damaged "cartilage" in their knee, they are referring to a damaged meniscus (a fibrocartilage structure) and not the articular cartilage.
Achondroplasia: Reduced proliferation of chondrocytes in the epiphyseal plate of long bones during infancy and childhood, resulting in dwarfism.
Costochondritis: Inflammation of cartilage in the ribs, causing chest pain.
Spinal disc herniation: Asymmetrical compression of an intervertebral disc ruptures the sac-like disc, causing a herniation of its soft content. The hernia often compresses the adjacent nerves and causes back pain.
Relapsing polychondritis: a destruction, probably autoimmune, of cartilage, especially of the nose and ears, causing disfiguration. Death occurs by asphyxiation as the larynx loses its rigidity and collapses.
Tumors made up of cartilage tissue, either benign or malignant, can occur. They usually appear in bone, rarely in pre-existing cartilage. The benign tumors are called chondroma, the malignant ones chondrosarcoma. Tumors arising from other tissues may also produce a cartilage-like matrix, the best-known being pleomorphic adenoma of the salivary glands.
The matrix of cartilage acts as a barrier, preventing the entry of lymphocytes or diffusion of immunoglobulins. This property allows for the transplantation of cartilage from one individual to another without fear of tissue rejection.
Imaging
Cartilage does not absorb X-rays under normal in vivo conditions, but a dye can be injected into the synovial membrane that will cause the X-rays to be absorbed by the dye. The resulting void on the radiographic film between the bone and meniscus represents the cartilage. For in vitro scans, the outer soft tissue is most likely removed, so the boundary between cartilage and air provides enough contrast to reveal the presence of cartilage, due to the refraction of the X-rays.
Other animals
Cartilaginous fish
Cartilaginous fish (Chondrichthyes), which include sharks, rays, and chimaeras, have a skeleton composed entirely of cartilage.
Invertebrate cartilage
Cartilage tissue can also be found among some arthropods such as horseshoe crabs, some mollusks such as marine snails and cephalopods, and some annelids like sabellid polychaetes.
Arthropods
The most studied cartilage in arthropods is the branchial cartilage of Limulus polyphemus. It is a vesicular cell-rich cartilage, owing to its large, spherical and vacuolated chondrocytes, with no homologies in other arthropods. Another type of cartilage found in L. polyphemus is the endosternite cartilage, a fibrous-hyaline cartilage with chondrocytes of typical morphology in a fibrous component that is much more fibrous than vertebrate hyaline cartilage, with mucopolysaccharides immunoreactive against chondroitin sulfate antibodies. There are homologous tissues to the endosternite cartilage in other arthropods. The embryos of Limulus polyphemus express ColA and hyaluronan in the gill cartilage and the endosternite, which indicates that these tissues are fibrillar-collagen-based cartilage. The endosternite cartilage forms close to Hh-expressing ventral nerve cords and expresses ColA and SoxE, a Sox9 analog. This is also seen in gill cartilage tissue.
Mollusks
In cephalopods, the models used for the study of cartilage are Octopus vulgaris and Sepia officinalis. Cephalopod cranial cartilage is the invertebrate cartilage that shows the most resemblance to vertebrate hyaline cartilage. Growth is thought to take place through the movement of cells from the periphery to the center. The chondrocytes present different morphologies related to their position in the tissue.
The embryos of S. officinalis express ColAa, ColAb, and hyaluronan in the cranial cartilages and other regions of chondrogenesis. This implies that the cartilage is fibrillar-collagen-based. The S. officinalis embryo expresses hh, whose presence causes ColAa and ColAb expression and is also able to keep proliferating cells undifferentiated. This species has been observed to express SoxD and SoxE, analogs of the vertebrate Sox5/6 and Sox9, in the developing cartilage. The cartilage growth pattern is the same as in vertebrate cartilage.
In gastropods, the interest lies in the odontophore, a cartilaginous structure that supports the radula. The most studied species regarding this particular tissue is Busycotypus canaliculatus. The odontophore is a vesicular cell-rich cartilage, consisting of vacuolated cells containing myoglobin, surrounded by a small amount of extracellular matrix containing collagen. The odontophore contains muscle cells along with the chondrocytes in the case of Lymnaea and other mollusks that graze on vegetation.
Sabellid polychaetes
The sabellid polychaetes, or feather duster worms, have cartilage tissue with cellular and matrix specialization supporting their tentacles. They present two distinct extracellular matrix regions: an acellular fibrous region with a high collagen content, called the cartilage-like matrix, and a highly cellularized core lacking collagen, called the osteoid-like matrix. The cartilage-like matrix surrounds the osteoid-like matrix. The amount of the acellular fibrous region is variable. The model organisms used in the study of cartilage in sabellid polychaetes are Potamilla species and Myxicola infundibulum.
Plants and fungi
Vascular plants, particularly seeds, and the stems of some mushrooms, are sometimes called "cartilaginous", although they contain no cartilage.
References
Further reading
External links
Cartilage.org, International Cartilage Regeneration & Joint Preservation Society
KUMC.edu , Cartilage tutorial, University of Kansas Medical Center
Bartleby.com, text from Gray's anatomy
MadSci.org, I've heard 'Ears and nose do not ever stop growing.' Is this false?
CartilageHealth.com, Information on Articular Cartilage Injury Prevention, Repair and Rehabilitation
About.com , Osteoarthritis
Cartilage types
Different cartilages on TheFreeDictionary
Cartilage photomicrographs
Skeletal system
Connective tissue
Biopharmaceutical
A biopharmaceutical, also known as a biological medical product, or biologic, is any pharmaceutical drug product manufactured in, extracted from, or semisynthesized from biological sources. Different from totally synthesized pharmaceuticals, they include vaccines, whole blood, blood components, allergenics, somatic cells, gene therapies, tissues, recombinant therapeutic protein, and living medicines used in cell therapy. Biologics can be composed of sugars, proteins, nucleic acids, or complex combinations of these substances, or may be living cells or tissues. They (or their precursors or components) are isolated from living sources: human, animal, plant, fungal, or microbial. They can be used in both human and animal medicine.
Terminology surrounding biopharmaceuticals varies between groups and entities, with different terms referring to different subsets of therapeutics within the general biopharmaceutical category. Some regulatory agencies use the terms biological medicinal products or therapeutic biological product to refer specifically to engineered macromolecular products like protein- and nucleic acid-based drugs, distinguishing them from products like blood, blood components, or vaccines, which are usually extracted directly from a biological source. Biopharmaceutics is pharmaceutics that works with biopharmaceuticals. Biopharmacology is the branch of pharmacology that studies biopharmaceuticals. Specialty drugs, a recent classification of pharmaceuticals, are high-cost drugs that are often biologics. The European Medicines Agency uses the term advanced therapy medicinal products (ATMPs) for medicines for human use that are "based on genes, cells, or tissue engineering", including gene therapy medicines, somatic-cell therapy medicines, tissue-engineered medicines, and combinations thereof. Within EMA contexts, the term advanced therapies refers specifically to ATMPs, although that term is rather nonspecific outside those contexts.
Gene-based and cellular biologics, for example, often are at the forefront of biomedicine and biomedical research, and may be used to treat a variety of medical conditions for which no other treatments are available.
Building on the market approvals and sales of recombinant virus-based biopharmaceuticals for veterinary and human medicine, the use of engineered plant viruses has been proposed to enhance crop performance and promote sustainable production.
In some jurisdictions, biologics are regulated via different pathways from other small molecule drugs and medical devices.
Major classes
Extracted from living systems
Some of the oldest forms of biologics are extracted from the bodies of animals, and especially from other humans. Important biologics include:
Whole blood and other blood components
Organ transplantation and tissue transplants
Stem-cell therapy
Antibodies for passive immunity (e.g., to treat a virus infection)
Human reproductive cells
Human breast milk
Fecal microbiota
Some biologics that were previously extracted from animals, such as insulin, are now more commonly produced by recombinant DNA.
Produced by recombinant DNA
Biologics can refer to a wide range of biological products in medicine. However, in most cases, the term is used more restrictively for a class of therapeutics (either approved or in development) that are produced using biological processes involving recombinant DNA technology. These medications are usually one of three types:
Substances that are (nearly) identical to the body's key signaling proteins. Examples are the blood-production-stimulating protein erythropoietin, the growth-stimulating hormone named "growth hormone", and biosynthetic human insulin and its analogues.
Monoclonal antibodies. These are similar to the antibodies that the human immune system uses to fight off bacteria and viruses, but they are "custom-designed" (using hybridoma technology or other methods) and can therefore be made specifically to counteract or block any given substance in the body, or to target any specific cell type; examples of such monoclonal antibodies for use in various diseases are given in the table below.
Receptor constructs (fusion proteins), usually based on a naturally occurring receptor linked to the immunoglobulin frame. In this case, the receptor provides the construct with detailed specificity, whereas the immunoglobulin structure imparts stability and other useful features in terms of pharmacology. Some examples are listed in the table below.
Biologics as a class of medications in this narrower sense have had a profound impact on many medical fields, primarily rheumatology and oncology, but also cardiology, dermatology, gastroenterology, neurology, and others. In most of these disciplines, biologics have added major therapeutic options for treating many diseases, including some for which no effective therapies were available, and others where previously existing therapies were inadequate. However, the advent of biologic therapeutics has also raised complex regulatory issues (see below), and significant pharmacoeconomic concerns because the cost for biologic therapies has been dramatically higher than for conventional (pharmacological) medications. This factor has been particularly relevant since many biological medications are used to treat chronic diseases, such as rheumatoid arthritis or inflammatory bowel disease, or for the treatment of otherwise untreatable cancer during the remainder of life. The cost of treatment with a typical monoclonal antibody therapy for relatively common indications is generally in the range of €7,000–14,000 per patient per year.
Older patients who receive biologic therapy for diseases such as rheumatoid arthritis, psoriatic arthritis, or ankylosing spondylitis are at increased risk for life-threatening infection, adverse cardiovascular events, and malignancy.
The first such substance approved for therapeutic use was biosynthetic "human" insulin made via recombinant DNA. Sometimes referred to as rHI and sold under the trade name Humulin, it was developed by Genentech but licensed to Eli Lilly and Company, which manufactured and marketed it starting in 1982.
Major kinds of biopharmaceuticals include:
Blood factors (Factor VIII and Factor IX)
Thrombolytic agents (tissue plasminogen activator)
Hormones (insulin, glucagon, growth hormone, gonadotrophins)
Haematopoietic growth factors (Erythropoietin, colony-stimulating factors)
Interferons (Interferons-α, -β, -γ)
Interleukin-based products (Interleukin-2)
Vaccines (Hepatitis B surface antigen)
Monoclonal antibodies (Various)
Additional products (tumour necrosis factor, therapeutic enzymes)
Research and development investment in new medicines by the biopharmaceutical industry stood at $65.2 billion in 2008. A few examples of biologics made with recombinant DNA technology include:
Vaccines
Many vaccines are grown in tissue cultures.
Gene therapy
Viral gene therapy involves artificially manipulating a virus to include a desirable piece of genetic material.
Viral gene therapies using engineered plant viruses have been proposed to enhance crop performance and promote sustainable production.
Biosimilars
With the expiration of many patents for blockbuster biologics between 2012 and 2019, the interest in biosimilar production, i.e., follow-on biologics, has increased. Compared to small molecules that consist of chemically identical active ingredients, biologics are vastly more complex and consist of a multitude of subspecies. Due to their heterogeneity and the high process sensitivity, originators and follow-on biosimilars will exhibit variability in specific variants over time. The safety and clinical performance of both originator and biosimilar biopharmaceuticals must remain equivalent throughout their lifecycle. Process variations are monitored by modern analytical tools (e.g., liquid chromatography, immunoassays, mass spectrometry, etc.) and describe a unique design space for each biologic.
Biosimilars require a different regulatory framework compared to small-molecule generics. Legislation in the 21st century has addressed this by recognizing an intermediate ground of testing for biosimilars. The filing pathway requires more testing than for small-molecule generics, but less testing than for registering completely new therapeutics.
In 2003, the European Medicines Agency introduced an adapted pathway for biosimilars, termed similar biological medicinal products. This pathway is based on a thorough demonstration of comparability of the product to an existing approved product. Within the United States, the Patient Protection and Affordable Care Act of 2010 created an abbreviated approval pathway for biological products shown to be biosimilar to, or interchangeable with, an FDA-licensed reference biological product. Researchers are optimistic that the introduction of biosimilars will reduce medical expenses to patients and the healthcare system.
Commercialization
When a new biopharmaceutical is developed, the company will typically apply for a patent, which grants exclusive manufacturing rights. This is the primary means by which the developer can recover the investment cost of developing the biopharmaceutical. The patent laws in the United States and Europe differ somewhat in their requirements for a patent, with the European requirements perceived as more difficult to satisfy. The total number of patents granted for biopharmaceuticals has risen significantly since the 1970s. In 1978, 30 such patents were granted. This had climbed to 15,600 in 1995, and by 2001 there were 34,527 patent applications. In 2012 the US had the highest IP (intellectual property) generation within the biopharmaceutical industry, generating 37 percent of the total number of granted patents worldwide; however, there is still a large margin for growth and innovation within the industry. Revisions to the current IP system to ensure greater reliability for R&D (research and development) investments are a prominent topic of debate in the US as well.
Blood products and other human-derived biologics such as breast milk have highly regulated or very hard-to-access markets; therefore, customers generally face a supply shortage for these products. Institutions housing these biologics, designated as 'banks', often cannot distribute their product to customers effectively. Conversely, banks for reproductive cells are much more widespread and available due to the ease with which spermatozoa and egg cells can be used for fertility treatment.
Large-scale production
Biopharmaceuticals may be produced from microbial cells (e.g., recombinant E. coli or yeast cultures), mammalian cell lines (see Cell culture) and plant cell cultures (see Plant tissue culture) and moss plants in bioreactors of various configurations, including photo-bioreactors. Important issues of concern are cost of production (low-volume, high-purity products are desirable) and microbial contamination (by bacteria, viruses, mycoplasma). Alternative platforms of production which are being tested include whole plants (plant-made pharmaceuticals).
Transgenics
A potentially controversial method of producing biopharmaceuticals involves transgenic organisms, particularly plants and animals that have been genetically modified to produce drugs. This production is a significant risk for its investor due to production failure or scrutiny from regulatory bodies based on perceived risks and ethical issues. Biopharmaceutical crops also represent a risk of cross-contamination with non-engineered crops, or crops engineered for non-medical purposes.
One potential approach to this technology is the creation of a transgenic mammal that can produce the biopharmaceutical in its milk, blood, or urine. Once an animal is produced, typically using the pronuclear microinjection method, it becomes efficacious to use cloning technology to create additional offspring that carry the favorable modified genome. The first such drug manufactured from the milk of a genetically modified goat was ATryn, but marketing permission was blocked by the European Medicines Agency in February 2006. This decision was reversed in June 2006 and approval was given August 2006.
Regulation
European Union
In the European Union, a biological medicinal product is one whose active substance(s) are produced by or extracted from a biological (living) system, and which requires, in addition to physicochemical testing, biological testing for full characterisation. The characterisation of a biological medicinal product is a combination of testing the active substance and the final medicinal product, together with the production process and its control. For example:
Production process – it can be derived from biotechnology or from other technologies. It may be prepared using more conventional techniques as is the case for blood or plasma-derived products and a number of vaccines.
Active substance – consisting of entire microorganisms, mammalian cells, nucleic acids, proteinaceous, or polysaccharide components originating from a microbial, animal, human, or plant source.
Mode of action – therapeutic and immunological medicinal products, gene transfer materials, or cell therapy materials.
United States
In the United States, biologics are licensed through the biologics license application (BLA), which is submitted to and regulated by the FDA's Center for Biologics Evaluation and Research (CBER), whereas drugs are regulated by the Center for Drug Evaluation and Research. Approval may require several years of clinical trials, including trials with human volunteers. Even after the drug is released, it will still be monitored for performance and safety risks. Manufacturing must satisfy the FDA's "Good Manufacturing Practices"; biologics are typically manufactured in a cleanroom environment with strict limits on the amount of airborne particles and other microbial contaminants that may alter the efficacy of the drug.
Canada
In Canada, biologics (and radiopharmaceuticals) are reviewed through the Biologics and Genetic Therapies Directorate within Health Canada.
See also
Antibody-drug conjugate
Genetic engineering
Host cell protein
List of pharmaceutical companies
List of recombinant proteins
Nanomedicine
References
External links
Biotechnology products
Biotechnology
Life sciences industry
Pharmaceutical industry
Pharmacy
Specialty drugs
Hyperventilation syndrome
Hyperventilation syndrome (HVS), also known as chronic hyperventilation syndrome (CHVS), dysfunctional breathing hyperventilation syndrome, cryptotetany, spasmophilia, latent tetany, and central neuronal hyper excitability syndrome (NHS), is a respiratory disorder, psychologically or physiologically based, involving breathing too deeply or too rapidly (hyperventilation). HVS may present with chest pain and a tingling sensation in the fingertips and around the mouth (paresthesia), in some cases resulting in the hands 'locking up' or cramping (carpopedal spasm). HVS may accompany a panic attack.
People with HVS may feel that they cannot get enough air. In reality, they have approximately normal oxygenation of the arterial blood (normal values are about 98% for hemoglobin saturation) but too little carbon dioxide (hypocapnia) in their blood and other tissues. While oxygen is abundant in the bloodstream, HVS reduces effective delivery of that oxygen to vital organs due to vasoconstriction induced by low CO2 and the suppressed Bohr effect.
The hyperventilation is self-perpetuating: rapid or deep breathing causes carbon dioxide levels to fall below healthy levels, and respiratory alkalosis (high blood pH) develops. This makes the symptoms worse, which causes the person to breathe even faster, further exacerbating the problem.
The respiratory alkalosis leads to changes in the way the nervous system fires and leads to the paresthesia, dizziness, and perceptual changes that often accompany this condition. Other mechanisms may also be at work, and some people are physiologically more susceptible to this phenomenon than others.
The mechanism by which hyperventilation causes paresthesia, lightheadedness, and fainting is as follows: hyperventilation raises blood pH (see respiratory alkalosis for this mechanism), which causes a decrease in free ionized calcium (hypocalcaemia), which in turn causes paresthesia and other symptoms related to hypocalcaemia.
Causes
Hyperventilation syndrome is believed to be caused by psychological factors. It is one cause of hyperventilation with others including infection, blood loss, heart attack, hypocapnia or alkalosis due to chemical imbalances, decreased cerebral blood flow, and increased nerve sensitivity.
In one study, one third of patients with HVS had "subtle but definite lung disease" that prompted them to breathe too frequently or too deeply.
One study found that 77% of patients with empty nose syndrome have hyperventilation syndrome. Empty nose syndrome can appear in people who have undergone nasal surgery such as cauterization, turbinectomy, or turbinoplasty.
Many people with panic disorder or agoraphobia will experience HVS. However, most people with HVS do not have these disorders.
Diagnosis
Hyperventilation syndrome is a remarkably common cause of dizziness complaints. About 25% of patients who complain about dizziness are diagnosed with HVS.
A 1985 study, Efficacy of Nijmegen Questionnaire in recognition of the hyperventilation syndrome, stated: "It is concluded that the questionnaire is suitable as a screening instrument for early detection of HVS, and also as an aid in diagnosis and therapy planning."
Treatment
One review of research, published in 2013, concluded "The results of this systematic review are unable to inform clinical practice, based on the inclusion of only 1 small poorly reported RCT [randomised controlled trial] ... Therefore, no recommendations for clinical practice can be made."
While the traditional intervention for an acute episode has been to have the patient breathe into a paper bag, causing rebreathing and restoration of CO₂ levels, this is not advised. The same benefits can be obtained more safely by deliberately slowing the breathing rate by counting or watching the second hand on a watch. This is sometimes referred to as "7-11 breathing", because a gentle inhalation is stretched out to take 7 seconds (or counts) and the exhalation is slowed to take 11 seconds. This inhalation/exhalation ratio can safely be changed to 4:12 or even 4:20 or more, as the O₂ content of the blood will easily sustain normal cell function for several minutes at rest once normal blood acidity has been restored.
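The effect of paced breathing can be illustrated with the basic relation that minute ventilation equals breathing rate times tidal volume. In the sketch below, the breathing rates and tidal volumes are assumed illustrative values (a resting tidal volume of roughly 0.5–0.8 L is a typical textbook figure), so the numbers indicate only the direction and rough size of the effect.

```python
# Illustrative comparison of minute ventilation during rapid breathing versus
# slow, paced "7-11" breathing. Minute ventilation = breaths/min * tidal volume.
# Tidal volumes and the rapid breathing rate are assumed values for illustration.
def minute_ventilation(breaths_per_min, tidal_volume_litres):
    return breaths_per_min * tidal_volume_litres

rapid = minute_ventilation(25, 0.8)              # assumed rapid, deep breathing
paced = minute_ventilation(60 / (7 + 11), 0.8)   # ~3.3 breaths/min at the same depth

print(f"Rapid breathing: ~{rapid:.0f} L/min")
print(f"7-11 breathing:  ~{paced:.1f} L/min")
# The much lower ventilation lets metabolically produced CO2 accumulate back
# toward normal levels, reversing the respiratory alkalosis.
```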
It has also been suggested that breathing therapies such as the Buteyko Breathing method may be effective in reducing the symptoms and recurrence of the syndrome.
Benzodiazepines can be prescribed to reduce stress that provokes hyperventilation syndrome. Selective serotonin reuptake inhibitors (SSRIs) can reduce the severity and frequency of hyperventilation episodes.
History
The original traditional treatment of breathing into a paper bag to control psychologically based hyperventilation syndrome (which is now almost universally known and often shown in movies and TV dramas) was invented by New York City physician (later radiologist), Alexander Winter, M.D. [1908-1978], based on his experiences in the U.S. Army Medical Corps during World War II and published in the Journal of the American Medical Association in 1951. Because other medical conditions can be confused with hyperventilation, namely asthma and heart attacks, most medical studies advise against using a paper bag since these conditions worsen when CO2 levels increase.
References
External links
Anxiety disorders
Respiration
Syndromes
Pneumonitis
Pneumonitis describes general inflammation of lung tissue. Possible causative agents include radiation therapy of the chest, exposure to medications used during chemotherapy, the inhalation of debris (e.g., animal dander), aspiration, herbicides or fluorocarbons, and some systemic diseases. If unresolved, continued inflammation can result in irreparable damage such as pulmonary fibrosis.
Pneumonitis is distinguished from pneumonia on the basis of causation as well as its manifestation. Pneumonia can be described as pneumonitis combined with consolidation and exudation of lung tissue due to infection with microorganisms. The distinction can be further understood by viewing pneumonitis as the umbrella term for inflammation of the lung (encompassing major conditions such as pneumonia and pulmonary fibrosis), and pneumonia as a localized infection. For most infections, the immune response of the body is enough to control and clear the infection within a few days, but if the tissue and the cells cannot fight off the infection, pus may begin to form in the lungs, which can then develop into a lung abscess or suppurative pneumonitis. Patients who are immunodeficient and do not receive prompt treatment for a respiratory infection may develop more severe infections, which can be fatal.
Pneumonitis can be classified into several different specific subcategories, including hypersensitivity pneumonitis, radiation pneumonitis, acute interstitial pneumonitis, and chemical pneumonitis. These all share similar symptoms, but differ in causative agents. Diagnosis of pneumonitis remains challenging, but several different treatment paths (corticosteroids, oxygen therapy, avoidance) have seen success.
Causes
Alveoli are the primary structure affected by pneumonitis. Any particles that are smaller than 5 microns can enter the alveoli of the lungs. These tiny air sacs facilitate the passage of oxygen from inhaled air to the bloodstream. In the case of pneumonitis, it is more difficult for this exchange of oxygen to occur since irritants have caused inflammation of the alveoli. Due to the lack of a definitive determination of a single irritant causing pneumonitis, there are several possible causes.
Viral infection. Measles can cause severe pneumonitis, and ribavirin has been proposed as a possible treatment. Cytomegalovirus (CMV) is another cause.
Pneumonia
Radiation therapy
Immunotherapy
Inhaling chemicals, such as sodium hydroxide
Interstitial lung disease
Sepsis
Adverse reaction to medications
Hypersensitivity to inhaled agents
Inhalation of spores of some species of mushroom (bronchoalveolar allergic syndrome)
Mercury exposure
Smoking
Overexposure to chlorine
Bronchial obstruction (obstructive pneumonitis or post-obstructive pneumonitis)
Ascariasis (during parasite migration)
Aspirin overdose, some antibiotics, and chemotherapy drugs
“Farmer’s lung” and “hot tub lung” are common names for types of hypersensitivity pneumonitis that result from exposure to some types of thermophilic actinomyces, mycobacteria and molds.
Avian proteins in bird feces and feathers
Whole body or chest radiation therapy used for cancer treatment
Symptoms
Physical manifestations of pneumonitis range from mild cold-like symptoms to respiratory failure. Most frequently, those with pneumonitis experience shortness of breath and sometimes a dry cough. Symptoms usually appear a few hours after exposure and peak at approximately eighteen to twenty-four hours.
Other symptoms may include:
Malaise
Fever
Dyspnea
Flushed and/or discolored skin
Sweating
Small and fast inhalations
Without proper treatment, pneumonitis may become chronic pneumonitis, resulting in fibrosis of the lungs and its effects:
Difficulty breathing
Food aversion
Lethargy
End-stage fibrosis and respiratory failure eventually lead to death in cases without proper management of chronic pneumonitis.
Diagnosis
A chest X-ray or CT is necessary to differentiate between pneumonitis and pneumonia of an infectious etiology. Some degree of pulmonary fibrosis may be evident in a CT, which is indicative of chronic pulmonary inflammatory processes. Diagnosis of pneumonitis is often difficult, as it depends on a high degree of clinical suspicion when evaluating a patient with recent onset of a possible interstitial lung disease. In addition, interpreting pathologic and radiographic test results remains a challenge to clinicians. Pneumonitis is often difficult to recognize and discern from other interstitial lung diseases.
Diagnostic procedures currently available include:
Evaluation of patient history and possible exposure to a known causative agent
High-Resolution Computed Tomography (HRCT) consistent with pneumonitis
Bronchoalveolar lavage with lymphocytosis
Lung biopsy consistent with pneumonitis histopathology
Exposure to causative agents of pneumonitis in a specific environment can be confirmed through aero/microbiologic analysis to verify its presence. Subsequent testing of patient serum for evidence of serum specific IgG antibodies confirms patient exposure.
Clinical tests include chest radiography or (HRCT) which may show centrilobular nodular and ground-glass opacities with air-trapping in the middle and upper lobes of the lungs. Fibrosis may also be evident. Bronchoalveolar Lavage (BAL) findings coinciding with pneumonitis typically include a lymphocytosis with a low CD4:CD8 ratio.
Reticular or linear patterns may be observed in diagnostic imaging. Pneumonitis may cause subpleural honeycombing, changing the shape of the air spaces in an image, which may be used to identify the respiratory disease. The interlobular septa may also thicken and indicate pneumonitis when viewed on a scan.
Histological samples of lung tissue with pneumonitis include the presence of poorly formed granulomas or mononuclear cell infiltrates. The presence of bronchocentric lymphohistiocytic interstitial pneumonia with chronic bronchiolitis and non-necrotising granulomas coincides with pneumonitis.
Since pneumonitis manifests in all areas of the lungs, imaging such as chest x-rays and Computerized tomography (CT) scans are useful diagnostic tools. While pneumonia is a localized infection, pneumonitis is widespread. A spirometer may also be used to measure pulmonary function.
During external examination, clubbing (swelling of fingertip tissue and increase in angle at the nail bed), and basal crackles may be observed.
For hypersensitivity pneumonitis, diagnosis typically relies on blood tests and chest X-rays; depending on the severity of the condition, doctors may also recommend a bronchoscopy. Blood tests are important for detecting causative substances early and for ruling out other possible causes of the hypersensitivity pneumonitis.
Classification
Pneumonitis can be separated into several distinct categories based upon causative agent.
Hypersensitivity Pneumonitis (Extrinsic Allergic Alveolitis) describes the inflammation of alveoli which occurs after inhalation of organic dusts. These particles can be proteins, bacteria, or mold spores and are usually specific to an occupation.
Acute Interstitial Pneumonitis can result from many different irritants in the lungs and usually is resolved in under a month.
Chemical Pneumonitis is caused by toxic substances reaching the lower airways of the bronchial tree. This causes a chemical burn and severe inflammation.
Radiation Pneumonitis, also known as Radiation-Induced Lung Injury, describes the initial damage done to the lung tissue by ionizing radiation. Radiation, used to treat cancer, can cause pneumonitis when applied to the chest or whole body. Radiation pneumonitis occurs in approximately 30% of advanced lung cancer patients treated with radiation therapy.
Aspiration pneumonitis is caused by chemical injury from the inhalation of harmful gastric contents; causes include:
Aspiration due to a drug overdose
A lung injury after the inhalation of habitual gastric contents.
The development of colonized oropharyngeal material after inhalation.
Bacteria entering the lungs
Treatment
Typical treatment for pneumonitis includes conservative use of corticosteroids such as a short course of oral prednisone or methylprednisolone. Inhaled corticosteroids such as fluticasone or budesonide may also be effective for reducing inflammation and preventing re-inflammation on a chronic level by suppressing inflammatory processes that may be triggered by environmental exposures such as allergens. Severe cases of pneumonitis may require corticosteroids and oxygen therapy, as well as elimination of exposure to known irritants.
Corticosteroid dose and treatment duration vary from case to case. However, a common regimen beginning at 0.5 mg/kg per day for a couple of days before tapering to a smaller dose over several months to a year has been used successfully.
Corticosteroids effectively reduce inflammation by switching off several genes activated during an inflammatory reaction. The production of anti-inflammatory proteins, and the degradation of mRNA encoding inflammatory proteins, can also be increased by a high concentration of corticosteroids. These responses can help mitigate the inflammation seen in pneumonitis and reduce symptoms.
Certain immune-modulating treatments may be appropriate for patients with chronic pneumonitis. Azathioprine and mycophenolate are two particular treatments that have been associated with an improvement of gas exchange. Patients with chronic pneumonitis also may be evaluated for lung transplantation.
See also
Hypersensitivity pneumonitis, also known as extrinsic allergic alveolitis (EAA)
Acute Interstitial Pneumonitis
Radiation Pneumonitis
Chemical Pneumonitis
References
Inflammations
Benefits of physical activity
The benefits of physical activity range widely. Most types of physical activity improve health and well-being.
Physical activity refers to any body movement that burns calories. “Exercise,” a subcategory of physical activity, refers to planned, structured, and repetitive activities aimed at improving physical fitness and health. Insufficient physical activity is one of the most common health problems in the world. Staying physically active can help prevent or delay certain diseases, including cancer, stroke, hypertension, heart disease, and diabetes, and can also relieve depression and improve mood.
Recommended amount
Two and a half hours of moderate-intensity exercise per week is recommended for reducing the risk of health issues. However, even doing a small amount of exercise is healthier than doing none.
Immediate benefits
Some of the benefits of physical activity on brain health happen right after a session of moderate to vigorous physical activity. Benefits include improved thinking or cognition for children ages 6-13, short-term reduction of anxiety for adults, and enhanced functional capacity in older adults. Regular physical activity can keep thinking, learning, and judgment skills sharp with age. It can also reduce the risk of depression and anxiety and improve sleep.
Weight management
Both dieting and physical activity play a critical role in maintaining a healthy body weight and in sustaining weight loss. Physical activity helps control weight by using excess calories that would otherwise be stored as fat. The body burns calories even during basic functions such as sleeping, breathing, and digesting food. Balancing the calories consumed with the calories burned through physical activity maintains one's weight.
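To make the energy-balance arithmetic above concrete, here is a minimal, illustrative Python sketch (not from the source); the 7,700 kcal-per-kilogram conversion is a commonly cited approximation, and the function name is invented.

# Minimal sketch of the calories-in versus calories-out idea described above.
# The 7700 kcal/kg figure is a commonly cited approximation, not a precise
# physiological constant; real weight change depends on many other factors.
KCAL_PER_KG_BODY_FAT = 7700  # approximate energy content of 1 kg of body fat

def weekly_weight_change_kg(daily_intake_kcal, daily_expenditure_kcal):
    """Estimate weight change (kg) over one week from the average daily energy balance."""
    daily_balance = daily_intake_kcal - daily_expenditure_kcal
    return 7 * daily_balance / KCAL_PER_KG_BODY_FAT

# Example: eating 2,500 kcal/day while burning 2,300 kcal/day leaves a
# 200 kcal/day surplus, or roughly +0.18 kg over a week.
print(round(weekly_weight_change_kg(2500, 2300), 2))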
Long-term benefits
Frequent physical activity lowers the risk of cardiovascular diseases, type 2 diabetes, and some cancers.
Obesity is a complex disease that affects whole-body metabolism and is associated with an increased risk of cardiovascular disease (CVD) and type 2 diabetes (T2D). Physical exercise results in numerous health benefits and is an important tool to combat obesity and its co-morbidities, including cardiovascular diseases. Exercise prevents both the onset and development of cardiovascular disease and is an important therapeutic tool to improve outcomes for patients with cardiovascular disease. Some benefits of exercise include enhanced mitochondrial function, restoration and improvement of vasculature, and the release of myokines from skeletal muscle that preserve or augment cardiovascular function.
Regular physical exercise has several beneficial effects on overall health. While decreasing body mass and adiposity are not the primary outcomes of exercise, exercise can mitigate several diseases that accompany obesity, including T2D and CVD. Several recent studies have shown that sustained physical activity is associated with decreased markers of inflammation, improved metabolic health, decreased risk of heart failure, and improved overall survival. There are several risk factors for the development and progression of CVD, but one of the most prominent is a sedentary lifestyle, characterized by consistently low levels of physical activity and often accompanied by obesity. Thus, lifestyle interventions that aim to increase physical activity and decrease obesity are attractive therapeutic approaches to combat most non-congenital types of CVD.
Effect on cardiovascular risk factors
Regular physical exercise is associated with numerous health benefits that reduce the progression and development of diseases. Several randomized clinical trials have demonstrated that lifestyle interventions, including moderate exercise and a healthy diet, improve cardiovascular health in at-risk populations. Individuals with metabolic syndrome who participated in a 4-month program of either a diet (caloric restriction) or exercise intervention had reduced adiposity; decreased systolic, diastolic, and mean arterial blood pressure; and lower total and low-density lipoprotein (LDL) cholesterol levels compared to the control group. Both the diet and exercise interventions improved these cardiovascular outcomes to a similar extent.
Several previous studies have investigated the effects of diet and exercise, independently or in combination, on metabolic and cardiovascular health and have determined that diet, exercise, or a combination of the two induces weight loss, decreases visceral adiposity, lowers plasma triglycerides, plasma glucose, and blood pressure, raises HDL levels, and improves VO2max. Studies have shown that exercise can improve metabolic and cardiovascular health independent of changes in body weight, including improved glucose homeostasis, endothelial function, blood pressure, and HDL levels. These data indicate that exercise, independent of changes in body mass, results in significant improvements in cardiovascular and metabolic health. Although a detailed analysis of the impact of diet on cardiometabolic health is beyond the scope of this article, the importance of diet and exercise in tandem should not be ignored, as many studies have shown that cardiometabolic health improves to a greater extent in response to a combined diet and exercise program than to either intervention alone.
Exercise has a similar effect on cardiovascular improvements in lean and overweight normoglycemic subjects. In a 1-year study of non-obese individuals, a 16–20% increase in energy expenditure (of any form of exercise) with no diet intervention resulted in a 22.3% decrease in body fat mass and reduced LDL cholesterol, total cholesterol/HDL ratio, and C-reactive protein concentrations, all risk factors associated with CVD. In overweight individuals, 7–9 months of low-intensity exercise (walking ~19 km per week at 40–55% VO2peak) significantly increased cardiorespiratory fitness compared to sedentary individuals. Together these data indicate that exercise interventions decrease the risk or severity of CVD in subjects who are lean, obese, or have type 2 diabetes.
Cardiac rehabilitation
Exercise is also an important therapeutic treatment for patients who have cardiovascular diseases. A systematic review of 63 studies found that exercise-based cardiac rehabilitation improved cardiovascular function. These studies included various forms of aerobic exercise at a range of intensities (from 50 to 95% of VO2), over a multitude of time periods (1–47 months). Overall, exercise significantly reduced CVD-related mortality, decreased the risk of myocardial infarction (MI), and improved quality of life. Another study looked specifically at patients with atherosclerosis after revascularization surgery. Patients who underwent 60 min of exercise per day on a cycle ergometer for 4 weeks had an increased blood flow reserve (29%) and improved endothelium-dependent vasodilatation. A recent study provided personalized aerobic exercise rehabilitation programs for patients who had an acute myocardial infarction for 1 year after a coronary intervention surgery. The patients who underwent the exercise rehabilitation program had an increased ejection fraction (60.81% vs. 53% in the control group), increased exercise tolerance, and reduced cardiovascular risk factors 6 months after starting the program. This improvement in cardiovascular health in patients with atherosclerosis or after MI is likely the result of increased myocardial perfusion in response to exercise; however, more research is required to fully understand these mechanisms.
One defining characteristic of heart failure is exercise intolerance, which resulted in a prescription for bed rest for these patients until the 1950s. However, it has now been shown that a monitored rehabilitation program using moderate-intensity exercise is safe for heart failure patients, and this has now become an important therapy for patients with heart failure. Meta-analyses and systemic reviews have shown that exercise training in heart failure patients is associated with improved quality of life, reduced risk of hospitalization and decreased rates of long-term mortality. One study of heart failure patients found that aerobic exercise (walking or cycling) at 60–70% of heart rate reserve 3–5 times per week for over 3 years led to improved health and overall quality of life (determined by a self-reported Kansas City Cardiomyopathy Questionnaire, a 23-question disease-specific questionnaire). Other studies have shown that exercise-based rehabilitation at a moderate intensity in heart failure patients improves cardiorespiratory fitness and increases both exercise endurance capacity and VO2max (12–31% increase).
More recent studies have examined the effects of high-intensity exercise on patients with heart failure. A recent study found that 12 weeks of high-intensity interval training (HIIT) in heart failure patients (with reduced ejection fraction) was well-tolerated and had similar benefits compared to patients who underwent moderate continuous exercise (MCE) training, including improved left ventricular remodeling and aerobic capacity. A separate study found that 4 weeks of HIIT in heart failure patients with preserved ejection fraction improved VO2peak and reduced diastolic dysfunction compared to both pre-training values and compared to the MCE group. These studies indicate that both moderate and high-intensity exercise training improve cardiovascular function in heart failure patients, likely related to increased endothelium-dependent vasodilation and improved aerobic capacity.
Other benefits
Bones and muscles
Routine physical activity is important for building strong bones and muscles in children, but it is equally important for older adults. Bones and muscles work together to support daily movements. Physical activity strengthens muscles, and bones adapt by building more cells, so both become stronger. Strong bones and muscles protect against injury and improve balance and coordination. In addition, active adults experience less joint stiffness and improved flexibility. This becomes especially important with age, as it helps to prevent falls and the broken bones that may result. For those with arthritis, exercises that keep the muscles around a joint strong can act like a brace, supporting the joint during movement without the use of an actual brace.
Daily activity
The ability to perform daily activities and maintain independence requires strong muscles, balance, and endurance. Regular physical activity or exercise helps to improve muscle strength and prevent its decline, supporting everyday activities such as walking, getting up out of a chair, or leaning over to pick something up. Balance problems can reduce independence by interfering with activities of daily living. Regular physical activity can improve balance and reduce the risk of falling. Exercising regularly has many benefits for both physical and mental health.
Cancer
Exercise increases the chances of surviving cancer. Exercising during the early stages of cancer treatment may help reduce the detrimental side effects of chemotherapy. It also improves physical function and reduces distress and fatigue. Studies suggest that exercise may improve chemotherapy drug uptake by increasing peripheral circulation, and that increases in cardiac output and blood pressure can alter tumor vasculature.
Stroke
Regular physical activity and exercise decrease the risk of ischemic stroke and intracerebral hemorrhage. There is a dose-response relationship between increased physical activity and the risk of stroke. Being physically active before a stroke is associated with decreased admission stroke severity and improved post-stroke outcomes. Research indicates that individuals who engage in regular physical activity before experiencing a stroke demonstrate fewer stroke symptoms, smaller infarct volumes in ischemic strokes, smaller hematoma volumes in intracerebral hemorrhages, and higher post-stroke survival rates. Being physically active after a stroke is associated with improved recovery and function.
Sleep condition
Exercise triggers an increase in body temperature, and the post-exercise drop in temperature may promote falling asleep. Exercise may also reduce insomnia by decreasing arousal, anxiety, and depressive symptoms. Insomnia is commonly linked with elevated arousal, anxiety, and depression, and exercise reduces these symptoms in the general population. These issues are among the most common in the population: anxiety disorders are the most common mental illness in the U.S., affecting 40 million adults age 18 and older, or 18.1% of the population, every year.
A 2010 review suggested that exercise generally improves sleep for most people and may help with insomnia, but there is insufficient evidence to draw detailed conclusions about the relationship between exercise and sleep. A 2020 systematic review and meta-analysis suggested that physical activity has little association with sleep in healthy children. However, several studies indicate that certain forms of physical activity can improve the quality and duration of sleep; a 2019 study at the Federal University of São Paulo concluded that moderate physical activity increased sleep efficiency and duration in adults diagnosed with insomnia. Duration refers to the hours of sleep a person gets each night, while quality indicates how restful or sufficient that sleep was. Poor sleep quality can have negative short-term consequences such as emotional distress and performance deficits, and the associated psychosocial issues can vary between adults, adolescents, and children. In the long term, poor sleep quality can contribute to conditions such as hypertension, metabolic syndrome, and weight-related issues.
See also
International Charter of Physical Education, Physical Activity and Sport
References
External links
"Physical activity - it's important." Better Health (betterhealth.vic.gov.au), May 22, 2020.
"Exercise & Fitness." Harvard Health, May 22, 2020.
"Physical Activity." CDC, May 22, 2020.
Physical fitness
Health effects by subject
Physical exercise
Health and sports
Metabolite
In biochemistry, a metabolite is an intermediate or end product of metabolism.
The term is usually used for small molecules. Metabolites have various functions, including fuel, structure, signaling, stimulatory and inhibitory effects on enzymes, catalytic activity of their own (usually as a cofactor to an enzyme), defense, and interactions with other organisms (e.g. pigments, odorants, and pheromones).
A primary metabolite is directly involved in normal growth, development, and reproduction. Ethylene exemplifies a primary metabolite produced at large scale by industrial microbiology.
A secondary metabolite is not directly involved in those processes, but usually has an important ecological function. Examples include antibiotics, pigments, resins, and terpenes.
Some antibiotics use primary metabolites as precursors, such as actinomycin, which is created from the primary metabolite tryptophan. Some sugars are metabolites, such as fructose or glucose, which are both present in the metabolic pathways.
Examples of primary metabolites produced by industrial microbiology include ethanol, citric acid, lactic acid, amino acids such as glutamic acid and lysine, and vitamins such as B2 and B12.
The metabolome forms a large network of metabolic reactions, where outputs from one enzymatic chemical reaction are inputs to other chemical reactions.
Metabolites from chemical compounds, whether inherent or pharmaceutical, form as part of the natural biochemical process of degrading and eliminating the compounds.
The rate of degradation of a compound is an important determinant of the duration and intensity of its action. Understanding how pharmaceutical compounds are metabolized and the potential side effects of their metabolites is an important part of drug discovery.
See also
Antimetabolite
Intermediary metabolism, also called intermediate metabolism
Metabolic control analysis
Metabolomics, the study of global metabolite profiles in a system (cell, tissue, or organism) under a given set of conditions
Metabolic pathway
Volatile organic compound
References
External links
Metabolism | 0.767989 | 0.994253 | 0.763575 |
Clostridium botulinum
Clostridium botulinum is a gram-positive, rod-shaped, anaerobic, spore-forming, motile bacterium with the ability to produce botulinum toxin, which is a neurotoxin.
C. botulinum is a diverse group of pathogenic bacteria. Initially, they were grouped together by their ability to produce botulinum toxin and are now known as four distinct groups, C. botulinum groups I–IV. Along with some strains of Clostridium butyricum and Clostridium baratii, these bacteria all produce the toxin.
Botulinum toxin can cause botulism, a severe flaccid paralytic disease in humans and other animals, and is the most potent toxin known to science, natural or synthetic, with a lethal dose of 1.3–2.1 ng/kg in humans.
C. botulinum is commonly associated with bulging canned food; bulging, misshapen cans can be due to an internal increase in pressure caused by gas produced by bacteria.
C. botulinum is responsible for foodborne botulism (ingestion of preformed toxin), infant botulism (intestinal infection with toxin-forming C. botulinum), and wound botulism (infection of a wound with C. botulinum). C. botulinum produces heat-resistant endospores that are commonly found in soil and are able to survive under adverse conditions.
Microbiology
C. botulinum is a Gram-positive, rod-shaped, spore-forming bacterium. It is an obligate anaerobe, meaning that it grows only in environments that lack oxygen. However, C. botulinum tolerates traces of oxygen due to the enzyme superoxide dismutase, an important antioxidant defense in nearly all cells exposed to oxygen. C. botulinum is able to produce the neurotoxin only during sporulation, which can happen only in an anaerobic environment.
C. botulinum is divided into four distinct phenotypic groups (I–IV) and is also classified into seven serotypes (A–G) based on the antigenicity of the botulinum toxin produced. At the DNA sequence level, the phenotypic grouping matches the results of whole-genome and rRNA analyses, while the serotype grouping approximates the result of analyses focused specifically on the toxin sequence. The two phylogenetic trees do not match because the toxin gene cluster can be transferred horizontally.
Serotypes
Botulinum neurotoxin (BoNT) production is the unifying feature of the species. Seven serotypes of toxin have been identified, each allocated a letter (A–G), several of which can cause disease in humans. They are resistant to degradation by enzymes found in the gastrointestinal tract. This allows ingested toxin to be absorbed from the intestines into the bloodstream. Toxins can be further differentiated into subtypes on the basis of smaller variations.
However, all types of botulinum toxin are rapidly destroyed by heating to 100 °C for 15 minutes; heating to 80 °C for 30 minutes also destroys BoNT.
Most strains produce one type of BoNT, but strains producing multiple toxins have been described. C. botulinum producing B and F toxin types have been isolated from human botulism cases in New Mexico and California. The toxin type has been designated Bf as the type B toxin was found in excess to the type F. Similarly, strains producing Ab and Af toxins have been reported.
Evidence indicates the neurotoxin genes have been the subject of horizontal gene transfer, possibly from a viral (bacteriophage) source. This theory is supported by the presence of integration sites flanking the toxin in some strains of C. botulinum. However, these integration sites are degraded (except for the C and D types), indicating that C. botulinum acquired the toxin genes in the distant evolutionary past. Nevertheless, further transfers still happen via the plasmids and other mobile elements on which the genes are located.
Toxin types in disease
Only botulinum toxin types A, B, E, F and H (FA) cause disease in humans. Types A, B, and E are associated with food-borne illness, while type E is specifically associated with fish products. Type C produces limber-neck in birds and type D causes botulism in other mammals. No disease is associated with type G. The "gold standard" for determining toxin type is a mouse bioassay, but the genes for types A, B, E, and F can now be readily differentiated using quantitative PCR. Type "H" is in fact a recombinant toxin from types A and F. It can be neutralized by type A antitoxin and no longer is considered a distinct type.
A few strains from organisms genetically identified as other Clostridium species have caused human botulism: C. butyricum has produced type E toxin and C. baratii has produced type F toxin. The ability of C. botulinum to naturally transfer neurotoxin genes to other clostridia is concerning, especially in the food industry, where preservation systems are designed to destroy or inhibit only C. botulinum and not other Clostridium species.
Metabolism
Many C. botulinum genes play a role in the breakdown of essential carbohydrates and the metabolism of sugars. Chitin is the preferred source of carbon and nitrogen for C. botulinum, and the Hall A strain has an active chitinolytic system to aid in its breakdown. BoNT production by C. botulinum types A and B is affected by nitrogen and carbon nutrition, and there is evidence that these processes are also under catabolite repression.
Groups
Physiological differences and genome sequencing at 16S rRNA level support the subdivision of the C. botulinum species into groups I-IV. Some authors have briefly used groups V and VI, corresponding to toxin-producing C. baratii and C. butyricum. What used to be group IV is now C. argentinense.
Although group II cannot degrade native proteins such as casein, coagulated egg white, and cooked meat particles, it is able to degrade gelatin.
Human botulism is predominantly caused by group I or II C. botulinum. Group III organisms mainly cause diseases in non-human animals.
Laboratory isolation
In the laboratory, C. botulinum is usually isolated in tryptose sulfite cycloserine (TSC) growth medium in an anaerobic environment with less than 2% oxygen. This can be achieved by several commercial kits that use a chemical reaction to replace O2 with CO2. C. botulinum (groups I through III) is a lipase-positive microorganism that grows between pH of 4.8 and 7.0 and cannot use lactose as a primary carbon source, characteristics important for biochemical identification.
Transmission and sporulation
The exact mechanism behind sporulation of C. botulinum is not known. Strains of C. botulinum can be divided into three groups, I, II, and III, based on characteristics such as heat resistance, growth temperature, and habitat. Within each group, different strains use different strategies to adapt to and survive in their environment. Unlike other clostridial species, C. botulinum sporulates as it enters the stationary phase. C. botulinum relies on quorum sensing to initiate the sporulation process. C. botulinum spores are not found in human feces unless the individual has contracted botulism, and C. botulinum cannot spread from person to person.
Motility structures
The most common motility structure for C. botulinum is the flagellum. Though this structure is not found in all strains of C. botulinum, most produce peritrichous flagella. There are also differences between strains in the length of the flagella and in how many are present on the cell.
Growth conditions and prevention
C. botulinum is a soil bacterium. The spores can survive in most environments and are very hard to kill. They can survive the temperature of boiling water at sea level, so many foods are canned with a pressurized boil that achieves even higher temperatures, sufficient to kill the spores. This bacterium is widely distributed in nature and can be assumed to be present on all food surfaces. Its optimum growth temperature is within the mesophilic range. In spore form, it is a heat-resistant pathogen that can survive in low-acid foods and grow to produce toxins. The toxin attacks the nervous system and will kill an adult at a dose of around 75 ng. Botulinum toxin can be destroyed by holding food at 100 °C for 10 minutes; however, because of its potency, this is not recommended by the USA's FDA as a means of control.
Botulism poisoning can occur due to preserved or home-canned, low-acid food that was not processed using correct preservation times and/or pressure. Growth of the bacterium can be prevented by high acidity, a high ratio of dissolved sugar, high levels of oxygen, very low levels of moisture, or storage at temperatures below 3 °C (38 °F) for type A. For example, a low-acid canned vegetable such as green beans that is not heated enough to kill the spores (i.e., in a pressurized environment) may provide an oxygen-free medium for the spores to grow and produce the toxin. However, pickles are sufficiently acidic to prevent growth; even if the spores are present, they pose no danger to the consumer.
Honey, corn syrup, and other sweeteners may contain spores, but the spores cannot grow in a highly concentrated sugar solution; however, when a sweetener is diluted in the low-oxygen, low-acid digestive system of an infant, the spores can grow and produce toxin. As soon as infants begin eating solid food, the digestive juices become too acidic for the bacterium to grow.
The control of food-borne botulism caused by C. botulinum is based almost entirely on thermal destruction (heating) of the spores or on inhibiting spore germination and the subsequent growth and toxin production of cells in foods. Conditions conducive to growth depend on various environmental factors.
Growth of C. botulinum is a risk in low acid foods as defined by having a pH above 4.6 although growth is significantly retarded for pH below 4.9.
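As a purely illustrative sketch (not a food-safety tool), the following Python snippet encodes the two cut-offs mentioned in this section, the pH 4.6 boundary for low-acid foods and cold storage below 3 °C for type A; the function name is invented, and real safety assessments involve many more factors.

# Simplified illustration of the thresholds described above (pH > 4.6 defines
# low-acid foods; storage below 3 °C inhibits growth of type A).
# This is NOT a food-safety tool.

def growth_not_ruled_out(ph, storage_temp_c):
    """Return True if neither the acidity nor the cold-storage cut-off rules out growth."""
    low_acid = ph > 4.6                   # low-acid foods permit growth
    warm_enough = storage_temp_c >= 3.0   # below 3 °C inhibits type A
    return low_acid and warm_enough

# Pickles (pH ~3.5) are ruled out by acidity alone; home-canned green beans
# (pH ~5.5) stored at room temperature are not.
print(growth_not_ruled_out(3.5, 20.0))  # False
print(growth_not_ruled_out(5.5, 20.0))  # True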
Taxonomic history
C. botulinum was first recognized and isolated in 1895 by Emile van Ermengem from home-cured ham implicated in a botulism outbreak. The isolate was originally named Bacillus botulinus, after the Latin word for sausage, botulus. ("Sausage poisoning" was a common problem in 18th- and 19th-century Germany, and was most likely caused by botulism.) However, isolates from subsequent outbreaks were always found to be anaerobic spore formers, so Ida A. Bengtson proposed that both be placed into the genus Clostridium, as the genus Bacillus was restricted to aerobic spore-forming rods.
Since 1959, all species producing the botulinum neurotoxins (types A–G) have been designated C. botulinum. Substantial phenotypic and genotypic evidence exists to demonstrate heterogeneity within the species, with at least four clearly defined "groups" (see above) straddling other species, implying that they each deserve to be a genospecies.
The situation as of 2018 is as follows:
C. botulinum type G (= group IV) strains have been, since 1988, their own species, C. argentinense.
Group I C. botulinum strains that do not produce a botulin toxin are referred to as C. sporogenes. Both names are conserved names since 1999. Group I also contains C. combesii.
All other botulinum toxin-producing bacteria, not otherwise classified as C. baratii or C. butyricum, are called C. botulinum. This group still contains three genogroups.
Smith et al. (2018) argues that group I should be called C. parabotulinum and group III be called C. novyi sensu lato, leaving only group II in C. botulinum. This argument is not accepted by the LPSN and would cause an unjustified change of the type strain under the Prokaryotic Code. Dobritsa et al. (2018) argues, without formal descriptions, that group II can potentially be made into two new species.
The complete genome of C. botulinum ATCC 3502 was sequenced at the Wellcome Trust Sanger Institute in 2007. This strain encodes a type A toxin.
Diagnosis
Physicians may consider the diagnosis of botulism based on a patient's clinical presentation, which classically includes an acute onset of bilateral cranial neuropathies and symmetric descending weakness. Other key features of botulism include an absence of fever, symmetric neurologic deficits, normal or slow heart rate and normal blood pressure, and no sensory deficits except for blurred vision. A careful history and physical examination is paramount to diagnose the type of botulism, as well as to rule out other conditions with similar findings, such as Guillain–Barré syndrome, stroke, and myasthenia gravis. Depending on the type of botulism considered, different tests for diagnosis may be indicated.
Foodborne botulism: serum analysis for toxins by bioassay in mice should be done, as the demonstration of the toxins is diagnostic.
Wound botulism: isolation of C. botulinum from the wound site should be attempted, as growth of the bacteria is diagnostic.
Adult enteric and infant botulism: isolation and growth of C. botulinum from stool samples is diagnostic. Infant botulism is a diagnosis which is often missed in the emergency room.
Other tests that may be helpful in ruling out other conditions are:
Electromyography (EMG) or antibody studies may help with the exclusion of myasthenia gravis and Lambert–Eaton myasthenic syndrome (LEMS).
Collection of cerebrospinal fluid (CSF) protein and blood assists with the exclusion of Guillain–Barré syndrome and stroke.
Detailed physical examination of the patient for any rash or tick presence helps to exclude tick paralysis.
Pathology
Foodborne botulism
Signs and symptoms of foodborne botulism typically begin between 18 and 36 hours after the toxin enters the body, but can range from a few hours to several days, depending on the amount of toxin ingested. Symptoms include:
Double vision
Blurred vision
Ptosis
Nausea, vomiting, and abdominal cramps
Slurred speech
Trouble breathing
Difficulty in swallowing
Dry mouth
Muscle weakness
Constipation
Reduced or absent deep tendon reactions, such as in the knee
Wound botulism
Most people who develop wound botulism inject drugs several times a day, so determining a timeline of when onset symptoms first occurred and when the toxin entered the body can be difficult. It is more common in people who inject black tar heroin. Wound botulism signs and symptoms include:
Difficulty swallowing or speaking
Facial weakness on both sides of the face
Blurred or double vision
Ptosis
Trouble breathing
Paralysis
Infant botulism
If infant botulism is related to food, such as honey, problems generally begin within 18 to 36 hours after the toxin enters the baby's body. Signs and symptoms include:
Constipation (often the first sign)
Floppy movements due to muscle weakness and trouble controlling the head
Weak cry
Irritability
Drooling
Ptosis
Tiredness
Difficulty sucking or feeding
Paralysis
Beneficial effects of botulinum toxin
Purified botulinum toxin is diluted by a physician for treatment of:
Congenital pelvic tilt
Spasmodic dysphonia (involuntary spasms of the muscles of the larynx)
Achalasia (failure of the lower esophageal sphincter to relax)
Strabismus (crossed eyes)
Paralysis of the facial muscles
Failure of the cervix
Blinking frequently
Anti-cancer drug delivery
Adult intestinal toxemia
A very rare form of botulism that occurs by the same route as infant botulism but affects adults. It occurs only sporadically. Signs and symptoms include:
Abdominal pain
Blurred vision
Diarrhea
Dysarthria
Imbalance
Weakness in arms and hand area
Treatment
In the case of a diagnosis or suspicion of botulism, patients should be hospitalized immediately, even if the diagnosis and/or tests are pending. Additionally if botulism is suspected, patients should be treated immediately with antitoxin therapy in order to reduce mortality. Immediate intubation is also highly recommended, as respiratory failure is the primary cause of death from botulism.
In North America, an equine-derived heptavalent botulinum antitoxin is used to treat all serotypes of non-infant naturally occurring botulism. For infants less than one year of age, botulism immune globulin is used to treat type A or type B.
Recovery takes between one and three months, but with prompt interventions, mortality from botulism ranges from less than 5 percent to 8 percent.
Vaccination
There used to be a formalin-treated toxoid vaccine against botulism (serotypes A-E), but it was discontinued in 2011 due to declining potency in the toxoid stock. It was originally intended for people at risk of exposure. A few new vaccines are under development.
Use and detection
C. botulinum is used to prepare the medications Botox, Dysport, Xeomin, and Neurobloc, which are used to selectively paralyze muscles and temporarily reduce muscle activity. It has other "off-label" medical uses, such as treating severe facial pain like that caused by trigeminal neuralgia.
Botulinum toxin produced by C. botulinum is often believed to be a potential bioweapon, as it is so potent that it takes about 75 nanograms to kill a person (a lethal dose of 1 ng/kg, assuming an average person weighs ~75 kg); 1 kilogram of it would be enough to kill the entire human population.
A "mouse protection" or "mouse bioassay" test determines the type of C. botulinum toxin present using monoclonal antibodies. An enzyme-linked immunosorbent assay (ELISA) with digoxigenin-labeled antibodies can also be used to detect the toxin, and quantitative PCR can detect the toxin genes in the organism.
C. botulinum in different geographical locations
A number of quantitative surveys for C. botulinum spores in the environment have suggested a prevalence of specific toxin types in given geographic areas, which remain unexplained.
References
Further reading
External links
Bacteria described in 1896
Botulism
botulinum
Food microbiology
Gram-positive bacteria
Symptoms of COVID-19
The symptoms of COVID-19 are variable depending on the type of variant contracted, ranging from mild symptoms to a potentially fatal illness. Common symptoms include coughing, fever, loss of smell (anosmia) and taste (ageusia), with less common ones including headaches, nasal congestion and runny nose, muscle pain, sore throat, diarrhea, eye irritation, and toes swelling or turning purple, and in moderate to severe cases, breathing difficulties. People with the COVID-19 infection may have different symptoms, and their symptoms may change over time. Three common clusters of symptoms have been identified: one respiratory symptom cluster with cough, sputum, shortness of breath, and fever; a musculoskeletal symptom cluster with muscle and joint pain, headache, and fatigue; and a cluster of digestive symptoms with abdominal pain, vomiting, and diarrhea. In people without prior ear, nose, or throat disorders, loss of taste combined with loss of smell is associated with COVID-19 and is reported in as many as 88% of symptomatic cases.
Published data on the neuropathological changes associated with COVID-19 have been limited and contentious, with neuropathological descriptions ranging from moderate to severe hemorrhagic and hypoxic phenotypes, thrombotic consequences, acute disseminated encephalomyelitis (ADEM)-type changes, encephalitis and meningitis. Many COVID-19 patients with co-morbidities have hypoxia and have been in intensive care for varying lengths of time, confounding interpretation of the data.
Of people who show symptoms, 81% develop only mild to moderate symptoms (up to mild pneumonia), while 14% develop severe symptoms (dyspnea, hypoxia, or more than 50% lung involvement on imaging) that require hospitalization, and 5% of patients develop critical symptoms (respiratory failure, septic shock, or multiorgan dysfunction) requiring ICU admission.
At least a third of the people who are infected with the virus do not develop noticeable symptoms at any point in time. These asymptomatic carriers tend not to get tested and can still spread the disease. Other infected people will develop symptoms later (called "pre-symptomatic") or have very mild symptoms and can also spread the virus.
As is common with infections, there is a delay, or incubation period, between the moment a person first becomes infected and the appearance of the first symptoms. The median delay for COVID-19 is four to five days, and a person may be infectious on one to four of those days. Most symptomatic people experience symptoms within two to seven days after exposure, and almost all will experience at least one symptom within 12 days.
Most people recover from the acute phase of the disease. However, some people continue to experience a range of effects, such as fatigue, for months, even after recovery. This is the result of a condition called long COVID, which can be described as a range of persistent symptoms that continue for weeks or months at a time. Long-term damage to organs has also been observed after the onset of COVID-19. Multi-year studies are underway to further investigate the potential long-term effects of the disease.
The Omicron variant became dominant in the U.S. in December 2021. Symptoms with the Omicron variant are less severe than they are with other variants.
Overview
Some less common symptoms of COVID-19 can be relatively non-specific; however the most common symptoms are fever, dry cough, and loss of taste and smell. Among those who develop symptoms, approximately one in five may become more seriously ill and have difficulty in breathing. Emergency symptoms include difficulty in breathing, persistent chest pain or pressure, sudden confusion, loss of mobility and speech, and bluish face or lips; immediate medical attention is advised if these symptoms are present. Further development of the disease can lead to complications including pneumonia, acute respiratory distress syndrome, sepsis, septic shock, and kidney failure.
Some symptoms usually appear sooner than others, with deterioration usually developing in the second week. In August 2020, scientists at the University of Southern California reported the "likely" order of initial symptoms of the COVID-19 disease as a fever followed by a cough and muscle pain, and that nausea and vomiting usually appear before diarrhea. This contrasts with the most common path for influenza, where it is common to develop a cough first and fever later. Impaired immunity in part drives disease progression after SARS-CoV-2 infection. While health agency guidelines tend to recommend isolating for 14 days while watching for symptoms to develop, there is limited evidence that symptoms may develop for some patients more than 14 days after initial exposure.
Symptom profile of variants
The frequency of symptoms predominating for people with different variants may differ from what was observed in the earlier phases of the pandemic.
Delta
People infected with the Delta variant may mistake the symptoms for a bad cold and not realize they need to isolate. Common symptoms reported as of June 2021 have been headaches, sore throat, runny nose, and fever.
Omicron
British epidemiologist Tim Spector said in mid-December 2021 that the majority of symptoms of the Omicron variant were the same as a common cold, including headaches, sore throat, runny nose, fatigue and sneezing, so that people with cold symptoms should take a test. "Things like fever, cough and loss of smell are now in the minority of symptoms we are seeing. Most people don't have classic symptoms." People with cold symptoms in London (where Covid was spreading rapidly) were "far more likely" to have Covid than a cold.
A unique reported symptom of the Omicron variant is night sweats, particularly with the BA.5 subvariant. Also, loss of taste and smell seem to be uncommon compared to other strains.
Systemic
Typical systemic symptoms include fatigue, and muscle and joint pains. Some people have a sore throat.
Fever
Fever is one of the most common symptoms in COVID-19 patients. However, the absence of fever at an initial screening does not rule out COVID-19. Fever in the first week of a COVID-19 infection is part of the body's natural immune response; however, in severe cases, if the infection develops into a cytokine storm the fever is counterproductive. As of September 2020, little research had focused on relating fever intensity to outcomes.
A June 2020 systematic review reported a 75–81% prevalence of fever. As of July 2020, the European Centre for Disease Prevention and Control (ECDC) reported a prevalence rate of ~45% for fever.
Pain
A June 2020 systematic review reported a 27–35% prevalence of fatigue, 14–19% for muscle pain, 10–14% for sore throat. As of July 2020, the ECDC reported a prevalence rate of ~63% for muscle weakness (asthenia), ~63% for muscle pain (myalgia), and ~53% for sore throat.
Respiratory
Cough is another typical symptom of COVID-19, which can be either dry or productive.
Some symptoms, such as difficulty breathing, are more common in patients who need hospital care. Shortness of breath tends to develop later in the illness. Persistent anosmia or hyposmia or ageusia or dysgeusia has been documented in 20% of cases for longer than 30 days.
Respiratory complications may include pneumonia and acute respiratory distress syndrome (ARDS).
As of July 2020, the ECDC reported a prevalence rate of ~68% for nasal obstruction, ~63% for cough, ~60% for rhinorrhoea or runny nose. A June 2020 systematic review reported a 54–61% prevalence of dry cough and 22–28% for productive cough.
Cardiovascular
Coagulopathy is an established complication of COVID-19 in critically ill patients. Some studies have found a high risk of thromboembolic events, such as blood clots, in COVID-19 patients. Other cardiovascular complications may include heart failure, arrhythmias, and heart inflammation; these are common in severe COVID-19 patients because of the close relationship between the cardiovascular and respiratory systems.
Hypertension seems to be the most prevalent risk factor for myocardial injury in COVID-19 disease. It was reported in 58% of individuals with cardiac injury in a recent meta-analysis.
Several cases of acute myocarditis associated with COVID-19 have been described around the globe and are diagnosed in multiple ways. On laboratory testing, leukocytosis with neutrophilia and lymphopenia was found in many patients. The cardiac biomarkers troponin and N-terminal prohormone BNP (NT-proBNP) were elevated. Similarly, levels of inflammation-related markers such as C-reactive protein (CRP), D-dimer, IL-6, and procalcitonin were significantly increased, indicating an inflammatory process in the body. Electrocardiogram findings were variable and included sinus tachycardia, ST-segment elevation, T-wave inversion and ST-segment depression. In one case, viral particles were seen in interstitial cells, and another case reported SARS-CoV-2 RT–PCR positivity in cardiac tissue, suggestive of direct viral injury to the myocardium. Endomyocardial biopsy (EMB) remains the gold-standard invasive technique for diagnosing myocarditis; however, due to the increased risk of infection, it is not done in COVID-19 patients.
The binding of the SARS-CoV-2 virus to ACE2 receptors present in heart tissue may be responsible for direct viral injury leading to myocarditis. In a study done during the SARS outbreak, SARS virus RNA was detected in autopsied heart specimens from 35% of the patients who died of SARS. It was also observed that an already diseased heart has increased expression of the ACE2 receptor compared to a healthy one. Hyperactive immune responses in COVID-19 patients may lead to the initiation of a cytokine storm, and this excess release of cytokines may lead to myocardial injury.
Neurological
Patients with COVID-19 can present with neurological symptoms that can be broadly divided into central nervous system involvement, such as headache, dizziness, altered mental state, and disorientation, and peripheral nervous system involvement, such as anosmia and dysgeusia. As was noted, COVID-19 has also been linked to various neurological symptoms at diagnosis or throughout the disease, with over 90% of individuals with COVID-19 having reported at least one subjective neurological symptom. Some patients experience cognitive dysfunction known as "COVID brain fog", involving memory loss, inattention, poor concentration or disorientation. Other neurologic manifestations include seizures, strokes, encephalitis, and Guillain–Barré syndrome (which includes loss of motor functions).
As of July 2020, the ECDC reported a prevalence rate of ~70% for headache. A June 2020 systematic review reported a 10–16% prevalence of headache. However, although headache might appear only incidentally related to COVID-19, there is unambiguous evidence that COVID-19 patients who had never had recurrent headaches can suddenly develop severe daily headaches because of SARS-CoV-2 infection.
Loss of smell
In about 60% of COVID-19 patients, chemosensory deficits are reported, including partial or complete loss of the sense of smell.
This symptom, if it is present at all, often appears early in the illness. Its onset is often reported to be sudden. Smell usually returns to normal within a month. However, for some patients it improves very slowly and is associated with odors being perceived as unpleasant or different from how they originally smelled (parosmia), and for some people smell does not return for many months. It is an unusual symptom for other respiratory diseases, so it is used for symptom-based screening.
Loss of smell has several consequences. Loss of smell increases foodborne illness due to inability to detect spoiled food, and may increase fire hazards due to inability to detect smoke. It has also been linked to depression. If smell does not return, smell training is a potential option.
It is sometimes the only symptom to be reported, implying that it has a neurological basis separate from nasal congestion. As of January 2021, it is believed that these symptoms are caused by infection of sustentacular cells that support and provide nutrients to sensory neurons in the nose, rather than infection of the neurons themselves. Sustentacular cells have many Angiotensin-converting enzyme 2 (ACE2) receptors on their surfaces, while olfactory sensory neurons do not. Loss of smell may also be the result of inflammation in the olfactory bulb.
A June 2020 systematic review found a 29–54% prevalence of olfactory dysfunction for people with COVID-19, while an August 2020 study using a smell-identification test reported that 96% of people with COVID-19 had some olfactory dysfunction, and 18% had total smell loss. Another June 2020 systematic review reported a 4–55% prevalence of hyposmia. As of July 2020, the ECDC reported a prevalence rate of ~70% for loss of smell.
A disturbance in smell or taste is more commonly found in younger people, and perhaps because of this, it is correlated with a lower risk of medical complications.
Loss of taste and chemesthesis
In some people, COVID-19 causes temporary changes in how food tastes (dysgeusia or ageusia). Changes to chemesthesis, which includes chemically triggered sensations such as spiciness, are also reported. As of January 2021, the mechanisms behind the taste and chemesthesis symptoms were not well understood.
A June 2020 systematic review found a 24–54% prevalence of gustatory dysfunction for people with COVID-19. Another June 2020 systematic review reported a 1–8% prevalence of hypogeusia. As of July 2020, the ECDC reported a prevalence rate of ~54% for gustatory dysfunction.
Other neurological and psychiatric symptoms
Other neurological symptoms appear to be rare, but may affect half of patients who are hospitalized with severe COVID-19. Some reported symptoms include delirium, stroke, brain hemorrhage, memory loss, psychosis, peripheral nerve damage, anxiety, and post-traumatic stress disorder. Neurological symptoms in many cases are correlated with damage to the brain's blood supply or encephalitis, which can progress in some cases to acute disseminated encephalomyelitis. Strokes have been reported in younger people without conventional risk factors.
As of September 2020, it was unclear whether these symptoms were due to direct infection of brain cells, or of overstimulation of the immune system.
A June 2020 systematic review reported a 6–16% prevalence of vertigo or dizziness, 7–15% for confusion, and 0–2% for ataxia.
Blood clots and bleeding
Patients are at increased risk of a range of different blood clots, some potentially fatal, for months following COVID infection. The Guardian wrote, "Overall, they [a Swedish medical team] identified a 33-fold increase in the risk of pulmonary embolism, a fivefold increase in the risk of DVT (deep vein thrombosis) and an almost twofold increase in the risk of bleeding in the 30 days after infection. People remained at increased risk of pulmonary embolism for six months after becoming infected, and for two and three months for bleeding and DVT. Although the risks were highest in patients with more severe illness, even those with mild Covid had a threefold increased risk of DVT and a sevenfold increased risk of pulmonary embolism. No increased risk of bleeding was found in those who experienced mild infections." Anne-Marie Fors Connolly at Umeå University said, "If you suddenly find yourself short of breath, and it doesn't pass, [and] you've been infected with the coronavirus, then it might be an idea to seek help, because we find this increased risk for up to six months."
Other
Other symptoms are less common among people with COVID-19. Some people experience gastrointestinal symptoms such as loss of appetite, diarrhea, nausea or vomiting. A June 2020 systematic review reported an 8–12% prevalence of diarrhea, and 3–10% for nausea.
Less common symptoms include chills, coughing out blood, diarrhea, and rash. The so-called "COVID toes" are pink to violaceous papules arising on the hands and feet. These chilblain-like lesions often occur only in younger patients and do not appear until late in the disease or during convalescence. Certain genetic polymorphisms (in the TREX1 gene) have been linked to susceptibility towards developing COVID-toe. A June 2020 systematic review reported a 0–1% prevalence of rash in COVID-19 patients.
Approximately 20–30% of people who present with COVID-19 have elevated liver enzymes, reflecting liver injury.
Complications include multi-organ failure, septic shock, and death.
Stages of COVID-19 infection
COVID-19 infection can be classified into three stages according to how it can be targeted by pharmacological agents. Stage I is the early infection phase, dominated by upper respiratory tract symptoms. Stage II is the pulmonary phase, in which the patient develops pneumonia with all its associated symptoms; this stage is split into Stage IIa (without hypoxia) and Stage IIb (with hypoxia). Stage III is the hyperinflammation phase, the most severe phase, in which the patient develops acute respiratory distress syndrome (ARDS), sepsis and multi-organ failure.
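A minimal sketch of this three-stage scheme expressed as a classification rule follows; the boolean flags and function name are invented for illustration, and real staging is a clinical judgment rather than a simple lookup.

# Hypothetical sketch of the staging criteria described above; real staging
# is a clinical decision, and this only encodes the stated cut-offs.

def covid_stage(pneumonia, hypoxia, hyperinflammation):
    """Map the criteria from the text to stage labels I, IIa, IIb, or III."""
    if hyperinflammation:              # ARDS, sepsis, or multi-organ failure
        return "III"
    if pneumonia:
        return "IIb" if hypoxia else "IIa"
    return "I"                         # early infection, upper respiratory symptoms

print(covid_stage(pneumonia=True, hypoxia=False, hyperinflammation=False))  # prints "IIa"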
A similar stereotyped course has been postulated: a first phase corresponding to the incubation period, a second phase corresponding to the viral phase, a third phase corresponding to the state of inflammatory pneumonia, a fourth phase corresponding to abrupt clinical worsening reflected by acute respiratory distress syndrome (ARDS), and finally, in survivors, a fifth phase potentially including lung fibrosis and persisting in the form of "post-COVID" symptoms.
Longer-term effects
Multisystem inflammatory syndrome in children
Following the infection, children may develop multisystem inflammatory syndrome, also called paediatric multisystem inflammatory syndrome. This has symptoms similar to Kawasaki disease and can be fatal.
Long COVID
Post-COVID Condition
Longer-term effects of COVID-19 have become a prevalent aspect of the disease itself. These symptoms are referred to by many different names, including post-COVID-19 syndrome, long COVID, and long-haulers syndrome. Post-COVID conditions (PCC) can be broadly defined as a range of symptoms that last for weeks or months. Long COVID can occur in anyone who has contracted COVID-19 at some point, but it is more commonly found in those who had severe illness due to the virus.
Symptoms
Long COVID can affect many organs, such as the lungs, heart, blood vessels, kidneys, gut, and brain. Common symptoms include fatigue, cough, shortness of breath, chest pain, brain fog, gastrointestinal issues, insomnia, anxiety or depression, and delirium. A difference between acute COVID-19 and PCC is the effect on a person's cognition: people report brain fog, impaired memory, and diminished learning ability, which have a large impact on their everyday lives. One study examining these symptoms compared 50 SARS-CoV-2 laboratory-positive patients with 50 SARS-CoV-2 laboratory-negative patients to analyze the variety of neurologic symptoms present during long COVID. The most frequent symptoms included brain fog, headache, numbness, dysgeusia (loss of taste), anosmia (loss of smell), and myalgias (muscle pains), with an overall decrease in quality of life.
References
Uhthoff's phenomenon
Uhthoff's phenomenon (also known as Uhthoff's syndrome, Uhthoff's sign, and Uhthoff's symptom) is the worsening of neurologic symptoms in multiple sclerosis (MS) and other demyelinating diseases when the body is overheated. This may occur due to hot weather, exercise, fever, saunas, hot tubs, hot baths, and hot food and drink. Increased temperature slows nerve conduction, but the exact mechanism remains unknown. With an increased body temperature, nerve impulses are either blocked or slowed in a damaged nerve. Once the body temperature is normalized, signs and symptoms typically reverse.
Signs and symptoms
Symptoms of Uhthoff's phenomenon occur when exposed to heat, and include:
fatigue
pain
concentration difficulties
urinary urgency
worsening of existing optic neuropathy (although optic neuropathy may also occur for the first time)
muscle stiffness
dizziness and unsteadiness
Causes
Uhthoff's phenomenon is caused by a raised body temperature. This may be caused by:
hot weather
exercise
fever
saunas
sun tanning
hot tubs, and hot baths and showers
hot food and drink
menstruation (which may raise body temperature)
sitting near a radiator
Mechanism
The exact mechanism of Uhthoff's phenomenon is unknown. It causes a decrease in the speed of action potentials in the central nervous system (CNS). Heat may increase the time when voltage-gated sodium channels are inactivated, which delays further action potentials. This is worsened by the demyelination caused by MS. Other theories have considered the role of heat shock proteins and changes to blood flow.
Peripheral nerve studies have shown that even a 0.5 °C increase in body temperature can slow or block the conduction of nerve impulses in demyelinated nerves. With greater levels of demyelination, a smaller increase in temperature is needed to slow down the nerve impulse conduction. Exercising and normal daily activities can cause a significant increase in body temperature in individuals with MS, especially if their mechanical efficiency is poor due to the use of mobility aids, ataxia, weakness, and spasticity. However, exercise has been shown to be helpful in managing MS symptoms, reducing the risk of comorbidities, and promoting overall wellness.
Diagnosis
Diagnosis of Uhthoff's phenomenon is clinical and based on symptoms when it occurs in a person who is already diagnosed with MS. The main differential diagnosis is a more serious worsening of MS symptoms.
Prevention and management
Many patients with MS tend to avoid saunas, warm baths, and other sources of heat. They may wear ice or evaporative cooling clothes, such as vests, neck wraps, armbands, wristbands, and hats. Taking advantage of the cooling properties of water may help attenuate the consequences of heat sensitivity. Exercise pre-cooling via lower body immersion in water of 16–17 °C for 30 minutes may allow heat sensitive individuals with MS to exercise more comfortably with fewer side effects by minimizing body temperature increases during exercise. Hydrotherapy exercise in moderately cool water of 27–29 °C water can also be advantageous to individuals with MS. Temperatures lower than 27 °C are not recommended because of the increased risk of invoking spasticity.
Prognosis
Uhthoff's phenomenon is a temporary problem, and typically completely reverses once body temperature returns to normal. This may take up to 24 hours.
Epidemiology
Uhthoff's phenomenon may affect any person with a demyelinating disease. This is most commonly MS, but it may also occur with neuromyelitis optica spectrum disorder or Guillain–Barré syndrome. It affects between 60% and 80% of people with MS.
History
Uhthoff's phenomenon was first described by Wilhelm Uhthoff in 1890 as a temporary worsening of vision with exercise in patients with optic neuritis. Later research revealed the link between neurological signs such as visual loss and increased heat production; Uhthoff's belief that exercise itself was the cause of the visual loss was superseded by the conclusion that heat was the primary cause.
References
Symptoms and signs: Nervous system
Autoimmune diseases
Multiple sclerosis | 0.769725 | 0.991908 | 0.763497 |
Diagnosis | Diagnosis (: diagnoses) is the identification of the nature and cause of a certain phenomenon. Diagnosis is used in many different disciplines, with variations in the use of logic, analytics, and experience, to determine "cause and effect". In systems engineering and computer science, it is typically used to determine the causes of symptoms, mitigations, and solutions.
Computer science and networking
Bayesian network
Complex event processing
Diagnosis (artificial intelligence)
Event correlation
Fault management
Fault tree analysis
Grey problem
RPR problem diagnosis
Remote diagnostics
Root cause analysis
Troubleshooting
Unified Diagnostic Services
Mathematics and logic
Bayesian probability
Hickam's dictum
Occam's razor
Regression diagnostics
Sutton's law
Medicine
Medical diagnosis
Molecular diagnostics
Methods
CDR computerized assessment system
Computer-aided diagnosis
Differential diagnosis
Retrospective diagnosis
Tools
DELTA (taxonomy)
DXplain
List of diagnostic classification and rating scales used in psychiatry
Organizational development
Organizational diagnostics
Systems engineering
Five whys
Eight disciplines problem solving
Fault detection and isolation
Problem solving
References
External links
Medical terminology | 0.770598 | 0.990742 | 0.763464 |
Porphyria | Porphyria is a group of disorders in which substances called porphyrins build up in the body, adversely affecting the skin or nervous system. The types that affect the nervous system are also known as acute porphyria, as symptoms are rapid in onset and short in duration. Symptoms of an attack include abdominal pain, chest pain, vomiting, confusion, constipation, fever, high blood pressure, and high heart rate. The attacks usually last for days to weeks. Complications may include paralysis, low blood sodium levels, and seizures. Attacks may be triggered by alcohol, smoking, hormonal changes, fasting, stress, or certain medications. If the skin is affected, blisters or itching may occur with sunlight exposure.
Most types of porphyria are inherited from one or both of a person's parents and are due to a mutation in one of the genes that make heme. They may be inherited in an autosomal dominant, autosomal recessive, or X-linked dominant manner. One type, porphyria cutanea tarda, may also be due to hemochromatosis (increased iron in the liver), hepatitis C, alcohol, or HIV/AIDS. The underlying mechanism results in a decrease in the amount of heme produced and a build-up of substances involved in making heme. Porphyrias may also be classified by whether the liver or bone marrow is affected. Diagnosis is typically made by blood, urine, and stool tests. Genetic testing may be done to determine the specific mutation. Hepatic porphyrias are those in which the enzyme deficiency occurs in the liver. Hepatic porphyrias include acute intermittent porphyria (AIP), variegate porphyria (VP), aminolevulinic acid dehydratase deficiency porphyria (ALAD), hereditary coproporphyria (HCP), and porphyria cutanea tarda.
Treatment depends on the type of porphyria and the person's symptoms. Treatment of porphyria of the skin generally involves the avoidance of sunlight, while treatment for acute porphyria may involve giving intravenous heme or a glucose solution. Rarely, a liver transplant may be carried out.
The precise prevalence of porphyria is unclear, but it is estimated to affect between 1 and 100 per 50,000 people. Rates are different around the world. Porphyria cutanea tarda is believed to be the most common type. The disease was described as early as 370 BC by Hippocrates. The underlying mechanism was first described by German physiologist and chemist Felix Hoppe-Seyler in 1871. The name porphyria is from the Greek πορφύρα, porphyra, meaning "purple", a reference to the color of the urine that may be present during an attack.
Signs and symptoms
Acute porphyrias
The acute porphyrias are acute intermittent porphyria (AIP), variegate porphyria (VP), aminolevulinic acid dehydratase deficiency porphyria (ALAD), and hereditary coproporphyria (HCP). These diseases primarily affect the nervous system, resulting in episodic crises known as acute attacks. The major symptom of an acute attack is abdominal pain, often accompanied by vomiting, hypertension (elevated blood pressure), and tachycardia (an abnormally rapid heart rate).
The most severe episodes may involve neurological complications: typically motor neuropathy (severe dysfunction of the peripheral nerves that innervate muscle), which leads to muscle weakness and potentially to quadriplegia (paralysis of all four limbs) and central nervous system symptoms such as seizures and coma. Occasionally, there may be short-lived psychiatric symptoms such as anxiety, confusion, hallucinations, and, very rarely, overt psychosis. All these symptoms resolve once the acute attack passes.
Given the many presentations and the relatively low occurrence of porphyria, patients may initially be suspected to have other, unrelated conditions. For instance, the polyneuropathy of acute porphyria may be mistaken for Guillain–Barré syndrome, and porphyria testing is commonly recommended in those situations. Elevation of aminolevulinic acid from lead-induced disruption of heme synthesis results in lead poisoning having symptoms similar to acute porphyria.
Chronic porphyrias
The non-acute porphyrias are X-linked dominant protoporphyria (XLDPP), congenital erythropoietic porphyria (CEP), porphyria cutanea tarda (PCT), and erythropoietic protoporphyria (EPP). None of these are associated with acute attacks; their primary manifestation is with skin disease. For this reason, these four porphyrias—along with two acute porphyrias, VP and HCP, that may also involve skin manifestations—are sometimes called cutaneous porphyrias.
Skin disease is encountered where excess porphyrins accumulate in the skin. Porphyrins are photoactive molecules, and exposure to light results in promotion of electrons to higher energy levels. When these return to the resting energy level or ground state, energy is released; this accounts for the fluorescence typical of the porphyrins, and the released energy causes local skin damage.
Two distinct patterns of skin disease are seen in porphyria:
Immediate photosensitivity. This is typical of XLDPP and EPP. Following a variable period of sun exposure—typically about 30 minutes—patients complain of severe pain, burning, and discomfort in exposed areas. Typically, the effects are not visible, though occasionally there may be some redness and swelling of the skin.
Vesiculo-erosive skin disease. This—a reference to the characteristic blistering (vesicles) and open sores (erosions) noted in patients—is the pattern seen in CEP, PCT, VP, and HCP. The changes are noted only in sun-exposed areas such as the face and back of the hands. Milder skin disease, such as that seen in VP and HCP, consists of increased skin fragility in exposed areas with a tendency to form blisters and erosions, particularly after minor knocks or scrapes. These heal slowly, often leaving small scars that may be lighter or darker than normal skin. More severe skin disease is sometimes seen in PCT, with prominent lesions, darkening of exposed skin such as the face, and hypertrichosis: abnormal hair growth on the face, particularly the cheeks. The most severe disease is seen in CEP and a rare variant of PCT known as hepatoerythropoietic porphyria (HEP); symptoms include severe shortening of digits, loss of skin appendages such as hair and nails, and severe scarring of the skin with progressive disappearance of ears, lips, and nose. Patients may also show deformed, discolored teeth or gum and eye abnormalities.
Congenital porphyrias
Congenital porphyrias are genetic disorders caused by mutations in enzymes involved in the heme biosynthesis pathway. There are several types of congenital porphyrias, including erythropoietic protoporphyria (EPP), congenital erythropoietic porphyria (CEP), and porphyria cutanea tarda (PCT). Each type is characterized by specific enzyme deficiencies leading to the accumulation of different porphyrins.
Erythropoietic protoporphyria (EPP) is caused by a deficiency in ferrochelatase, leading to the accumulation of protoporphyrin IX in red blood cells, plasma, and tissues. Patients with EPP experience severe photosensitivity, with exposure to sunlight causing painful skin reactions.
Congenital erythropoietic porphyria (CEP), also known as Günther's disease, results from a deficiency in uroporphyrinogen III synthase. This leads to the accumulation of uroporphyrin I and coproporphyrin I in the bone marrow, blood, and urine. Symptoms of CEP include severe photosensitivity, anemia, splenomegaly, and often disfiguring cutaneous lesions.
Diagnosis of congenital porphyrias involves clinical evaluation, biochemical testing, and genetic analysis. Treatment aims to manage symptoms and prevent acute attacks by avoiding triggers, such as sunlight exposure, certain medications, and alcohol. Additionally, treatments may include phlebotomy to reduce iron levels in PCT, administration of heme preparations to alleviate symptoms, and liver transplantation in severe cases. Early diagnosis and appropriate management are crucial in improving the quality of life for individuals with congenital porphyrias.
Cause
The porphyrias are generally considered genetic in nature.
Genetics
Subtypes of porphyrias depend on which enzyme is deficient.
X-linked dominant protoporphyria is a rare form of erythropoietic protoporphyria caused by a gain-of-function mutation in ALAS2 characterized by severe photosensitivity.
In the autosomal recessive types, a person who inherits a single copy of the mutated gene may become a carrier. Carriers generally do not have symptoms, but may pass the gene on to their offspring.
Triggers
Acute porphyria can be triggered by a number of drugs, most of which are believed to trigger it by interacting with enzymes in the liver which are made with heme. Such drugs include:
Sulfonamides, including sulfadiazine, sulfasalazine and trimethoprim/sulfamethoxazole.
Sulfonylureas like glibenclamide, gliclazide and glimepiride, although glipizide is thought to be safe.
Barbiturates including thiopental, phenobarbital, primidone, etc.
Systemic treatment with antifungals including fluconazole, griseofulvin, ketoconazole and voriconazole. (Topical use of these agents is thought to be safe due to minimal systemic absorption.)
Certain antibiotics like rifapentine, rifampicin, rifabutine, isoniazid, nitrofurantoin and, possibly, metronidazole.
Ergot derivatives including dihydroergotamine, ergometrine, ergotamine, methysergide, etc.
Certain antiretroviral medications (e.g. indinavir, nevirapine, ritonavir, saquinavir, etc.)
Progestogens
Some anticonvulsants including: carbamazepine, ethosuximide, phenytoin, topiramate, valproate.
Some painkillers like dextropropoxyphene, ketorolac, metamizole, pentazocine
Some cancer treatments like bexarotene, busulfan, chlorambucil, estramustine, etoposide, flutamide, idarubicin, ifosfamide, irinotecan, ixabepilone, letrozole, lomustine, megestrol, mitomycin, mitoxantrone, paclitaxel, procarbazine, tamoxifen, topotecan
Some antidepressants like imipramine, phenelzine, trazodone
Some antipsychotics like risperidone, ziprasidone
Some retinoids used for skin conditions like acitretin and isotretinoin
Miscellaneous others including: cocaine, methyldopa, fenfluramine, disulfiram, orphenadrine, pentoxifylline, and sodium aurothiomalate.
Pathogenesis
In humans, porphyrins are the main precursors of heme, an essential constituent of hemoglobin, myoglobin, catalase, peroxidase, and P450 liver cytochromes.
The body requires porphyrins to produce heme, which is used to carry oxygen in the blood among other things, but in the porphyrias there is a deficiency (inherited or acquired) of the enzymes that transform the various porphyrins into others, leading to abnormally high levels of one or more of these substances. Porphyrias are classified in two ways, by symptoms and by pathophysiology. Physiologically, porphyrias are classified as liver or erythropoietic based on the sites of accumulation of heme precursors, either in the liver or in the bone marrow and red blood cells.
Deficiency in the enzymes of the porphyrin pathway leads to insufficient production of heme. Heme function plays a central role in cellular metabolism. This is not the main problem in the porphyrias; most heme synthesis enzymes—even dysfunctional enzymes—have enough residual activity to assist in heme biosynthesis. The principal problem in these deficiencies is the accumulation of porphyrins, the heme precursors, which are toxic to tissue in high concentrations. The chemical properties of these intermediates determine the location of accumulation, whether they induce photosensitivity, and whether the intermediate is excreted (in the urine or feces).
There are eight enzymes in the heme biosynthetic pathway, four of which—the first one and the last three—are in the mitochondria, while the other four are in the cytosol. Defects in any of these can lead to some form of porphyria. The hepatic porphyrias are characterized by acute neurological attacks (seizures, psychosis, extreme back and abdominal pain, and an acute polyneuropathy), while the erythropoietic forms present with skin problems, usually a light-sensitive blistering rash and increased hair growth. Variegate porphyria (also porphyria variegata or mixed porphyria), which results from a partial deficiency in PROTO oxidase, manifests itself with skin lesions similar to those of porphyria cutanea tarda combined with acute neurologic attacks. Hereditary coproporphyria, which is characterized by a deficiency in coproporphyrinogen oxidase, coded for by the CPOX gene, may also present with both acute neurologic attacks and cutaneous lesions. All other porphyrias are either skin- or nerve-predominant.
Diagnosis
Porphyrin studies
Porphyria is diagnosed through biochemical analysis of blood, urine, and stool. In general, urine estimation of porphobilinogen (PBG) is the first step if acute porphyria is suspected. As a result of feedback, the decreased production of heme leads to increased production of precursors, PBG being one of the first substances in the porphyrin synthesis pathway. In nearly all cases of acute porphyria syndromes, urinary PBG is markedly elevated except for the very rare ALA dehydratase deficiency or in patients with symptoms due to hereditary tyrosinemia type I. In cases of mercury- or arsenic poisoning-induced porphyria, other changes in porphyrin profiles appear, most notably elevations of uroporphyrins I & III, coproporphyrins I & III, and pre-coproporphyrin.
As most porphyrias are rare conditions, general hospital labs typically do not have the expertise, technology, or staff time to perform porphyria testing. In general, testing involves sending samples of blood, stool, and urine to a reference laboratory. All samples to detect porphyrins must be handled properly. Samples should be taken during an acute attack; otherwise a false negative result may occur. Samples must be protected from light and either refrigerated or preserved.
If all the porphyrin studies are negative, one must consider pseudoporphyria. A careful medication review often will find the cause of pseudoporphyria.
Additional tests
Further diagnostic tests of affected organs may be required, such as nerve conduction studies for neuropathy or an ultrasound of the liver. Basic biochemical tests may assist in identifying liver disease, hepatocellular carcinoma, and other organ problems.
Other diagnostic methods
Clinical Evaluation: A thorough medical history and physical examination focusing on symptoms related to photosensitivity, skin lesions, abdominal pain, and neurological manifestations.
Genetic Testing: Molecular genetic testing to identify specific gene mutations associated with congenital porphyrias.
Other Tests: Liver function tests, iron studies, and imaging studies such as ultrasound or MRI may be conducted to evaluate liver and spleen involvement.
Management
Acute porphyria
Carbohydrate administration
Often, empirical treatment is required if the diagnostic suspicion of a porphyria is high since acute attacks can be fatal. A high-carbohydrate diet is typically recommended; in severe attacks, a dextrose 10% infusion is commenced, which may aid in recovery by suppressing heme synthesis, which in turn reduces the rate of porphyrin accumulation. However, this can worsen cases of low blood sodium levels (hyponatraemia) and should be done with extreme caution as it can prove fatal.
Heme analogs
Hematin (trade name Panhematin) and heme arginate (trade name NormoSang) are the drugs of choice in acute porphyria, in the United States and the United Kingdom, respectively. These drugs need to be given very early in an attack to be effective; effectiveness varies amongst individuals. They are not curative drugs but can shorten attacks and reduce the intensity of an attack. Side effects are rare but can be serious. These heme-like substances theoretically inhibit ALA synthase and hence the accumulation of toxic precursors. In the United Kingdom, supplies of NormoSang are kept at two national centers; emergency supply is available from St Thomas's Hospital, London. In the United States, Lundbeck manufactures and supplies Panhematin for infusion.
Heme arginate (NormoSang) is used during crises but also in preventive treatment to avoid crises, one treatment every 10 days.
Any sign of low blood sodium (hyponatremia) or weakness should be treated with the addition of hematin, heme arginate, or even tin mesoporphyrin, as these are signs of impending syndrome of inappropriate antidiuretic hormone (SIADH) or peripheral nervous system involvement that may be localized or severe, progressing to bulbar paresis and respiratory paralysis.
Cimetidine
Cimetidine has also been reported to be effective for acute porphyric crisis and possibly effective for long-term prophylaxis.
Symptom control
Pain is severe, frequently out of proportion to physical signs, and often requires the use of opiates to reduce it to tolerable levels. Pain should be treated as early as medically possible. Nausea can be severe; it may respond to phenothiazine drugs but is sometimes intractable. Hot baths and showers may lessen nausea temporarily, though caution should be used to avoid burns or falls.
Early identification
It is recommended that patients with a history of acute porphyria, and even genetic carriers, wear an alert bracelet or other identification at all times. This is in case they develop severe symptoms, or in case of accidents where there is a potential for drug exposure, and as a result they are unable to explain their condition to healthcare professionals. Some drugs are absolutely contraindicated for patients with any form of porphyria.
Neurologic and psychiatric disorders
Patients who experience frequent attacks can develop chronic neuropathic pain in extremities as well as chronic pain in the abdomen. Intestinal pseudo-obstruction, ileus, intussusception, hypoganglionosis, and encopresis in children have been associated with porphyrias. This is thought to be due to axonal nerve deterioration in affected areas of the nervous system and vagal nerve dysfunction. Pain treatment with long-acting opioids, such as morphine, is often indicated, and, in cases where seizure or neuropathy is present, gabapentin is known to improve outcome.
Seizures often accompany this disease. Most seizure medications exacerbate this condition. Treatment can be problematic: barbiturates especially must be avoided. Some benzodiazepines are safe and, when used in conjunction with newer anti-seizure medications such as gabapentin, offer a possible regimen for seizure control. Gabapentin has the additional feature of aiding in the treatment of some kinds of neuropathic pain. Magnesium sulfate and bromides have also been used in porphyria seizures; however, development of status epilepticus in porphyria may not respond to magnesium alone. The addition of hematin or heme arginate has been used during status epilepticus.
Depression often accompanies the disease and is best dealt with by treating the offending symptoms and if needed the judicious use of antidepressants. Some psychotropic drugs are porphyrinogenic, limiting the therapeutic scope. Other psychiatric symptoms such as anxiety, restlessness, insomnia, depression, mania, hallucinations, delusions, confusion, catatonia, and psychosis may occur.
Underlying liver disease
Some liver diseases may cause porphyria even in the absence of genetic predisposition. These include hemochromatosis and hepatitis C. Treatment of iron overload may be required.
Patients with the acute porphyrias (AIP, HCP, VP) are at increased risk over their life for hepatocellular carcinoma (primary liver cancer) and may require monitoring. Other typical risk factors for liver cancer need not be present.
Hormone treatment
Hormonal fluctuations that contribute to cyclical attacks in women have been treated with oral contraceptives and luteinizing hormones to shut down menstrual cycles. However, oral contraceptives have also triggered photosensitivity and withdrawal of oral contraceptives has triggered attacks. Androgens and fertility hormones have also triggered attacks. In 2019, givosiran was approved in the United States for the treatment of acute hepatic porphyria.
Erythropoietic porphyria
These are associated with accumulation of porphyrins in erythrocytes and are rare.
The pain, burning, swelling, and itching that occur in erythropoietic porphyrias (EP) generally require avoidance of bright sunlight. Most kinds of sunscreen are not effective, but SPF-rated long-sleeve shirts, hats, bandanas, and gloves can help. Chloroquine may be used to increase porphyrin secretion in some EPs. Blood transfusion is occasionally used to suppress innate heme production.
The rarest is congenital erythropoietic porphyria (CEP), otherwise known as Gunther's disease. The signs may present from birth and include severe photosensitivity, brown teeth that fluoresce in ultraviolet light due to deposition of Type 1 porphyrins, and later hypertrichosis. Hemolytic anemia usually develops. Pharmaceutical-grade beta carotene may be used in its treatment. A bone marrow transplant has also been successful in curing CEP in a few cases, although long-term results are not yet available.
In December 2014, afamelanotide received authorization from the European Commission as a treatment for the prevention of phototoxicity in adult patients with EPP. In a 2023 industry-funded phase 2 trial, dersimelagon, an orally administered, selective melanocortin 1 receptor agonist that increases levels of skin eumelanin, was reported to have increased the duration of symptom-free sunlight exposure and quality of life compared to placebo in patients with erythropoietic protoporphyria.
Epidemiology
Rates of all types of porphyria taken together have been estimated to be approximately one in 25,000 in the United States. The worldwide prevalence has been estimated to be between one in 500 and one in 50,000 people.
Porphyrias have been detected in all races and in multiple ethnic groups on every continent. There are high incidence reports of AIP in areas of India and Scandinavia. More than 200 genetic variants of AIP are known, some of which are specific to families, although some strains have proven to be repeated mutations.
Other information
The epidemiology of congenital porphyrias varies depending on the specific type. A general overview follows:
1. Erythropoietic Protoporphyria (EPP): EPP is relatively rare, with an estimated prevalence of 1 to 9 cases per 100,000 individuals worldwide. It affects both males and females, typically presenting in childhood or early adulthood.
2. Congenital Erythropoietic Porphyria (CEP): CEP is extremely rare, with fewer than 200 cases reported worldwide. It is inherited in an autosomal recessive manner, meaning both parents must carry a mutated gene for a child to develop the condition. CEP occurs with higher frequency in certain populations, including individuals of Northern European descent.
3. Porphyria Cutanea Tarda (PCT): PCT is the most common form of porphyria, with an estimated prevalence of 1 to 2 cases per 10,000 individuals in the general population. It predominantly affects adults, with a higher prevalence in men than in women. PCT can be sporadic or familial and is often associated with underlying liver disease, alcohol abuse, hepatitis C infection, or certain medications.
These prevalence estimates may vary across different regions and populations, and the actual prevalence of congenital porphyrias may be underreported due to challenges in diagnosis and awareness. Additionally, advances in genetic testing and increased awareness of porphyria may lead to more accurate epidemiological data in the future.
History
The underlying mechanism was first described by the German physiologist Felix Hoppe-Seyler in 1871, and acute porphyrias were described by the Dutch physician Barend Stokvis in 1889.
The links between porphyrias and mental illness have been noted for decades. In the early 1950s, patients with porphyrias (occasionally referred to as "porphyric hemophilia") and severe symptoms of depression or catatonia were treated with electroshock therapy.
Vampires and werewolves
Porphyria has been suggested as an explanation for the origin of vampire and werewolf legends, based upon certain perceived similarities between the condition and the folklore.
In January 1964, L. Illis's 1963 paper, "On Porphyria and the Aetiology of Werewolves," was published in Proceedings of the Royal Society of Medicine. Later, Nancy Garden argued for a connection between porphyria and the vampire belief in her 1973 book, Vampires. In 1985, biochemist David Dolphin's paper for the American Association for the Advancement of Science, "Porphyria, Vampires, and Werewolves: The Aetiology of European Metamorphosis Legends," gained widespread media coverage, popularizing the idea.
The theory has been rejected by a few folklorists and researchers as not accurately describing the characteristics of the original werewolf and vampire legends or the disease, and as potentially stigmatizing people with porphyria.
A 1995 article from the Postgraduate Medical Journal (via NIH) explains:
As it was believed that the folkloric vampire could move about freely in daylight hours, as opposed to the 20th century variant, congenital erythropoietic porphyria cannot readily explain the folkloric vampire but may be an explanation of the vampire as we know it in the 20th century. In addition, the folkloric vampire, when unearthed, was always described as looking quite healthy ("as they were in life"), while due to disfiguring aspects of the disease, sufferers would not have passed the exhumation test. Individuals with congenital erythropoietic porphyria do not crave blood. The enzyme (hematin) necessary to alleviate symptoms is not absorbed intact on oral ingestion, and drinking blood would have no beneficial effect on the sufferer. Finally, and most important, the fact that vampire reports were literally rampant in the 18th century, and that congenital erythropoietic porphyria is an extremely rare manifestation of a rare disease, makes it an unlikely explanation of the folkloric vampire.
Notable cases
George III. The mental illness exhibited by George III in the regency crisis of 1788 has inspired several attempts at retrospective diagnosis. The first, written in 1855, thirty-five years after his death, concluded that he had acute mania. M. Guttmacher, in 1941, suggested manic-depressive psychosis as a more likely diagnosis. The first suggestion that a physical illness was the cause of King George's mental derangement came in 1966, in a paper called "The Insanity of King George III: A Classic Case of Porphyria", with a follow-up in 1968, "Porphyria in the Royal Houses of Stuart, Hanover and Prussia". The papers, by a mother/son psychiatrist team, were written as though the case for porphyria had been proven, but the response demonstrated that many experts, including those more intimately familiar with the manifestations of porphyria, were unconvinced. Many psychiatrists disagreed with the diagnosis, suggesting bipolar disorder as far more probable. The theory is treated in Purple Secret, which documents the ultimately unsuccessful search for genetic evidence of porphyria in the remains of royals suspected to have had it. In 2005, it was suggested that arsenic (which is known to be porphyrogenic) given to George III with antimony may have caused his porphyria. This study found high levels of arsenic in King George's hair. In 2010, one analysis of historical records argued that the porphyria claim was based on spurious and selective interpretation of contemporary medical and historical sources. The mental illness of George III is the basis of the plot in The Madness of King George, a 1994 British film based upon the 1991 Alan Bennett play, The Madness of George III. The closing credits of the film include the comment that the King's symptoms suggest that he had porphyria, and note that the disease is "periodic, unpredictable, and hereditary". The traditional argument that George III did not have porphyria, but rather bipolar disorder, is thoroughly defended by Andrew Roberts in his new biography The Last King of America.
Descendants of George III. Among other descendants of George III theorized by the authors of Purple Secret to have had porphyria (based on analysis of their extensive and detailed medical correspondence) were his great-great-granddaughter Princess Charlotte of Prussia (Emperor William II's eldest sister) and her daughter Princess Feodora of Saxe-Meiningen. They uncovered better evidence that George III's great-great-great-grandson Prince William of Gloucester was reliably diagnosed with variegate porphyria.
Mary, Queen of Scots. It is believed that Mary, Queen of Scots, King George III's ancestor, also had acute intermittent porphyria, although this is subject to much debate. It is assumed she inherited the disorder, if indeed she had it, from her father, James V of Scotland. Both father and daughter endured well-documented attacks that could fall within the constellation of symptoms of porphyria.
Maria I of Portugal. Maria I—known as "Maria the Pious" or "Maria the Mad" because of both her religious fervor and her acute mental illness, which made her incapable of handling state affairs after 1792—is also thought to have had porphyria. Francis Willis, the same physician who treated George III, was even summoned by the Portuguese court but returned to England after the court limited the treatments he could oversee. Contemporary sources, such as Secretary of State for Foreign Affairs Luís Pinto de Sousa Coutinho, noted that the queen had ever-worsening stomach pains and abdominal spasms: hallmarks of porphyria.
Vlad III Dracula, "The Impaler." Vlad III was also said to have had acute porphyria, which may have started the notion that vampires were allergic to sunlight.
Vincent van Gogh. Other commentators have suggested that Vincent van Gogh may have had acute intermittent porphyria.
King Nebuchadnezzar of Babylon. The description of this king in Daniel 4 suggests to some that he had porphyria.
Physician Archie Cochrane. He was born with porphyria, which caused health problems throughout his life.
Paula Frías Allende. The daughter of the Chilean novelist Isabel Allende. She fell into a porphyria-induced coma in 1991, which inspired Isabel to write the memoir Paula, dedicated to her.
Uses in literature
Stated or implied references to porphyria are included in some literature, particularly gothic literature. These include the following:
The condition is the name of the title character in the gothic poem "Porphyria's Lover," by Robert Browning.
The condition is heavily implied to be the cause of the symptoms suffered by the narrator in the gothic short story "Lusus Naturae," by Margaret Atwood. Some of the narrator's symptoms resemble those of porphyria, and one passage of the story states that the name of the narrator's disease "had some Ps and Rs in it."
References
External links
The Drug Database for Acute Porphyria - comprehensive database on drug porphyrinogenicity
Orphanet's disease page on Porphyria
Diseases of liver
Red blood cell disorders
Skin conditions resulting from errors in metabolism
Protein | Proteins are large biomolecules and macromolecules that comprise one or more long chains of amino acid residues. Proteins perform a vast array of functions within organisms, including catalysing metabolic reactions, DNA replication, responding to stimuli, providing structure to cells and organisms, and transporting molecules from one location to another. Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes, and which usually results in protein folding into a specific 3D structure that determines its activity.
A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing fewer than 20–30 residues, are rarely considered to be proteins and are commonly called peptides. The individual amino acid residues are bonded together by peptide bonds between adjacent amino acid residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code. In general, the genetic code specifies 20 standard amino acids; but in certain organisms the genetic code can include selenocysteine and—in certain archaea—pyrrolysine. Shortly after or even during synthesis, the residues in a protein are often chemically modified by post-translational modification, which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Some proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can also work together to achieve a particular function, and they often associate to form stable protein complexes.
Once formed, proteins only exist for a certain period and are then degraded and recycled by the cell's machinery through the process of protein turnover. A protein's lifespan is measured in terms of its half-life and covers a wide range. They can exist for minutes or years with an average lifespan of 1–2 days in mammalian cells. Abnormal or misfolded proteins are degraded more rapidly either due to being targeted for destruction or due to being unstable.
Like other biological macromolecules such as polysaccharides and nucleic acids, proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyse biochemical reactions and are vital to metabolism. Proteins also have structural or mechanical functions, such as actin and myosin in muscle and the proteins in the cytoskeleton, which form a system of scaffolding that maintains cell shape. Other proteins are important in cell signaling, immune responses, cell adhesion, and the cell cycle. In animals, proteins are needed in the diet to provide the essential amino acids that cannot be synthesized. Digestion breaks the proteins down for metabolic use.
History and etymology
Discovery and early studies
Proteins have been studied and recognized since the 1700s by Antoine Fourcroy and others, who often collectively called them "albumins", or "albuminous materials" (Eiweisskörper, in German). Gluten, for example, was first separated from wheat in published research around 1747, and later determined to exist in many plants. In 1789, Antoine Fourcroy recognized three distinct varieties of animal proteins: albumin, fibrin, and gelatin. Vegetable (plant) proteins studied in the late 1700s and early 1800s included gluten, plant albumin, gliadin, and legumin.
Proteins were first described by the Dutch chemist Gerardus Johannes Mulder and named by the Swedish chemist Jöns Jacob Berzelius in 1838. Mulder carried out elemental analysis of common proteins and found that nearly all proteins had the same empirical formula, C400H620N100O120P1S1. He came to the erroneous conclusion that they might be composed of a single type of (very large) molecule. The term "protein" to describe these molecules was proposed by Mulder's associate Berzelius; protein is derived from the Greek word πρώτειος (proteios), meaning "primary", "in the lead", or "standing in front", plus the suffix -in. Mulder went on to identify the products of protein degradation such as the amino acid leucine, for which he found a (nearly correct) molecular weight of 131 Da.
Early nutritional scientists such as the German Carl von Voit believed that protein was the most important nutrient for maintaining the structure of the body, because it was generally believed that "flesh makes flesh." Around 1862, Karl Heinrich Ritthausen isolated the amino acid glutamic acid. Thomas Burr Osborne compiled a detailed review of the vegetable proteins at the Connecticut Agricultural Experiment Station. Then, working with Lafayette Mendel and applying Liebig's law of the minimum, which states that growth is limited by the scarcest resource, to the feeding of laboratory rats, the nutritionally essential amino acids were established. The work was continued and communicated by William Cumming Rose.
The difficulty in purifying proteins in large quantities made them very difficult for early protein biochemists to study. Hence, early studies focused on proteins that could be purified in large quantities, including those of blood, egg whites, and various toxins, as well as digestive and metabolic enzymes obtained from slaughterhouses. In the 1950s, the Armour Hot Dog Company purified 1 kg of pure bovine pancreatic ribonuclease A and made it freely available to scientists; this gesture helped ribonuclease A become a major target for biochemical study for the following decades.
Polypeptides
The understanding of proteins as polypeptides, or chains of amino acids, came through the work of Franz Hofmeister and Hermann Emil Fischer in 1902. The central role of proteins as enzymes in living organisms that catalyzed reactions was not fully appreciated until 1926, when James B. Sumner showed that the enzyme urease was in fact a protein.
Linus Pauling is credited with the successful prediction of regular protein secondary structures based on hydrogen bonding, an idea first put forth by William Astbury in 1933. Later work by Walter Kauzmann on denaturation, based partly on previous studies by Kaj Linderstrøm-Lang, contributed an understanding of protein folding and structure mediated by hydrophobic interactions.
The first protein to have its amino acid chain sequenced was insulin, by Frederick Sanger, in 1949. Sanger correctly determined the amino acid sequence of insulin, thus conclusively demonstrating that proteins consisted of linear polymers of amino acids rather than branched chains, colloids, or cyclols. He won the Nobel Prize for this achievement in 1958. Christian Anfinsen's studies of the oxidative folding process of ribonuclease A, for which he won the Nobel Prize in 1972, solidified the thermodynamic hypothesis of protein folding, according to which the folded form of a protein represents its free energy minimum.
Structure
With the development of X-ray crystallography, it became possible to determine protein structures as well as their sequences. The first protein structures to be solved were hemoglobin by Max Perutz and myoglobin by John Kendrew, in 1958. The use of computers and increasing computing power also supported the structural analysis of complex proteins. In 1999, Roger Kornberg succeeded in determining the highly complex structure of RNA polymerase using high-intensity X-rays from synchrotrons.
Since then, cryo-electron microscopy (cryo-EM) of large macromolecular assemblies has been developed. Cryo-EM uses protein samples that are frozen rather than crystals, and beams of electrons rather than X-rays. It causes less damage to the sample, allowing scientists to obtain more information and analyze larger structures. Computational protein structure prediction of small protein structural domains has also helped researchers to approach atomic-level resolution of protein structures.
At the time of writing, the Protein Data Bank contained 181,018 X-ray, 19,809 EM, and 12,697 NMR protein structures.
Classification
Proteins are primarily classified by sequence and structure, although other classifications are commonly used. For enzymes in particular, the EC number system provides a functional classification scheme. Similarly, the Gene Ontology classifies both genes and proteins by their biological and biochemical function, as well as by their intracellular location.
Sequence similarity is used to classify proteins both in terms of evolutionary and functional similarity. This may use either whole proteins or protein domains, especially in multi-domain proteins. Protein domains allow protein classification by a combination of sequence, structure and function, and they can be combined in many different ways. In an early study of 170,000 proteins, about two-thirds were assigned at least one domain, with larger proteins containing more domains (e.g. proteins larger than 600 amino acids having an average of more than 5 domains).
Biochemistry
Most proteins consist of linear polymers built from series of up to 20 different L-α-amino acids. All proteinogenic amino acids possess common structural features, including an α-carbon to which an amino group, a carboxyl group, and a variable side chain are bonded. Only proline differs from this basic structure: its side chain forms an unusual ring with the N-end amine group, which forces the CO–NH amide moiety into a fixed conformation. The side chains of the standard amino acids, detailed in the list of standard amino acids, have a great variety of chemical structures and properties; it is the combined effect of all of the amino acid side chains in a protein that ultimately determines its three-dimensional structure and its chemical reactivity.
The amino acids in a polypeptide chain are linked by peptide bonds. Once linked in the protein chain, an individual amino acid is called a residue, and the linked series of carbon, nitrogen, and oxygen atoms are known as the main chain or protein backbone.
The peptide bond has two resonance forms that contribute some double-bond character and inhibit rotation around its axis, so that the alpha carbons are roughly coplanar. The other two dihedral angles in the peptide bond determine the local shape assumed by the protein backbone. The end with a free amino group is known as the N-terminus or amino terminus, whereas the end of the protein with a free carboxyl group is known as the C-terminus or carboxy terminus (the sequence of the protein is written from N-terminus to C-terminus, from left to right).
The words protein, polypeptide, and peptide are a little ambiguous and can overlap in meaning. Protein is generally used to refer to the complete biological molecule in a stable conformation, whereas peptide is generally reserved for short amino acid oligomers often lacking a stable 3D structure. However, the boundary between the two is not well defined and usually lies near 20–30 residues. Polypeptide can refer to any single linear chain of amino acids, usually regardless of length, but often implies an absence of a defined conformation.
Interactions
Proteins can interact with many types of molecules, including with other proteins, with lipids, with carbohydrates, and with DNA.
Abundance in cells
It has been estimated that average-sized bacteria contain about 2 million proteins per cell (e.g. E. coli and Staphylococcus aureus). Smaller bacteria, such as Mycoplasma or spirochetes contain fewer molecules, on the order of 50,000 to 1 million. By contrast, eukaryotic cells are larger and thus contain much more protein. For instance, yeast cells have been estimated to contain about 50 million proteins and human cells on the order of 1 to 3 billion. The concentration of individual protein copies ranges from a few molecules per cell up to 20 million. Not all genes coding proteins are expressed in most cells and their number depends on, for example, cell type and external stimuli. For instance, of the 20,000 or so proteins encoded by the human genome, only 6,000 are detected in lymphoblastoid cells.
Synthesis
Biosynthesis
Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein. The genetic code is read in three-nucleotide units called codons, and each three-nucleotide combination designates an amino acid; for example, AUG (adenine–uracil–guanine) is the code for methionine. Because DNA contains four nucleotides, the total number of possible codons is 64; hence, there is some redundancy in the genetic code, with some amino acids specified by more than one codon. Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA (also known as a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than in eukaryotes and can reach up to 20 amino acids per second.
The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus.
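To make the codon-to-amino-acid mapping above concrete, the following Python sketch treats the genetic code as a lookup table and reads an mRNA string three nucleotides at a time, stopping at a stop codon. The table covers only a handful of the 64 codons and the example sequence is invented for illustration; neither is drawn from the text above.

# Minimal sketch of translation as codon lookup. The table below is only a
# small, illustrative subset of the 64 codons in the standard genetic code.
CODON_TABLE = {
    "AUG": "Met",                                 # start codon, methionine
    "UUU": "Phe", "UUC": "Phe",                   # redundancy: two codons, one amino acid
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": None, "UAG": None, "UGA": None,        # stop codons
}

def translate(mrna):
    """Read an mRNA string three nucleotides at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue is None:                       # stop codon (or codon missing from this toy table)
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGAUAA"))                  # ['Met', 'Phe', 'Gly']

A real implementation would use the full 64-entry table and handle reading frames, but the dictionary-lookup structure would be the same.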
The size of a synthesized protein can be measured by the number of amino acids it contains and by its total molecular mass, which is normally reported in units of daltons (synonymous with atomic mass units), or the derivative unit kilodalton (kDa). The average size of a protein increases from Archaea to Bacteria to Eukaryote (283, 311, 438 residues and 31, 34, 49 kDa respectively) due to a bigger number of protein domains constituting proteins in higher organisms. For instance, yeast proteins are on average 466 amino acids long and 53 kDa in mass. The largest known proteins are the titins, a component of the muscle sarcomere, with a molecular mass of almost 3,000 kDa and a total length of almost 27,000 amino acids.
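The residue counts and masses quoted above can be related by a rough rule of thumb: an average residue mass of about 110 Da. That figure is an approximation assumed here, not one given in the text, but it reproduces the quoted numbers reasonably well, as this minimal sketch shows.

# Back-of-envelope mass estimate from residue count, assuming an average
# residue mass of ~110 Da (a common approximation, not an exact value).
AVG_RESIDUE_MASS_DA = 110

def approx_mass_kda(n_residues):
    """Estimate protein mass in kilodaltons from the number of residues."""
    return n_residues * AVG_RESIDUE_MASS_DA / 1000

print(approx_mass_kda(466))     # ~51 kDa, close to the ~53 kDa quoted for the average yeast protein
print(approx_mass_kda(27000))   # ~2970 kDa, close to the ~3,000 kDa quoted for titin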
Chemical synthesis
Short proteins can also be synthesized chemically by a family of methods known as peptide synthesis, which rely on organic synthesis techniques such as chemical ligation to produce peptides in high yield. Chemical synthesis allows for the introduction of non-natural amino acids into polypeptide chains, such as attachment of fluorescent probes to amino acid side chains. These methods are useful in laboratory biochemistry and cell biology, though generally not for commercial applications. Chemical synthesis is inefficient for polypeptides longer than about 300 amino acids, and the synthesized proteins may not readily assume their native tertiary structure. Most chemical synthesis methods proceed from C-terminus to N-terminus, opposite the biological reaction.
Structure
Most proteins fold into unique 3D structures. The shape into which a protein naturally folds is known as its native conformation. Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states. Biochemists often refer to four distinct aspects of a protein's structure:
Primary structure: the amino acid sequence. A protein is a polyamide.
Secondary structure: regularly repeating local structures stabilized by hydrogen bonds. The most common examples are the α-helix, β-sheet and turns. Because secondary structures are local, many regions of different secondary structure can be present in the same protein molecule.
Tertiary structure: the overall shape of a single protein molecule; the spatial relationship of the secondary structures to one another. Tertiary structure is generally stabilized by nonlocal interactions, most commonly the formation of a hydrophobic core, but also through salt bridges, hydrogen bonds, disulfide bonds, and even post-translational modifications. The term "tertiary structure" is often used as synonymous with the term fold. The tertiary structure is what controls the basic function of the protein.
Quaternary structure: the structure formed by several protein molecules (polypeptide chains), usually called protein subunits in this context, which function as a single protein complex.
Quinary structure: the signatures of protein surface that organize the crowded cellular interior. Quinary structure is dependent on transient, yet essential, macromolecular interactions that occur inside living cells.
Proteins are not entirely rigid molecules. In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes. Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, or the physical region of the protein that participates in chemical catalysis. In solution, proteins also undergo variation in structure through thermal vibration and the collision with other molecules.
Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane.
A special case of intramolecular hydrogen bonds within proteins, poorly shielded from water attack and hence promoting their own dehydration, are called dehydrons.
Protein domains
Many proteins are composed of several protein domains, i.e. segments of a protein that fold into distinct structural units. Domains usually also have specific functions, such as enzymatic activities (e.g. kinase) or they serve as binding modules (e.g. the SH3 domain binds to proline-rich sequences in other proteins).
Sequence motif
Short amino acid sequences within proteins often act as recognition sites for other proteins. For instance, SH3 domains typically bind to short PxxP motifs (i.e. 2 prolines [P], separated by two unspecified amino acids [x], although the surrounding amino acids may determine the exact binding specificity). Many such motifs have been collected in the Eukaryotic Linear Motif (ELM) database.
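As an illustration of how such motifs can be searched for in practice, the short Python sketch below scans a sequence for the PxxP pattern described above using a regular expression. The sequence and the reported positions are purely hypothetical, not taken from any real protein.

import re

# Scan a made-up sequence for the PxxP motif: a proline, any two residues,
# then another proline. The lookahead lets overlapping matches be reported too.
sequence = "MKTAYPALPKQRSPQLPWEG"

for m in re.finditer(r"(?=(P..P))", sequence):
    print(f"PxxP at position {m.start() + 1}: {m.group(1)}")

# Output:
# PxxP at position 6: PALP
# PxxP at position 14: PQLP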
Protein topology
The topology of a protein describes the entanglement of the backbone and the arrangement of contacts within the folded chain. Two theoretical frameworks, knot theory and circuit topology, have been applied to characterise protein topology. Being able to describe protein topology opens up new pathways for protein engineering and pharmaceutical development, and adds to our understanding of protein misfolding diseases such as neuromuscular disorders and cancer.
Cellular functions
Proteins are the chief actors within the cell, said to be carrying out the duties specified by the information encoded in genes. With the exception of certain types of RNA, most other biological molecules are relatively inert elements upon which proteins act. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome.
The chief characteristic of proteins that also allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is known as the binding site and is often a depression or "pocket" on the molecular surface. This binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side chains. Protein binding can be extraordinarily tight and specific; for example, the ribonuclease inhibitor protein binds to human angiogenin with a sub-femtomolar dissociation constant (<10⁻¹⁵ M) but does not bind at all to its amphibian homolog onconase (> 1 M). Extremely minor chemical changes such as the addition of a single methyl group to a binding partner can sometimes suffice to nearly eliminate binding; for example, the aminoacyl tRNA synthetase specific to the amino acid valine discriminates against the very similar side chain of the amino acid isoleucine.
Proteins can bind to other proteins as well as to small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils; this process occurs often in structural proteins that consist of globular monomers that self-associate to form rigid fibers. Protein–protein interactions also regulate enzymatic activity, control progression through the cell cycle, and allow the assembly of large protein complexes that carry out many closely related reactions with a common biological function. Proteins can also bind to, or even be integrated into, cell membranes. The ability of binding partners to induce conformational changes in proteins allows the construction of enormously complex signaling networks.
Because interactions between proteins are reversible, and depend heavily on the availability of different groups of partner proteins to form aggregates that are capable of carrying out discrete sets of functions, the study of interactions between specific proteins is key to understanding important aspects of cellular function, and ultimately the properties that distinguish particular cell types.
Enzymes
The best-known role of proteins in the cell is as enzymes, which catalyse chemical reactions. Enzymes are usually highly specific and accelerate only one or a few chemical reactions. Enzymes carry out most of the reactions involved in metabolism, as well as manipulating DNA in processes such as DNA replication, DNA repair, and transcription. Some enzymes act on other proteins to add or remove chemical groups in a process known as posttranslational modification. About 4,000 reactions are known to be catalysed by enzymes. The rate acceleration conferred by enzymatic catalysis is often enormous—as much as a 10¹⁷-fold increase in rate over the uncatalysed reaction in the case of orotate decarboxylase (78 million years without the enzyme, 18 milliseconds with the enzyme).
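As a rough check of the quoted figure for orotate decarboxylase, the two half-times given above can be compared directly; the short Python sketch below simply restates that arithmetic.

    # Convert 78 million years to seconds and compare with 18 milliseconds.
    uncatalysed_s = 78e6 * 365.25 * 24 * 3600   # ~2.5e15 seconds
    catalysed_s = 18e-3                         # 18 milliseconds
    rate_acceleration = uncatalysed_s / catalysed_s
    print(f"{rate_acceleration:.1e}")           # ~1.4e17, i.e. on the order of 10^17-fold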
The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction—three to four residues on average—that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site.
Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes.
Cell signaling and ligand binding
Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell. Many receptors have a binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell.
Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. Whereas enzymes are limited in their binding affinity for their substrates by the necessity of conducting their reaction, antibodies have no such constraints. An antibody's binding affinity to its target is extraordinarily high.
Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism. These proteins must have a high binding affinity when their ligand is present in high concentrations, but must also release the ligand when it is present at low concentrations in the target tissues. The canonical example of a ligand-binding protein is haemoglobin, which transports oxygen from the lungs to other organs and tissues in all vertebrates and has close homologs in every biological kingdom. Lectins are sugar-binding proteins which are highly specific for their sugar moieties. Lectins typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins.
Transmembrane proteins can also serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. The membrane alone has a hydrophobic core through which polar or charged molecules cannot diffuse. Membrane proteins contain internal channels that allow such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion; for example, potassium and sodium channels often discriminate for only one of the two ions.
Structural proteins
Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins; for example, collagen and elastin are critical components of connective tissue such as cartilage, and keratin is found in hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells. Some globular proteins can also play structural roles; for example, actin and tubulin are globular and soluble as monomers, but polymerize to form long, stiff fibers that make up the cytoskeleton, which allows the cell to maintain its shape and size.
Other proteins that serve structural functions are motor proteins such as myosin, kinesin, and dynein, which are capable of generating mechanical forces. These proteins are crucial for the cellular motility of single-celled organisms and the sperm of many multicellular organisms which reproduce sexually. They also generate the forces exerted by contracting muscles and play essential roles in intracellular transport.
Protein evolution
A key question in molecular biology is how proteins evolve, i.e. how mutations (or rather changes in amino acid sequence) can lead to new structures and functions. Most amino acids in a protein can be changed without disrupting activity or function, as can be seen from numerous homologous proteins across species (as collected in specialized databases for protein families, e.g. PFAM). To limit the dramatic consequences of mutations, a gene may be duplicated so that one copy is free to mutate; however, this can also lead to complete loss of gene function and thus pseudogenes. More commonly, single amino acid changes have limited consequences, although some can change protein function substantially, especially in enzymes. For instance, many enzymes can change their substrate specificity by one or a few mutations. Changes in substrate specificity are facilitated by substrate promiscuity, i.e. the ability of many enzymes to bind and process multiple substrates. When mutations occur, the specificity of an enzyme can increase (or decrease), and its enzymatic activity can change accordingly. Thus, bacteria (or other organisms) can adapt to different food sources, including unnatural substrates such as plastic.
Methods of study
Methods commonly used to study protein structure and function include immunohistochemistry, site-directed mutagenesis, X-ray crystallography, nuclear magnetic resonance and mass spectrometry.
The activities and structures of proteins may be examined in vitro, in vivo, and in silico. In vitro studies of purified proteins in controlled environments are useful for learning how a protein carries out its function: for example, enzyme kinetics studies explore the chemical mechanism of an enzyme's catalytic activity and its relative affinity for various possible substrate molecules. By contrast, in vivo experiments can provide information about the physiological role of a protein in the context of a cell or even a whole organism. In silico studies use computational methods to study proteins.
Protein purification
Proteins may be purified from other cellular components using a variety of techniques such as ultracentrifugation, precipitation, electrophoresis, and chromatography; the advent of genetic engineering has made possible a number of methods to facilitate purification.
To perform in vitro analysis, a protein must be purified away from other cellular components. This process usually begins with cell lysis, in which a cell's membrane is disrupted and its internal contents released into a solution known as a crude lysate. The resulting mixture can be purified using ultracentrifugation, which fractionates the various cellular components into fractions containing soluble proteins; membrane lipids and proteins; cellular organelles; and nucleic acids. Precipitation by a method known as salting out can concentrate the proteins from this lysate. Various types of chromatography are then used to isolate the protein or proteins of interest based on properties such as molecular weight, net charge and binding affinity. The level of purification can be monitored using various types of gel electrophoresis if the desired protein's molecular weight and isoelectric point are known, by spectroscopy if the protein has distinguishable spectroscopic features, or by enzyme assays if the protein has enzymatic activity. Additionally, proteins can be isolated according to their charge using electrofocusing.
For natural proteins, a series of purification steps may be necessary to obtain protein sufficiently pure for laboratory applications. To simplify this process, genetic engineering is often used to add chemical features to proteins that make them easier to purify without affecting their structure or activity. Here, a "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein. As a result, when the lysate is passed over a chromatography column containing nickel, the histidine residues ligate the nickel and attach to the column while the untagged components of the lysate pass unimpeded. A number of different tags have been developed to help researchers purify specific proteins from complex mixtures.
Cellular localization
The study of proteins in vivo is often concerned with the synthesis and localization of the protein within the cell. Although many intracellular proteins are synthesized in the cytoplasm and membrane-bound or secreted proteins in the endoplasmic reticulum, the specifics of how proteins are targeted to specific organelles or cellular structures are often unclear. A useful technique for assessing cellular localization uses genetic engineering to express in a cell a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP). The fused protein's position within the cell can then be cleanly and efficiently visualized using microscopy.
Other methods for elucidating the cellular location of proteins require the use of known compartmental markers for regions such as the ER, the Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, plasma membrane, etc. With the use of fluorescently tagged versions of these markers or of antibodies to known markers, it becomes much simpler to identify the localization of a protein of interest. For example, indirect immunofluorescence allows for fluorescence colocalization and demonstration of location. Fluorescent dyes are used to label cellular compartments for a similar purpose.
Other possibilities exist, as well. For example, immunohistochemistry usually uses an antibody to one or more proteins of interest that are conjugated to enzymes yielding either luminescent or chromogenic signals that can be compared between samples, allowing for localization information. Another applicable technique is cofractionation in sucrose (or other material) gradients using isopycnic centrifugation. While this technique does not prove colocalization of a compartment of known density and the protein of interest, it does increase the likelihood, and is more amenable to large-scale studies.
Finally, the gold-standard method of cellular localization is immunoelectron microscopy. This technique also uses an antibody to the protein of interest, along with classical electron microscopy techniques. The sample is prepared for normal electron microscopic examination, and then treated with an antibody to the protein of interest that is conjugated to an extremely electron-dense material, usually gold. This allows for the localization of ultrastructural details as well as the protein of interest.
Through another genetic engineering application known as site-directed mutagenesis, researchers can alter the protein sequence and hence its structure, cellular localization, and susceptibility to regulation. This technique even allows the incorporation of unnatural amino acids into proteins, using modified tRNAs, and may allow the rational design of new proteins with novel properties.
Proteomics
The total complement of proteins present at a time in a cell or cell type is known as its proteome, and the study of such large-scale data sets defines the field of proteomics, named by analogy to the related field of genomics. Key experimental techniques in proteomics include 2D electrophoresis, which allows the separation of many proteins, mass spectrometry, which allows rapid high-throughput identification of proteins and sequencing of peptides (most often after in-gel digestion), protein microarrays, which allow the detection of the relative levels of the various proteins present in a cell, and two-hybrid screening, which allows the systematic exploration of protein–protein interactions. The total complement of biologically possible such interactions is known as the interactome. A systematic attempt to determine the structures of proteins representing every possible fold is known as structural genomics.
Structure determination
Discovering the tertiary structure of a protein, or the quaternary structure of its complexes, can provide important clues about how the protein performs its function and how it can be affected, e.g. in drug design. As proteins are too small to be seen under a light microscope, other methods have to be employed to determine their structure. Common experimental methods include X-ray crystallography and NMR spectroscopy, both of which can produce structural information at atomic resolution. However, NMR experiments are able to provide information from which a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimuli. Circular dichroism is another laboratory technique for determining the internal β-sheet / α-helical composition of proteins. Cryoelectron microscopy is used to produce lower-resolution structural information about very large protein complexes, including assembled viruses; a variant known as electron crystallography can also produce high-resolution information in some cases, especially for two-dimensional crystals of membrane proteins. Solved structures are usually deposited in the Protein Data Bank (PDB), a freely available resource from which structural data about thousands of proteins can be obtained in the form of Cartesian coordinates for each atom in the protein.
Many more gene sequences are known than protein structures. Further, the set of solved structures is biased toward proteins that can be easily subjected to the conditions required in X-ray crystallography, one of the major structure determination methods. In particular, globular proteins are comparatively easy to crystallize in preparation for X-ray crystallography. Membrane proteins and large protein complexes, by contrast, are difficult to crystallize and are underrepresented in the PDB. Structural genomics initiatives have attempted to remedy these deficiencies by systematically solving representative structures of major fold classes. Protein structure prediction methods attempt to provide a means of generating a plausible structure for proteins whose structures have not been experimentally determined.
Structure prediction
Complementary to the field of structural genomics, protein structure prediction develops efficient mathematical models of proteins to computationally predict structures instead of detecting them by laboratory observation. The most successful type of structure prediction, known as homology modeling, relies on the existence of a "template" structure with sequence similarity to the protein being modeled; structural genomics' goal is to provide sufficient representation in solved structures to model most of those that remain. Although producing accurate models remains a challenge when only distantly related template structures are available, it has been suggested that sequence alignment is the bottleneck in this process, as quite accurate models can be produced if a "perfect" sequence alignment is known. Many structure prediction methods have served to inform the emerging field of protein engineering, in which novel protein folds have already been designed. Many proteins (in eukaryotes, ~33%) contain large unstructured but biologically functional segments and can be classified as intrinsically disordered proteins. Predicting and analysing protein disorder is, therefore, an important part of protein structure characterisation.
Bioinformatics
A vast array of computational methods have been developed to analyze the structure, function and evolution of proteins. The development of such tools has been driven by the large and fast-growing amount of genomic and proteomic data available for a variety of organisms, including the human genome. The resources do not exist to study all proteins experimentally, so only a few are subjected to laboratory experiments while computational tools are used to extrapolate to similar proteins. Such homologous proteins can be efficiently identified in distantly related organisms by sequence alignment. Genome and gene sequences can be searched by a variety of tools for certain properties. Sequence profiling tools can find restriction enzyme sites and open reading frames in nucleotide sequences, and can predict secondary structures. Using software such as ClustalW, phylogenetic trees can be constructed and evolutionary hypotheses developed regarding the ancestry of modern organisms and the genes they express. The field of bioinformatics is now indispensable for the analysis of genes and proteins.
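To illustrate the basic idea behind pairwise sequence alignment mentioned above, the following minimal Python sketch implements a simple Needleman–Wunsch global alignment score; the match, mismatch, and gap values, as well as the two peptide fragments, are arbitrary choices for illustration, and production tools such as BLAST or ClustalW are far more sophisticated.

    def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
        # dp[i][j] = best score for aligning a[:i] with b[:j]
        rows, cols = len(a) + 1, len(b) + 1
        dp = [[0] * cols for _ in range(rows)]
        for i in range(1, rows):
            dp[i][0] = i * gap
        for j in range(1, cols):
            dp[0][j] = j * gap
        for i in range(1, rows):
            for j in range(1, cols):
                diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
        return dp[-1][-1]

    # Two hypothetical peptide fragments; a higher score suggests closer similarity.
    print(global_alignment_score("HEAGAWGHEE", "PAWHEAE"))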
In silico simulation of dynamical processes
A more complex computational problem is the prediction of intermolecular interactions, such as in molecular docking, protein folding, protein–protein interaction and chemical reactivity. Mathematical models to simulate these dynamical processes involve molecular mechanics, in particular, molecular dynamics. In this regard, in silico simulations discovered the folding of small α-helical protein domains such as the villin headpiece and the HIV accessory protein, and hybrid methods combining standard molecular dynamics with quantum mechanical calculations have explored the electronic states of rhodopsins.
Beyond classical molecular dynamics, quantum dynamics methods allow the simulation of proteins in atomistic detail with an accurate description of quantum mechanical effects. Examples include the multi-layer multi-configuration time-dependent Hartree (MCTDH) method and the hierarchical equations of motion (HEOM) approach, which have been applied to plant cryptochromes and bacterial light-harvesting complexes, respectively. Both quantum and classical mechanical simulations of biological-scale systems are extremely computationally demanding, so distributed computing initiatives (for example, the Folding@home project) facilitate molecular modeling by exploiting advances in GPU parallel processing and Monte Carlo techniques.
Chemical analysis
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
Nutrition
Most microorganisms and plants can biosynthesize all 20 standard amino acids, while animals (including humans) must obtain some of the amino acids from the diet. The amino acids that an organism cannot synthesize on its own are referred to as essential amino acids. Key enzymes that synthesize certain amino acids are not present in animals—such as aspartokinase, which catalyses the first step in the synthesis of lysine, methionine, and threonine from aspartate. If amino acids are present in the environment, microorganisms can conserve energy by taking up the amino acids from their surroundings and downregulating their biosynthetic pathways.
In animals, amino acids are obtained through the consumption of foods containing protein. Ingested proteins are then broken down into amino acids through digestion, which typically involves denaturation of the protein through exposure to acid and hydrolysis by enzymes called proteases. Some ingested amino acids are used for protein biosynthesis, while others are converted to glucose through gluconeogenesis, or fed into the citric acid cycle. This use of protein as a fuel is particularly important under starvation conditions as it allows the body's own proteins to be used to support life, particularly those found in muscle.
In animals such as dogs and cats, protein maintains the health and quality of the skin by promoting hair follicle growth and keratinization, and thus reducing the likelihood of skin problems producing malodours. Poor-quality proteins also have a role in gastrointestinal health, increasing the potential for flatulence and odorous compounds in dogs: when proteins reach the colon in an undigested state, they are fermented, producing hydrogen sulfide gas, indole, and skatole. Dogs and cats digest animal proteins better than those from plants, but products of low-quality animal origin are poorly digested, including skin, feathers, and connective tissue.
Mechanical properties
The mechanical properties of proteins are highly diverse and are often central to their biological function, as in the case of proteins like keratin and collagen. For instance, the ability of muscle tissue to continually expand and contract is directly tied to the elastic properties of their underlying protein makeup. Beyond fibrous proteins, the conformational dynamics of enzymes and the structure of biological membranes, among other biological functions, are governed by the mechanical properties of the proteins. Outside of their biological context, the unique mechanical properties of many proteins, along with their relative sustainability when compared to synthetic polymers, have made them desirable targets for next-generation materials design.
Young's modulus
Young's modulus, E, is calculated as the axial stress σ over the resulting strain ε. It is a measure of the relative stiffness of a material. In the context of proteins, this stiffness often directly correlates with biological function. For example, collagen, found in connective tissue, bones, and cartilage, and keratin, found in nails, claws, and hair, have observed stiffnesses that are several orders of magnitude higher than that of elastin, which is thought to give elasticity to structures such as blood vessels, pulmonary tissue, and bladder tissue, among others. In comparison, globular proteins, such as bovine serum albumin, which float relatively freely in the cytosol and often function as enzymes (and thus undergo frequent conformational changes), have comparably much lower Young's moduli.
The Young's modulus of a single protein can be found through molecular dynamics simulation. Using either atomistic force fields, such as CHARMM or GROMOS, or coarse-grained force fields like Martini, a single protein molecule can be stretched by a uniaxial force while the resulting extension is recorded in order to calculate the strain. Experimentally, methods such as atomic force microscopy can be used to obtain similar data.
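As a rough sketch of how a modulus might be extracted from such pulling data, the following Python example fits a stress–strain slope; the force–extension values, initial length, and cross-sectional area are made-up placeholders rather than results from any particular simulation or AFM experiment.

    # Hypothetical force (nN) and extension (nm) pairs from a constant-velocity pull.
    force_nN = [0.0, 0.5, 1.0, 1.5, 2.0]
    extension_nm = [0.0, 0.4, 0.8, 1.2, 1.6]

    L0_nm = 20.0       # assumed initial length of the stretched segment
    area_nm2 = 3.0     # assumed effective cross-sectional area

    stress = [f / area_nm2 for f in force_nN]        # nN/nm^2, numerically equal to GPa
    strain = [x / L0_nm for x in extension_nm]

    # Least-squares slope through the origin: E = sum(sigma*eps) / sum(eps^2)
    E = sum(s * e for s, e in zip(stress, strain)) / sum(e * e for e in strain)
    print(f"Estimated Young's modulus ~ {E:.2f} GPa")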
At the macroscopic level, the Young's modulus of cross-linked protein networks can be obtained through more traditional mechanical testing, and experimentally observed values have been reported for a number of proteins.
Viscosity
In addition to serving as enzymes within the cell, globular proteins often act as key transport molecules. For instance, serum albumins, a key component of blood, are necessary for the transport of a multitude of small molecules throughout the body. Because of this, the concentration-dependent behavior of these proteins in solution is directly tied to the function of the circulatory system. One way of quantifying this behavior is through the viscosity of the solution.
Viscosity, η, is a measure of a fluid's resistance to deformation. It can be calculated as the ratio between the applied stress and the rate of change of the resulting shear strain, that is, the rate of deformation. The viscosity of complex liquid mixtures, such as blood, often depends strongly on temperature and solute concentration. For serum albumin, specifically bovine serum albumin, an empirical relation between viscosity, temperature, and concentration can be used, in which c is the concentration, T is the temperature, R is the gas constant, and α, β, B, D, and ΔE are material-based property constants. This relation has the form of an Arrhenius equation, assigning viscosity an exponential dependence on temperature and concentration.
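Because the specific published relation is not reproduced here, the following Python sketch only illustrates a generic Arrhenius-type form of the kind described above; the functional form, the use of only α, β, and ΔE, and all numerical values are assumptions for illustration, not the actual equation for bovine serum albumin.

    import math

    def viscosity(c, T, alpha=1.0e-3, beta=0.05, delta_E=2.0e4, R=8.314):
        # Generic Arrhenius-type form: exponential in concentration c and in 1/T
        # through an activation-energy term delta_E (J/mol); all constants are
        # placeholders, not fitted material properties.
        return alpha * math.exp(beta * c + delta_E / (R * T))

    # Hypothetical albumin-like concentration (g/L) at body temperature (K).
    print(viscosity(c=40.0, T=310.0))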
External links
Databases and projects
NCBI Entrez Protein database
NCBI Protein Structure database
Human Protein Reference Database
Human Proteinpedia
Folding@Home (Stanford University)
Protein Databank in Europe (see also PDBeQuips, short articles and tutorials on interesting PDB structures)
Research Collaboratory for Structural Bioinformatics (see also Molecule of the Month, presenting short accounts on selected proteins from the PDB)
Proteopedia – Life in 3D: rotatable, zoomable 3D model with wiki annotations for every known protein molecular structure.
UniProt the Universal Protein Resource
Tutorials and educational websites
"An Introduction to Proteins" from HOPES (Huntington's Disease Outreach Project for Education at Stanford)
Proteins: Biogenesis to Degradation – The Virtual Library of Biochemistry and Cell Biology
Somatic symptom disorder | Somatic symptom disorder, also known as somatoform disorder, or somatization disorder, is defined by one or more chronic physical symptoms that coincide with excessive and maladaptive thoughts, emotions, and behaviors connected to those symptoms. The symptoms are not deliberately produced or feigned, and they may or may not coexist with a known medical ailment.
Manifestations of somatic symptom disorder are variable; symptoms can be widespread or specific, and they often fluctuate. Somatic symptom disorder corresponds to the way an individual views and reacts to symptoms rather than the symptoms themselves. Somatic symptom disorder may develop in those who suffer from an existing chronic illness or medical condition.
Several studies have found a high rate of comorbidity with major depressive disorder, generalized anxiety disorder, and phobias. Somatic symptom disorder is frequently associated with functional pain syndromes like fibromyalgia and IBS. Somatic symptom disorder typically leads to poor functioning, interpersonal issues, unemployment or problems at work, and financial strain as a result of excessive health-care visits.
The cause of somatic symptom disorder is unknown. Symptoms may result from a heightened awareness of specific physical sensations paired with a tendency to interpret these experiences as signs of a medical ailment. The diagnosis is controversial, as people with a medical illness can be mislabeled as mentally ill. This is especially true for women, who are more often dismissed when they present with physical symptoms.
Signs and symptoms
Somatic symptom disorder can be detected by an ambiguous and often inconsistent history of symptoms that are rarely relieved by medical treatments. Additional signs of somatic symptom disorder include interpreting normal sensations for medical ailments, avoiding physical activity, being disproportionately sensitive to medication side effects, and seeking medical care from several physicians for the same concerns.
Manifestations of somatic symptom disorder are highly variable. Recurrent ailments usually begin before the age of 30; most patients have many somatic symptoms, while others only experience one. The severity may fluctuate, but symptoms rarely go away completely for long periods of time. Symptoms might be specific, such as regional pain and localized sensations, or general, such as fatigue, muscle aches, and malaise.
Those suffering from somatic symptom disorder experience recurring and obsessive feelings and thoughts concerning their well-being. Common examples include severe anxiety regarding potential ailments, misinterpreting normal sensations as indications of severe illness, believing that symptoms are dangerous and serious despite lacking medical basis, claiming that medical evaluations and treatment have been inadequate, fearing that engaging in physical activity will harm the body, and spending a disproportionate amount of time thinking about symptoms.
Somatic symptom disorder pertains to how an individual interprets and responds to symptoms as opposed to the symptoms themselves. Somatic symptom disorder can occur even in those who have an underlying chronic illness or medical condition. When somatic symptom disorder coexists with another medical ailment, people overreact to the ailment's adverse effects. They may be unresponsive toward treatment or unusually sensitive to drug side effects. Those with somatic symptom disorder who also have another physical ailment may experience significant impairment that is not expected from the condition.
Comorbidities
Most research that has examined additional mental illnesses or self-reported psychopathological symptoms among those with somatic symptom disorder identified significant rates of comorbidity with depression and anxiety, but other psychiatric comorbidities were rarely assessed. Major depression, generalized anxiety disorder, and phobias were the most common concurrent conditions.
In studies evaluating different physical ailments, 41.5% of people with semantic dementia, 11.2% of subjects with Alzheimer's disease, 25% of female patients suffering from non-HIV lipodystrophy, and 18.5% of patients with congestive heart failure fulfilled somatic symptom disorder criteria. The 25.6% of fibromyalgia patients who met the somatic symptom disorder criteria exhibited higher depression rates than those who did not. In one study, 28.8% of those with somatic symptom disorder had asthma, 23.1% had a heart condition, and 13.5% had gout, rheumatoid arthritis, or osteoarthritis.
Complications
Alcohol and drug abuse are frequently observed, and sometimes used to alleviate symptoms, increasing the risk of dependence on controlled substances. Other complications include poor functioning, problems with relationships, unemployment or difficulties at work, and financial stress due to excessive hospital visits.
Causes
Somatic symptoms can stem from a heightened awareness of sensations in the body, alongside the tendency to interpret those sensations as ailments. Studies suggest that risk factors of somatic symptoms include childhood neglect, sexual abuse, a chaotic lifestyle, and a history of substance and alcohol abuse. Psychosocial stressors, such as unemployment and reduced job performance, may also be risk factors. There could also be a genetic element. A study of monozygotic and dizygotic twins found that genetic components contributed 7% to 21% of somatic symptoms, with the remainder related to environmental factors. In another study, various single nucleotide polymorphisms were linked to somatic symptoms.
Psychological
Evidence suggests that along with more broad factors such as early childhood trauma or insecure attachment, negative psychological factors including catastrophizing, negative affectivity, rumination, avoidance, health anxiety, or a poor physical self-concept have a significant impact on the shift from unproblematic somatic symptoms to a severely debilitating somatic symptom disorder. Those who experience more negative psychological characteristics may regard medically unexplained symptoms to be more threatening and, therefore, exhibit stronger cognitive, emotional, and behavioral awareness of such symptoms. In addition, evidence suggests that negative psychological factors have a significant impact on the impairments and behaviors of people suffering from somatic symptom disorder, as well as the long-term stability of such symptoms.
Psychosocial
Psychosocial stresses and cultural norms influence how patients present to their physicians. Americans and Koreans participated in a study to measure somatization within a cultural context. It found that Korean participants used more body-related phrases when discussing their connections with stressful events, and experienced more sympathy when asked to read texts using somatic expressions to discuss their emotions.
Those raised in environments where expressing emotions during stages of development is discouraged face the highest risk of somatization. In primary care settings, studies indicated that somaticizing patients had much greater rates of unemployment and decreased occupational functioning than non-somaticizing patients.
Traumatic life events may cause the development of somatic symptom disorder. Most people with somatic symptom disorder originate from dysfunctional homes. A meta-analysis study revealed a connection between sexual abuse and functional gastrointestinal syndromes, chronic pain, non-epileptic seizures, and chronic pelvic pain.
Physiological
The hypothalamic–pituitary–adrenal (HPA) axis has a crucial role in the stress response. While the HPA axis may become more active in depression, there is evidence of hypocortisolism in somatization. In somatic symptom disorder, there is a negative association between elevated pain scores and levels of 5-hydroxyindoleacetic acid (5-HIAA) and tryptophan.
It has been suggested that proinflammatory processes may have a role in somatic symptom disorder, such as an increase in non-specific somatic symptoms and sensitivity to painful stimuli. Proinflammatory activation and anterior cingulate cortex activity have been shown to be linked in those who experienced stressful life events for an extended period of time. It is further claimed that increased activity of the anterior cingulate cortex, which acts as a bridge between attention and emotion, leads to increased sensitivity to unwanted stimuli and bodily sensations.
Pain is a multifaceted experience, not just a sensation. While nociception refers to afferent neural activity that transmits sensory information in response to stimuli that may cause tissue damage, pain is a conscious experience requiring cortical activity and can occur in the absence of nociception. Those with somatic symptoms are thought to amplify them through selective perception and to interpret them as signs of an ailment. This cognitive style is known as "somatosensory amplification". The term "central sensitization" has been coined to describe the neurobiological notion that those predisposed to somatization have an overly sensitive neural network. After central sensitization, harmless and mild stimuli activate the nociceptive-specific dorsal horn cells. As a result, pain is felt in response to stimuli that would not typically cause pain.
Neuroimaging evidence
Some literature reviews of cognitive–affective neuroscience on somatic symptom disorder suggested that catastrophization in patients with somatic symptom disorders tends to present a greater vulnerability to pain. The relevant brain regions include the dorsolateral prefrontal, insular, rostral anterior cingulate, premotor, and parietal cortices.
Genetic
Genetic investigations have suggested that modifications connected to the monoaminergic system in particular may be relevant, although a shared genetic source remains unknown. Researchers take into account the various processes involved in the development of somatic symptoms as well as the interactions between various biological and psychosocial factors. Given the high occurrence of trauma, particularly throughout childhood, it has been suggested that epigenetic changes could be explanatory. Another study found that the glucocorticoid receptor gene (NR3C1) is hypomethylated in those with somatic symptom disorder and in those with depression.
Diagnosis
Because those with somatic symptom disorder typically have comprehensive previous workups, minimal laboratory testing is encouraged. Excessive testing increases the possibility of false-positive results, which may result in further interventions, associated risks, and greater expenses. While some practitioners order tests to reassure patients, research shows that diagnostic testing fails to alleviate somatic symptoms.
Specific tests, such as thyroid function assessments, urine drug screens, restricted blood studies, and minimal radiological imaging, may be conducted to rule out somatization because of medical issues.
Somatic Symptom Scale – 8
The Somatic Symptom Scale – 8 (SSS-8) is a short self-report questionnaire that is used to evaluate somatic symptoms. It examines the perceived severity of common somatic symptoms. The SSS-8 is a condensed version of the well-known Patient Health Questionnaire-15 (PHQ-15).
On a five-point scale, respondents rate how much stomach or digestive issues, back discomfort, pain in the legs, arms, or joints, headaches, chest pain or shortness of breath, dizziness, feeling tired or having low energy, and trouble sleeping impacted them in the preceding seven days. Ratings are added together to provide a sum score that ranges from 0 to 32 points.
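A minimal sketch of the scoring arithmetic is given below; the item ratings of 0–4 are implied by the stated 0–32 range, and the example responses are invented.

    def sss8_sum_score(ratings):
        # Expects eight item ratings, each on a 0-4 scale; the total ranges 0-32.
        if len(ratings) != 8 or any(r not in range(5) for r in ratings):
            raise ValueError("SSS-8 requires eight ratings between 0 and 4")
        return sum(ratings)

    # Hypothetical set of responses to the eight items.
    print(sss8_sum_score([1, 0, 2, 3, 1, 2, 0, 4]))   # -> 13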
DSM-5
The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) modified the entry titled "somatoform disorders" to "somatic symptom and related disorders", and modified other diagnostic labels and criteria.
The DSM-5 criteria for somatic symptom disorder includes "one or more somatic symptoms which are distressing or result in substantial impairment of daily life". Additional criteria, often known as B criteria, include "excessive thoughts, feelings, or behaviors regarding somatic symptoms or corresponding health concerns manifested by disproportionate and persistent thoughts about the severity of one's symptoms". It continues: "Although any one somatic symptom might not be consistently present, one's state of being symptomatic is continuous (typically lasting more than 6 months)."
The DSM includes five distinct descriptions for somatic symptom disorder. These include somatic symptom disorder with predominant pain, formerly referred to as pain disorder, as well as classifications for mild, moderate, and severe symptoms.
International Classification of Diseases
The ICD-11 classifies somatic symptoms as "Bodily distress disorder". Bodily distress disorder is characterized by the presence of distressing bodily symptoms and excessive attention devoted to those symptoms. The ICD-11 further specifies that if another health condition is causing or contributing to the symptoms, the level of attention must be clearly excessive in relation to the nature and course of the condition.
Differential diagnosis
Somatic symptom disorder's widespread, non-specific symptoms may conceal and mimic the manifestations of other medical disorders, making diagnosis and therapy challenging. Adjustment disorder, body dysmorphic disorder, obsessive-compulsive disorder, and illness anxiety disorder may all exhibit excessive and exaggerated emotional and behavioral responses. Other functional diseases with unknown etiology, such as fibromyalgia and irritable bowel syndrome, tend not to present with excessive thoughts, feelings, or maladaptive behavior.
Somatic symptom disorder overlaps with illness anxiety disorder and conversion disorder. Illness anxiety disorder is characterized by an obsession with having or developing a dangerous, undetected medical ailment, despite the absence of bodily symptoms. Conversion disorder may present with one or more symptoms of various sorts. Motor symptoms involve weakness or paralysis; aberrant movements including tremor or dystonic movements; abnormal gait patterns; and abnormal limb posture. The presenting symptom in conversion disorder is loss of function, but in somatic symptom disorder, the emphasis is on the discomfort that specific symptoms produce. Conversion disorder often lacks the overwhelming thoughts, feelings, and behaviors that characterize somatic symptom disorder.
Treatment
Rather than focusing on treating the symptoms, the key objective is to support the patient in coping with symptoms, including both physical symptoms and psychological/behavioral (such as health anxiety and harmful behaviors).
Early psychiatric treatment is advised. Evidence suggests that SSRIs and SNRIs can lower pain perception. Because patients with somatic symptom disorder may have a low threshold for adverse reactions, medication should be started at the lowest possible dose and gradually increased to produce a therapeutic effect.
Cognitive-behavioral therapy has been linked to significant improvements in patient-reported function and somatic symptoms, a reduction in health-care expenses, and a reduction in symptoms of depression. CBT aims to help patients realize their ailments are not catastrophic and to enable them to gradually return to activities they previously engaged in, without fear of "worsening their symptoms". Consultation and collaboration with the primary care physician also demonstrated some effectiveness. Furthermore, brief psychodynamic interpersonal psychotherapy (PIT) for patients with somatic symptom disorder has been proven to improve the physical quality of life in patients with many, difficult-to-treat, medically unexplained symptoms over time.
CBT can help in some of the following ways:
Learn to reduce stress
Learn to cope with physical symptoms
Learn to deal with depression and other psychological issues
Improve quality of life
Reduce preoccupation with symptoms
Electroconvulsive therapy (ECT) has been used in treating somatic symptom disorder among the elderly; however, the results remain debatable, with some concerns around the side effects of using ECT. Overall, psychologists recommend addressing a common difficulty among patients with somatic symptom disorder in reading their own emotions. This may be a central feature of treatment, as is developing a close collaboration between the GP, the patient and the mental health practitioner.
Outlook
Somatic symptom disorder is typically persistent, with symptoms that wax and wane. Chronic limitations in general function, substantial psychological impairment, and a reduction in quality of life are all common. Some investigations suggest people can recover; the natural history of the illness implies that around 50% to 75% of patients with medically unexplained symptoms improve, whereas 10% to 30% deteriorate. Fewer physical symptoms and better baseline functioning are indicators of a better prognosis. A strong, positive relationship between the physician and the patient is crucial, and it should be accompanied by frequent, supportive visits to avoid the temptation to medicate or test when these interventions are not clearly necessary.
Epidemiology
Somatic symptom disorder affects 5% to 7% of the general population, with a higher female representation, and can arise throughout childhood, adolescence, or adulthood. Evidence suggests that the emergence of prodromal symptoms often begins in childhood and that symptoms fitting the criteria for somatic symptom disorder are common during adolescence. A community study of adolescents found that 5% had persistent distressing physical symptoms paired with psychological concerns. In the primary care patient population, the rate rises to around 17%. Patients with functional illnesses such as fibromyalgia, irritable bowel syndrome, and chronic fatigue syndrome have a greater prevalence of somatic symptom disorder. The reported frequency of somatic symptom disorder, as defined by DSM-5 criteria, ranges from 25 to 60% among these patients.
There are cultural differences in the prevalence of somatic symptom disorder. For example, somatic symptom disorder and symptoms were found to be significantly more common in Puerto Rico. In addition the diagnosis is also more prevalent among African Americans and those with less than a high school education or lower socioeconomic status.
There is usually co-morbidity with other psychological disorders, particularly mood disorders or anxiety disorders. Research also showed comorbidity between somatic symptom disorder and personality disorders, especially antisocial, borderline, narcissistic, histrionic, avoidant, and dependent personality disorder.
About 10–20 percent of female first-degree relatives also have somatic symptom disorder, and male relatives have increased rates of alcoholism and sociopathy.
History
Somatization is an idea that physicians have been attempting to comprehend since the dawn of time. The Egyptians and Sumerians were reported to have utilized the notions of melancholia and hysteria as early as 2600 BC. For many years, somatization was used in conjunction with the terms hysteria, melancholia, and hypochondriasis.
During the 17th century, knowledge of the central nervous system grew, giving rise to the notion that numerous inexplicable illnesses could be linked to the brain. Thomas Willis, widely regarded as the father of neurology, recognized hysteria in women and hypochondria in males as brain disorders. Thomas Sydenham contributed significantly to the belief that hysteria and hypochondria are mental rather than physical illnesses. The term "English Malady" was used by George Cheyne to denote that hysteria and hypochondriasis are brain and/or mind-related disorders.
Wilhelm Stekel, an Austrian psychoanalyst, was the first to introduce the term somatization, and Paul Briquet was the first to characterize what is now known as somatic symptom disorder. Briquet reported patients who had been unwell for most of their lives and complained of a variety of symptoms from various organ systems, and whose symptoms persisted despite many appointments, hospitalizations, and tests. Somatic symptom disorder was later dubbed "Briquet syndrome" in his honor. Over time, the concept of hysteria was applied to a personality or character type, conversion responses, phobia, and anxiety accompanying psychoneuroses, and its incorporation into everyday English as a negative word led to a distancing from the concept.
Controversy
Somatic symptom disorder has long been a contentious diagnosis because it was based solely on negative criteria, namely the absence of a medical explanation for the presenting physical problems. As a result, any person suffering from a poorly understood illness may meet the criteria for this psychological diagnosis, regardless of whether they exhibit psychiatric symptoms in the traditional sense.
Misdiagnosis
In the opinion of Allen Frances, chair of the DSM-IV task force, the DSM-5's somatic symptom disorder brings with it a risk of mislabeling a sizable proportion of the population as mentally ill.
See also
Conversion disorder
Jurosomatic illness
Munchausen syndrome
Nocebo
Psychosomatic medicine
Psychoneuroimmunology
Functional neurological disorder
Empyema | An empyema (; ) is a collection or gathering of pus within a naturally existing anatomical cavity. The term is most commonly used to refer to pleural empyema, which is empyema of the pleural cavity. It is similar or the same in meaning as an abscess, but the context of use may sometimes be different. For instance, appendicular abscess is also formed within a natural cavity as the definition of empyema.
Empyema most commonly occurs as a complication of pneumonia but can also result from other infections or conditions that lead to the collection of infected fluid in a body cavity.
Classification
Empyema occurs in:
the pleural cavity (pleural empyema also known as pyothorax)
the thoracic cavity
the uterus (pyometra)
the appendix (appendicitis)
the meninges (subdural empyema)
the joints (septic arthritis)
the gallbladder
Diagnosis
Chest X-rays or computed tomography (CT) scans can reveal the presence of fluid within the pleural space and help assess its characteristics. Once a fluid-filled cavity has been identified, it is often partially or fully drained with a needle, so that the fluid may be analyzed. This helps determine whether the fluid is infected and allows for the identification of the causative microorganisms. Blood tests may also be performed, which can identify both an elevated neutrophil count, which is indicative of an infection, or bacteremia.
In addition to CT, suspected cases of empyema in and around the brain are often subjected to more rigorous neuroimaging techniques, including MRI. In these cases, fluid samples are obtained via stereotactic needles rather than lumbar puncture, because unlike most cases of meningitis, a lumbar puncture will most often not reveal anything about the causative microorganisms.
Chemical hazard | Chemical hazards are hazards present in hazardous chemicals and hazardous materials. Exposure to certain chemicals can cause acute or long-term adverse health effects. Chemical hazards are usually classified separately from biological hazards (biohazards). Chemical hazards are classified into groups that include asphyxiants, corrosives, irritants, sensitizers, carcinogens, mutagens, teratogens, reactants, and flammables. In the workplace, exposure to chemical hazards is a type of occupational hazard. The use of personal protective equipment may substantially reduce the risk of adverse health effects from contact with hazardous materials.
Long-term exposure to chemical hazards such as silica dust, engine exhausts, tobacco smoke, and lead (among others) have been shown to increase risk of heart disease, stroke, and high blood pressure.
Types of chemical hazard
Routes of exposure
The most common exposure route to chemicals in the work environment is through inhalation. Gas, vapour, mist, dust, fumes, and smoke can all be inhaled. Those with occupations involving physical work may inhale higher levels of chemicals if working in an area with contaminated air. This is because workers who do physical work will exchange over 10,000 litres of air over an 8-hour day, while workers who do not do physical work will exchange only 2,800 litres. If the air is contaminated in the workplace, more air exchange will lead to the inhalation of higher amounts of chemicals.
Chemicals may be ingested when food or drink is contaminated by unwashed hands, clothing, or poor handling practices. Ingestion of a chemical hazard occurs when the chemical is absorbed in the digestive tract, which requires that food or drink has come into contact with the toxic chemical, either directly or indirectly. When food or drink is brought into an environment where harmful chemicals are unsealed, chemical vapors or particles may contaminate the food or drink. A more direct form of chemical ingestion is consuming the chemical itself; this rarely happens, but if chemical containers are poorly labelled or not secured properly, an accident can occur in which someone mistakes the chemical for something it is not.
Chemical exposure to the skin is a common workplace injury and may also occur in domestic situations with chemicals such as bleach or drain-cleaners. The exposure of chemicals to the skin most often results in local irritation to the exposed area. In some exposures, the chemical will be absorbed through the skin and will result in poisoning. The eyes have a strong sensitivity to chemicals, and are consequently an area of high concern for chemical exposure. Chemical exposure to the eyes results in irritation and may result in burns and vision loss.
Injection is an uncommon method of chemical exposure in the workplace. Chemicals can be injected into the skin when a worker is punctured by a sharp object, such as a needle. Chemical exposure through injection may result in the chemical entering directly into the bloodstream.
Symbols of chemical hazards
Hazard pictograms are a type of labeling system that alerts people at a glance that there are hazardous chemicals present. The symbols help identify whether the chemicals that are going to be in use may potentially cause physical harm, or harm to the environment. The 9 symbols are:
Explosive (exploding bomb)
Flammable (flame)
Oxidizing (flame above a circle)
Corrosive (corrosion of table and hand)
Acute toxicity (skull and crossbones)
Hazardous to environment (dead tree and fish)
Health hazard/hazardous to the ozone layer (exclamation mark)
Serious health hazard (cross on a human silhouette)
Gas under pressure (gas cylinder)
These pictograms are also subdivided into classes and categories for each classification. The assignment for each chemical depends on its type and severity. The standard set of 9 hazard pictograms was published and distributed as a regulatory requirement through the efforts of the United Nations via the Globally Harmonized System of Classification and Labelling of Chemicals.
Controlling chemical exposure
Elimination and substitution
Chemical exposure is estimated to cause approximately 190,000 illnesses and 50,000 deaths of workers annually. The link between a chemical exposure and a subsequent illness or death is often not recognized; the majority of these illnesses and deaths are therefore thought to be caused by a lack of knowledge or awareness concerning the dangers of chemicals. The best method of controlling chemical exposure within the workplace is the elimination or substitution of all chemicals that are thought or known to cause illness or death.
Engineering controls
Although the elimination and substitution of harmful chemicals is the best-known method for controlling chemical exposure, there are other methods that can be implemented to diminish exposure. The implementation of engineering controls is one example. When engineering controls are implemented, a physical change is made to the work environment that will eliminate or reduce the risk of chemical exposure. An example of an engineering control is the enclosure or isolation of the process that creates the chemical hazard.
Administrative controls and safe work practices
If the process that creates the chemical hazard cannot be enclosed or isolated, the next best method is the implementation of administrative and work practices controls. This is the establishment of administrative and work practices that will reduce the amount of time and how often the workers will be exposed to the chemical hazard. An example of administrative and work practices controls is the establishment of work schedules in which workers have rotating job assignments. This will ensure that all workers have limited exposure to chemical hazards.
Personal protective equipment
Employers should provide personal protective equipment (PPE) to protect their workers from chemicals used within the workplace. The use of PPE prevents workers from being exposed to chemicals through the routes of exposure: inhalation, absorption through the skin or eyes, ingestion, and injection. For example, wearing a respirator prevents exposure to chemicals through inhalation.
First aid
In case of an emergency, it is recommended to understand first aid procedures in order to minimize any damage. Different types of chemicals can cause a variety of damage. Most sources agree that it is best to rinse any contacted skin or eye with water immediately. Currently, there is insufficient evidence of how long the rinsing should be done, as the degree of impacts will vary for substances such as corrosive chemicals.
Transporting the affected person to a health care facility may be important, depending on condition. If the victim needs to be transported before the recommended flush time, then flushing should be done during the transportation process. Some chemical manufacturers may state the specific type of cleansing agent that is recommended.
Long-term risks
Cancers
Cardiovascular disease
A 2017 SBU report found evidence that workplace exposure to silica dust, engine exhaust or welding fumes is associated with heart disease. Associations also exist for exposure to arsenic, benzopyrenes, lead, dynamite, carbon disulfide, carbon monoxide, metalworking fluids and occupational exposure to tobacco smoke. Working with the electrolytic production of aluminium, or the production of paper when the sulphate pulping process is used, is associated with heart disease. An association was also found between heart disease and exposure to compounds which are no longer permitted in certain work environments, such as phenoxy acids containing TCDD (dioxin) or asbestos.
Workplace exposure to silica dust or asbestos is also associated with pulmonary heart disease. There is evidence that workplace exposure to lead, carbon disulphide, or phenoxy acids containing TCDD, as well as working in an environment where aluminium is being electrolytically produced, are associated with stroke.
Reproductive and developmental disorders
Pesticides and carbon disulfide, among many other chemicals, have been linked to disruption of endocrine balance in the brain and ovaries. Contact with harmful chemicals during the first few months of pregnancy, or even later, has been connected to some miscarriages and can disturb the menstrual cycle to the point of blocking ovulation. Chemicals that induce health problems during pregnancy may also affect the infant or fetus.
See also
Health hazard – Hazards that would affect the health of exposed persons.
Process safety – Discipline dealing with the study and management of fires, explosions and toxic gas clouds from hazardous materials in process plants.
References | 0.77073 | 0.990411 | 0.763339 |
Fluid replacement | Fluid replacement or fluid resuscitation is the medical practice of replenishing bodily fluid lost through sweating, bleeding, fluid shifts or other pathologic processes. Fluids can be replaced with oral rehydration therapy (drinking), intravenous therapy, rectally such as with a Murphy drip, or by hypodermoclysis, the direct injection of fluid into the subcutaneous tissue. Fluids administered by the oral and hypodermic routes are absorbed more slowly than those given intravenously.
By mouth
Oral rehydration therapy (ORT) is a simple treatment for dehydration associated with diarrhea, particularly gastroenteritis/gastroenteropathy, such as that caused by cholera or rotavirus. ORT consists of a solution of salts and sugars which is taken by mouth. For most mild to moderate dehydration in children, the preferable treatment in an emergency department is ORT over intravenous replacement of fluid.
It is used around the world, but is most important in the developing world, where it saves millions of children a year from death due to diarrhea—the second leading cause of death in children under five.
Intravenous
The same precautions should be taken in administering resuscitation fluid as in prescribing drugs. Fluid replacement should be considered in the context of the body's complex physiology; therefore, fluid requirements should be adjusted from time to time in those who are severely ill.
In severe dehydration, intravenous fluid replacement is preferred, and may be lifesaving. It is especially useful where there is depletion of fluid both in the intracellular space and the vascular spaces.
Fluid replacement is also indicated in fluid depletion due to hemorrhage, extensive burns and excessive sweating (as from a prolonged fever), and prolonged diarrhea (cholera).
During surgical procedures, fluid requirement increases by increased evaporation, fluid shifts, or excessive urine production, among other possible causes. Even a small surgery may cause a loss of approximately 4 ml/kg/hour, and a large surgery approximately 8 ml/kg/hour, in addition to the basal fluid requirement.
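To make the arithmetic concrete, a rough sketch of the additional intraoperative loss implied by these rates is shown below; the per-hour rates are those quoted above, while the function name, structure, and example weight are illustrative assumptions rather than part of any clinical protocol:

def intraoperative_loss_ml(weight_kg, hours, major_surgery):
    """Estimate extra fluid loss during surgery, on top of basal requirements.

    Uses the approximate rates quoted above: ~4 ml/kg/hour for a small
    procedure and ~8 ml/kg/hour for a large one (illustrative only).
    """
    rate_ml_per_kg_per_hour = 8.0 if major_surgery else 4.0
    return rate_ml_per_kg_per_hour * weight_kg * hours

# Example: a hypothetical 70 kg adult undergoing a 3-hour major operation
# would lose roughly 8 * 70 * 3 = 1680 ml in addition to basal needs.
print(intraoperative_loss_ml(70, 3, major_surgery=True))  # 1680.0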
The table to the right shows daily requirements for some major fluid components. If these cannot be given enterally, they may need to be given entirely intravenously. If continued long-term (more than approx. 2 days), a more complete regimen of total parenteral nutrition may be required.
Types
Resuscitation fluid can be broadly classified into: albumin solution, semisynthetic colloids, and crystalloids.
The types of intravenous fluids used in fluid replacement are generally within the class of volume expanders. Physiologic saline solution, or 0.9% sodium chloride solution, is often used because it is isotonic, and therefore will not cause potentially dangerous fluid shifts. Also, if it is anticipated that blood will be given, normal saline is used because it is the only fluid compatible with blood administration.
Blood transfusion is the only approved fluid replacement capable of carrying oxygen; some oxygen-carrying blood substitutes are under development.
Lactated Ringer's solution is another isotonic crystalloid solution, and it is designed to match blood plasma as closely as possible. If given intravenously, isotonic crystalloid fluids are distributed to the intravascular and interstitial spaces.
Plasmalyte is another isotonic crystalloid.
Blood products, non-blood products and combinations are used in fluid replacement, including colloid and crystalloid solutions. Colloids are increasingly used but they are more expensive than crystalloids. A systematic review found no evidence that resuscitation with colloids, instead of crystalloids, reduces the risk of death in patients with trauma or burns, or following surgery.
Maintenance
Maintenance fluids are used in those who are currently normally hydrated but unable to drink enough to maintain this hydration. In children isotonic fluids are generally recommended for maintaining hydration. Potassium chloride and dextrose should be included. The amount of maintenance IV fluid required in 24 hours is based on the weight of the patient using the Holliday-Segar formula. For weights ranging from 0 to 10 kg, the caloric expenditure is 100 cal/kg/day; from 10 to 20 kg the caloric expenditure is 1000 cal plus 50 cal/kg for each kilogram of body weight more than 10; over 20 kg the caloric expenditure is 1500 cal plus 20 cal/kg for each kilogram more than 20. More complex calculations (e.g., those using body surface area) are rarely required.
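The weight-tiered rule above translates directly into a short calculation. The sketch below assumes, as is conventional, that the daily maintenance volume in millilitres matches the caloric figure one-to-one; the function name and example weights are illustrative, not taken from the source:

def holliday_segar_daily_ml(weight_kg):
    """Daily maintenance fluid volume (ml per 24 h) by the Holliday-Segar tiers:
    0-10 kg: 100 ml/kg; 10-20 kg: 1000 ml + 50 ml/kg over 10 kg;
    >20 kg: 1500 ml + 20 ml/kg over 20 kg.
    """
    if weight_kg <= 10:
        return 100 * weight_kg
    if weight_kg <= 20:
        return 1000 + 50 * (weight_kg - 10)
    return 1500 + 20 * (weight_kg - 20)

# Examples: an 8 kg infant needs about 800 ml/day; a 25 kg child about
# 1500 + 20 * 5 = 1600 ml/day.
print(holliday_segar_daily_ml(8), holliday_segar_daily_ml(25))  # 800 1600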
Procedure
It is important to achieve a fluid status sufficient to avoid low urine production. The threshold for low urine output differs between infants, children, and adults (see low urine production). The Parkland formula is not perfect, and fluid therapy will need to be titrated to hemodynamic values and urine output.
The speed of fluid replacement may differ between procedures. For example, the planning of fluid replacement for burn patients is based on the Parkland formula (4 ml of lactated Ringer's × weight in kg × % total body surface area burned = volume of fluid, in ml, to give over 24 hours). The Parkland formula gives the minimum amount to be given in 24 hours. Half of the volume is given over the first eight hours after the time of the burn (not from the time of admission to hospital) and the other half over the next 16 hours. In dehydration, two-thirds of the deficit may be given in 4 hours and the rest during approximately 20 hours.
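The Parkland calculation and its 8-hour/16-hour split can likewise be written out; the function names below are assumptions for illustration, and the result is the minimum 24-hour volume described above, not a complete burn-resuscitation protocol:

def parkland_total_ml(weight_kg, tbsa_burned_pct):
    """Minimum lactated Ringer's volume over 24 hours: 4 ml x kg x %TBSA burned."""
    return 4 * weight_kg * tbsa_burned_pct

def parkland_schedule(weight_kg, tbsa_burned_pct):
    total = parkland_total_ml(weight_kg, tbsa_burned_pct)
    # Half is given in the first 8 hours, timed from the burn itself
    # (not from hospital admission); the remaining half over the next 16 hours.
    return {"total_ml": total, "first_8h_ml": total / 2, "next_16h_ml": total / 2}

# Example: a hypothetical 70 kg patient with 30% TBSA burns needs at least
# 4 * 70 * 30 = 8400 ml in 24 hours: 4200 ml in the first 8 hours,
# then 4200 ml over the following 16 hours.
print(parkland_schedule(70, 30))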
Clinical uses
Septic shock
Fluid replacement in patients with septic shock can be divided into four stages as shown below:
Resuscitation phase - The goal of this phase is to correct the hypotension. Intravenous crystalloid is the first choice of therapy, and the Surviving Sepsis Campaign recommends 30 ml/kg of fluid resuscitation in this phase; earlier fluid resuscitation is associated with improved survival. Mean arterial pressure should be targeted at more than 65 mmHg. For early goal-directed therapy (EGDT), fluids should be administered within the first six hours of septic shock until central venous pressure (CVP) reaches between 8 and 12 mmHg, with improvement of blood lactate levels, central venous oxygen saturation > 70%, and urine output ≥ 0.5 ml/kg/hour (these figures are restated in the sketch after this list). Higher mean arterial pressures can be targeted in patients with chronic hypertension in order to reduce the use of renal replacement therapy. If fluid replacement is inadequate in raising the blood pressure, vasopressors have to be used, although there is no definite timing for starting them: initiation of vasopressors within the first hour of sepsis can lead to poor organ perfusion and poor organ function, while late initiation can lead to organ damage and an increased risk of death. Frequent monitoring of the patient's fluid status is required to prevent fluid overload.
Optimisation phase - In this phase, the goal is to increase oxygen delivery to the tissues in order to meet their oxygen demands. Oxygen delivery can be improved by increasing the stroke volume of the heart (through fluid challenge), the haemoglobin concentration (through blood transfusion), and the arterial oxygen saturation (through oxygen therapy). A fluid challenge is the procedure of giving a large amount of fluid in a short period of time. However, 50% of patients do not respond to a fluid challenge, and additional fluid challenges only cause fluid overload. There is no gold standard for determining fluid responsiveness. Ways of determining fluid responsiveness and the end point of fluid resuscitation include: central venous oxygen saturation (ScvO2), the passive leg raising test, ultrasound measurements of pulse pressure variation, stroke volume variation, and respiratory variations at the superior vena cava, inferior vena cava and internal jugular vein.
Stabilisation phase - In this stage, tissue perfusion starts to stabilise and the need for fluids or vasopressors begins to decrease. Additional fluid challenges can be given only to those who are responsive. Maintenance fluid can be stopped if the perfusion status is adequate.
Evacuation phase - In this phase, the goal is to remove excess fluid from those who have achieved adequate tissue perfusion. A negative fluid balance is associated with a decreased risk of death. However, there is no consensus regarding the optimal timing of fluid removal, and the risk of reduced perfusion following fluid removal remains inconclusive. A reasonable approach is to begin fluid restriction when tissue perfusion is adequate, and to consider diuretic treatment for those with clinical evidence of fluid overload and a positive fluid balance. According to the Fluid and Catheter Treatment Trial (FACTT) protocol, those with a mean arterial pressure of more than 60 mmHg who have been vasopressor-free for more than 12 hours and have adequate urine output can be given furosemide to target a central venous pressure of less than 4 mmHg and a pulmonary artery occlusion pressure (PAOP) of less than 8 mmHg. Levels of brain natriuretic peptide can also be used to guide fluid removal.
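The numerical targets quoted in the resuscitation phase above lend themselves to a compact sketch. The helper functions below simply restate the 30 ml/kg bolus and the EGDT endpoints as written; the names and structure are assumptions, and the lactate criterion, which is a trend rather than a single threshold, is left out:

def initial_sepsis_bolus_ml(weight_kg):
    """Initial crystalloid volume at 30 ml/kg, as recommended for the resuscitation phase."""
    return 30 * weight_kg

def egdt_targets_met(cvp_mmhg, map_mmhg, scvo2_pct, urine_ml_per_kg_per_hr):
    """Check the early goal-directed therapy endpoints quoted above."""
    return (8 <= cvp_mmhg <= 12
            and map_mmhg > 65
            and scvo2_pct > 70
            and urine_ml_per_kg_per_hr >= 0.5)

# Example: a hypothetical 70 kg patient receives an initial 30 * 70 = 2100 ml bolus,
# after which the endpoints are reassessed.
print(initial_sepsis_bolus_ml(70))        # 2100
print(egdt_targets_met(10, 70, 72, 0.6))  # True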
Acute kidney injury
Sepsis accounts for 50% of acute kidney injury patients in the intensive care unit (ICU). Intravenous crystalloid is recommended as the first-line therapy to prevent or treat acute kidney injury (AKI), in preference to colloids, which increase the risk of AKI. 4% human albumin may be used in cirrhotic patients with spontaneous bacterial peritonitis, as it can reduce the rate of kidney failure and improve survival. However, fluid overload can exacerbate acute kidney injury. The use of diuretics does not prevent or treat AKI, even with the help of renal replacement therapy. The 2012 KDIGO (Kidney Disease: Improving Global Outcomes) guidelines stated that diuretics should not be used to treat AKI, except for the management of volume overload. In acute respiratory distress syndrome (ARDS), conservative fluid management is associated with better oxygenation and lung function, and with less need for dialysis in the first 60 days of hospitalization, when compared with liberal fluid management.
Surgery (perioperative fluid therapy)
Managing fluids during major surgical procedures is an important aspect of surgical care. The goal of fluid therapy is to maintain fluid and electrolyte levels and to restore levels that may be depleted. Intravenous fluid therapy is used when a person cannot control their own fluid intake, and it can also reduce nausea and vomiting. Goal-directed fluid therapy is a perioperative strategy in which fluids are administered continuously and the amounts given are based on the person's physiological and haemodynamic (blood flow) measurements. A second approach, called perioperative restrictive fluid therapy (also known as a near-zero or zero-balance perioperative fluid approach), recommends lower amounts of fluid during surgery, replacing fluid only to cover basal requirements or losses due to the surgical procedure or bleeding. The effectiveness of goal-directed fluid therapy compared to restrictive fluid therapy is not clear, as the evidence comparing the two approaches has very low certainty.
Fluid overload
Fluid overload is defined as an increase in body weight of over 10%. Aggressive fluid resuscitation can lead to fluid overload, which can damage multiple organs: cerebral oedema, which leads to delirium; pulmonary oedema and pleural effusion, which lead to respiratory distress; myocardial oedema and pericardial effusion, which lead to impaired contractility of the heart; gastrointestinal oedema, which leads to malabsorption; hepatic congestion, which leads to cholestasis and acute kidney injury; and tissue oedema, which leads to poor wound healing. All these effects can cause disability and death, and increase hospitalisation costs.
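Under the 10% definition above, serial weights give a simple screen for fluid overload; the sketch below (names and example figures are illustrative) applies that single threshold and nothing more:

def is_fluid_overloaded(baseline_weight_kg, current_weight_kg):
    """Fluid overload defined here as a body-weight increase of more than 10%."""
    gain_fraction = (current_weight_kg - baseline_weight_kg) / baseline_weight_kg
    return gain_fraction > 0.10

# Example: a patient admitted at 70 kg who now weighs 78 kg has gained about
# 11.4% of body weight, meeting the definition above.
print(is_fluid_overloaded(70, 78))  # True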
Fluid overload causes cardiac dilation, which increases ventricular wall stress and leads to mitral insufficiency and cardiac dysfunction. Pulmonary hypertension can lead to tricuspid insufficiency. Excess administration of fluid causes accumulation of extracellular fluid, leading to pulmonary oedema and a lack of oxygen delivery to tissues. The use of mechanical ventilation in such cases can cause barotrauma, infection, and oxygen toxicity, leading to acute respiratory distress syndrome. Fluid overload also stretches the arterial endothelium, which damages the glycocalyx, leading to capillary leakage and worsening of the acute kidney injury.
Other treatments
Proctoclysis, an enema, is the administration of fluid into the rectum as a hydration therapy. It is sometimes used for very ill persons with cancer. The Murphy drip is a device by means of which this treatment may be performed.
See also
Hypodermoclysis
Intravenous therapy
Hypovolemia
Third spacing
Pentastarch
Passive leg raising test
References
Medical treatments
Medical emergencies
Dehydration
Environmental health | Environmental health is the branch of public health concerned with all aspects of the natural and built environment affecting human health. To effectively control factors that may affect health, the requirements that must be met to create a healthy environment must be determined. The major sub-disciplines of environmental health are environmental science, toxicology, environmental epidemiology, and environmental and occupational medicine.
Definitions
WHO definitions
Environmental health was defined in a 1989 document by the World Health Organization (WHO) as:
Those aspects of human health and disease that are determined by factors in the environment. It also refers to the theory and practice of assessing and controlling factors in the environment that can potentially affect health.
A 1990 WHO document states that environmental health, as used by the WHO Regional Office for Europe, "includes both the direct pathological effects of chemicals, radiation and some biological agents, and the effects (often indirect) on health and well being of the broad physical, psychological, social and cultural environment, which includes housing, urban development, land use and transport."
The WHO website on environmental health states that "Environmental health addresses all the physical, chemical, and biological factors external to a person, and all the related factors impacting behaviours. It encompasses the assessment and control of those environmental factors that can potentially affect health. It is targeted towards preventing disease and creating health-supportive environments. This definition excludes behaviour not related to environment, as well as behaviour related to the social and cultural environment, as well as genetics."
The WHO has also defined environmental health services as "those services which implement environmental health policies through monitoring and control activities. They also carry out that role by promoting the improvement of environmental parameters and by encouraging the use of environmentally friendly and healthy technologies and behaviors. They also have a leading role in developing and suggesting new policy areas."
Other considerations
The term environmental medicine may be seen as a medical specialty, or branch of the broader field of environmental health. Terminology is not fully established, and in many European countries they are used interchangeably.
Children's environmental health is the academic discipline that studies how environmental exposures in early life—chemical, biological, nutritional, and social—influence health and development in childhood and across the entire human life span.
Other terms referring to or concerning environmental health include environmental public health and health protection.
Disciplines
Five basic disciplines generally contribute to the field of environmental health: environmental epidemiology, toxicology, exposure science, environmental engineering, and environmental law. Each of these five disciplines contributes different information to describe problems and solutions in environmental health. However, there is some overlap among them.
Environmental epidemiology studies the relationship between environmental exposures (including exposure to chemicals, radiation, microbiological agents, etc.) and human health. Observational studies, which simply observe exposures that people have already experienced, are common in environmental epidemiology because humans cannot ethically be exposed to agents that are known or suspected to cause disease. While the inability to use experimental study designs is a limitation of environmental epidemiology, this discipline directly observes effects on human health rather than estimating effects from animal studies. By examining specific populations or communities exposed to different ambient environments, environmental epidemiology aims to clarify the relationship between physical, biological, or chemical factors in the external environment, broadly conceived, and human health.
Toxicology studies how environmental exposures lead to specific health outcomes, generally in animals, as a means to understand possible health outcomes in humans. Toxicology has the advantage of being able to conduct randomized controlled trials and other experimental studies because they can use animal subjects. However, there are many differences in animal and human biology, and there can be a lot of uncertainty when interpreting the results of animal studies for their implications for human health.
Exposure science studies human exposure to environmental contaminants by both identifying and quantifying exposures. Exposure science can be used to support environmental epidemiology by better describing environmental exposures that may lead to a particular health outcome, identify common exposures whose health outcomes may be better understood through a toxicology study, or can be used in a risk assessment to determine whether current levels of exposure might exceed recommended levels. Exposure science has the advantage of being able to very accurately quantify exposures to specific chemicals, but it does not generate any information about health outcomes like environmental epidemiology or toxicology.
Environmental engineering applies scientific and engineering principles for protection of human populations from the effects of adverse environmental factors; protection of environments from potentially deleterious effects of natural and human activities; and general improvement of environmental quality.
Environmental law includes the network of treaties, statutes, regulations, common and customary laws addressing the effects of human activity on the natural environment.
Information from epidemiology, toxicology, and exposure science can be combined to conduct a risk assessment for specific chemicals, mixtures of chemicals or other risk factors to determine whether an exposure poses significant risk to human health (exposure would likely result in the development of pollution-related diseases). This can in turn be used to develop and implement environmental health policy that, for example, regulates chemical emissions, or imposes standards for proper sanitation. Actions of engineering and law can be combined to provide risk management to minimize, monitor, and otherwise manage the impact of exposure to protect human health to achieve the objectives of environmental health policy.
Concerns
Environmental health addresses all human-health-related aspects of the natural environment and the built environment. Environmental health concerns include:
Biosafety.
Disaster preparedness and response.
Food safety, including in agriculture, transportation, food processing, wholesale and retail distribution and sale.
Housing, including substandard housing abatement and the inspection of jails and prisons.
Childhood lead poisoning prevention.
Land use planning, including smart growth.
Liquid waste disposal, including city waste water treatment plants and on-site waste water disposal systems, such as septic tank systems and chemical toilets.
Medical waste management and disposal.
Occupational health and industrial hygiene.
Radiological health, including exposure to ionizing radiation from X-rays or radioactive isotopes.
Recreational water illness prevention, including from swimming pools, spas and ocean and freshwater bathing places.
Solid waste management, including landfills, recycling facilities, composting and solid waste transfer stations.
Toxic chemical exposure whether in consumer products, housing, workplaces, air, water or soil.
Toxins from molds and algal blooms.
Vector control, including the control of mosquitoes, rodents, flies, cockroaches and other animals that may transmit pathogens.
According to recent estimates, about 5 to 10% of disability-adjusted life years (DALYs) lost are due to environmental causes in Europe. By far the most important factor is fine particulate matter pollution in urban air. Similarly, environmental exposures have been estimated to contribute to 4.9 million (8.7%) deaths and 86 million (5.7%) DALYs globally. In the United States, Superfund sites created by various companies have been found to be hazardous to human and environmental health in nearby communities. It was this perceived threat, raising the specter of miscarriages, mutations, birth defects, and cancers, that most frightened the public.
Air quality
Air quality includes ambient outdoor air quality and indoor air quality. Large concerns about air quality include environmental tobacco smoke, air pollution by forms of chemical waste, and other concerns.
Outdoor air quality
Air pollution is globally responsible for over 6.5 million deaths each year. Air pollution is the contamination of an atmosphere due to the presence of substances that are harmful to the health of living organisms, the environment or climate. These substances concern environmental health officials since air pollution is often a risk-factor for diseases that are related to pollution, like lung cancer, respiratory infections, asthma, heart disease, and other forms of respiratory-related illnesses. Reducing air pollution, and thus developing air quality, has been found to decrease adult mortality.
Common products responsible for emissions include road traffic, energy production, household combustion, aviation and motor vehicles, and other forms of pollutants. These pollutants are responsible for the burning of fuel, which can release harmful particles into the air that humans and other living organisms can inhale or ingest.
Air pollution is associated with adverse health effects like respiratory and cardiovascular diseases, cancer, related illnesses, and even death. The risk of air pollution is determined by the pollutant's hazard and the amount of exposure that affects a person. For example, a child who plays outdoor sports will have a higher likelihood of outdoor air pollution exposure than an adult who tends to spend more time indoors, whether at work or elsewhere. Environmental health officials work to detect individuals who are at higher risks of consuming air pollution, work to decrease their exposure, and detect risk factors present in communities.
However, research by Ernesto Sánchez-Triana on Pakistan shows how such exposure can be reduced. After identifying the main sources of air pollution (mobile sources such as heavy-duty vehicles and motorized two- and three-wheelers; stationary sources such as power plants and the burning of waste; and natural dust), the country implemented a clean air policy targeting the road transport sector, which is responsible for 85% of total emissions of particulate matter smaller than 2.5 microns (PM2.5) and 72% of particulate matter smaller than 10 microns (PM10). The most successful policies were:
Improving fuel quality by reducing the sulfur content in diesel
Converting diesel minibuses and city delivery vans to compressed natural gas (CNG)
Installing diesel oxidation catalysts (DOCs) on existing large buses and trucks
Converting existing two-stroke rickshaws to four-stroke CNG engines
Introducing low-sulfur fuel oil (1% sulfur) to major users located in Karachi
Indoor air quality
Household air pollution contributes to diseases that kill almost 4.3 million people every year. Indoor air pollution is a risk factor for diseases like heart disease, pulmonary disease, stroke, pneumonia, and other associated illnesses. For vulnerable populations, such as children and the elderly, who spend large amounts of their time indoors, poor indoor air quality can be dangerous.
Burning fuels like coal or kerosene inside homes can cause dangerous chemicals to be released into the air. Dampness and mold in houses can cause diseases, but few studies have been performed on mold in schools and workplaces. Environmental tobacco smoke is considered a leading contributor to indoor air pollution, since exposure to second- and third-hand smoke is a common risk factor. Tobacco smoke contains over 60 carcinogens, of which 18% are known human carcinogens. Exposure to these chemicals can lead to exacerbation of asthma, the development of cardiovascular and cardiopulmonary diseases, and an increased likelihood of developing cancer.
Climate change and its effects on health
Climate change makes extreme weather events more likely, including ozone smog events, dust storms, and elevated aerosol levels, all due to extreme heat, drought, winds, and rainfall. These extreme weather events can increase the likelihood of undernutrition, mortality, food insecurity, and climate-sensitive infectious diseases in vulnerable populations. The effects of climate change are felt by the whole world, but disproportionately affect disadvantaged populations who are subject to climate change vulnerability.
Climate impacts can affect exposure to water-borne pathogens through increased rates of runoff, frequent heavy rains, and the effects of severe storms. Extreme weather events and storm surges can also exceed the capacity of water infrastructure, which can increase the likelihood that populations will be exposed to these contaminants. Exposure to these contaminants are more likely in low-income communities, where they have inadequate infrastructure to respond to climate disasters and are less likely to recover from infrastructure damage as quickly.
Problems like the loss of homes, loved ones, and previous ways of life, are often what people face after a climate disaster occurs. These events can lead to vulnerability in the form of housing affordability stress, lower household income, lack of community attachment, grief, and anxiety around another disaster occurring.
Environmental racism
Certain groups of people can be put at a higher risk for environmental hazards like air, soil and water pollution. This often happens due to marginalization, economic and political processes, and racism. Environmental racism uniquely affects different groups globally, however generally the most marginalized groups of any region are affected. These marginalized groups are frequently put next to pollution sources like major roadways, toxic waste sites, landfills, and chemical plants. In a 2021 study, it was found that racial and ethnic minority groups in the United States are exposed to disproportionately high levels of particulate air pollution. Racial housing policies that exist in the United States continue to exacerbate racial minority exposure to air pollution at a disproportionate rate, even as overall pollution levels have declined. Likewise, in a 2022 study, it was shown that implementing policy changes that favor wealth redistribution could double as climate change mitigation measures. For populations who are not subject to wealth redistribution measures, this means more money will flow into their communities while climate effects are mitigated.
Noise pollution
Noise pollution is usually environmental, machine-created sound that can disrupt activities or communication between humans and other forms of life. Exposure to persistent noise pollution can cause numerous ailments such as hearing impairment, sleep disturbances, cardiovascular problems, annoyance, problems with communication, and other diseases. American minorities living in neighborhoods of low socioeconomic status often experience higher levels of noise pollution than their higher-socioeconomic-status counterparts.
Noise pollution can cause or exacerbate cardiovascular diseases, which can in turn contribute to a larger range of diseases, increase stress levels, and cause sleep disturbances. Noise pollution is also responsible for many reported cases of hearing loss, tinnitus, and other forms of hypersensitivity to sound (such as stress or irritability) or reduced sensitivity to it, whether perceived consciously or developing subconsciously from continuous exposure. These conditions can be dangerous for children and young adults who consistently experience noise pollution, as many of them can develop into long-term physical and mental health problems.
Children who attend school in noisy traffic zones have been shown to have 15% lower memory development than students who attend schools in quiet traffic zones, according to a Barcelona study. This is consistent with research suggesting that children who are exposed to regular aircraft noise "have inadequate performance on standardised achievement tests."
Exposure to persistent noise pollution can cause one to develop hearing impairments, like tinnitus or impaired speech discrimination. One of the largest factors in worsened mental health due to noise pollution is annoyance. Annoyance due to environmental factors has been found to increase stress reactions and overall feelings of stress among adults. The level of annoyance felt by an individual varies, but contributes to worsened mental health significantly.
Noise exposure also contributes to sleep disturbances, which can cause daytime sleepiness and an overall lack of sleep, which contributes to worsened health. Daytime sleepiness has been linked to several reports of declining mental health and other health issues, job insecurities and further social and environmental factors declining.
Safe drinking water
Access to safe drinking water is considered a "basic human need for health and well-being" by the United Nations. According to their reports, over 2 billion people worldwide live without access to safe drinking water. In 2017, almost 22 million Americans drank from water systems that were in violation of public health standards. Globally, over 2 billion people drink feces-contaminated water, which poses the greatest threat to drinking water safety. Contaminated drinking water could transmit diseases like cholera, dysentery, typhoid, diarrhea and polio.
Harmful chemicals in drinking water can negatively affect health. Unsafe water management practices can increase the prevalence of water-borne diseases and sanitation-related illnesses. Inadequate disinfection of wastewater in industrial and agricultural centers can also expose hundreds of millions of people to contaminated water. Chemicals like fluoride and arsenic can benefit humans when their levels are controlled, but other chemicals, such as lead and other heavy metals, are harmful to humans.
In America, communities of color can be subject to poor-quality water. In American communities with large Hispanic and black populations, there is a corresponding rise in Safe Drinking Water Act (SDWA) health violations. Populations that have experienced a lack of safe drinking water, like those in Flint, Michigan, are more likely to distrust the tap water in their communities. The populations that experience this are commonly low-income communities of color.
Hazardous materials management
Hazardous materials management includes hazardous waste management, contaminated site remediation, the prevention of leaks from underground storage tanks, the prevention of releases of hazardous materials to the environment, and responses to emergency situations resulting from such releases. When hazardous materials are not managed properly, waste can pollute nearby water sources and reduce air quality.
According to a study done in Austria, people who live near industrial sites are "more often unemployed, have lower education levels, and are twice as likely to be immigrants". With the interest of environmental health in mind, the Resource Conservation and Recovery Act was passed in the United States in 1976, covering how to properly manage hazardous waste.
There are a variety of occupations that work with hazardous materials and help manage them so that everything is disposed of correctly. These professionals work in various sectors, including government agencies, private industry, consulting firms, and non-profit organizations, all with the common goal of ensuring the safe handling of hazardous materials and waste. These positions include, but are not limited to, environmental health and safety specialists, waste collectors, medical professionals, and emergency responders. Handling waste, especially hazardous materials, is considered one of the most dangerous occupations in the world. Often, these workers may not have all of the information about the specific hazardous materials they encounter, making their jobs even more dangerous. Sudden exposure to materials they are not properly prepared to handle can lead to severe consequences. This emphasizes the importance of training, safety protocols, and the use of personal protective equipment for those working with hazardous waste.
Microplastic pollution
Soil pollution
Information and mapping
The Toxicology and Environmental Health Information Program (TEHIP) is a comprehensive toxicology and environmental health web site, that includes open access to resources produced by US government agencies and organizations, and is maintained under the umbrella of the Specialized Information Service at the United States National Library of Medicine. TEHIP includes links to technical databases, bibliographies, tutorials, and consumer-oriented resources. TEHIP is responsible for the Toxicology Data Network (TOXNET), an integrated system of toxicology and environmental health databases including the Hazardous Substances Data Bank, that are open access, i.e. available free of charge. TOXNET was retired in 2019.
There are many environmental health mapping tools. TOXMAP is a geographic information system (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP is a resource funded by the US federal government. TOXMAP's chemical and environmental health information is taken from the NLM's Toxicology Data Network (TOXNET) and PubMed, and from other authoritative sources.
Environmental health profession
Environmental health professionals may be known as environmental health officers, public health inspectors, environmental health specialists or environmental health practitioners. Researchers and policy-makers also play important roles in how environmental health is practiced in the field. In many European countries, physicians and veterinarians are involved in environmental health. In the United Kingdom, practitioners must have a graduate degree in environmental health and be certified and registered with the Chartered Institute of Environmental Health or the Royal Environmental Health Institute of Scotland. In Canada, practitioners in environmental health are required to obtain an approved bachelor's degree in environmental health along with the national professional certificate, the Certificate in Public Health Inspection (Canada), CPHI(C). Many states in the United States also require that individuals have a bachelor's degree and professional licenses to practice environmental health. California state law defines the scope of practice of environmental health as follows:
"Scope of practice in environmental health" means the practice of environmental health by registered environmental health specialists in the public and private sector within the meaning of this article and includes, but is not limited to, organization, management, education, enforcement, consultation, and emergency response for the purpose of prevention of environmental health hazards and the promotion and protection of the public health and the environment in the following areas: food protection; housing; institutional environmental health; land use; community noise control; recreational swimming areas and waters; electromagnetic radiation control; solid, liquid, and hazardous materials management; underground storage tank control; onsite septic systems; vector control; drinking water quality; water sanitation; emergency preparedness; and milk and dairy sanitation pursuant to Section 33113 of the Food and Agricultural Code.
The environmental health profession had its modern-day roots in the sanitary and public health movement of the United Kingdom. This was epitomized by Sir Edwin Chadwick, who was instrumental in the repeal of the poor laws, and in 1884 was the founding president of the Association of Public Sanitary Inspectors, now called the Chartered Institute of Environmental Health.
See also
EcoHealth
Environmental disease
Environmental medicine
Environmental toxicology
Epigenetics
Exposure science
Healing environments
Health effects from noise
Heavy metals
Indoor air quality
Industrial and organizational psychology
NIEHS
Nightingale's environmental theory
One Health
Pollution
Volatile organic compound
Journals:
List of environmental health journals
References
Further reading
External links
NIEHS
Environmental social science
Environmental science | 0.767201 | 0.994957 | 0.763332 |
Chief complaint | The chief complaint, formally known as CC in the medical field, or termed presenting complaint (PC) in Europe and Canada, forms the second step of medical history taking. It is sometimes also referred to as reason for encounter (RFE), presenting problem, problem on admission or reason for presenting. The chief complaint is a concise statement describing the symptom, problem, condition, diagnosis, physician-recommended return, or other reason for a medical encounter. In some instances, the nature of a patient's chief complaint may determine if services are covered by health insurance.
When obtaining the chief complaint, medical students are advised to use open-ended questions. Once the presenting problem is elucidated, a history of present illness can be done using acronyms such as SOCRATES or OPQRST to further analyze the severity, onset and nature of the presenting problem. The patient's initial comments to a physician, nurse, or other health care professionals are important for formulating differential diagnoses.
Prevalence
The collection of chief complaint data may be useful in addressing public health issues. Certain complaints are more common in certain settings and among certain populations. Fatigue has been reported as one of the ten most common reasons for seeing a physician. In acute care settings, such as emergency rooms, reports of chest pain are among the most common chief complaints. The most common complaint in ERs has been reported to be abdominal pain. Among nursing home residents seeking treatment at ERs, respiratory symptoms, altered mental status, gastrointestinal symptoms, and falls are the most commonly reported.
See also
Identified patient
Medical history
References
External links
Medical terminology
Symptoms | 0.777463 | 0.981819 | 0.763328 |
Transdermal | Transdermal is a route of administration wherein active ingredients are delivered across the skin for systemic distribution. Examples include transdermal patches used for medicine delivery.
The drug is administered in the form of a patch or ointment that delivers the drug into the circulation for systemic effect.
Techniques
Obstacles
Although the skin is a large and logical target for drug delivery, its basic functions limit its utility for this purpose. The skin functions mainly to protect the body from external penetration (by e.g. harmful substances and microorganisms) and to contain all body fluids.
There are two important layers to the human skin: (1) the epidermis and (2) the dermis. For transdermal delivery, drugs must pass through the two sublayers of the epidermis to reach the microcirculation of the dermis.
The stratum corneum is the top layer of the skin and varies in thickness from approximately ten to several hundred micrometres, depending on the region of the body. It is composed of layers of dead, flattened keratinocytes surrounded by a lipid matrix, which together act as a brick-and-mortar system that is difficult to penetrate.
The stratum corneum provides the most significant barrier to diffusion. In fact, the stratum corneum is the barrier to approximately 90% of transdermal drug applications. However, nearly all molecules penetrate it to some minimal degree. Below the stratum corneum lies the viable epidermis. This layer is about ten times as thick as the stratum corneum; however, diffusion is much faster here due to the greater degree of hydration in the living cells of the viable epidermis. Below the epidermis lies the dermis, which is approximately one millimeter thick, 100 times the thickness of the stratum corneum. The dermis contains small vessels that distribute drugs into the systemic circulation and regulate temperature, a network known as the skin's microcirculation.
Transdermal pathways
There are two main pathways by which drugs can cross the skin and reach the systemic circulation. The more direct route is known as the transcellular pathway.
Transcellular pathway
By this route, drugs cross the skin by directly passing through both the phospholipids membranes and the cytoplasm of the dead keratinocytes that constitute the stratum corneum.
Although this is the path of shortest distance, the drugs encounter significant resistance to permeation. This resistance is caused because the drugs must cross the lipophilic membrane of each cell, then the hydrophilic cellular contents containing keratin, and then the phospholipid bilayer of the cell one more time. This series of steps is repeated numerous times to traverse the full thickness of the stratum corneum.
Intercellular pathway
The other more common pathway through the skin is via the intercellular route. Drugs crossing the skin by this route must pass through the small spaces between the cells of the skin, making the route more tortuous. Although the thickness of the stratum corneum is only about 20 μm, the actual diffusional path of most molecules crossing the skin is on the order of 400 μm. The 20-fold increase in the actual path of permeating molecules greatly reduces the rate of drug penetration.
Recent research has established that the intercellular route can be dramatically enhanced by attending to the physical chemistry of the system solubilizing the active pharmaceutical ingredient, rendering a markedly more efficient delivery of payload and enabling the delivery of most compounds via this route.
Microneedles
A third pathway to breach the stratum corneum layer is via tiny microchannels created by a medical micro-needling device, of which there are many brands and variants. Investigations at the University of Marburg, Germany, using a standard Franz diffusion cell showed that this approach is effective in enhancing skin penetration of lipophilic as well as hydrophilic compounds.
The micro-needling approach is also seen as 'the vaccine of the future'. The microneedles can be hollow, solid, coated, dissolving, or hydrogel-forming. Some have regulatory approval. Microneedle devices/patches can be used to deliver nanoparticle medicines.
Devices and formulations
Devices and formulations for transdermally administered substances include:
Transdermal patch
Transdermal gel
specially formula
See also
Invasomes
References
Medical treatments
Routes of administration | 0.770655 | 0.990487 | 0.763323 |
Phlegm | Phlegm (from Greek φλέγμα, phlégma, "inflammation", "humour caused by heat") is mucus produced by the respiratory system, excluding that produced by the nasal passages. It often refers to respiratory mucus expelled by coughing, otherwise known as sputum. Phlegm, and mucus as a whole, is in essence a water-based gel consisting of glycoproteins, immunoglobulins, lipids and other substances. Its composition varies depending on climate, genetics, and state of the immune system. Its color can vary from transparent to pale or dark yellow and green, from light to dark brown, and even to dark grey depending on the contents. The body naturally produces about 1 quart (about 1 litre) of phlegm every day to capture and clear substances in the air and bacteria from the nose and throat.
Distinction between mucus and phlegm
Contrary to popular misconception and misuse, mucus and phlegm are not always the same.
Mucus
Mucus is a normal protective layering around the airway, eye, nasal turbinate, and urogenital tract. Mucus is an adhesive viscoelastic gel produced in the airway by submucosal glands and goblet cells and is principally water. It also contains high-molecular weight mucous glycoproteins that form linear polymers.
Phlegm
Phlegm is more related to disease than mucus, and can be troublesome for the individual to excrete from the body. Phlegm is a thick secretion in the airway during disease and inflammation. Phlegm usually contains mucus with virus, bacteria, other debris, and sloughed-off inflammatory cells. Once phlegm has been expectorated by a cough, it becomes sputum.
Excessive phlegm creation
There are multiple factors that can contribute to an excess of phlegm in the throat or larynx.
Vocal abuse: Vocal abuse is the misuse or overuse of the voice in an unhealthy fashion such as clearing the throat, yelling, screaming, talking loudly, or singing incorrectly.
Clearing the throat: Clearing the throat removes or loosens phlegm but the vocal cords hit together causing inflammation and therefore more phlegm.
Yelling/screaming: Yelling and screaming both cause the vocal cords to hit against each other causing inflammation and phlegm.
Nodules: Excessive yelling, screaming, and incorrect singing as well as other vocal abusive habits can cause vocal fold nodules.
Smoking: Smoke is hot, dry, polluted air which dries out the vocal cords. With each inhalation of smoke, the larynx is exposed to toxins that inhibit it from rehydrating for about 3 hours. The vocal cords need a fair amount of lubrication and swell with inflammation when they do not have enough of it. When the vocal folds swell and are inflamed, phlegm is often created in an attempt to ease the dryness.
Experiment on smoking correlations: In 2002, an experiment published by the American College of Chest Physicians examined whether smoking correlates with coughing and phlegm. The study included 117 participants: a mix of current smokers, ex-smokers, non-smokers, and a positive control group of participants with chronic obstructive pulmonary disease (COPD). At the end of the experiment, the researchers found a high correlation of 0.49 between smoking and cough with phlegm (p < 0.001).
Illness: During illness like the flu, cold, and pneumonia, phlegm becomes more excessive as an attempt to get rid of the bacteria or viral particles within the body. A major illness associated with excess phlegm is acute bronchitis. A major symptom of acute bronchitis is an excess amount of phlegm and is usually caused by a viral infection, and only bacterial infections, which are rare, are to be treated with an antibiotic.
Hay fever, asthma: In hay fever and asthma, inner lining in bronchioles become inflamed and create an excess amount of phlegm that can clog up air pathways.
Air pollution: In studies of children, air pollutants have been found to increase phlegm by drying out and irritating parts of the throat.
Removing phlegm
Excessive phlegm creation can be troublesome. There are basically two ways to get rid of excess phlegm: swallowing or spitting.
Phlegm naturally drains down into the back of the throat and can be swallowed without imposing health risks. Once in the stomach, the acids and digestive system will remove the phlegm and get rid of the germs in it. In some cultures, swallowing phlegm is considered a social taboo, being described as disgusting or unhygienic. One Igbo adage, for example, uses the swallowing of phlegm as a metaphor for wrongdoing. Also, due to the social image of spitting (the alternative of swallowing) in some communities, females were shown to be more likely to swallow phlegm and less likely to report experiencing it.
The alternative to swallowing would be throat-clearing. To do this, the mouth should be closed and air should be inhaled hard into the nose. Inhaling forcefully through the nose will pull excess phlegm and nasal mucus down into the throat, where muscles in the throat and tongue can prepare to eject it. Once this is done, a U-shape should be formed with the tongue, while simultaneously forcing air and saliva forward with the muscles at the back of the throat. At this point, the phlegm will be in the mouth and is now ready to be spat out as sputum.
Colors of phlegm
Phlegm can exist in different colors. The color could provide important clues about a person's health.
Yellow or green: Indicates an infection often by a virus or bacteria. The color is caused by an enzyme produced by the white blood cells combating the infection.
Clear: Indicates allergies. Mucous membranes produce histamines and make more phlegm.
Red: Indicates dry air. A nasal spray can be used to alleviate symptoms of a dry nose and throat. It can also occur due to blood (such as if the person had or has a bleeding nose, or a lung malignancy).
Illnesses related to phlegm
Phlegm may be a carrier of larvae of intestinal parasites (see hookworm). Bloody sputum can be a symptom of serious disease (such as tuberculosis), but can also be a relatively benign symptom of a minor disease (such as bronchitis). In the latter case, the sputum is normally lightly streaked with blood. Coughing up any significant quantity of blood is always a serious medical condition, and any person who experiences this should seek medical attention.
Apophlegmatisms, in pre-modern medicine, were medications chewed in order to draw away phlegm and humours.
History
Phlegm and humourism
Humourism is an ancient theory that the human body is filled with four basic substances, called the four humours, which are held in balance when a person is healthy. It is closely related to the ancient theory of the four elements and states that all diseases and disabilities result from an excess or deficit of black bile, yellow bile, phlegm, or blood. Hippocrates, an ancient Greek physician, is credited with this theory, which dates to about 400 BC. It influenced medical thinking for more than 2,000 years, until it was finally discredited in the 1800s.
Phlegm was thought to be associated with apathetic behaviour; this old belief is preserved in the word "phlegmatic". This adjective always refers to behaviour, and is pronounced differently, giving full weight to the "g": not /ˈflɛmatɪk/ but /flɛgˈmatɪk/.
To have "phlegm" traditionally meant to have stamina and to be unswayed by emotion. In his 1889 farewell speech at the University of Pennsylvania, Sir William Osler discussed the imperturbability required of physicians. "'Imperturbability means coolness and presence of mind under all circumstances, calmness amid storm, clearness of judgment in moments of grave peril, immobility, impassiveness, or, to use an old and expressive word, phlegm."''
The phlegm of Humourism is far from the same thing as phlegm as it is defined today. Nobel laureate Charles Richet MD, when describing humorism's "phlegm or pituitary secretion" in 1910 asked rhetorically, "this strange liquid, which is the cause of tumours, of chlorosis, of rheumatism, and cacochymia - where is it? Who will ever see it? Who has ever seen it? What can we say of this fanciful classification of humours into four groups, of which two are absolutely imaginary?"
References
Body fluids
Symptoms and signs: Respiratory system | 0.76467 | 0.998232 | 0.763318 |
Spasm | A spasm is a sudden involuntary contraction of a muscle, a group of muscles, or a hollow organ, such as the bladder.
A spasmodic muscle contraction may be caused by many medical conditions, including dystonia. Most commonly, it is a muscle cramp which is accompanied by a sudden burst of pain. A muscle cramp is usually harmless and ceases after a few minutes. It is typically caused by ion imbalance or muscle overload.
There are other causes of involuntary muscle contractions, and some of these may cause a health problem.
A series of spasms, or permanent spasms, is referred to as a "spasmism".
Description and causes
Various kinds of involuntary muscle activity may be referred to as a "spasm".
A spasm may be a muscle contraction caused by abnormal nerve stimulation or by abnormal activity of the muscle itself.
A spasm may lead to muscle strains or tears in tendons and ligaments if the force of the spasm exceeds the tensile strength of the underlying connective tissue. This can occur with a particularly strong spasm or with weakened connective tissue.
A hypertonic muscle spasm is a condition of chronic, excessive muscle tone (i.e., tension in a resting muscle). This is the amount of contraction that remains when a muscle is not working. A true hypertonic spasm is caused by malfunctioning feedback nerves. This is much more serious and is permanent unless treated. In this case, the hypertonic muscle tone is excessive, and the muscles are unable to relax.
A subtype of spasm is colic. This is an episodic pain caused by spasm of smooth muscle in a particular organ (e.g., the bile duct). A characteristic of colic is the sensation of having to move about, and the pain may induce nausea or vomiting.
See also
Antispasmodic
Blepharospasm
Cadaveric spasm
Convulsion
Cramp
Cricopharyngeal spasm
Ejaculation
Epileptic seizure
Jactitation (medicine)
Myoclonus
Neck spasm
Orgasm
Spasmodic dysphonia
Spasticity
References
External links
NIH Medical Encyclopedia
How Stuff Works
Symptoms and signs: Nervous and musculoskeletal systems | 0.766568 | 0.995753 | 0.763312 |
Hydrotherapy | Hydrotherapy, formerly called hydropathy and also called water cure, is a branch of alternative medicine (particularly naturopathy), occupational therapy, and physiotherapy, that involves the use of water for pain relief and treatment. The term encompasses a broad range of approaches and therapeutic methods that take advantage of the physical properties of water, such as temperature and pressure, to stimulate blood circulation, and treat the symptoms of certain diseases.
Various therapies used in the present-day hydrotherapy employ water jets, underwater massage and mineral baths (e.g. balneotherapy, Iodine-Grine therapy, Kneipp treatments, Scotch hose, Swiss shower, thalassotherapy) or whirlpool bath, hot Roman bath, hot tub, Jacuzzi, and cold plunge.
Uses
Water therapy may be restricted to use as aquatic therapy, a form of physical therapy, and as a cleansing agent. However, it is also used as a medium for delivery of heat and cold to the body, which has long been the basis for its application. Hydrotherapy involves a range of methods and techniques, many of which use water as a medium to facilitate thermoregulatory reactions for therapeutic benefit.
Shower-based hydrotherapy techniques have been increasingly used in preference to full-immersion methods, partly for the ease of cleaning the equipment and reducing infections due to contamination. When removal of tissue is necessary for the treatment of wounds, hydrotherapy which performs selective mechanical debridement can be used. Examples of this include directed wound irrigation and therapeutic irrigation with suction.
Technique
The following methods are used for their hydrotherapeutic effects:
Packings, general and local;
Hot air and steam baths;
General baths;
Treadmills
Sitz (sitting), spinal, head, and foot baths;
Bandages or compresses, wet and dry;
Fomentations and poultices, sinapisms, stupes, rubbings, and water potations.
Hydrotherapy which involves submerging all or part of the body in water can involve several types of equipment:
Full body immersion tanks (a "Hubbard tank" is a large size)
Arm, hip, and leg whirlpool
Whirling water movement, provided by mechanical pumps, has been used in water tanks since at least the 1940s. Similar technologies have been marketed for recreational use under the terms "hot tub" or "spa".
In some cases, baths with whirlpool water flow are not used to manage wounds, as a whirlpool will not selectively target the tissue to be removed, and can damage all tissue. Whirlpools also create an unwanted risk of bacterial infection, can damage fragile body tissue, and in the case of treating arms and legs, bring risk of complications from edema.
History
The therapeutic use of water has been recorded in ancient Egyptian, Greek and Roman civilizations. Egyptian royalty bathed with essential oils and flowers, while Romans had communal public baths for their citizens. Hippocrates prescribed bathing in spring water for sickness. Other cultures noted for a long history of hydrotherapy include China and Japan, the latter being centred primarily around Japanese hot springs. Many such histories predate the Roman thermae.
Modern revival
Hydrotherapy became more prominent following the growth and development of modern medical practices in the 18th and 19th centuries. As traditional medical practice became increasingly professionalized in how doctors operated, medical treatment was felt to have become less personalized; hydrotherapy was believed to be a more personal form of treatment that did not confront patients with the alienating scientific language that modern medicine entailed.
1700–1810
Two English works on the medical uses of water were published in the 18th century that inaugurated the new fashion for hydrotherapy. One of these was by Sir John Floyer, a physician of Lichfield, who, struck by the remedial use of certain springs by the neighbouring peasantry, investigated the history of cold bathing and published a book on the subject in 1702. The book ran through six editions within a few years and the translation of this book into German was largely drawn upon by J. S. Hahn of Silesia as the basis for his book called On the Healing Virtues of Cold Water, Inwardly and Outwardly Applied, as Proved by Experience, published in 1738.
The other work was a 1797 publication by James Currie of Liverpool on the use of hot and cold water in the treatment of fever and other illness, with a fourth edition published in 1805, not long before his death. It was also translated into German by Michaelis (1801) and Hegewisch (1807). It was highly popular and first placed the subject on a scientific basis. Hahn's writings had meanwhile created much enthusiasm among his countrymen, societies having been formed everywhere to promote the medicinal and dietetic use of water; and in 1804 Professor E.F.C. Oertel of Anspach republished them and quickened the popular movement by unqualified commendation of water drinking as a remedy for all diseases.
The general idea behind hydropathy during the 1800s was to induce what was called a crisis. The thinking was that water invaded any cracks, wounds, or imperfections in the skin, which were filled with impure fluids. Health was considered to be the natural state of the body, and filling these spaces with pure water would flush the impurities out, which would rise to the surface of the skin, producing pus. The emergence of this pus was called a crisis, and it was achieved through a multitude of methods, including sweating, the plunging bath, the half bath, the head bath, the sitting bath, and the douche bath. All of these were ways of gently exposing the patient to cold water.
Vincenz Priessnitz (1799–1851)
Vincenz Priessnitz was the son of a peasant farmer who, as a young child, observed a wounded deer bathing a wound in a pond near his home. Over the course of several days, he saw this deer return, and eventually the wound healed. Later, as a teenager, Priessnitz was attending to a horse cart when it ran him over, breaking three of his ribs. A physician told him that they would never heal. Priessnitz decided to try his own hand at healing himself and wrapped his wounds with damp bandages. By changing his bandages daily and drinking large quantities of water, he had cured his broken ribs after about a year. Priessnitz quickly gained fame in his hometown and became the healer people consulted.
In 1826, Priessnitz became the head of a hydropathy clinic in Gräfenberg. He was extremely successful, and by 1840 he had 1600 patients in his clinic, including many physicians, as well as important political figures such as nobles and prominent military officials. Treatment length at Priessnitz's clinic varied. Much of his theory was about inducing the above-mentioned crisis, which could happen quickly or could occur after three to four years. In accordance with the simplistic nature of hydropathy, a large part of the treatment was based on living a simple lifestyle. These lifestyle adjustments included dietary changes, such as eating only very coarse food like jerky and bread, and of course drinking large quantities of water. Priessnitz's treatments also included a great deal of less strenuous exercise, mostly walking. Ultimately, Priessnitz's clinic was extremely successful, and he gained fame across the western world. His practice even influenced the hydropathy that took root overseas in America.
Sebastian Kneipp (1821–1897)
Sebastian Kneipp was born in Germany and he considered his own role in hydropathy to be that of continuing Priessnitz's work. Kneipp's own practice of hydropathy was even gentler than the norm. He believed that typical hydropathic practices deployed were "too violent or too frequent" and he expressed concern that such techniques would cause emotional or physical trauma to the patient. Kneipp's practice was more all encompassing than Priessnitz's, and his practice involved not only curing the patients' physical woes, but emotional and mental as well.
Kneipp introduced four additional principles to the therapy: medicinal herbs, massages, balanced nutrition, and "regulative therapy to seek inner balance". Kneipp had a very simple view of an already simple practice. For him, hydropathy's primary goals were strengthening the constitution and removing poisons and toxins from the body. These basic interpretations of how hydropathy worked hinted at his complete lack of medical training. Kneipp did have, however, a very successful medical practice in spite of, perhaps even because of, his lack of medical training. As mentioned above, some patients were beginning to feel uncomfortable with traditional doctors because of the elitism of the medical profession. The new terms and techniques that doctors were using were difficult for the average person to understand. Because he had no formal training, his instructions and published works were written in easy-to-understand language and would have seemed very appealing to a patient who was displeased with the direction traditional medicine was taking.
A significant factor in the popular revival of hydrotherapy was that it could be practised relatively cheaply at home. The growth of hydrotherapy (or 'hydropathy' to use the name of the time), was thus partly derived from two interacting spheres: "the hydro and the home".
Hydrotherapy as a formal medical tool dates from about 1829 when Vincenz Priessnitz (1799–1851), a farmer of Gräfenberg in Silesia, then part of the Austrian Empire, began his public career in the paternal homestead, extended so as to accommodate the increasing numbers attracted by the fame of his cures.
At Gräfenberg, to which the fame of Priessnitz drew people of every rank and many countries, medical men were conspicuous by their numbers, some being attracted by curiosity, others by the desire of knowledge, but the majority by the hope of cure for ailments which had as yet proved incurable. Many records of experiences at Gräfenberg were published, all more or less favorable to the claims of Priessnitz, and some enthusiastic in their estimate of his genius and penetration.
Spread of hydrotherapy
Captain R. T. Claridge was responsible for introducing and promoting hydropathy in Britain, first in London in 1842, then with lecture tours in Ireland and Scotland in 1843. His 10-week tour in Ireland included Limerick, Cork, Wexford, Dublin and Belfast, over June, July and August 1843, with two subsequent lectures in Glasgow.
Some other Englishmen preceded Claridge to Graefenberg, although not many. One of these was James Wilson, who himself, along with James Manby Gully, established and operated a water cure establishment at Malvern in 1842. In 1843, Wilson and Gully published a comparison of the efficacy of the water-cure with drug treatments, including accounts of some cases treated at Malvern, combined with a prospectus of their Water Cure Establishment. Then in 1846 Gully published The Water Cure in Chronic Disease, further describing the treatments available at the clinic.
The fame of the water-cure establishment grew, and Gully and Wilson became well-known national figures. Two more clinics were opened at Malvern. Famous patients included Charles Darwin, Charles Dickens, Thomas Carlyle, Florence Nightingale, Lord Tennyson and Samuel Wilberforce. With his fame he also attracted criticism:
Sir Charles Hastings, a physician and founder of the British Medical Association, was a forthright critic of hydropathy, and Gully in particular.
From the 1840s, hydropathics were established across Britain. Initially, many of these were small institutions, catering to at most dozens of patients. By the later nineteenth century the typical hydropathic establishment had evolved into a more substantial undertaking, with thousands of patients treated annually for weeks at a time in a large purpose-built building with lavish facilities – baths, recreation rooms and the like – under the supervision of fully trained and qualified medical practitioners and staff.
In Germany, France and America, and in Malvern, England, hydropathic establishments multiplied with great rapidity. Antagonism ran high between the old practice and the new. Unsparing condemnation was heaped by each on the other; and a legal prosecution, leading to a royal commission of inquiry, served but to make Priessnitz and his system stand higher in public estimation.
Increasing popularity soon diminished caution about whether the new method would help minor ailments and be of benefit to the more seriously injured. Hydropathists occupied themselves mainly with studying chronic invalids well able to bear a rigorous regimen and the severities of unrestricted crisis. The need of a radical adaptation to the former class was first adequately recognized by John Smedley, a manufacturer of Derbyshire, who, impressed in his own person with the severities as well as the benefits of the cold water cure, practised among his workpeople a milder form of hydropathy, and began about 1852 a new era in its history, founding at Matlock a counterpart of the establishment at Gräfenberg.
Ernst Brand (1827–1897) of Berlin, Raljen and Theodor von Jürgensen of Kiel, and Karl Liebermeister of Basel, between 1860 and 1870, employed the cooling bath in abdominal typhus with striking results, which led to its introduction to England by Wilson Fox. In the Franco-German War the cooling bath was largely employed, frequently in conjunction with quinine; and it was used in the treatment of hyperpyrexia.
Hot-air baths
Hydrotherapy, especially as promoted during the height of its Victorian revival, has often been associated with the use of cold water, as evidenced by many titles from that era. However, not all therapists limited their practice of hydrotherapy to cold water, even during the height of this popular revival.
The specific use of heat was however often associated with Victorian Turkish baths. These were introduced by David Urquhart into England on his return from the East in the 1850s, and ardently adopted by Richard Barter. The Turkish bath became a public institution, and, with the morning tub and the general practice of water drinking, is the most noteworthy of the many contributions by hydropathy to public health.
Spread to the United States
The first U.S. hydropathic facilities were established by Joel Shew and Russell Thacher Trall in the 1840s. Charles Munde also established early hydrotherapy facilities in the 1850s. Trall also co-edited the Water Cure Journal.
By 1850, it was said that "there are probably more than one hundred" facilities, along with numerous books and periodicals, including the New York Water Cure Journal, which had "attained an extent of circulation equalled by few monthlies in the world". By 1855, there were attempts by some to weigh the evidence of treatments in vogue at that time.
Following the introduction of hydrotherapy to the U.S., John Harvey Kellogg employed it at Battle Creek Sanitarium, which opened in 1866, where he strove to improve the scientific foundation for hydrotherapy. Other notable hydropathic centers of the era included the Cleveland Water Cure Establishment, founded in 1848, which operated successfully for two decades, before being sold to an organization which transformed it into an orphanage.
At its height, there were over 200 water-cure establishments in the United States, most located in the northeast. Few of these lasted into the postbellum years, although some survived into the 20th century including institutions in Scott (Cortland County), Elmira, Clifton Springs and Dansville. While none were located in Jefferson County, the Oswego Water Cure operated in the city of Oswego.
Subsequent developments
In November 1881, the British Medical Journal noted that hydropathy was a specific instance, or "particular case", of general principles of thermodynamics; that is, hydropathy was the medium through which "the application of heat and cold in general" was applied to physiology. In 1883, another writer stated, "Not, be it observed, that hydropathy is a water treatment after all, but that water is the medium for the application of heat and cold to the body".
Hydrotherapy was used to treat people with mental illness in the 19th and 20th centuries and before World War II, various forms of hydrotherapy were used to treat alcoholism. The basic text of the Alcoholics Anonymous fellowship, Alcoholics Anonymous, reports that A.A. co-founder Bill Wilson was treated by hydrotherapy for his alcoholism in the early 1930s.
Recent techniques
A subset of cryotherapy involves cold water immersion or ice baths, used by physical therapists, sports medicine facilities and rehab clinics. Proponents assert that it results in improved return of blood flow and byproducts of cellular breakdown to the lymphatic system and more efficient recycling.
Alternating the temperatures, either in a shower or complementary tanks, combines the use of hot and cold in the same session. Proponents claim improvement in circulatory system and lymphatic drainage. Experimental evidence suggests that contrast hydrotherapy helps to reduce injury in the acute stages by stimulating blood flow and reducing swelling.
Society and culture
The growth of hydrotherapy, and various forms of hydropathic establishments, resulted in a form of tourism, both in the UK, and in Europe. At least one book listed English, Scottish, Irish and European establishments suitable for each specific malady, while another focused primarily on German spas and hydropathic establishments, but including other areas. While many bathing establishments were open all year round, doctors advised patients not to go before May, "nor to remain after October. English visitors rather prefer cold weather, and they often arrive for the baths in May, and return again in September. Americans come during the whole season, but prefer summer. The most fashionable and crowded time is during July and August". In Europe, interest in various forms of hydrotherapy and spa tourism continued unabated through the 19th century and into the 20th century, where "in France, Italy and Germany, several million people spend time each year at a spa." In 1891, when Mark Twain toured Europe and discovered that a bath of spring water at Aix-les-Bains soothed his rheumatism, he described the experience as "so enjoyable that if I hadn't had a disease I would have borrowed one just to have a pretext for going on".
This was not the first time such forms of spa tourism had been popular in Europe and the U.K. Indeed,
in Europe, the application of water in the treatment of fevers and other maladies had, since the seventeenth century, been consistently promoted by a number of medical writers. In the eighteenth century, taking to the waters became a fashionable pastime for the wealthy classes who decamped to resorts around Britain and Europe to cure the ills of over-consumption. In the main, treatment in the heyday of the British spa consisted of sense and sociability: promenading, bathing, and the repetitive quaffing of foul-tasting mineral waters.
A hydropathic establishment is a place where people receive hydropathic treatment. They are commonly built in spa towns, where mineral-rich or hot water occurs naturally.
Several hydropathic institutions wholly transferred their operations away from therapeutic purposes to become tourist hotels in the late 20th century while retaining the name 'Hydro'. There are several prominent examples in Scotland at Crieff, Peebles and Seamill amongst others.
Animal hydrotherapy
Canine hydrotherapy is a form of hydrotherapy directed at the treatment of chronic conditions, post-operative recovery, and pre-operative or general fitness in dogs.
See also
Balneotherapy or "bath therapy"
Colon cleansing
Destination spa
Enema
Finnish sauna
Halliwick
Hot tub
Mineral spring
Sebastian Kneipp
Kneipp facility
Spa
Spa bath
Spa town
Steam shower
Thalassotherapy
Water aerobics
Notes
a. While the second sense, of water as a form of torture, is documented back to at least the 15th century, the first use of the term water cure as a torture is indirectly dated to around 1898, by U.S. soldiers in the Spanish–American War, after the term had been introduced to America in the mid-19th century in the therapeutic sense, which was in widespread use. Indeed, while the torture sense of water cure was by 1900–1902 established in the American army, with a conscious sense of irony, this sense was not in widespread use. Webster's 1913 dictionary cited only the therapeutic sense, water cure being synonymous with hydropathy, the term by which hydrotherapy was known in the 19th and early 20th centuries.
The late 19th century expropriation of the term water cure, already in use in the therapeutic sense, to denote the polar opposite of therapy, namely torture, has the hallmark of arising in the sense of irony. This would be in keeping with some of the reactions to water cure therapy and its promotion, which included not only criticism, but also parody and satire.
References
Further reading
Aquatic therapy
Natural environment based therapies
Alternative medical treatments
Biological warfare | Biological warfare, also known as germ warfare, is the use of biological toxins or infectious agents such as bacteria, viruses, insects, and fungi with the intent to kill, harm or incapacitate humans, animals or plants as an act of war. Biological weapons (often termed "bio-weapons", "biological threat agents", or "bio-agents") are living organisms or replicating entities (i.e. viruses, which are not universally considered "alive"). Entomological (insect) warfare is a subtype of biological warfare.
Biological warfare is subject to a forceful normative prohibition. Offensive biological warfare in international armed conflicts is a war crime under the 1925 Geneva Protocol and several international humanitarian law treaties. In particular, the 1972 Biological Weapons Convention (BWC) bans the development, production, acquisition, transfer, stockpiling and use of biological weapons. In contrast, defensive biological research for prophylactic, protective or other peaceful purposes is not prohibited by the BWC.
Biological warfare is distinct from warfare involving other types of weapons of mass destruction (WMD), including nuclear warfare, chemical warfare, and radiological warfare. None of these are considered conventional weapons, which are deployed primarily for their explosive, kinetic, or incendiary potential.
Biological weapons may be employed in various ways to gain a strategic or tactical advantage over the enemy, either by threats or by actual deployments. Like some chemical weapons, biological weapons may also be useful as area denial weapons. These agents may be lethal or non-lethal, and may be targeted against a single individual, a group of people, or even an entire population. They may be developed, acquired, stockpiled or deployed by nation states or by non-national groups. In the latter case, or if a nation-state uses it clandestinely, it may also be considered bioterrorism.
Biological warfare and chemical warfare overlap to an extent, as the use of toxins produced by some living organisms is considered under the provisions of both the BWC and the Chemical Weapons Convention. Toxins and psychochemical weapons are often referred to as midspectrum agents. Unlike bioweapons, these midspectrum agents do not reproduce in their host and are typically characterized by shorter incubation periods.
Overview
A biological attack could conceivably result in large numbers of civilian casualties and cause severe disruption to economic and societal infrastructure.
A nation or group that can pose a credible threat of mass casualty has the ability to alter the terms under which other nations or groups interact with it. When indexed to weapon mass and cost of development and storage, biological weapons possess destructive potential, and potential loss of life, far in excess of nuclear, chemical or conventional weapons. Accordingly, biological agents are potentially useful as strategic deterrents, in addition to their utility as offensive weapons on the battlefield.
As a tactical weapon for military use, a significant problem with biological warfare is that it would take days to be effective, and therefore might not immediately stop an opposing force. Some biological agents (smallpox, pneumonic plague) have the capability of person-to-person transmission via aerosolized respiratory droplets. This feature can be undesirable, as the agent(s) may be transmitted by this mechanism to unintended populations, including neutral or even friendly forces. Worse still, such a weapon could "escape" the laboratory where it was developed, even if there was no intent to use it – for example by infecting a researcher who then transmits it to the outside world before realizing that they were infected. Several cases are known of researchers becoming infected and dying of Ebola, which they had been working with in the lab (though nobody else was infected in those cases) – while there is no evidence that their work was directed towards biological warfare, it demonstrates the potential for accidental infection even of careful researchers fully aware of the dangers. While containment of biological warfare is less of a concern for certain criminal or terrorist organizations, it remains a significant concern for the military and civilian populations of virtually all nations.
History
Antiquity and Middle Ages
Rudimentary forms of biological warfare have been practiced since antiquity. The earliest documented incident of the intention to use biological weapons is recorded in Hittite texts of 1500–1200 BCE, in which victims of an unknown plague (possibly tularemia) were driven into enemy lands, causing an epidemic. The Assyrians poisoned enemy wells with the fungus ergot, though with unknown results. Scythian archers dipped their arrows, and Roman soldiers their swords, into excrement and cadavers; victims were commonly infected with tetanus as a result. In 1346, the bodies of Mongol warriors of the Golden Horde who had died of plague were thrown over the walls of the besieged Crimean city of Kaffa. Specialists disagree about whether this operation was responsible for the spread of the Black Death into Europe, the Near East and North Africa, resulting in the deaths of approximately 25 million Europeans.
Biological agents were extensively used in many parts of Africa from the sixteenth century AD, most of the time in the form of poisoned arrows, or powder spread on the war front as well as poisoning of horses and water supply of the enemy forces. In Borgu, there were specific mixtures to kill, hypnotize, make the enemy bold, and to act as an antidote against the poison of the enemy as well. The creation of biologicals was reserved for a specific and professional class of medicine-men.
18th to 19th century
During the French and Indian War, in June 1763, a group of Native Americans laid siege to British-held Fort Pitt. The commander of Fort Pitt, Simeon Ecuyer, ordered his men to take smallpox-infested blankets from the infirmary and give them to a Lenape delegation during the siege. A reported outbreak that began the spring before left as many as one hundred Native Americans dead in Ohio Country from 1763 to 1764. It is not clear whether the smallpox was a result of the Fort Pitt incident or whether the virus was already present among the Delaware people, as outbreaks happened on their own every dozen or so years and the delegates were met again later, seemingly without having contracted smallpox. During the American Revolutionary War, Continental Army officer George Washington mentioned to the Continental Congress that he had heard a rumor from a sailor that his opponent during the Siege of Boston, General William Howe, had deliberately sent civilians out of the city in the hopes of spreading the ongoing smallpox epidemic to American lines; Washington, remaining unconvinced, wrote that he "could hardly give credit to" the claim. Washington had already inoculated his soldiers, diminishing the effect of the epidemic. Some historians have claimed that a detachment of the Corps of Royal Marines stationed in New South Wales, Australia, deliberately used smallpox there in 1789. Dr Seth Carus states: "Ultimately, we have a strong circumstantial case supporting the theory that someone deliberately introduced smallpox in the Aboriginal population."
World War I
By 1900 the germ theory and advances in bacteriology brought a new level of sophistication to the techniques for possible use of bio-agents in war. Biological sabotage in the form of anthrax and glanders was undertaken on behalf of the Imperial German government during World War I (1914–1918), with indifferent results. The Geneva Protocol of 1925 prohibited the first use of chemical and biological weapons against enemy nationals in international armed conflicts.
World War II
With the onset of World War II, the Ministry of Supply in the United Kingdom established a biological warfare program at Porton Down, headed by the microbiologist Paul Fildes. The research was championed by Winston Churchill, and soon tularemia, anthrax, brucellosis, and botulism toxins had been effectively weaponized. In particular, Gruinard Island in Scotland was contaminated with anthrax during a series of extensive tests and remained so for the next 56 years. Although the UK never offensively used the biological weapons it developed, its program was the first to successfully weaponize a variety of deadly pathogens and bring them into industrial production. Other nations, notably France and Japan, had begun their own biological weapons programs.
When the United States entered the war, Allied resources were pooled at the request of the British. The U.S. then established a large research program and industrial complex at Fort Detrick, Maryland, in 1942 under the direction of George W. Merck. The biological and chemical weapons developed during that period were tested at the Dugway Proving Grounds in Utah. Soon there were facilities for the mass production of anthrax spores, brucellosis, and botulism toxins, although the war was over before these weapons could be of much operational use.
The most notorious program of the period was run by the secret Imperial Japanese Army Unit 731 during the war, based at Pingfan in Manchuria and commanded by Lieutenant General Shirō Ishii. This biological warfare research unit conducted often fatal human experiments on prisoners, and produced biological weapons for combat use. Although the Japanese effort lacked the technological sophistication of the American or British programs, it far outstripped them in its widespread application and indiscriminate brutality. Biological weapons were used against Chinese soldiers and civilians in several military campaigns. In 1940, the Japanese Army Air Force bombed Ningbo with ceramic bombs full of fleas carrying the bubonic plague. Many of these operations were ineffective due to inefficient delivery systems, although up to 400,000 people may have died. During the Zhejiang-Jiangxi Campaign in 1942, around 1,700 Japanese troops died out of a total of 10,000 Japanese soldiers who fell ill with disease when their own biological weapons attack rebounded on their own forces.
During the final months of World War II, Japan planned to use plague as a biological weapon against U.S. civilians in San Diego, California, during Operation Cherry Blossoms at Night. The plan was set to launch on 22 September 1945, but it was not executed because of Japan's surrender on 15 August 1945.
Cold War
In Britain, the 1950s saw the weaponization of plague, brucellosis, tularemia and later equine encephalomyelitis and vaccinia viruses, but the programme was unilaterally cancelled in 1956. The United States Army Biological Warfare Laboratories weaponized anthrax, tularemia, brucellosis, Q-fever and others.
In 1969, US President Richard Nixon decided to unilaterally terminate the offensive biological weapons program of the US, allowing only scientific research for defensive measures. This decision increased the momentum of the negotiations for a ban on biological warfare, which took place from 1969 to 1972 in the United Nations Conference of the Committee on Disarmament in Geneva. These negotiations resulted in the Biological Weapons Convention, which was opened for signature on 10 April 1972 and entered into force on 26 March 1975 after its ratification by 22 states.
Despite being a party and depositary to the BWC, the Soviet Union continued and expanded its massive offensive biological weapons program, under the leadership of the allegedly civilian institution Biopreparat. The Soviet Union attracted international suspicion after the 1979 Sverdlovsk anthrax leak killed approximately 65 to 100 people.
1948 Arab–Israeli War
According to historians Benny Morris and Benjamin Kedar, Israel conducted a biological warfare operation codenamed "Cast Thy Bread" during the 1948 Arab–Israeli War. The Haganah initially used typhoid bacteria to contaminate water wells in newly cleared Arab villages to prevent the population, including militiamen, from returning. Later, the biological warfare campaign expanded to include Jewish settlements that were in imminent danger of being captured by Arab troops and inhabited Arab towns not slated for capture. There were also plans to expand the biological warfare campaign into other Arab states, including Egypt, Lebanon and Syria, but they were not carried out.
International law
International restrictions on biological warfare began with the 1925 Geneva Protocol, which prohibits the use but not the possession or development of biological and chemical weapons in international armed conflicts. Upon ratification of the Geneva Protocol, several countries made reservations regarding its applicability and use in retaliation. Due to these reservations, it was in practice a "no-first-use" agreement only.
The 1972 Biological Weapons Convention (BWC) supplements the Geneva Protocol by prohibiting the development, production, acquisition, transfer, stockpiling and use of biological weapons. Having entered into force on 26 March 1975, the BWC was the first multilateral disarmament treaty to ban the production of an entire category of weapons of mass destruction. As of March 2021, 183 states have become party to the treaty. The BWC is considered to have established a strong global norm against biological weapons, which is reflected in the treaty's preamble, stating that the use of biological weapons would be "repugnant to the conscience of mankind". The BWC's effectiveness has been limited due to insufficient institutional support and the absence of any formal verification regime to monitor compliance.
In 1985, the Australia Group was established, a multilateral export control regime of 43 countries aiming to prevent the proliferation of chemical and biological weapons.
In 2004, the United Nations Security Council passed Resolution 1540, which obligates all UN Member States to develop and enforce appropriate legal and regulatory measures against the proliferation of chemical, biological, radiological, and nuclear weapons and their means of delivery, in particular, to prevent the spread of weapons of mass destruction to non-state actors.
Bioterrorism
Biological weapons are difficult to detect, economical and easy to use, making them appealing to terrorists. The cost of a biological weapon is estimated to be about 0.05 percent of the cost of a conventional weapon needed to produce similar numbers of mass casualties per square kilometer. Moreover, their production is relatively easy, as the same common technology used to produce vaccines, foods, spray devices, beverages and antibiotics can be used to produce biological warfare agents. A major factor in biological warfare that attracts terrorists is that they can easily escape before government or intelligence agencies have even started their investigation. This is because the potential organism has an incubation period of 3 to 7 days, after which the results begin to appear, thereby giving terrorists a head start.
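As a rough illustration of that 0.05 percent figure (the dollar amounts here are assumed purely for the sake of arithmetic and are not taken from the sources discussed above): if producing comparable casualties over one square kilometer cost on the order of $2,000 with conventional weapons, the implied cost with a biological weapon would be roughly 0.0005 × $2,000 ≈ $1 per square kilometer.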
A gene-editing technique based on clustered regularly interspaced short palindromic repeats (CRISPR-Cas9) is now so cheap and widely available that scientists fear amateurs will start experimenting with it. In this technique, a DNA sequence is cut out and replaced with a new sequence, e.g. one that codes for a particular protein, with the intent of modifying an organism's traits. Concerns have emerged regarding do-it-yourself biology research organizations due to the associated risk that a rogue amateur DIY researcher could attempt to develop dangerous bioweapons using genome editing technology.
In 2002, when CNN reviewed Al-Qaeda's (AQ's) experiments with crude poisons, it found that AQ had begun planning ricin and cyanide attacks with the help of a loose association of terrorist cells. The associates had infiltrated countries such as Turkey, Italy, Spain and France. In 2015, to combat the threat of bioterrorism, a National Blueprint for Biodefense was issued by the Blue Ribbon Study Panel on Biodefense. Also, 233 potential exposures to select biological agents outside of the primary biocontainment barriers in the US were described in the annual report of the Federal Select Agent Program.
Though a verification system can reduce bioterrorism, an employee, or a lone terrorist with adequate knowledge of a biotechnology company's facilities, can pose a danger by using that company's resources without proper oversight and supervision. Moreover, it has been found that about 95% of accidents attributed to low security have involved employees or those who had a security clearance.
Entomology
Entomological warfare (EW) is a type of biological warfare that uses insects to attack the enemy. The concept has existed for centuries and research and development have continued into the modern era. EW has been used in battle by Japan and several other nations have developed and been accused of using an entomological warfare program. EW may employ insects in a direct attack or as vectors to deliver a biological agent, such as plague. Essentially, EW exists in three varieties. One type of EW involves infecting insects with a pathogen and then dispersing the insects over target areas. The insects then act as a vector, infecting any person or animal they might bite. Another type of EW is a direct insect attack against crops; the insect may not be infected with any pathogen but instead represents a threat to agriculture. The final method uses uninfected insects, such as bees or wasps, to directly attack the enemy.
Genetics
Theoretically, novel approaches in biotechnology, such as synthetic biology, could be used in the future to design novel types of biological warfare agents. Particular attention has been paid to experiments of concern that:
Would demonstrate how to render a vaccine ineffective;
Would confer resistance to therapeutically useful antibiotics or antiviral agents;
Would enhance the virulence of a pathogen or render a nonpathogen virulent;
Would increase the transmissibility of a pathogen;
Would alter the host range of a pathogen;
Would enable the evasion of diagnostic/detection tools;
Would enable the weaponization of a biological agent or toxin.
Most of the biosecurity concerns in synthetic biology are focused on the role of DNA synthesis and the risk of producing genetic material of lethal viruses (e.g. 1918 Spanish flu, polio) in the lab. Recently, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was hailed by The Washington Post as "the most important innovation in the synthetic biology space in nearly 30 years." While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. Due to its ease of use and accessibility, it has raised a number of ethical concerns, especially surrounding its use in the biohacking space.
By target
Anti-personnel
Ideal characteristics of a biological agent to be used as a weapon against humans are high infectivity, high virulence, non-availability of vaccines and availability of an effective and efficient delivery system. Stability of the weaponized agent (the ability of the agent to retain its infectivity and virulence after a prolonged period of storage) may also be desirable, particularly for military applications, and the ease of creating one is often considered. Control of the spread of the agent may be another desired characteristic.
The primary difficulty is not the production of the biological agent, as many biological agents used in weapons can be manufactured relatively quickly, cheaply and easily. Rather, it is the weaponization, storage, and delivery in an effective vehicle to a vulnerable target that pose significant problems.
For example, Bacillus anthracis is considered an effective agent for several reasons. First, it forms hardy spores, perfect for dispersal as aerosols. Second, this organism is not considered transmissible from person to person, and thus rarely if ever causes secondary infections. A pulmonary anthrax infection starts with ordinary influenza-like symptoms and progresses to a lethal hemorrhagic mediastinitis within 3–7 days, with a fatality rate of 90% or higher in untreated patients. Finally, friendly personnel and civilians can be protected with suitable antibiotics.
Agents considered for weaponization, or known to be weaponized, include bacteria such as Bacillus anthracis, Brucella spp., Burkholderia mallei, Burkholderia pseudomallei, Chlamydophila psittaci, Coxiella burnetii, Francisella tularensis, some of the Rickettsiaceae (especially Rickettsia prowazekii and Rickettsia rickettsii), Shigella spp., Vibrio cholerae, and Yersinia pestis. Many viral agents have been studied and/or weaponized, including some of the Bunyaviridae (especially Rift Valley fever virus), Ebolavirus, many of the Flaviviridae (especially Japanese encephalitis virus), Machupo virus, Coronaviruses, Marburg virus, Variola virus, and yellow fever virus. Fungal agents that have been studied include Coccidioides spp.
Toxins that can be used as weapons include ricin, staphylococcal enterotoxin B, botulinum toxin, saxitoxin, and many mycotoxins. These toxins and the organisms that produce them are sometimes referred to as select agents. In the United States, their possession, use, and transfer are regulated by the Centers for Disease Control and Prevention's Select Agent Program.
The former US biological warfare program categorized its weaponized anti-personnel bio-agents as either Lethal Agents (Bacillus anthracis, Francisella tularensis, Botulinum toxin) or Incapacitating Agents (Brucella suis, Coxiella burnetii, Venezuelan equine encephalitis virus, Staphylococcal enterotoxin B).
Anti-agriculture
Anti-crop/anti-vegetation/anti-fisheries
The United States developed an anti-crop capability during the Cold War that used plant diseases (bioherbicides, or mycoherbicides) for destroying enemy agriculture. Biological weapons also target fisheries as well as water-based vegetation. It was believed that the destruction of enemy agriculture on a strategic scale could thwart Sino-Soviet aggression in a general war. Diseases such as wheat blast and rice blast were weaponized in aerial spray tanks and cluster bombs for delivery to enemy watersheds in agricultural regions to initiate epiphytotics (epidemics among plants). On the other hand, some sources report that these agents were stockpiled but never weaponized. When the United States renounced its offensive biological warfare program in 1969 and 1970, the vast majority of its biological arsenal was composed of these plant diseases. Enterotoxins and mycotoxins were not affected by Nixon's order.
Though herbicides are chemicals, they are often grouped with biological warfare and chemical warfare because they may work in a similar manner as biotoxins or bioregulators. The Army Biological Laboratory tested each agent and the Army's Technical Escort Unit was responsible for the transport of all chemical, biological, radiological (nuclear) materials.
Biological warfare can also specifically target plants to destroy crops or defoliate vegetation. The United States and Britain discovered plant growth regulators (i.e., herbicides) during the Second World War, which were then used by the UK in the counterinsurgency operations of the Malayan Emergency. Inspired by the use in Malaysia, the US military effort in the Vietnam War included a mass dispersal of a variety of herbicides, famously Agent Orange, with the aim of destroying farmland and defoliating forests used as cover by the Viet Cong. Sri Lanka deployed military defoliants in its prosecution of the Eelam War against Tamil insurgents.
Anti-livestock
During World War I, German saboteurs used anthrax and glanders to sicken cavalry horses in the U.S. and France, sheep in Romania, and livestock in Argentina intended for the Entente forces. One of these German saboteurs was Anton Dilger. Also, Germany itself became a victim of similar attacks – horses bound for Germany were infected with Burkholderia by French operatives in Switzerland.
During World War II, the U.S. and Canada secretly investigated the use of rinderpest, a highly lethal disease of cattle, as a bioweapon.
In the 1980s, the Soviet Ministry of Agriculture successfully developed variants of foot-and-mouth disease and rinderpest for use against cattle, African swine fever for pigs, and psittacosis to kill chickens. These agents were prepared to be sprayed from tanks attached to airplanes over hundreds of miles. The secret program was code-named "Ecology".
During the Mau Mau Uprising in 1952, the poisonous latex of the African milk bush was used to kill cattle.
Defensive operations
Medical countermeasures
In 2010, at the Meeting of the States Parties to the Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and Their Destruction in Geneva, sanitary epidemiological reconnaissance was suggested as a well-tested means for enhancing the monitoring of infections and parasitic agents, for the practical implementation of the International Health Regulations (2005). The aim was to prevent and minimize the consequences of natural outbreaks of dangerous infectious diseases as well as the threat of alleged use of biological weapons against BTWC States Parties.
Many countries require their active-duty military personnel to get vaccinated for certain diseases that may potentially be used as a bioweapon such as anthrax, smallpox, and various other vaccines depending on the Area of Operations of the individual military units and commands.
Public health and disease surveillance
Most classical and modern biological weapons' pathogens can be obtained from a plant or an animal which is naturally infected.
In the largest known biological weapons accident, the 1979 anthrax outbreak in Sverdlovsk (now Yekaterinburg) in the Soviet Union, sheep became ill with anthrax as far as 200 kilometers from the point at which the organism was released from a military facility in the southeastern portion of the city, an area that remains off-limits to visitors today (see Sverdlovsk anthrax leak).
Thus, a robust surveillance system involving human clinicians and veterinarians may identify a bioweapons attack early in the course of an epidemic, permitting the prophylaxis of disease in the vast majority of people (and/or animals) exposed but not yet ill.
For example, in the case of anthrax, it is likely that by 24–36 hours after an attack, some small percentage of individuals (those with a compromised immune system or who had received a large dose of the organism due to proximity to the release point) will become ill with classical symptoms and signs (including a virtually unique chest X-ray finding, often recognized by public health officials if they receive timely reports). The incubation period for humans is estimated to be about 11.8 days to 12.1 days. This suggested period comes from the first model that is independently consistent with data from the largest known human outbreak. These projections refine previous estimates of the distribution of early-onset cases after a release and support a recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses of anthrax. By making these data available to local public health officials in real time, most models of anthrax epidemics indicate that more than 80% of an exposed population can receive antibiotic treatment before becoming symptomatic, and thus avoid the moderately high mortality of the disease.
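The role of the incubation period in that argument can be sketched with a toy calculation. The snippet below is not any published anthrax model; the lognormal distribution and every delay value are assumptions chosen only to illustrate why a long median incubation (around 11.8–12.1 days) leaves a window for post-exposure prophylaxis.

```python
import numpy as np

# Toy illustration only: the lognormal form and all parameter values are
# assumptions for demonstration, not figures from a validated anthrax model.
rng = np.random.default_rng(42)

n_exposed = 100_000
median_incubation_days = 11.8   # assumed median, loosely echoing the estimate in the text
log_sd = 0.5                    # assumed dispersion on the log scale

# Simulated incubation times (days from exposure to symptom onset).
incubation = rng.lognormal(mean=np.log(median_incubation_days),
                           sigma=log_sd, size=n_exposed)

# Assumed response timeline: first cases recognised about 1.5 days after the
# attack, plus about 3 days to distribute prophylactic antibiotics.
days_until_prophylaxis = 1.5 + 3.0

fraction_reached_in_time = np.mean(incubation > days_until_prophylaxis)
print(f"Exposed people reachable before symptom onset: {fraction_reached_in_time:.1%}")
```

Under these assumed numbers, well over 80% of the exposed population would still be asymptomatic when antibiotics arrive; shortening the surveillance and distribution delays raises that fraction further.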
Common epidemiological warnings
From most specific to least specific:
A single case of disease caused by an uncommon agent, with no epidemiological explanation.
Unusual, rare, genetically engineered strain of an agent.
High morbidity and mortality rates among patients with the same or similar symptoms.
Unusual presentation of the disease.
Unusual geographic or seasonal distribution.
Stable endemic disease, but with an unexplained increase in incidence.
Rare transmission (aerosols, food, water).
No illness in people who were not exposed to a common ventilation system (i.e., who have separate, closed ventilation systems), when illness is seen in persons in close proximity who share a common ventilation system.
Different and unexplained diseases coexisting in the same patient without any other explanation.
Rare illness that affects a large, disparate population (respiratory disease might suggest the pathogen or agent was inhaled).
Illness that is unusual for the population or age group in which it occurs.
Unusual trends of death and/or illness in animal populations, previous to or accompanying illness in humans.
Many affected people seeking treatment at the same time.
Similar genetic makeup of agents in affected individuals.
Simultaneous collections of similar illness in non-contiguous areas, domestic, or foreign.
An abundance of cases of unexplained diseases and deaths.
Bioweapon identification
The goal of biodefense is to integrate the sustained efforts of the national and homeland security, medical, public health, intelligence, diplomatic, and law enforcement communities. Health care providers and public health officers are among the first lines of defense. In some countries private, local, and provincial (state) capabilities are being augmented by and coordinated with federal assets, to provide layered defenses against biological weapon attacks. During the first Gulf War the United Nations activated a biological and chemical response team, Task Force Scorpio, to respond to any potential use of weapons of mass destruction on civilians.
The traditional approach toward protecting agriculture, food, and water, which focuses on the natural or unintentional introduction of a disease, is being strengthened by focused efforts to address current and anticipated future biological weapons threats that may be deliberate, multiple, and repetitive.
The growing threat of biowarfare agents and bioterrorism has led to the development of specific field tools that perform on-the-spot analysis and identification of encountered suspect materials. One such technology, being developed by researchers from the Lawrence Livermore National Laboratory (LLNL), employs a "sandwich immunoassay", in which fluorescent dye-labeled antibodies aimed at specific pathogens are attached to silver and gold nanowires.
In the Netherlands, the company TNO has designed Bioaerosol Single Particle Recognition eQuipment (BiosparQ). This system would be implemented into the national response plan for bioweapon attacks in the Netherlands.
Researchers at Ben Gurion University in Israel are developing a different device called the BioPen, essentially a "Lab-in-a-Pen", which can detect known biological agents in under 20 minutes using an adaptation of the ELISA, a similar widely employed immunological technique, that in this case incorporates fiber optics.
List of programs, projects and sites by country
United States
Fort Detrick, Maryland
U.S. Army Biological Warfare Laboratories (1943–69)
Building 470
One-Million-Liter Test Sphere
Operation Sea-Spray
Operation Whitecoat (1954–73)
U.S. entomological warfare program
Operation Big Itch
Operation Big Buzz
Operation Drop Kick
Operation May Day
Project Bacchus
Project Clear Vision
Project SHAD
Project 112
Horn Island Testing Station
Fort Terry
Granite Peak Installation
Vigo Ordnance Plant
United Kingdom
Porton Down
Gruinard Island
Nancekuke
Operation Vegetarian (1942–1944)
Open-air field tests:
Operation Harness off Antigua, 1948–1950.
Operation Cauldron off Stornoway, 1952.
Operation Hesperus off Stornoway, 1953.
Operation Ozone off Nassau, 1954.
Operation Negation off Nassau, 1954–5.
Soviet Union and Russia
Biopreparat (18 labs and production centers)
Stepnogorsk Scientific and Technical Institute for Microbiology, Stepnogorsk, northern Kazakhstan
Institute of Ultra Pure Biochemical Preparations, Leningrad, a weaponized plague center
Vector State Research Center of Virology and Biotechnology (VECTOR), a weaponized smallpox center
Institute of Applied Biochemistry, Omutninsk
Kirov bioweapons production facility, Kirov, Kirov Oblast
Zagorsk smallpox production facility, Zagorsk
Berdsk bioweapons production facility, Berdsk
Bioweapons research facility, Obolensk
Sverdlovsk bioweapons production facility (Military Compound 19), Sverdlovsk, a weaponized anthrax center
Institute of Virus Preparations
Poison laboratory of the Soviet secret services
Vozrozhdeniya
Project Bonfire
Project Factor
Japan
Unit 731
Zhongma Fortress
Kaimingjie germ weapon attack
Khabarovsk War Crime Trials
Epidemic Prevention and Water Purification Department
Iraq
Al Hakum
Salman Pak facility
Al Manal facility
South Africa
Project Coast
Delta G Scientific Company
Roodeplaat Research Laboratories
Protechnik
Rhodesia
Canada
Grosse Isle, Quebec, site (1939–45) of research into anthrax and other agents
DRDC Suffield, Suffield, Alberta
List of associated people
Bioweaponeers:
Includes scientists and administrators
Shyh-Ching Lo
Kanatjan Alibekov, known as Ken Alibek
Ira Baldwin
Wouter Basson
Kurt Blome
Eugen von Haagen
Anton Dilger
Paul Fildes
Arthur Galston (unwittingly)
Kurt Gutzeit
Riley D. Housewright
Shiro Ishii
Elvin A. Kabat
George W. Merck
Frank Olson
Vladimir Pasechnik
William C. Patrick III
Sergei Popov
Theodor Rosebury
Rihab Rashid Taha
Prince Tsuneyoshi Takeda
Huda Salih Mahdi Ammash
Nassir al-Hindawi
Erich Traub
Auguste Trillat
Baron Otto von Rosen
Yujiro Wakamatsu
Yazid Sufaat
Writers and activists:
Daniel Barenblatt
Leonard A. Cole
Stephen Endicott
Arthur Galston
Jeanne Guillemin
Edward Hagerman
Sheldon H. Harris
Nicholas D. Kristof
Joshua Lederberg
Matthew Meselson
Toby Ord
Richard Preston
Ed Regis
Mark Wheelis
David Willman
Aaron Henderson
In popular culture
See also
Animal-borne bomb attacks
Antibiotic resistance
Asymmetric warfare
Baker Island
Bioaerosol
Biological contamination
Biological pest control
Biosecurity
Chemical weapon
Counterinsurgency
Discredited AIDS origins theories
Enterotoxin
Entomological warfare
Ethnic bioweapon
Herbicidal warfare
Hittite plague
Human experimentation in the United States
John W. Powell
Johnston Atoll Chemical Agent Disposal System
List of CBRN warfare forces
McNeill's law
Military animal
Mycotoxin
Plum Island Animal Disease Center
Project 112
Project AGILE
Project SHAD
Rhodesia and weapons of mass destruction
Trichothecene
Well poisoning
Yellow rain
References
Further reading
Counterproliferation Paper No. 53, USAF Counterproliferation Center, Air University, Maxwell Air Force Base, Alabama, USA.
External links
Biological weapons and international humanitarian law , ICRC
WHO: Health Aspects of Biological and Chemical Weapons
USAMRIID—U.S. Army Medical Research Institute of Infectious Diseases
Bioethics
Warfare by type
Refeeding syndrome | Refeeding syndrome (RFS) is a metabolic disturbance which occurs as a result of reinstitution of nutrition in people who are starved, severely malnourished, or metabolically stressed because of severe illness. When too much food or liquid nutrition supplement is eaten during the initial four to seven days following a malnutrition event, the production of glycogen, fat and protein in cells may cause low serum concentrations of potassium, magnesium and phosphate. The electrolyte imbalance may cause neurologic, pulmonary, cardiac, neuromuscular, and hematologic symptoms—many of which, if severe enough, may result in death.
Cause
Any individual who has had a negligible nutrient intake for many consecutive days and/or is metabolically stressed from a critical illness or major surgery is at risk of refeeding syndrome. Refeeding syndrome usually occurs within four days of starting to re-feed. Patients can develop fluid and electrolyte imbalance, especially hypophosphatemia, along with neurologic, pulmonary, cardiac, neuromuscular, and hematologic complications.
During fasting, the body switches its main fuel source from carbohydrates to fatty acids from fat tissue, and it is contended that amino acids from protein sources such as muscle also serve as a main energy source. The timing of protein use is contested: one view holds that at first the body practices autophagy to source amino acids, rather than using protein and fat simultaneously, and that the body only uses protein as a fuel source once all fat has been depleted. The spleen decreases its rate of red blood cell breakdown, thus conserving red blood cells. Many intracellular minerals become severely depleted during this period, although serum levels remain normal. Importantly, insulin secretion is suppressed in this fasting state, and glucagon secretion is increased.
During refeeding, insulin secretion resumes in response to increased blood sugar, resulting in increased glycogen, fat, and protein synthesis. Refeeding increases the basal metabolic rate. The process requires phosphates, magnesium and potassium, which are already depleted, and the stores rapidly become used up. Formation of phosphorylated carbohydrate compounds in the liver and skeletal muscle depletes intracellular ATP and 2,3-diphosphoglycerate in red blood cells, leading to cellular dysfunction and inadequate oxygen delivery to the body's organs. Intracellular movement of electrolytes occurs along with a fall in the serum electrolytes, including phosphate and magnesium. Levels of serum glucose may rise, and vitamin B1 (thiamine) may fall. Abnormal heart rhythms are the most common cause of death from refeeding syndrome, with other significant risks including confusion, coma, convulsions, and cardiac failure.
Anorectics
An anorectic or anorexic is a drug which reduces appetite, resulting in lower food consumption, leading to weight loss.
Examples of anorectics include stimulants like amphetamines, methylphenidate, and cocaine, along with opiates. Abusing them can lead to prolonged periods of inadequate calorie intake, mimicking anorexia nervosa. If someone misuses these substances and then starts eating normally again, they may be at increased risk of refeeding syndrome.
Clinical situations
The syndrome can occur at the beginning of treatment for eating disorders when patients have an increase in calorie intake, and can be fatal. It can also occur when someone resumes eating after going without food for several days at a time, with the risk usually beginning after 4–5 days with no food. It can also occur after the onset of a severe illness or major surgery. The shifting of electrolytes and fluid balance increases cardiac workload and heart rate. This can lead to acute heart failure. Oxygen consumption is increased, which strains the respiratory system and can make weaning from ventilation more difficult.
Diagnosis
Refeeding syndrome can be fatal if not recognized and treated properly. The electrolyte disturbances of the refeeding syndrome can occur within the first few days of refeeding. Close monitoring of blood biochemistry is therefore necessary in the early refeeding period.
Treatment
In critically ill patients admitted to an intensive care unit, if phosphate drops to below 0.65 mmol/L (2.0 mg/dL) from a previously normal level within three days of starting enteral or parenteral nutrition, caloric intake should be reduced to 480 kcals per day for at least two days while electrolytes are replaced. Daily doses of NADH/CoQ10/Thiamine, Vitamin B complex (strong) and a multivitamin and mineral preparation are strongly recommended. Blood biochemistry should be monitored regularly until it is stable. Although clinical trials are lacking in patients other than those admitted to intensive care, it is commonly recommended that energy intake should remain lower than that normally required for the first 3–5 days of treatment of refeeding syndrome for all patients.
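The threshold rule quoted above can be summarised in a short sketch. This is only an illustration of the stated cut-offs (phosphate below 0.65 mmol/L within three days of starting feeding, caloric restriction to about 480 kcal per day for at least two days); the function name and inputs are hypothetical, and electrolyte replacement and monitoring are not modelled.

def refeeding_phosphate_flag(phosphate_mmol_per_l, days_since_feeding_started, baseline_was_normal=True):
    # Reflects the rule described in this section: a drop below 0.65 mmol/L
    # (2.0 mg/dL) from a previously normal level within three days of starting
    # enteral or parenteral nutrition triggers caloric restriction while
    # electrolytes are replaced.
    if baseline_was_normal and days_since_feeding_started <= 3 and phosphate_mmol_per_l < 0.65:
        return {"reduce_calories": True, "suggested_kcal_per_day": 480, "minimum_days": 2}
    return {"reduce_calories": False}

# Example: a phosphate of 0.5 mmol/L on day 2 of feeding is flagged.
print(refeeding_phosphate_flag(0.5, 2))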
History
In his 5th century BC work "On Fleshes" (De Carnibus), Hippocrates writes, "if a person goes seven days without eating or drinking anything, in this period most die; but there are some who survive that time but still die, and others are persuaded not to starve themselves to death but to eat and drink: however, the cavity no longer admits anything because the jejunum (nêstis) has grown together in that many days, and these people too die." Although Hippocrates misidentifies the cause of death, this passage likely represents an early description of refeeding syndrome. The Roman historian Flavius Josephus writing in the 1st century AD described classic symptoms of the syndrome among survivors of the siege of Jerusalem. He described the death of those who overindulged in food after the famine, whereas those who ate at a more restrained pace survived. Shincho koki chronicle describes similar outcome when starved soldiers were fed after surrender at the siege of Tottori castle on October 25, 1581.
There were numerous cases of refeeding syndrome in the Siege of Leningrad during World War II, with Soviet civilians trapped in the city having become malnourished due to the German blockade.
A common error, repeated in multiple papers, is that "The syndrome was first described after World War II in Americans who, held by the Japanese as prisoners of war, had become malnourished during captivity and who were then released to the care of United States personnel in the Philippines."
However, closer inspection of the 1951 paper by Schnitker reveals the prisoners under study were not American POWs but Japanese soldiers who, already malnourished, surrendered in the Philippines during 1945, after the war was over.
Refeeding syndrome has also been documented among survivors of the Ebensee concentration camp upon their liberation by the United States Army in May 1945. After liberation, the inmates were fed rich soup; the stomachs of a few presumably could not handle the sudden caloric intake and digestion, and they died.
It is difficult to ascertain when the syndrome was first discovered and named, but the associated electrolyte disturbances were likely first identified in the Netherlands during the so-called Hunger Winter, spanning the closing months of World War II.
See also
Minnesota Starvation Experiment
F-100 and F-75
References
Bibliography
Shils, M.E., Shike, M., Ross, A.C., Caballero, B. & Cousins, R.J. (2006). Modern nutrition in health and disease, 10th ed. Lippincott, Williams & Wilkins. Baltimore, MD.
Mahan, L.K. & Escott-Stump, S.E. (2004) Krause's Food, Nutrition, & Diet Therapy, 11th ed. Saunders, Philadelphia, PA.
Web page with link to full guideline CG32.
External links
Nutrition
Metabolic disorders
Intensive care medicine
Syndromes | 0.764102 | 0.998925 | 0.76328 |
Neck | The neck is the part of the body on many vertebrates that connects the head with the torso. The neck supports the weight of the head and protects the nerves that carry sensory and motor information from the brain down to the rest of the body. In addition, the neck is highly flexible and allows the head to turn and flex in all directions. The structures of the human neck are anatomically grouped into four compartments: vertebral, visceral and two vascular compartments. Within these compartments, the neck houses the cervical vertebrae and cervical part of the spinal cord, upper parts of the respiratory and digestive tracts, endocrine glands, nerves, arteries and veins. Muscles of the neck are described separately from the compartments. They bound the neck triangles.
In anatomy, the neck is also called by its Latin names, cervix or collum, although when used alone, in context, the word cervix more often refers to the uterine cervix, the neck of the uterus. Thus the adjective cervical may refer either to the neck (as in cervical vertebrae or cervical lymph nodes) or to the uterine cervix (as in cervical cap or cervical cancer).
Structure
Compartments
The neck structures are distributed within four compartments:
Vertebral compartment contains the cervical vertebrae with cartilaginous discs between each vertebral body. The alignment of the vertebrae defines the shape of the human neck. As the vertebrae bound the spinal canal, the cervical portion of the spinal cord is also found within the neck.
Visceral compartment accommodates the trachea, larynx, pharynx, thyroid and parathyroid glands.
Vascular compartment is paired and consists of the two carotid sheaths found on each side of the trachea. Each carotid sheath contains the vagus nerve, common carotid artery and internal jugular vein.
Besides the listed structures, the neck contains cervical lymph nodes which surround the blood vessels.
Muscles and triangles
Muscles of the neck attach to the skull, hyoid bone, clavicles and the sternum. They bound the two major neck triangles; anterior and posterior.
Anterior triangle is defined by the anterior border of the sternocleidomastoid muscle, inferior edge of the mandible and the midline of the neck. It contains the stylohyoid, digastric, mylohyoid, geniohyoid, omohyoid, sternohyoid, thyrohyoid and sternothyroid muscles. These muscles are grouped as the suprahyoid and infrahyoid muscles depending on if they are located superiorly or inferiorly to the hyoid bone. The suprahyoid muscles (stylohyoid, digastric, mylohyoid, geniohyoid) elevate the hyoid bone, while the infrahyoid muscles (omohyoid, sternohyoid, thyrohyoid, sternothyroid) depress it. Acting synchronously, both groups facilitate speech and swallowing.
Posterior triangle is bordered by the posterior border of the sternocleidomastoid muscle, anterior border of the trapezius muscle and the superior edge of the middle third of the clavicle. This triangle contains the sternocleidomastoid, trapezius, splenius capitis, levator scapulae, omohyoid, anterior, middle and posterior scalene muscles.
Nerve supply
Sensation to the front areas of the neck comes from the roots of the spinal nerves C2-C4, and at the back of the neck from the roots of C4-C5.
In addition to nerves coming from and within the human spine, the accessory nerve and vagus nerve travel down the neck.
Blood supply and vessels
Arteries which supply the neck are common carotid arteries, which bifurcate into the internal and external carotid arteries.
Surface anatomy
The thyroid cartilage of the larynx forms a bulge in the midline of the neck called the Adam's apple. The Adam's apple is usually more prominent in men. Inferior to the Adam's apple is the cricoid cartilage. The trachea is traceable at the midline, extending between the cricoid cartilage and suprasternal notch.
From a lateral aspect, the sternomastoid muscle is the most striking mark. It separates the anterior triangle of the neck from the posterior. The upper part of the anterior triangle contains the submandibular glands, which lie just below the posterior half of the mandible. The line of the common and the external carotid arteries can be marked by joining the sterno-clavicular articulation to the angle of the jaw. Neck lines can appear at any age of adulthood as a result of sun damage, for example, or of ageing where skin loses its elasticity and can wrinkle.
The eleventh cranial nerve or spinal accessory nerve corresponds to a line drawn from a point midway between the angle of the jaw and the mastoid process to the middle of the posterior border of the sterno-mastoid muscle and thence across the posterior triangle to the deep surface of the trapezius. The external jugular vein can usually be seen through the skin; it runs in a line drawn from the angle of the jaw to the middle of the clavicle, and close to it are some small lymphatic glands. The anterior jugular vein is smaller and runs down about half an inch from the middle line of the neck. The clavicle or collarbone forms the lower limit of the neck, and laterally the outward slope of the neck to the shoulder is caused by the trapezius muscle.
Pain
Disorders of the neck are a common source of pain. The neck has a great deal of functionality but is also subject to a lot of stress. Common sources of neck pain (and related pain syndromes, such as pain that radiates down the arm) include:
Whiplash, a strained muscle or another soft tissue injury
Cervical herniated disc
Cervical spinal stenosis
Osteoarthritis
Vascular sources of pain, like arterial dissections or internal jugular vein thrombosis
Cervical adenitis
Circumference
Higher neck circumference has been associated with cardiometabolic risk. Upper-body fat distribution carries a worse prognosis than lower-body fat distribution for diseases such as type 2 diabetes mellitus or ischemic heart disease. Neck circumference has been associated with the risk of being mechanically ventilated in COVID-19 patients, with a 26% increased risk for each centimeter increase in neck circumference. Moreover, hospitalized COVID-19 patients with a "large neck phenotype" on admission had more than double the risk of death.
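As a rough illustration of how the reported per-centimetre estimate scales, the sketch below assumes the quoted 26% increase compounds multiplicatively across centimetres; this is an assumption made for illustration, not a reproduction of the original study's model, and the function name is hypothetical.

def ventilation_risk_ratio(extra_neck_cm, per_cm_increase=0.26):
    # Assumes the quoted 26% increase per centimetre compounds multiplicatively.
    return (1 + per_cm_increase) ** extra_neck_cm

# Example: a neck 3 cm larger than a reference value corresponds to roughly
# double the risk under this assumption.
print(round(ventilation_risk_ratio(3), 2))  # about 2.0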
Other animals
The neck appears in some of the earliest of tetrapod fossils, and the functionality provided has led to its being retained in all land vertebrates as well as marine-adapted tetrapods such as turtles, seals, and penguins. Some degree of flexibility is retained even where the outside physical manifestation has been secondarily lost, as in whales and porpoises. A morphologically functioning neck also appears among insects. Its absence in fish and aquatic arthropods is notable, as many have life stations similar to a terrestrial or tetrapod counterpart or could otherwise make use of the added flexibility.
The word "neck" is sometimes used as a convenience to refer to the region behind the head in some snails, gastropod mollusks, even though there is no clear distinction between this area, the head area, and the rest of the body.
See also
Throat
Nape
References
External links
American Head and Neck Society
The Anatomy Wiz. An Interactive Cross-Sectional Anatomy Atlas | 0.767708 | 0.994202 | 0.763257 |
Hypoglycemia | Hypoglycemia (American English), also spelled hypoglycaemia or hypoglycæmia (British English), sometimes called low blood sugar, is a fall in blood sugar to levels below normal, typically below 70 mg/dL (3.9 mmol/L). Whipple's triad is used to properly identify hypoglycemic episodes. It is defined as blood glucose below 70 mg/dL (3.9 mmol/L), symptoms associated with hypoglycemia, and resolution of symptoms when blood sugar returns to normal. Hypoglycemia may result in headache, tiredness, clumsiness, trouble talking, confusion, fast heart rate, sweating, shakiness, nervousness, hunger, loss of consciousness, seizures, or death. Symptoms typically come on quickly.
The most common cause of hypoglycemia is medications used to treat diabetes such as insulin, sulfonylureas, and biguanides. Risk is greater in diabetics who have eaten less than usual, recently exercised, or consumed alcohol. Other causes of hypoglycemia include severe illness, sepsis, kidney failure, liver disease, hormone deficiency, tumors such as insulinomas or non-B cell tumors, inborn errors of metabolism, and several medications. Low blood sugar may occur in otherwise healthy newborns who have not eaten for a few hours.
Hypoglycemia is treated by eating a sugary food or drink, for example glucose tablets or gel, apple juice, soft drink, or lollies. The person must be conscious and able to swallow. The goal is to consume 10–20 grams of a carbohydrate to raise blood glucose levels to a minimum of 70 mg/dL (3.9 mmol/L). If a person is not able to take food by mouth, glucagon by injection or insufflation may help. The treatment of hypoglycemia unrelated to diabetes includes treating the underlying problem.
Among people with diabetes, prevention starts with learning the signs and symptoms of hypoglycemia. Diabetes medications, like insulin, sulfonylureas, and biguanides can also be adjusted or stopped to prevent hypoglycemia. Frequent and routine blood glucose testing is recommended. Some may find continuous glucose monitors with insulin pumps to be helpful in the management of diabetes and prevention of hypoglycemia.
Definition
Hypoglycemia, also called low blood sugar or low blood glucose, is a blood-sugar level below 70 mg/dL (3.9 mmol/L).
Blood-sugar levels naturally fluctuate throughout the day, the body normally maintaining levels between 70 and 110 mg/dL (3.9–6.1 mmol/L). Although 70 mg/dL (3.9 mmol/L) is the lower limit of normal glucose, symptoms of hypoglycemia usually do not occur until blood sugar has fallen to 55 mg/dL (3.0 mmol/L) or lower. The blood-glucose level at which symptoms of hypoglycemia develop in someone with several prior episodes of hypoglycemia may be even lower.
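The mg/dL and mmol/L figures quoted in this article are related by the molar mass of glucose (about 180.2 g/mol), so dividing mg/dL by roughly 18 gives mmol/L. A minimal conversion sketch follows; the constant comes from the molar mass rather than from this article, and the helper name is arbitrary.

GLUCOSE_MOLAR_MASS_G_PER_MOL = 180.16  # glucose, C6H12O6

def glucose_mg_dl_to_mmol_l(mg_per_dl):
    # (mg/dL) * (10 dL/L) / (mg/mmol) = mmol/L
    return mg_per_dl * 10 / GLUCOSE_MOLAR_MASS_G_PER_MOL

print(round(glucose_mg_dl_to_mmol_l(70), 1))   # 3.9, the hypoglycemia threshold quoted above
print(round(glucose_mg_dl_to_mmol_l(110), 1))  # 6.1, the upper end of the normal range quoted above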
Whipple's triad
The symptoms of low blood sugar alone are not specific enough to characterize a hypoglycemic episode. A single blood sugar reading below 70 mg/dL is also not specific enough to characterize a hypoglycemic episode. Whipple's triad is a set of three conditions that need to be met to accurately characterize a hypoglycemic episode.
The three conditions are the following (combined into a single check in the sketch after this list):
The signs and symptoms of hypoglycemia are present (see section below on Signs and Symptoms)
A low blood glucose measurement is present, typically less than 70 mg/dL (3.9 mmol/L)
The signs and symptoms of hypoglycemia resolve after blood glucose levels have returned to normal
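Taken together, the three conditions amount to a simple conjunction. The sketch below only restates the definition given above; the function and its inputs are hypothetical.

def meets_whipples_triad(symptoms_present, glucose_mg_dl, symptoms_resolve_when_normal):
    # All three must hold: symptoms of hypoglycemia, a measured glucose below
    # 70 mg/dL (3.9 mmol/L), and resolution of symptoms once glucose is normal.
    return symptoms_present and glucose_mg_dl < 70 and symptoms_resolve_when_normal

# Example: a symptomatic person measured at 62 mg/dL whose symptoms resolve after treatment.
print(meets_whipples_triad(True, 62, True))  # True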
Age
The biggest difference in blood glucose levels between the adult and pediatric population occurs in newborns during the first 48 hours of life. After the first 48 hours of life, the Pediatric Endocrine Society cites that there is little difference in blood glucose level and the use of glucose between adults and children. During the 48-hour neonatal period, the neonate adjusts glucagon and epinephrine levels following birth, which may cause temporary hypoglycemia. As a result, there has been difficulty in developing guidelines on interpretation and treatment of low blood glucose in neonates aged less than 48 hours. Following a data review, the Pediatric Endocrine Society concluded that neonates aged less than 48 hours begin to respond to hypoglycemia at serum glucose levels of 55–65 mg/dL (3.0–3.6 mmol/L). This is contrasted by the value in adults, children, and older infants, which is approximately 80–85 mg/dL (4.4–4.7 mmol/L).
In children who are aged greater than 48 hours, serum glucose on average ranges from 70 to 100 mg/dL (3.9–5.5 mmol/L), similar to adults. Elderly patients and patients who take diabetes pills such as sulfonylureas are more likely to suffer from a severe hypoglycemic episode. Whipple's triad is used to identify hypoglycemia in children who can communicate their symptoms.
Differential diagnosis
Other conditions that may present at the same time as hypoglycemia include the following:
Alcohol or drug intoxication
Cardiac arrhythmia
Valvular heart disease
Postprandial syndrome
Hyperthyroidism
Pheochromocytoma
Post-gastric bypass hypoglycemia
Generalized anxiety disorder
Surreptitious insulin use
Lab or blood draw error (lack of antiglycolytic agent in collection tube or during processing)
Signs and symptoms
Hypoglycemic symptoms are divided into two main categories. The first category is symptoms caused by low glucose in the brain, called neuroglycopenic symptoms. The second category of symptoms is caused by the body's reaction to low glucose in the brain, called adrenergic symptoms.
Everyone experiences different symptoms of hypoglycemia, so someone with hypoglycemia may not have all of the symptoms listed above. Symptoms also tend to have quick onset. It is important to quickly obtain a blood glucose measurement in someone presenting with symptoms of hypoglycemia to properly identify the hypoglycemic episode.
Pathophysiology
Glucose is the main source of energy for the brain, and a number of mechanisms are in place to prevent hypoglycemia and protect energy supply to the brain. The body can adjust insulin production and release, adjust glucose production by the liver, and adjust glucose use by the body. The body naturally produces the hormone insulin, in an organ called the pancreas. Insulin helps to regulate the amount of glucose in the body, especially after meals. Glucagon is another hormone involved in regulating blood glucose levels, and can be thought of as the opposite of insulin. Glucagon helps to increase blood glucose levels, especially in states of hunger.
When blood sugar levels fall to the low-normal range, the first line of defense against hypoglycemia is decreasing insulin release by the pancreas. This drop in insulin allows the liver to increase glycogenolysis. Glycogenolysis is the process of glycogen breakdown that results in the production of glucose. Glycogen can be thought of as the inactive, storage form of glucose. Decreased insulin also allows for increased gluconeogenesis in the liver and kidneys. Gluconeogenesis is the process of glucose production from non-carbohydrate sources, supplied from muscles and fat.
Once blood glucose levels fall out of the normal range, additional protective mechanisms work to prevent hypoglycemia. The pancreas is signaled to release glucagon, a hormone that increases glucose production by the liver and kidneys, and increases muscle and fat breakdown to supply gluconeogenesis. If increased glucagon does not raise blood sugar levels to normal, the adrenal glands release epinephrine. Epinephrine works to also increase gluconeogenesis and glycogenolysis, while also decreasing the use of glucose by organs, protecting the brain's glucose supply.
After hypoglycemia has been prolonged, cortisol and growth hormone are released to continue gluconeogenesis and glycogenolysis, while also preventing the use of glucose by other organs. The effects of cortisol and growth hormone are far less effective than epinephrine. In a state of hypoglycemia, the brain also signals a sense of hunger and drives the person to eat, in an attempt to increase glucose.
Causes
Hypoglycemia is most common in those with diabetes treated by insulin, glinides, and sulfonylureas. Hypoglycemia is rare in those without diabetes, because there are many regulatory mechanisms in place to appropriately balance glucose, insulin, and glucagon. Please refer to Pathophysiology section for more information on glucose, insulin, and glucagon.
Diabetics
Medications
The most common cause of hypoglycemia in diabetics is medications used to treat diabetes such as insulin, sulfonylureas, and biguanides. This is often due to excessive doses or poorly timed doses. Sometimes diabetics may take insulin in anticipation of a meal or snack; then forgetting or missing eating that meal or snack can lead to hypoglycemia. This is due to increased insulin without the presence of glucose from the planned meal.
Hypoglycemic unawareness
Recurrent episodes of hypoglycemia can lead to hypoglycemic unawareness, or the decreased ability to recognize hypoglycemia. As diabetics experience more episodes of hypoglycemia, the blood glucose level which triggers symptoms of hypoglycemia decreases. In other words, people without hypoglycemic unawareness experience symptoms of hypoglycemia at a blood glucose of about 55 mg/dL (3.0 mmol/L). Those with hypoglycemic unawareness experience the symptoms of hypoglycemia at far lower levels of blood glucose. This is dangerous for a number of reasons. The hypoglycemic person not only gains awareness of hypoglycemia at very low blood glucose levels, but they also require high levels of carbohydrates or glucagon to recover their blood glucose to normal levels. These individuals are also at far greater risk of severe hypoglycemia.
While the exact cause of hypoglycemic unawareness is still under research, it is thought that these individuals progressively develop fewer adrenergic-type warning symptoms, so that neuroglycopenic-type symptoms become the first manifestation of hypoglycemia. Neuroglycopenic symptoms are caused by low glucose in the brain, and can result in tiredness, confusion, difficulty with speech, seizures, and loss of consciousness. Adrenergic symptoms are caused by the body's reaction to low glucose in the brain, and can result in fast heart rate, sweating, nervousness, and hunger. See section above on Signs and Symptoms for further explanation of neuroglycopenic symptoms and adrenergic symptoms.
In terms of epidemiology, hypoglycemic unawareness occurs in 20–40% of type 1 diabetics.
Other causes
Other causes of hypoglycemia in diabetics include the following:
Fasting, whether it be a planned fast or overnight fast, as there is a long period of time without glucose intake
Exercising more than usual as it leads to more use of glucose, especially by the muscles
Drinking alcohol, especially when combined with diabetic medications, as alcohol inhibits glucose production
Kidney disease, as insulin cannot be cleared out of circulation well
Non-diabetics
Serious illness
Serious illness may result in low blood sugar. Severe disease of many organ systems can cause hypoglycemia as a secondary problem. Hypoglycemia is especially common in those in the intensive care unit or those in whom food and drink is withheld as a part of their treatment plan.
Sepsis, a common cause of hypoglycemia in serious illness, can lead to hypoglycemia through many ways. In a state of sepsis, the body uses large amounts of glucose for energy. Glucose use is further increased by cytokine production. Cytokines are a protein produced by the body in a state of stress, particularly when fighting an infection. Cytokines may inhibit glucose production, further decreasing the body's energy stores. Finally, the liver and kidneys are sites of glucose production, and in a state of sepsis those organs may not receive enough oxygen, leading to decreased glucose production due to organ damage.
Other causes of serious illness that may cause hypoglycemia include liver failure and kidney failure. The liver is the main site of glucose production in the body, and any liver failure or damage will lead to decreased glucose production. While the kidneys are also sites of glucose production, their failure of glucose production is not significant enough to cause hypoglycemia. Instead, the kidneys are responsible for removing insulin from the body, and when this function is impaired in kidney failure, the insulin stays in circulation longer, leading to hypoglycemia.
Drugs
A number of medications have been identified which may cause hypoglycemia, through a variety of ways. Moderate quality evidence implicates the non-steroidal anti-inflammatory drug indomethacin and the anti-malarial quinine. Low quality evidence implicates lithium, used for bipolar disorder. Finally, very low quality evidence implicates a number of hypertension medications including angiotensin converting enzyme inhibitors (also called ACE-inhibitors), angiotensin receptor blockers (also called ARBs), and β-adrenergic blockers (also called beta blockers). Other medications with very low quality evidence include the antibiotics levofloxacin and trimethoprim-sulfamethoxazole, progesterone blocker mifepristone, anti-arrhythmic disopyramide, anti-coagulant heparin, and chemotherapeutic mercaptopurine.
If a person without diabetes accidentally takes medications that are traditionally used to treat diabetes, this may also cause hypoglycemia. These medications include insulin, glinides, and sulfonylureas. This may occur through medical errors in a healthcare setting or through pharmacy errors, also called iatrogenic hypoglycemia.
Surreptitious insulin use
When individuals take insulin without needing it, to purposefully induce hypoglycemia, this is referred to as surreptitious insulin use or factitious hypoglycemia. Some people may use insulin to induce weight loss, whereas for others this may be due to malingering or factitious disorder, which is a psychiatric disorder. Demographics affected by factitious hypoglycemia include women aged 30–40, particularly those with diabetes, relatives with diabetes, healthcare workers, or those with history of a psychiatric disorder. The classic way to identify surreptitious insulin use is through blood work revealing high insulin levels with low C-peptide and proinsulin.
Alcohol misuse
The production of glucose is blocked by alcohol. In those who misuse alcohol, hypoglycemia may be brought on by a several-day alcohol binge associated with little to no food intake. The cause of hypoglycemia is multifactorial: glycogen becomes depleted in a state of starvation, glycogen stores are then unable to be repleted due to the lack of food intake, and this is compounded by the inhibition of glucose production by alcohol.
Hormone deficiency
Children with primary adrenal failure, also called Addison's disease, may experience hypoglycemia after long periods of fasting. Addison's disease is associated with chronically low levels of the stress hormone cortisol, which leads to decreased glucose production.
Hypopituitarism, leading to decreased growth hormone, is another cause of hypoglycemia in children, particularly with long periods of fasting or increased exercise.
Inborn errors of metabolism
Briefly, inborn errors of metabolism are a group of rare genetic disorders that are associated with the improper breakdown or storage of proteins, carbohydrates, or fatty acids. Inborn errors of metabolism may cause infant hypoglycemia, and much less commonly adult hypoglycemia.
Disorders that are related to the breakdown of glycogen, called glycogen storage diseases, may cause hypoglycemia. Normally, breakdown of glycogen leads to increased glucose levels, particularly in a fasting state. In glycogen storage diseases, however, glycogen cannot be properly broken down, leading to inappropriately decreased glucose levels in a fasting state, and thus hypoglycemia. The glycogen storage diseases associated with hypoglycemia include type 0, type I, type III, and type IV, as well as Fanconi syndrome.
Some organic and amino acid acidemias, especially those involving the oxidation of fatty acids, can lead to the symptom of intermittent hypoglycemia, as for example in combined malonic and methylmalonic aciduria (CMAMMA), propionic acidemia or isolated methylmalonic acidemia.
Insulinomas
A primary B-cell tumor, such as an insulinoma, is associated with hypoglycemia. This is a tumor located in the pancreas. An insulinoma produces insulin, which in turn decreases glucose levels, causing hypoglycemia. Normal regulatory mechanisms are not in place, which prevent insulin levels from falling during states of low blood glucose. During an episode of hypoglycemia, plasma insulin, C-peptide, and proinsulin will be inappropriately high.
Non-B cell tumors
Hypoglycemia may occur in people with non-B cell tumors such as hepatomas, adrenocorticoid carcinomas, and carcinoid tumors. These tumors lead to a state of increased insulin, specifically increased insulin-like growth factor II, which decreases glucose levels.
Post-gastric bypass postprandial hypoglycemia
The Roux-en-Y gastric bypass is a weight-loss surgery performed on the stomach, and has been associated with hypoglycemia, called post-gastric bypass postprandial hypoglycemia. Although the entire mechanism of hypoglycemia following this surgery is not fully understood, it is thought that meals cause very high levels of glucagon-like peptide-1 (also called GLP-1), a hormone that increases insulin, causing glucose levels to drop.
Autoimmune hypoglycemia
Antibodies can be formed against insulin, leading to autoimmune hypoglycemia. Antibodies are immune proteins produced by the body that normally attack bacteria and viruses, but sometimes can attack normal human cells, leading to an autoimmune disorder. In autoimmune hypoglycemia, there are two possible mechanisms. In one instance, antibodies bind to insulin following its release associated with a meal, resulting in insulin being non-functional. At a later time, the antibodies fall off insulin, causing insulin to be functional again and leading to late hypoglycemia after a meal, called late postprandial hypoglycemia. A second mechanism involves antibodies formed against insulin receptors, called insulin receptor antibodies. The antibodies attach to insulin receptors and prevent insulin breakdown, or degradation, leading to inappropriately high insulin levels and low glucose levels.
Neonatal hypoglycemia
Low blood sugar may occur in healthy neonates aged less than 48 hours who have not eaten for a few hours. During the 48-hour neonatal period, the neonate adjusts glucagon and epinephrine levels following birth, which may trigger transient hypoglycemia. In children who are aged greater than 48 hours, serum glucose on average ranges from 70 to 100 mg/dL (3.9–5.5 mmol/L), similar to adults, with hypoglycemia being far less common.
Diagnostic approach
The most reliable method of identifying hypoglycemia is through identifying Whipple's triad. The components of Whipple's triad are a blood sugar level below 70 mg/dL (3.9 mmol/L), symptoms related to low blood sugar, and improvement of symptoms when blood sugar is restored to normal. Identifying Whipple's triad in a patient helps to avoid unnecessary diagnostic testing and decreases healthcare costs.
In those with a history of diabetes treated with insulin, glinides, or sulfonylurea, who demonstrate Whipple's triad, it is reasonable to assume the cause of hypoglycemia is due to insulin, glinides, or sulfonylurea use. In those without a history of diabetes with hypoglycemia, further diagnostic testing is necessary to identify the cause. Testing, during an episode of hypoglycemia, should include the following (a simplified reading of part of this panel is sketched after the list):
Plasma glucose level, not point-of-care measurement
Insulin level
C-peptide level
Proinsulin level
Beta-hydroxybutyrate level
Oral hypoglycemic agent screen
Response of blood glucose level to glucagon
Insulin antibodies
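A very simplified reading of part of this panel, based only on patterns mentioned elsewhere in this article (high insulin with low C-peptide and proinsulin in surreptitious insulin use, and inappropriately high insulin, C-peptide, and proinsulin with an insulinoma), might be sketched as follows. Real interpretation relies on assay-specific cut-offs and the full panel, so this is illustrative only and the function is hypothetical.

def interpret_insulin_panel(insulin_high, c_peptide_high, proinsulin_high):
    # Pattern matching drawn only from statements in this article.
    if insulin_high and not c_peptide_high and not proinsulin_high:
        return "pattern described for surreptitious (exogenous) insulin use"
    if insulin_high and c_peptide_high and proinsulin_high:
        return "pattern described for an insulinoma (endogenous insulin excess)"
    return "no simple pattern; the full panel and clinical context are needed"

print(interpret_insulin_panel(insulin_high=True, c_peptide_high=False, proinsulin_high=False))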
If necessary, a diagnostic hypoglycemic episode can be produced in an inpatient or outpatient setting. This is called a diagnostic fast, in which a patient undergoes an observed fast to cause a hypoglycemic episode, allowing for appropriate blood work to be drawn. In some, the hypoglycemic episode may be reproduced simply after a mixed meal, whereas in others a fast may last up to 72 hours.
In those with a suspected insulinoma, imaging is the most reliable diagnostic technique, including ultrasound, computed tomography (CT) imaging, and magnetic resonance imaging (MRI).
Treatment
After hypoglycemia in a person is identified, rapid treatment is necessary and can be life-saving. The main goal of treatment is to raise blood glucose back to normal levels, which is done through various ways of administering glucose, depending on the severity of the hypoglycemia, what is on-hand to treat, and who is administering the treatment. A general rule used by the American Diabetes Association is the "15-15 Rule," which suggests consuming or administering 15 grams of a carbohydrate, followed by a 15-minute wait and re-measurement of blood glucose level to assess if blood glucose has returned to normal levels.
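A minimal sketch of the "15-15 Rule" as described above follows. The names are hypothetical: measure_glucose_mg_dl stands in for a glucose-meter reading, give_carbohydrate_g for actually consuming the carbohydrate, and the roughly 15-minute wait between steps is noted but not modelled.

def fifteen_fifteen_rule(measure_glucose_mg_dl, give_carbohydrate_g, target_mg_dl=70, carbohydrate_g=15, max_cycles=4):
    # Repeat: check glucose; if below target, take ~15 g of carbohydrate,
    # wait about 15 minutes (not modelled here), and re-check.
    for _ in range(max_cycles):
        if measure_glucose_mg_dl() >= target_mg_dl:
            return True
        give_carbohydrate_g(carbohydrate_g)
    return measure_glucose_mg_dl() >= target_mg_dl

# Example with simulated meter readings: recovery after two 15 g doses.
readings = iter([58, 66, 74])
print(fifteen_fifteen_rule(lambda: next(readings), lambda grams: None))  # True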
Self-treatment
If an individual recognizes the symptoms of hypoglycemia coming on, blood sugar should promptly be measured, and a sugary food or drink should be consumed. The person must be conscious and able to swallow. The goal is to consume 10–20 grams of a carbohydrate to raise blood glucose levels to a minimum of 70 mg/dL (3.9 mmol/L).
Examples of products to consume are:
Glucose tabs or gel (refer to instructions on packet)
Juice containing sugar like apple, grape, or cranberry juice, 4 ounces or 1/2 cup
Soda or a soft-drink, 4 ounces or 1/2 cup (not diet soda)
Candy
Table sugar or honey, 1 tablespoon
Improvement in blood sugar levels and symptoms is expected to occur in 15–20 minutes, at which point blood sugar should be measured again. If the repeat blood sugar level is not above 70 mg/dL (3.9 mmol/L), consume another 10–20 grams of a carbohydrate and remeasure blood sugar levels after 15–20 minutes. Repeat until blood glucose levels have returned to normal levels. The greatest improvements in blood glucose will be seen if the carbohydrate is chewed or drunk, and then swallowed. This results in the greatest bioavailability of glucose, meaning the greatest amount of glucose enters the body, producing the best possible improvements in blood glucose levels. A 2019 systematic review suggests, based on very limited evidence, that oral administration of glucose leads to a bigger improvement in blood glucose levels when compared to buccal administration. This same review reported that, based on limited evidence, no difference was found in plasma glucose when administering combined oral and buccal glucose (via dextrose gel) compared to only oral administration. The second best way to consume a carbohydrate is to allow it to dissolve under the tongue, also referred to as sublingual administration. For example, a hard candy can be dissolved under the tongue; however, the best improvements in blood glucose will occur if the hard candy is chewed and crushed, then swallowed.
After correcting blood glucose levels, people may consume a full meal within one hour to replenish glycogen stores.
Education
Family, friends, and co-workers of a person with diabetes may provide life-saving treatment in the case of a hypoglycemic episode. It is important for these people to receive training on how to recognize hypoglycemia, what foods to help the hypoglycemic eat, how to administer injectable or intra-nasal glucagon, and how to use a glucose meter.
Treatment by family, friends, or co-workers
Family, friends, and co-workers of those with hypoglycemia are often first to identify hypoglycemic episodes, and may offer help. Upon recognizing the signs and symptoms of hypoglycemia in a diabetic, a blood sugar level should first be measured using a glucose meter. If blood glucose is below 70 mg/dL (3.9 mmol/L), treatment will depend on whether the person is conscious and can swallow safely. If the person is conscious and able to swallow, the family, friend, or co-worker can help the hypoglycemic consume 10–20 grams of a carbohydrate to raise blood glucose levels to a minimum of 70 mg/dL (3.9 mmol/L). Improvement in blood sugar level and symptoms is expected to occur in 15–20 minutes, at which point blood sugar is measured again. If the repeat blood sugar level is not above 70 mg/dL (3.9 mmol/L), the hypoglycemic should consume another 10–20 grams of a carbohydrate, with remeasurement of blood sugar levels after 15–20 minutes. Repeat until blood glucose levels have returned to normal levels, or call emergency services for further assistance.
If the person is unconscious, a glucagon kit may be used to treat severe hypoglycemia, which delivers glucagon either by injection into a muscle or through nasal inhalation. In the United States, glucagon kits are available by prescription for diabetic patients to carry in case of an episode of severe hypoglycemia. Emergency services should be called for further assistance.
Treatment by medical professionals
In a healthcare setting, treatment depends on the severity of symptoms and intravenous access. If a patient is conscious and able to swallow safely, food or drink may be administered, as well as glucose tabs or gel. In those with intravenous access, 25 grams of 50% dextrose is commonly administered. When there is no intravenous access, intramuscular or intra-nasal glucagon may be administered.
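The commonly quoted intravenous dose can be checked with simple arithmetic. Assuming "50% dextrose" means 50 g of dextrose per 100 mL (0.5 g per mL), a 25-gram dose corresponds to 50 mL of solution; the helper below is purely illustrative.

def dextrose_volume_ml(dose_g, concentration_g_per_100_ml=50):
    # 50% dextrose is taken here to mean 50 g per 100 mL, i.e. 0.5 g/mL.
    return dose_g / (concentration_g_per_100_ml / 100)

print(dextrose_volume_ml(25))  # 50.0 mL of 50% dextrose for a 25 g dose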
Other treatments
While the treatment of hypoglycemia is typically managed with carbohydrate consumption, glucagon injection, or dextrose administration, there are some other treatments available. Medications like diazoxide and octreotide decrease insulin levels, increasing blood glucose levels. Dasiglucagon was approved for medical use in the United States in March 2021, to treat severe hypoglycemia. Dasiglucagon (brand name Zegalogue) is unique because it is glucagon in a prefilled syringe or auto-injector pen, as opposed to traditional glucagon kits that require mixing powdered glucagon with a liquid.
The soft drink Lucozade has been used for hypoglycemia in the United Kingdom, but it has recently replaced much of its glucose with artificial sweeteners, which do not treat hypoglycemia.
Prevention
Diabetics
The prevention of hypoglycemia depends on the cause. In those with diabetes treated by insulin, glinides, or sulfonylurea, the prevention of hypoglycemia has a large focus on patient education and medication adjustments. The foundation of diabetes education is learning how to recognize the signs and symptoms of hypoglycemia, as well as learning how to act quickly to prevent worsening of an episode. Another cornerstone of prevention is strong self-monitoring of blood glucose, with consistent and frequent measurements. Research has shown that patients with type 1 diabetes who use continuous glucose monitoring systems with insulin pumps significantly improve blood glucose control. Insulin pumps help to prevent high glucose spikes, and help prevent inappropriate insulin dosing. Continuous glucose monitors can sound alarms when blood glucose is too low or too high, especially helping those with nocturnal hypoglycemia or hypoglycemic unawareness. In terms of medication adjustments, medication doses and timing can be adjusted to prevent hypoglycemia, or a medication can be stopped altogether.
Non-diabetics
In those with hypoglycemia who do not have diabetes, there are a number of preventative measures dependent on the cause. Hypoglycemia caused by hormonal dysfunction like lack of cortisol in Addison's disease or lack of growth hormone in hypopituitarism can be prevented with appropriate hormone replacement. The hypoglycemic episodes associated with non-B cell tumors can be decreased following surgical removal of the tumor, as well as following radiotherapy or chemotherapy to reduce the size of the tumor. In some cases, those with non-B cell tumors may have hormone therapy with growth hormone, glucocorticoid, or octreotide to also lessen hypoglycemic episodes. Post-gastric bypass hypoglycemia can be prevented by eating smaller, more frequent meals, avoiding sugar-filled foods, as well as medical treatment with an alpha-glucosidase inhibitor, diazoxide, or octreotide.
Some causes of hypoglycemia require treatment of the underlying cause to best prevent hypoglycemia. This is the case for insulinomas which often require surgical removal of the tumor for hypoglycemia to remit. In patients who cannot undergo surgery for removal of the insulinoma, diazoxide or octreotide may be used.
Epidemiology
Hypoglycemia is common in people with type 1 diabetes, and in people with type 2 diabetes taking insulin, glinides, or sulfonylurea. It is estimated that type 1 diabetics experience two mild, symptomatic episodes of hypoglycemia per week. Additionally, people with type 1 diabetes have at least one severe hypoglycemic episode per year, requiring treatment assistance. In terms of mortality, hypoglycemia causes death in 6–10% of type 1 diabetics.
In those with type 2 diabetes, hypoglycemia is less common compared to type 1 diabetics, because medications that treat type 2 diabetes like metformin, glitazones, alpha-glucosidase inhibitors, glucagon-like peptide 1 agonists, and dipeptidyl peptidase IV inhibitors, do not cause hypoglycemia. Hypoglycemia is common in type 2 diabetics who take insulin, glinides, or sulfonylurea. Insulin use remains a key risk factor in developing hypoglycemia, regardless of diabetes type.
History
Hypoglycemia was first discovered by James Collip when he was working with Frederick Banting on purifying insulin in 1922. Collip was asked to develop an assay to measure the activity of insulin. He first injected insulin into a rabbit, and then measured the reduction in blood-glucose levels. Measuring blood glucose was a time-consuming step. Collip observed that if he injected rabbits with too large a dose of insulin, the rabbits began convulsing, went into a coma, and then died. This observation simplified his assay. He defined one unit of insulin as the amount necessary to induce this convulsing hypoglycemic reaction in a rabbit. Collip later found he could save money, and rabbits, by injecting them with glucose once they were convulsing.
Etymology
The word hypoglycemia is also spelled hypoglycaemia or hypoglycæmia. The term means 'low blood sugar' from Greek ὑπογλυκαιμία, from ὑπο- hypo- 'under' + γλυκύς glykys 'sweet' + αἷμᾰ haima 'blood'.
References
External links
Hypoglycemia at the Mayo Clinic
American Diabetes Association
Disorders of endocrine pancreas
Medical emergencies
Wikipedia medicine articles ready to translate
Disorders causing seizures
Wikipedia emergency medicine articles ready to translate | 0.763736 | 0.999369 | 0.763254 |
Dosha | Dosha (IAST: doṣa) is a central term in ayurveda originating from Sanskrit, which can be translated as "that which can cause problems" (literally meaning "fault" or "defect"), and which refers to three categories or types of substances that are believed to be present conceptually in a person's body and mind. These doshas are assigned specific qualities and functions. These qualities and functions are affected by external and internal stimuli received by the body. Beginning with twentieth-century ayurvedic literature, the "three-dosha theory" has described how the quantities and qualities of three fundamental types of substances called wind, bile, and phlegm (vāta, pitta, kapha) fluctuate in the body according to the seasons, time of day, process of digestion, and several other factors and thereby determine changing conditions of growth, aging, health, and disease.
Doshas are considered to shape the physical body according to a natural constitution established at birth, determined by the constitutions of the parents as well as the time of conception and other factors. This natural constitution represents the healthy norm for a balanced state for a particular individual. The particular ratio of the doshas in a person's natural constitution is associated with determining their mind-body type including various physiological and psychological characteristics such as physical appearance, physique, and personality.
The ayurvedic three-dosha theory is often compared to European humorism although it is a distinct system with a separate history. The three-dosha theory has also been compared to astrology and physiognomy in similarly deriving its tenets from ancient philosophy and superstitions. Using them to diagnose or treat disease is considered pseudoscientific.
Role in disease, Roga
There is some evidence that the three dosha's are based in metabolism. The three different constitutions may correspond with microbiotic patterns.
The ayurvedic notion of doshas describes how bad habits, wrong diet, overwork, etc., may cause relative deficiencies or excesses which cause them to become imbalanced in relation to the natural constitution, resulting in a current condition which may potentially lead to disease. For example, an excess of vata is blamed for mental, nervous, and digestive disorders, including low energy and weakening of all body tissues. Similarly, excess pitta is blamed for blood toxicity, inflammation, and infection. Excess of kapha is blamed for increase in mucus, weight, oedema, and lung disease, etc. The key to managing all doshas is taking care of vata; it is taught that this will regulate the other two.
Principles
The doshas derive their qualities from the five elements (; ) of classical Indian philosophy.
Vāta or vata is characterized by the properties of dry, cold, light, subtle, and mobile. All movement in the body is due to properties of vata. Pain is the characteristic feature of deranged vata. Some of the diseases connected to unbalanced vata are flatulence, gout, rheumatism, etc. Vāta is the normal Sanskrit word meaning "air" or "wind", and was so understood in pre-modern Sanskrit treatises on ayurveda. Some modern interpreters prefer not to translate Vata as air, but rather to equate it with a modern metabolic process or substance.
Pitta represents metabolism; it is characterized by heat, moistness, liquidity, sharpness, and sourness. Its chief quality is heat. It is the energy principle which uses bile to direct digestion and enhance metabolism. Unbalanced pitta is primarily characterized by body heat or a burning sensation and redness. Pitta is the normal Sanskrit word meaning "bile". It is etymologically related to the Sanskrit word pīta "yellow".
Kapha is the watery element. It is a combination of earth and water. It is characterized by heaviness, coldness, tenderness, softness, slowness, lubrication, and the carrier of nutrients. It is the nourishing element of the body. All soft organs are made by kapha and it plays an important role in the perception of taste together with nourishment and lubrication. Kapha is the normal Sanskrit word meaning "phlegm".
Prana, tejas, and ojas
Yoga is a set of disciplines, some of which aim to balance and transform energies of the psyche. At their roots, vata, pitta, and kapha are believed to have subtle counterparts called prana, tejas, and ojas. Unlike the doshas, which in excess create disease, these are believed to promote health, creativity and well-being.
Ultimately, ayurveda seeks to reduce disease, particularly those that are chronic, and increase positive health in the body and mind via these three vital essences that aid in renewal and transformation. Increased prana is associated with enthusiasm, adaptability and creativity, all of which are considered necessary when pursuing a spiritual path in yoga and to enable one to perform. Tejas is claimed to provide courage, fearlessness and insight and to be important when making decisions. Lastly, ojas is considered to create peace, confidence and patience to maintain consistent development and sustain continued effort. Eventually, the most important element to develop is ojas, believed to engender physical and psychological endurance. Ways claimed to achieve this include an ayurvedic diet, tonic herbs, control of the senses, devotion and, most importantly, celibacy.
Criticism
Writing in the Skeptical Inquirer, Harriet Hall likened doshas to horoscopes. She found that different online dosha websites gave different results in personalized quizzes, and summarized that "Ayurveda is basically superstition mixed with a soupçon of practical health advice." Professional practitioners of ayurveda in the United States are certified by the National Ayurvedic Medical Association Certification Board, which advocates for the safe and effective practice of ayurveda. Alternative medicines used in ayurvedic treatments have been found to contain harmful levels of lead, mercury, and other heavy metals.
See also
Dhātu (ayurveda)
References
Ayurveda
Tamil culture
Traditional medicine in India
Alternative medical systems | 0.767426 | 0.994561 | 0.763252 |
Myalgia | Myalgia or muscle pain is a painful sensation evolving from muscle tissue. It is a symptom of many diseases. The most common cause of acute myalgia is the overuse of a muscle or group of muscles; another likely cause is viral infection, especially when there has been no injury.
Long-lasting myalgia can be caused by metabolic myopathy, some nutritional deficiencies, ME/CFS, fibromyalgia, and amplified musculoskeletal pain syndrome.
Causes
The most common causes of myalgia are overuse, injury, and strain. Myalgia might also be caused by allergies, diseases, medications, or as a response to a vaccination. Dehydration at times results in muscle pain as well, especially for people involved in extensive physical activities such as working out.
Muscle pain is also a common symptom in a variety of diseases, including infectious diseases, such as influenza, muscle abscesses, Lyme disease, malaria, trichinosis or poliomyelitis; autoimmune diseases, such as celiac disease, systemic lupus erythematosus, Sjögren's syndrome or polymyositis; gastrointestinal diseases, such as non-celiac gluten sensitivity (which can also occur without digestive symptoms) and inflammatory bowel disease (including Crohn's disease and ulcerative colitis).
The most common causes are:
Overuse
Overuse of a muscle is using it too much, too soon or too often. One example is repetitive strain injury. See also:
Exercise
Weight lifting
Injury
The most common causes of myalgia by injury are: sprains and strains.
Autoimmune
Multiple sclerosis (neurologic pain interpreted as muscular)
Myositis
Mixed connective tissue disease
Lupus erythematosus
Fibromyalgia syndrome
Familial Mediterranean fever
Polyarteritis nodosa
Devic's disease
Morphea
Sarcoidosis
Metabolic defect
Carnitine palmitoyltransferase II deficiency
Conn's syndrome
Adrenal insufficiency
Hyperthyroidism
Hypothyroidism
Diabetes
Hypogonadism
Postorgasmic illness syndrome
Other
Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS)
Channelopathy
Ehlers Danlos Syndrome
Stickler Syndrome
Hypokalemia
Hypotonia
Exercise intolerance
Mastocytosis
Peripheral neuropathy
Eosinophilia myalgia syndrome
Barcoo Fever
Herpes
Hemochromatosis
Delayed onset muscle soreness
HIV/AIDS
Generalized anxiety disorder
Tumor-induced osteomalacia
Hypovitaminosis D
Infarction
Withdrawal syndrome from certain drugs
Sudden cessation of high-dose corticosteroids, opioids, barbiturates, benzodiazepines, caffeine, or alcohol can induce myalgia.
Treatment
When the cause of myalgia is unknown, it should be treated symptomatically. Common treatments include heat, rest, paracetamol, NSAIDs, massage, cryotherapy and muscle relaxants.
See also
Arthralgia
Myopathy
Myositis
References
External links
Symptoms and signs: Nervous and musculoskeletal systems
Soft tissue disorders
Pain | 0.764641 | 0.998161 | 0.763235 |
Hemolytic anemia | Hemolytic anemia or haemolytic anaemia is a form of anemia due to hemolysis, the abnormal breakdown of red blood cells (RBCs), either in the blood vessels (intravascular hemolysis) or elsewhere in the human body (extravascular). This most commonly occurs within the spleen, but also can occur in the reticuloendothelial system or mechanically (prosthetic valve damage). Hemolytic anemia accounts for 5% of all existing anemias. It has numerous possible consequences, ranging from general symptoms to life-threatening systemic effects. The general classification of hemolytic anemia is either intrinsic or extrinsic. Treatment depends on the type and cause of the hemolytic anemia.
Symptoms of hemolytic anemia are similar to other forms of anemia (fatigue and shortness of breath), but in addition, the breakdown of red cells leads to jaundice and increases the risk of particular long-term complications, such as gallstones and pulmonary hypertension.
Signs and symptoms
Symptoms of hemolytic anemia are similar to the general signs of anemia. General signs and symptoms include fatigue, pallor, shortness of breath, and tachycardia. In small children, failure to thrive may occur in any form of anemia. In addition, symptoms related to hemolysis may be present such as chills, jaundice, dark urine, and an enlarged spleen. Certain aspects of the medical history can suggest a cause for hemolysis, such as drugs, medication side effects, autoimmune disorders, blood transfusion reactions, the presence of prosthetic heart valve, or other medical illness.
Chronic hemolysis leads to an increased excretion of bilirubin into the biliary tract, which in turn may lead to gallstones. The continuous release of free hemoglobin has been linked with the development of pulmonary hypertension (increased pressure in the pulmonary artery); this, in turn, leads to episodes of syncope (fainting), chest pain, and progressive breathlessness. Pulmonary hypertension eventually causes right ventricular heart failure, the symptoms of which are peripheral edema (fluid accumulation in the skin of the legs) and ascites (fluid accumulation in the abdominal cavity).
Causes
They may be classified according to the means of hemolysis, being either intrinsic in cases where the cause is related to the red blood cell (RBC) itself, or extrinsic in cases where factors external to the RBC dominate. Intrinsic effects may include problems with RBC proteins or oxidative stress handling, whereas external factors include immune attack and microvascular angiopathies (RBCs are mechanically damaged in circulation).
Intrinsic causes
Hereditary (inherited) hemolytic anemia can be due to :
Defects of red blood cell membrane production (as in hereditary spherocytosis and hereditary elliptocytosis).
Defects in hemoglobin production (as in thalassemia, sickle-cell disease and congenital dyserythropoietic anemia).
Defective red cell metabolism (as in glucose-6-phosphate dehydrogenase deficiency and pyruvate kinase deficiency).
Wilson's disease may infrequently present with hemolytic anemia due to excessive inorganic copper in the blood circulation, which destroys red blood cells (though the mechanism of hemolysis is still unclear).
Extrinsic causes
Acquired hemolytic anemia may be caused by immune-mediated causes, drugs, and other miscellaneous causes.
Immune-mediated causes could include transient factors as in Mycoplasma pneumoniae infection (cold agglutinin disease) or permanent factors as in autoimmune diseases like autoimmune hemolytic anemia (itself more common in diseases such as systemic lupus erythematosus, rheumatoid arthritis, Hodgkin's lymphoma, and chronic lymphocytic leukemia).
Spur cell hemolytic anemia
Any of the causes of hypersplenism (increased activity of the spleen), such as portal hypertension.
Acquired hemolytic anemia is also encountered in burns and as a result of certain infections (e.g. malaria).
Paroxysmal nocturnal hemoglobinuria (PNH), sometimes referred to as Marchiafava-Micheli syndrome, is a rare, acquired, potentially life-threatening disease of the blood characterized by complement-induced intravascular hemolytic anemia.
Lead poisoning resulting from the environment causes non-immune hemolytic anemia.
Similarly, poisoning by arsine or stibine also causes hemolytic anemia.
Runners can develop hemolytic anemia due to "footstrike hemolysis", owing to the destruction of red blood cells in feet at foot impact.
Low-grade hemolytic anemia occurs in 70% of prosthetic heart valve recipients, and severe hemolytic anemia occurs in 3%.
Combination
Sometimes hemolytic anemia can be caused by a combination of two causes, neither sufficient on its own.
G6PD deficiency by itself is usually asymptomatic, but hemolysis can occur when it is combined with an external stressor such as an infection, consumption of fava beans, or an oxidative drug like primaquine.
Primaquine and tafenoquine can pass through the placenta, causing hemolytic anemia in utero if the fetus has G6PD deficiency.
Among American ethnic groups, G6PD deficiency is most prevalent among African Americans, with a prevalence of about 12.2% in males and 4.1% in females. During the Korean War, many black soldiers developed acute hemolytic anemia after receiving primaquine for the treatment or prophylaxis of malaria, which led to an early understanding of this kind of anemia.
Mechanism
In hemolytic anemia, there are two principal mechanisms of hemolysis: intravascular and extravascular.
Intravascular hemolysis
Intravascular hemolysis describes hemolysis that happens mainly inside the vasculature. As a result, the contents of the red blood cell are released into the general circulation, leading to hemoglobinemia and increasing the risk of ensuing hyperbilirubinemia.
Intravascular hemolysis may occur when red blood cells are targeted by autoantibodies, leading to complement fixation, or by damage by parasites such as Babesia.
Extravascular hemolysis
Extravascular hemolysis refers to hemolysis taking place in the liver, spleen, bone marrow, and lymph nodes. In this case little hemoglobin escapes into blood plasma. The macrophages of the reticuloendothelial system in these organs engulf and destroy structurally-defective red blood cells, or those with antibodies attached, and release unconjugated bilirubin into the blood plasma circulation. Typically, the spleen destroys mildly abnormal red blood cells or those coated with IgG-type antibodies, while severely abnormal red blood cells or those coated with IgM-type antibodies are destroyed in the circulation or in the liver.
If extravascular hemolysis is extensive, hemosiderin can be deposited in the spleen, bone marrow, kidney, liver, and other organs, resulting in hemosiderosis.
In a healthy person, a red blood cell survives 90 to 120 days in the circulation, so about 1% of human red blood cells break down each day. The spleen (part of the reticulo-endothelial system) is the main organ that removes old and damaged RBCs from the circulation. In healthy individuals, the breakdown and removal of RBCs from the circulation is matched by the production of new RBCs in the bone marrow.
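As a rough consistency check on these figures (an illustrative calculation rather than a quoted result), the daily turnover fraction follows directly from the lifespan:

\[
\text{daily turnover} \approx \frac{1}{\text{RBC lifespan (days)}},
\qquad
\frac{1}{120} \approx 0.83\%\ \text{per day},
\qquad
\frac{1}{90} \approx 1.1\%\ \text{per day},
\]

so the commonly quoted figure of about 1% of red blood cells breaking down each day is simply the reciprocal of a 90–120-day lifespan.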
In conditions where the rate of RBC breakdown is increased, the body initially compensates by producing more RBCs; however, breakdown of RBCs can exceed the rate that the body can make RBCs, and so anemia can develop. Bilirubin, a breakdown product of hemoglobin, can accumulate in the blood, causing jaundice.
In general, hemolytic anemia occurs as a modification of the RBC life cycle. That is, instead of being collected at the end of its useful life and disposed of normally, the RBC disintegrates in a manner that allows free iron-containing molecules to reach the blood. Because they completely lack mitochondria, RBCs rely on the pentose phosphate pathway (PPP) for the materials needed to reduce oxidative damage. Any limitation of the PPP can result in greater susceptibility to oxidative damage and a short or abnormal life cycle. If the cell is unable to signal to the reticuloendothelial phagocytes by externalizing phosphatidylserine, it is likely to lyse through uncontrolled means.
The distinguishing feature of intravascular hemolysis is the release of RBC contents into the blood stream. The metabolism and elimination of these products, largely iron-containing compounds capable of doing damage through Fenton reactions, is an important part of the condition. Several reference texts exist on these elimination pathways.
Free hemoglobin can bind to haptoglobin, and the complex is cleared from the circulation; thus, a decrease in haptoglobin can support a diagnosis of hemolytic anemia. Alternatively, hemoglobin may oxidize and release the heme group that is able to bind to either albumin or hemopexin. The heme is ultimately converted to bilirubin and removed in stool and urine. Hemoglobin may be cleared directly by the kidneys resulting in fast clearance of free hemoglobin but causing the continued loss of hemosiderin loaded renal tubular cells for many days.
Additional effects of free hemoglobin seem to be due to specific reactions with NO.
Diagnosis
The diagnosis of hemolytic anemia can be suspected on the basis of a constellation of symptoms and is largely based on the presence of anemia, an increased proportion of immature red cells (reticulocytes), and a decrease in the level of haptoglobin, a protein that binds free hemoglobin. Examination of a peripheral blood smear and some other laboratory studies can contribute to the diagnosis. Symptoms of hemolytic anemia include those that can occur in all anemias as well as the specific consequences of hemolysis. All anemias can cause fatigue, shortness of breath, and, when severe, a decreased ability to exercise. Symptoms specifically related to hemolysis include jaundice and dark-colored urine due to the presence of hemoglobin (hemoglobinuria). When restricted to the morning, hemoglobinuria may suggest paroxysmal nocturnal hemoglobinuria. Direct examination of blood under a microscope in a peripheral blood smear may demonstrate red blood cell fragments called schistocytes, red blood cells that look like spheres (spherocytes), and/or red blood cells missing small pieces (bite cells). An increased number of newly made red blood cells (reticulocytes) may also be a sign of bone marrow compensation for anemia. Laboratory studies commonly used to investigate hemolytic anemia include blood tests for the breakdown products of red blood cells (bilirubin and lactate dehydrogenase), a test for the free hemoglobin-binding protein haptoglobin, and the direct Coombs test (also called the direct antiglobulin test or DAT) to evaluate complement factors and/or antibodies binding to red blood cells.
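The classic laboratory pattern described above (anemia with reticulocytosis, raised lactate dehydrogenase and indirect bilirubin, and low haptoglobin) can be summarized as a simple screening rule. The sketch below is illustrative only: the field names and threshold values are hypothetical placeholders rather than clinical reference ranges, and a matching pattern would still need confirmation with a blood smear and a direct Coombs test.

```python
# Illustrative screening sketch for the laboratory pattern of hemolysis.
# Thresholds and field names are hypothetical placeholders, NOT clinical reference ranges.

def suggests_hemolysis(labs: dict) -> bool:
    """Return True if the lab pattern is consistent with hemolytic anemia."""
    anemia = labs["hemoglobin_g_dl"] < 12.0            # anemia present
    marrow_response = labs["reticulocyte_pct"] > 2.0   # reticulocytosis (marrow compensation)
    consumption = labs["haptoglobin_mg_dl"] < 30.0     # haptoglobin consumed by free hemoglobin
    breakdown = (labs["ldh_u_l"] > 250.0               # markers of red cell breakdown
                 or labs["indirect_bilirubin_mg_dl"] > 1.2)
    return anemia and marrow_response and consumption and breakdown


if __name__ == "__main__":
    example = {
        "hemoglobin_g_dl": 9.5,
        "reticulocyte_pct": 6.0,
        "haptoglobin_mg_dl": 10.0,
        "ldh_u_l": 600.0,
        "indirect_bilirubin_mg_dl": 2.4,
    }
    print(suggests_hemolysis(example))  # True for this illustrative case
```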
Treatment
Definitive therapy depends on the cause:
Symptomatic treatment can be given by blood transfusion if there is marked anemia. A positive Coombs test is a relative contraindication to transfusion. In cold hemolytic anemia, there is an advantage to transfusing warmed blood.
In severe immune-related hemolytic anemia, steroid therapy is sometimes necessary.
In steroid-resistant cases, consideration can be given to rituximab or the addition of an immunosuppressant (azathioprine, cyclophosphamide).
The combination of methylprednisolone and intravenous immunoglobulin can control hemolysis in acute severe cases.
Sometimes splenectomy can be helpful where extravascular hemolysis, or hereditary spherocytosis, is predominant (i.e., most of the red blood cells are being removed by the spleen).
Mitapivat was approved for medical use in the United States in February 2022.
Other animals
Hemolytic anemia affects nonhuman species as well as humans. It has been found, in a number of animal species, to result from specific triggers.
Some notable cases include hemolytic anemia found in black rhinos kept in captivity, with the disease, in one instance, affecting 20% of captive rhinos at a specific facility. The disease is also found in wild rhinos.
Dogs and cats differ slightly from humans in some details of their RBC composition and have an altered susceptibility to damage, notably an increased susceptibility to oxidative damage from the consumption of onion. Garlic is less toxic to dogs than onion.
Neurobiological effects of physical exercise | The neurobiological effects of physical exercise involve possible interrelated effects on brain structure, brain function, and cognition. Research in humans has demonstrated that consistent aerobic exercise (e.g., 30 minutes every day) may induce improvements in certain cognitive functions, neuroplasticity and behavioral plasticity; some of these long-term effects may include increased neuron growth, increased neurological activity (e.g., c-Fos and BDNF signaling), improved stress coping, enhanced cognitive control of behavior, improved declarative, spatial, and working memory, and structural and functional improvements in brain structures and pathways associated with cognitive control and memory. The effects of exercise on cognition may affect academic performance in children and college students, improve adult productivity, preserve cognitive function in old age, prevent or treat certain neurological disorders, and improve overall quality of life.
In healthy adults, aerobic exercise has been shown to induce transient effects on cognition after a single exercise session and persistent effects on cognition following consistent exercise over the course of several months. People who regularly perform an aerobic exercise (e.g., running, jogging, brisk walking, swimming, and cycling) have greater scores on neuropsychological function and performance tests that measure certain cognitive functions, such as attentional control, inhibitory control, cognitive flexibility, working memory updating and capacity, declarative memory, spatial memory, and information processing speed.
Aerobic exercise has both short and long term effects on mood and emotional states by promoting positive affect, inhibiting negative affect, and decreasing the biological response to acute psychological stress. Aerobic exercise may affect both self-esteem and overall well-being (including sleep patterns) with consistent, long term participation. Regular aerobic exercise may improve symptoms associated with central nervous system disorders and may be used as adjunct therapy for these disorders. There is some evidence of exercise treatment efficacy for major depressive disorder and attention deficit hyperactivity disorder. The American Academy of Neurology's clinical practice guideline for mild cognitive impairment indicates that clinicians should recommend regular exercise (two times per week) to individuals who have been diagnosed with this condition.
Some preclinical evidence and emerging clinical evidence support the use of exercise as an adjunct therapy for the treatment and prevention of drug addictions.
Reviews of clinical evidence also support the use of exercise as an adjunct therapy for certain neurodegenerative disorders, particularly Alzheimer's disease and Parkinson's disease. Regular exercise may be associated with a lower risk of developing neurodegenerative disorders.
Long-term effects
Neuroplasticity
Neuroplasticity is the process by which neurons adapt to a disturbance over time, and most often occurs in response to repeated exposure to stimuli. Aerobic exercise increases the production of neurotrophic factors (e.g., BDNF, IGF-1, VEGF) which mediate improvements in cognitive functions and various forms of memory by promoting blood vessel formation in the brain, adult neurogenesis, and other forms of neuroplasticity. Consistent aerobic exercise over a period of several months induces clinically significant improvements in executive functions and increased gray matter volume in nearly all regions of the brain, with the most marked increases occurring in brain regions that give rise to executive functions. The brain structures that show the greatest improvements in gray matter volume in response to aerobic exercise are the prefrontal cortex, caudate nucleus, and hippocampus; less significant increases in gray matter volume occur in the anterior cingulate cortex, parietal cortex, cerebellum, and nucleus accumbens. The prefrontal cortex, caudate nucleus, and anterior cingulate cortex are among the most significant brain structures in the dopamine and norepinephrine systems that give rise to cognitive control. Exercise-induced neurogenesis (i.e., the increases in gray matter volume) in the hippocampus is associated with measurable improvements in spatial memory. Higher physical fitness scores, as measured by VO2 max, are associated with better executive function, faster information processing speed, and greater gray matter volume of the hippocampus, caudate nucleus, and nucleus accumbens.
Structural growth
Reviews of neuroimaging studies indicate that consistent aerobic exercise increases gray matter volume in nearly all regions of the brain, with more pronounced increases occurring in brain regions associated with memory processing, cognitive control, motor function, and reward; the most prominent gains in gray matter volume are seen in the prefrontal cortex, caudate nucleus, and hippocampus, which support cognitive control and memory processing, among other cognitive functions. Moreover, the left and right halves of the prefrontal cortex, the hippocampus, and the cingulate cortex appear to become more functionally interconnected in response to consistent aerobic exercise. Three reviews indicate that marked improvements in prefrontal and hippocampal gray matter volume occur in healthy adults that regularly engage in medium intensity exercise for several months. Other regions of the brain that demonstrate moderate or less significant gains in gray matter volume during neuroimaging include the anterior cingulate cortex, parietal cortex, cerebellum, and nucleus accumbens.
Regular exercise has been shown to counter the shrinking of the hippocampus and memory impairment that naturally occurs in late adulthood. Sedentary adults over age 55 show a 1–2% decline in hippocampal volume annually. A neuroimaging study with a sample of 120 adults revealed that participating in regular aerobic exercise increased the volume of the left hippocampus by 2.12% and the right hippocampus by 1.97% over a one-year period. Subjects in the low intensity stretching group who had higher fitness levels at baseline showed less hippocampal volume loss, providing evidence for exercise being protective against age-related cognitive decline. In general, individuals that exercise more over a given period have greater hippocampal volumes and better memory function. Aerobic exercise has also been shown to induce growth in the white matter tracts in the anterior corpus callosum, which normally shrink with age.
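Taken at face value, and purely as an illustrative comparison of the figures reported above, the exercise-related gain roughly offsets one to two years of the typical age-related loss:

\[
\frac{\text{gain from one year of aerobic exercise}}{\text{typical annual loss}}
\approx \frac{\sim 2\%}{1\text{–}2\%\ \text{per year}}
\approx 1\text{–}2\ \text{years of decline offset}.
\]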
The various functions of the brain structures that show exercise-induced increases in gray matter volume include:
Caudate nucleus – responsible for stimulus-response learning and inhibitory control; implicated in Parkinson's disease and ADHD
Cerebellum – responsible for motor coordination and motor learning
Hippocampus – responsible for storage and consolidation of declarative memory and spatial memory
Nucleus accumbens – responsible for incentive salience ("wanting" or desire, the form of motivation associated with reward) and positive reinforcement; implicated in addiction
Parietal cortex – responsible for sensory perception, working memory, and attention
Prefrontal and anterior cingulate cortices – required for the cognitive control of behavior, particularly: working memory, attentional control, decision-making, cognitive flexibility, social cognition, and inhibitory control of behavior; implicated in attention deficit hyperactivity disorder (ADHD) and addiction
Persistent effects on cognition
Concordant with the functional roles of the brain structures that exhibit increased gray matter volumes, regular exercise over a period of several months has been shown to persistently improve numerous executive functions and several forms of memory. In particular, consistent aerobic exercise has been shown to improve attentional control, information processing speed, cognitive flexibility (e.g., task switching), inhibitory control, working memory updating and capacity, declarative memory, and spatial memory. In healthy young and middle-aged adults, the effect sizes of improvements in cognitive function are largest for indices of executive functions and small to moderate for aspects of memory and information processing speed. It may be that in older adults, individuals benefit cognitively by taking part in both aerobic and resistance type exercise of at least moderate intensity. Individuals who have a sedentary lifestyle tend to have impaired executive functions relative to other more physically active non-exercisers. A reciprocal relationship between exercise and executive functions has also been noted: improvements in executive control processes, such as attentional control and inhibitory control, increase an individual's tendency to exercise.
Mechanism of effects
BDNF signaling
One of the most significant effects of exercise on the brain is increased synthesis and expression of BDNF, a neuropeptide and hormone, resulting in increased signaling through its receptor tyrosine kinase, tropomyosin receptor kinase B (TrkB). Since BDNF is capable of crossing the blood–brain barrier, higher peripheral BDNF synthesis also increases BDNF signaling in the brain. Exercise-induced increases in BDNF signaling are associated with improved cognitive function, improved mood, and improved memory. Furthermore, research has provided a great deal of support for the role of BDNF in hippocampal neurogenesis, synaptic plasticity, and neural repair. Engaging in moderate-to-high intensity aerobic exercise such as running, swimming, and cycling increases BDNF biosynthesis through myokine signaling, resulting in up to a threefold increase in blood plasma BDNF levels; exercise intensity is positively correlated with the magnitude of increased BDNF biosynthesis and expression. A meta-analysis of studies involving the effect of exercise on BDNF levels found that consistent exercise modestly increases resting BDNF levels as well. This has important implications for exercise as a mechanism to reduce stress, since stress is closely linked with decreased levels of BDNF in the hippocampus. In fact, studies suggest that BDNF contributes to the anxiety-reducing effects of antidepressants. The increase in BDNF levels caused by exercise helps reverse the stress-induced decrease in BDNF, which mediates stress in the short term and buffers against stress-related diseases in the long term.
IGF-1 signaling
Insulin-like growth factor 1 (IGF-1) is a peptide and neurotrophic factor that mediates some of the effects of growth hormone; IGF-1 elicits its physiological effects by binding to a specific receptor tyrosine kinase, the IGF-1 receptor, to control tissue growth and remodeling. In the brain, IGF-1 functions as a neurotrophic factor that, like BDNF, plays a significant role in cognition, neurogenesis, and neuronal survival. Physical activity is associated with increased levels of IGF-1 in blood serum, which is known to contribute to neuroplasticity in the brain due to its capacity to cross the blood–brain barrier and blood–cerebrospinal fluid barrier; consequently, one review noted that IGF-1 is a key mediator of exercise-induced adult neurogenesis, while a second review characterized it as a factor which links "body fitness" with "brain fitness". The amount of IGF-1 released into blood plasma during exercise is positively correlated with exercise intensity and duration.
VEGF signaling
Vascular endothelial growth factor (VEGF) is a neurotrophic and angiogenic (i.e., blood vessel growth-promoting) signaling protein that binds to two receptor tyrosine kinases, VEGFR1 and VEGFR2, which are expressed in neurons and glial cells in the brain. Hypoxia, or inadequate cellular oxygen supply, strongly upregulates VEGF expression, and VEGF exerts a neuroprotective effect in hypoxic neurons. Like BDNF and IGF-1, VEGF biosynthesis in peripheral tissue has been shown to increase with aerobic exercise; the protein subsequently crosses the blood–brain barrier and promotes neurogenesis and blood vessel formation in the central nervous system. Exercise-induced increases in VEGF signaling have been shown to improve cerebral blood volume and contribute to exercise-induced neurogenesis in the hippocampus.
Irisin
A study using FNDC5 knock-out mice as well as artificial elevation of circulating irisin levels showed that irisin confers the beneficial cognitive effects of physical exercise and that it can serve as an exercise mimetic in mice, in which it could "improve both the cognitive deficit and neuropathology in Alzheimer's disease mouse models". The mediator and its regulatory system are therefore being investigated for potential interventions to improve – or further improve – cognitive function or alleviate Alzheimer's disease in humans. Experiments indicate irisin may be linked to the regulation of BDNF and neurogenesis in mice.
Short-term effects
Transient effects on cognition
In addition to the persistent effects on cognition that result from several months of daily exercise, acute exercise (i.e., a single bout of exercise) has been shown to transiently improve a number of cognitive functions. Reviews and meta-analyses of research on the effects of acute exercise on cognition in healthy young and middle-aged adults have concluded that information processing speed and a number of executive functions – including attention, working memory, problem solving, cognitive flexibility, verbal fluency, decision making, and inhibitory control – all improve for a period of up to 2 hours post-exercise. A systematic review of studies conducted on children also suggested that some of the exercise-induced improvements in executive function are apparent after single bouts of exercise, while other aspects (e.g., attentional control) only improve following consistent exercise on a regular basis. Other research has suggested immediate performative enhancements during exercise, such as exercise-concurrent improvements in processing speed and accuracy during both visual attention and working memory tasks.
Exercise-induced euphoria
Continuous exercise can produce a transient state of euphoria – an emotional state involving the experience of pleasure and feelings of profound contentment, elation, and well-being – which is colloquially known as a "runner's high" in distance running or a "rower's high" in rowing.
Effects on neurochemistry
β-Phenylethylamine
β-Phenylethylamine, commonly referred to as phenethylamine, is a human trace amine and potent catecholaminergic and glutamatergic neuromodulator that has similar psychostimulant and euphoriant effects and a similar chemical structure to amphetamine. Thirty minutes of moderate to high intensity physical exercise has been shown to induce an enormous increase in urinary β-phenylacetic acid, the primary metabolite of phenethylamine. Two reviews noted a study where the average 24-hour urinary β-phenylacetic acid concentration among participants following just 30 minutes of intense exercise increased by 77% relative to baseline concentrations in resting control subjects; the reviews suggest that phenethylamine synthesis sharply increases while an individual is exercising, during which time it is rapidly metabolized due to its short half-life of roughly 30 seconds. In a resting state, phenethylamine is synthesized in catecholamine neurons from L-phenylalanine by aromatic amino acid decarboxylase (AADC) at approximately the same rate at which dopamine is produced.
In light of this observation, the original paper and both reviews suggest that phenethylamine plays a prominent role in mediating the mood-enhancing euphoric effects of a runner's high, as both phenethylamine and amphetamine are potent euphoriants.
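The roughly 30-second half-life quoted above implies extremely rapid clearance of the parent amine, which is why the urinary metabolite, rather than phenethylamine itself, is what gets measured. As an illustrative calculation using only that half-life figure:

\[
\frac{C(t)}{C_0} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}},
\qquad
\left(\tfrac{1}{2}\right)^{300\,\mathrm{s}/30\,\mathrm{s}} = 2^{-10} \approx 0.1\%,
\]

so only about 0.1% of a given amount of phenethylamine would remain five minutes after synthesis stops.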
β-Endorphin
β-Endorphin (contracted from "endogenous morphine") is an endogenous opioid neuropeptide that binds to μ-opioid receptors, in turn producing euphoria and pain relief. A meta-analytic review found that exercise significantly increases the secretion of β-endorphin and that this secretion is correlated with improved mood states. Moderate intensity exercise produces the greatest increase in β-endorphin synthesis, while higher and lower intensity forms of exercise are associated with smaller increases in synthesis. A review on β-endorphin and exercise noted that an individual's mood improves for the remainder of the day following physical exercise and that one's mood is positively correlated with overall daily physical activity level.
However, human studies have shown that pharmacological blockade of endogenous endorphins does not inhibit a runner's high, while blockade of endocannabinoids may have such an effect.
Anandamide
Anandamide is an endogenous cannabinoid and retrograde neurotransmitter that binds to cannabinoid receptors (primarily CB1), in turn producing euphoria. It has been shown that aerobic exercise causes an increase in plasma anandamide levels, where the magnitude of this increase is highest at moderate exercise intensity (i.e., exercising at ~70–80% maximum heart rate). Increases in plasma anandamide levels are associated with psychoactive effects because anandamide is able to cross the blood–brain barrier and act within the central nervous system. Thus, because anandamide is a euphoriant and aerobic exercise is associated with euphoric effects, it has been proposed that anandamide partly mediates the short-term mood-lifting effects of exercise (e.g., the euphoria of a runner's high) via exercise-induced increases in its synthesis.
Cortisol and the psychological stress response
The "stress hormone", cortisol, is a glucocorticoid that binds to glucocorticoid receptors. Psychological stress induces the release of cortisol from the adrenal gland by activating the hypothalamic–pituitary–adrenal axis (HPA axis). Short-term increases in cortisol levels are associated with adaptive cognitive improvements, such as enhanced inhibitory control; however, excessively high or prolonged exposure to high levels of cortisol causes impairments in cognitive control and has neurotoxic effects in the human brain. For example, chronic psychological stress decreases BDNF expression, which has detrimental effects on hippocampal volume and can lead to depression.
As a physical stressor, aerobic exercise stimulates cortisol secretion in an intensity-dependent manner; however, it does not result in long-term increases in cortisol production, since this exercise-induced effect on cortisol is a response to transient negative energy balance. Aerobic exercise increases physical fitness and lowers neuroendocrine (i.e., HPA axis) reactivity, and therefore reduces the biological response to psychological stress in humans (e.g., reduced cortisol release and attenuated heart rate response). Exercise also reverses stress-induced decreases in BDNF expression and signaling in the brain, thereby acting as a buffer against stress-related diseases like depression.
Glutamate and GABA
Glutamate, one of the most common neurochemicals in the brain, is an excitatory neurotransmitter involved in many aspects of brain function, including learning and memory. Based upon animal models, exercise appears to normalize the excessive levels of glutamate neurotransmission into the nucleus accumbens that occurs in drug addiction. A review of the effects of exercise on neurocardiac function in preclinical models noted that exercise-induced neuroplasticity of the rostral ventrolateral medulla (RVLM) has an inhibitory effect on glutamatergic neurotransmission in this region, in turn reducing sympathetic activity; the review hypothesized that this neuroplasticity in the RVLM is a mechanism by which regular exercise prevents inactivity-related cardiovascular disease.
Exerkines and other circulating compounds
Exerkines are putative "signalling moieties released in response to acute and/or chronic exercise, which exert their effects through endocrine, paracrine and/or autocrine pathways".
Effects in children
Engaging in active physical pursuits has demonstrated positive effects on the mental health of children and adolescents, enhances their academic performance, boosts cognitive function, and diminishes the likelihood of obesity and cardiovascular diseases among this demographic. Establishing consistent exercise routines with regular frequency and duration is pivotal. Cultivating beneficial exercise habits and sustaining adequate physical activity may support the overall physical and mental well-being of young individuals. Therefore, identifying factors that either impede or encourage exercise behaviors could be a significant strategy in promoting the development of healthy exercise habits among children and adolescents.
A 2003 meta-analysis found a positive effect of exercise in children on perceptual skills, intelligence quotient, achievement, verbal tests, mathematic tests, and academic readiness. The correlation was strongest for the age ranges of 4–7 and 11–13 years.
A 2010 meta-analysis of the effect of activity on children's executive function found that aerobic exercise may briefly aid children's executive function and also influence more lasting improvements to executive function. Other studies suggested that exercise is unrelated to academic performance, perhaps due to the parameters used to determine exactly what academic achievement is. This area of study has been a focus for education boards that make decisions on whether physical education should be implemented in the school curriculum, how much time should be dedicated to physical education, and its impact on other academic subjects.
Another study found that sixth-graders who participated in vigorous physical activity at least three times a week had the highest scores compared to those who participated in moderate or no physical activity at all. Children who participated in vigorous physical activity scored three points higher, on average, on their academic test, which consisted of math, science, English, and world studies.
Neuroimaging studies indicate that exercise may influence changes in brain structure and function. Some investigations have linked low levels of aerobic fitness in children with impaired executive function later in adulthood, although diminished selective attention, response inhibition, and interference control may also explain this outcome.
Effects on central nervous system disorders
Exercise as prevention and treatment of drug addictions
Clinical and preclinical evidence indicates that consistent aerobic exercise, especially endurance exercise (e.g., marathon running), actually prevents the development of certain drug addictions and is an effective adjunct treatment for drug addiction, and psychostimulant addiction in particular. Consistent aerobic exercise may reduce drug addiction risk in a magnitude-dependent manner (i.e., according to duration and intensity), which appears to occur through the reversal of drug-induced, addiction-related neuroplasticity. Moreover, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces opposite effects on striatal dopamine receptor D2 (DRD2) signaling (increased DRD2 density) to those induced by pathological stimulant use (decreased DRD2 density). Consequently, consistent aerobic exercise may lead to better treatment outcomes when used as an adjunct treatment for drug addiction. More clinical research is still needed to understand the mechanisms and confirm the efficacy of exercise in drug addiction treatment and prevention.
Attention deficit hyperactivity disorder
Regular physical exercise, particularly aerobic exercise, is an effective add-on treatment for ADHD in children and adults, particularly when combined with stimulant medication (i.e., amphetamine or methylphenidate), although the best intensity and type of aerobic exercise for improving symptoms are not currently known. In particular, the long-term effects of regular aerobic exercise in ADHD individuals include better behavior and motor abilities, improved executive functions (including attention, inhibitory control, and planning, among other cognitive domains), faster information processing speed, and better memory. Parent-teacher ratings of behavioral and socio-emotional outcomes in response to regular aerobic exercise include: better overall function, reduced ADHD symptoms, better self-esteem, reduced levels of anxiety and depression, fewer somatic complaints, better academic and classroom behavior, and improved social behavior. Exercising while on stimulant medication augments the effect of stimulant medication on executive function. It is believed that these short-term effects of exercise are mediated by an increased abundance of synaptic dopamine and norepinephrine in the brain.
Major depressive disorder
A number of medical reviews have indicated that exercise has a marked and persistent antidepressant effect in humans, an effect believed to be mediated through enhanced BDNF signaling in the brain. Several systematic reviews have analyzed the potential for physical exercise in the treatment of depressive disorders. The 2013 Cochrane Collaboration review on physical exercise for depression noted that, based upon limited evidence, it is more effective than a control intervention and comparable to psychological or antidepressant drug therapies. Three subsequent 2014 systematic reviews that included the Cochrane review in their analysis concluded with similar findings: one indicated that physical exercise is effective as an adjunct treatment (i.e., a treatment used together with another) with antidepressant medication; the other two indicated that physical exercise has marked antidepressant effects and recommended the inclusion of physical activity as an adjunct treatment for mild–moderate depression and mental illness in general. One systematic review noted that yoga may be effective in alleviating symptoms of prenatal depression. Another review asserted that evidence from clinical trials supports the efficacy of physical exercise as a treatment for depression over a 2–4 month period. These benefits have also been noted in old age, with a review conducted in 2019 finding that exercise is an effective treatment for clinically diagnosed depression in older adults.
A meta-analysis from July 2016 concluded that physical exercise improves overall quality of life in individuals with depression relative to controls.
Cerebrovascular disease
Physical exercise plays a significant role in the prevention and management of stroke. It is well established that physical activity decreases the risk of ischemic stroke and intracerebral haemorrhage. Engaging in physical activity before experiencing a stroke has been found to have a positive impact on the severity and outcomes of stroke. Exercise has the potential to increase the expression of VEGF, caveolin, and angiopoietin in the brain. These changes may promote angiogenesis and neovascularization, which contribute to improved blood supply to the stroke-affected areas of the brain. Exercise may affect the activation of endothelial nitric oxide synthase (eNOS) and the subsequent production of nitric oxide (NO). The increase in NO production may lead to improved post-stroke cerebral blood flow, ensuring a sufficient oxygen and nutrient supply to the brain. Physical activity has been associated with increased expression and activation of hypoxia-inducible factor 1 alpha (HIF-1α), heat shock proteins, and brain-derived neurotrophic factor (BDNF). These factors play crucial roles in promoting cellular survival, neuroprotection, and repair processes in the brain following a stroke. Exercise also inhibits glutamate and caspase activities, which are involved in neuronal death pathways. Additionally, it may promote neurogenesis in the brain. These effects collectively contribute to the reduction of brain infarction and edema, leading to potential improvements in neurological and functional outcomes. The neuroprotective properties of physical activity in relation to haemorrhagic strokes are less studied. Pre-stroke physical activity has been associated with improved outcomes after intracerebral haemorrhages. Furthermore, physical activity may reduce the volume of intracerebral haemorrhages. Being physically active after stroke also enhances functional recovery.
Mild cognitive impairment
The American Academy of Neurology's January 2018 update of their clinical practice guideline for mild cognitive impairment states that clinicians should recommend regular exercise (two times per week) to individuals who have been diagnosed with this condition. This guidance is based upon a moderate amount of high-quality evidence which supports the efficacy of regular physical exercise (twice weekly over a 6-month period) for improving cognitive symptoms in individuals with mild cognitive impairment.
Neurodegenerative disorders
Alzheimer's disease
Alzheimer's disease is a cortical neurodegenerative disorder and the most prevalent form of dementia, representing approximately 65% of all cases of dementia; it is characterized by impaired cognitive function, behavioral abnormalities, and a reduced capacity to perform basic activities of daily life. Two reviews found evidence for possible positive effects of physical exercise on cognitive function, the rate of cognitive decline, and the ability to perform activities of daily living in individuals with Alzheimer's disease. A subsequent review found higher levels of physical activity may be associated with reduced risk of dementia and cognitive decline.
Parkinson's disease
Parkinson's disease symptoms reflect various functional impairments and limitations, such as postural instability, gait disturbance, immobility, and frequent falls. Some evidence suggests that physical exercise may lower the risk of Parkinson's disease. A 2017 study found that strength and endurance training in people with Parkinson's disease had positive effects lasting for several weeks. A 2023 Cochrane review on the effects of physical exercise in people with Parkinson's disease indicated that aquatic exercise might reduce severity of motor symptoms and improve quality of life. Furthermore, endurance training, functional training, and multi-domain training (i.e., engaging in several types of exercise) may provide improvements.
See also
Brain fitness
Exercise is Medicine
Exercise prescription
Exercise therapy
Memory improvement
Neuroinflammation#Exercise
Nootropic
Corpse decomposition | Decomposition is the process in which the organs and complex molecules of animal and human bodies break down into simple organic matter over time. In vertebrates, five stages of decomposition are typically recognized: fresh, bloat, active decay, advanced decay, and dry/skeletonized. Knowing the different stages of decomposition can help investigators in determining the post-mortem interval (PMI). The rate of decomposition of human remains can vary due to environmental factors and other factors. Environmental factors include temperature, burning, humidity, and the availability of oxygen. Other factors include body size, clothing, and the cause of death.
Stages and characteristics
The five stages of decomposition—fresh (autolysis), bloat, active decay, advanced decay, and dry/skeletonized—have specific characteristics that are used to identify which stage the remains are in. These stages are illustrated by reference to an experimental study of the decay of a pig corpse.
Fresh
At this stage the remains are usually intact and free of insects. The corpse progresses through algor mortis (a reduction in body temperature until ambient temperature is reached), rigor mortis (the temporary stiffening of the limbs due to chemical changes in the muscles), and livor mortis (pooling of the blood on the side of the body that is closest to the ground).
Bloat
At this stage, the microorganisms residing in the digestive system begin to digest the tissues of the body, excreting gases that cause the torso and limbs to bloat, and producing foul-smelling chemicals including putrescine and cadaverine. Cells in tissues break down and release hydrolytic enzymes, and the top layer of skin may become loosened, leading to skin slippage. Decomposition of the gastrointestinal tract results in a dark, foul-smelling liquid called "purge fluid" that is forced out of the nose and mouth due to gas pressure in the intestine. The bloat stage is characterized by a shift in the bacterial population from aerobic to anaerobic bacterial species.
Active decay
At this stage, the tissues begin to liquify and the skin will start to blacken. Blowflies target decomposing corpses early on, using specialized smell receptors, and lay their eggs in orifices and open wounds. The size and development stage of maggots can be used to give a measure of the minimum time since death. Insect activity occurs in a series of waves, and identifying the insects present can give additional information on the postmortem interval. Adipocere, or corpse wax, may be formed, inhibiting further decomposition.
Advanced decay
During advanced decay, most of the remains have discolored and often blackened. Putrefaction, in which tissues and cells break down and liquefy as the body decays, will be almost complete. A decomposing human body buried in the earth eventually releases nitrogen, phosphorus, potassium, and magnesium into the soil around it, making changes in soil chemistry that may persist for years.
Dry/skeletonized remains
Once bloating has ceased, the soft tissue of remains typically collapses in on itself. At the end of active decay, the remains are often dried out and begin to skeletonize.
Environmental factors
Temperature
The climate and temperature in which a corpse decomposes can have great effect on the rate of decomposition; higher temperatures accelerate the physiological reactions in the body after death and speed up the rate of decomposition, and cooler temperatures may slow the rate of decomposition.
In summer conditions, the body can decompose to bones in nine days. In warm climates, fingerprints may no longer be obtainable four days after death, while in colder climates or seasons they may remain obtainable for up to fifty days.
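Because temperature dominates the rate of decay, elapsed decomposition time is often expressed as accumulated degree days (ADD), the running sum of average daily temperatures above a baseline, rather than as calendar days. The sketch below is a minimal illustration of that bookkeeping only; the 0 °C baseline and the example temperatures are assumptions for demonstration, not values taken from the studies discussed here.

```python
# Minimal sketch of accumulated degree days (ADD), a temperature-weighted
# measure of elapsed time used when relating decomposition stage to the
# post-mortem interval. Baseline and example data are illustrative only.

def accumulated_degree_days(daily_mean_temps_c, baseline_c=0.0):
    """Sum the daily mean temperatures (in °C) above a baseline."""
    return sum(max(t - baseline_c, 0.0) for t in daily_mean_temps_c)


if __name__ == "__main__":
    summer_week = [28, 30, 31, 29, 27, 30, 32]   # hot week: decay advances quickly
    winter_week = [2, 1, -3, 0, 4, 3, 1]          # cold week: little thermal "time" accrues
    print(accumulated_degree_days(summer_week))   # 207.0 ADD
    print(accumulated_degree_days(winter_week))   # 11.0 ADD
```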
Humidity
The amount of moisture in the environment in which a corpse decomposes also has an effect on the rate of decomposition. Humid environments will speed up the rate of decomposition and will influence adipocere formation. In contrast, remains in more arid environments dry out faster and, overall, decompose more slowly.
Oxygen availability
Whether the corpse is in a more anaerobic or aerobic environment will also influence the rate of decomposition. The more oxygen available, the more rapidly decomposition will take place, because many of the microorganisms responsible for decomposition require oxygen to live and thus to facilitate decay. Lower oxygen levels will have the opposite effect.
Burial
Burial slows decomposition, in part because even a few inches of soil covering the corpse will prevent blowflies from laying their eggs on it. The depth of burial influences the rate of decomposition, as it deters decomposers such as scavengers and insects. Burial also lowers the available oxygen and impedes decomposition by limiting the activity of microorganisms. The pH of the soil is another factor in the rate of decomposition, as it influences the types of decomposers present. Moisture in the soil also slows decomposition, as it favors anaerobic metabolism.
Wet environments
Submersion in water typically slows decomposition. The rate of loss of heat is higher in water and the progression through algor mortis is therefore faster. Cool temperatures slow bacterial growth. Once bloat begins, the body will typically float to the surface and become exposed to flies. Scavengers in the water, which vary with the location, also contribute to decay. Factors affecting decomposition include water depth, temperature, tides, currents, seasons, dissolved oxygen, geology, acidity, salinity, sedimentation, and insect and scavenging activity. Human remains found in aquatic surroundings are often incomplete and poorly preserved, making investigating the circumstances of death much more difficult.
If a person has drowned, the body will likely submerge initially and settle into a position known as "the drowning position": the front of the body faces down in the water, with the extremities hanging down toward the bottom of the body of water, and the back slightly arched down and inwards. This position is important to note because, when it occurs in shallow water, the extremities may drag across the bottom of the body of water, leaving injuries. After death, when a body is submerged in water, a process called saponification occurs, in which adipocere is formed. Adipocere is a wax-like substance that covers bodies, created by the hydrolysis of triglycerides in adipose tissue. This occurs mainly in submerged or buried environments, or in areas rich in carbon, but has also been noted in marine environments.
Other factors
Body size
Body size is an important factor that also influences the rate of decomposition. A body with greater mass and more fat will decompose more rapidly, because after death fats liquefy, and this accounts for a large portion of decomposition. People with a lower fat percentage will decompose more slowly; this includes smaller adults and especially children.
Clothing
Clothing and other types of coverings affect the rate of decomposition because they limit the body's exposure to external factors such as weathering and soil, and they slow decomposition by delaying scavenging by animals. However, insect activity may increase, since a wrapping retains heat and provides protection from the sun, creating an ideal environment for maggot growth, which facilitates organic decay.
Cause of death
The cause of death can also influence the rate of decomposition, mainly by speeding it up. Fatal wounds such as stab wounds or other lacerations on the body attract insects, as they provide a good site for oviposition, and as a result can increase the rate of decomposition.
Experimental analysis of decomposition on corpse farms
Corpse farms are used to study the decay of the human body and to gain insight into how environmental and endogenous factors affect progression through the stages of decomposition. In summer, high temperatures can accelerate the stages of decomposition: heat encourages the breakdown of organic material, and bacteria also grow faster in a warm environment, accelerating bacterial digestion of tissue. However, natural mummification, normally thought of as a consequence of arid conditions, can occur if the remains are exposed to intense sunlight. In winter, not all bodies go through the bloat stage. Bacterial growth is much reduced at temperatures below 4 °C. Corpse farms are also used to study the interactions of insects with decaying bodies.
Zoonosis | A zoonosis (; plural zoonoses) or zoonotic disease is an infectious disease of humans caused by a pathogen (an infectious agent, such as a bacterium, virus, parasite, or prion) that can jump from a non-human (usually a vertebrate) to a human and vice versa.
Major modern diseases such as Ebola and salmonellosis are zoonoses. HIV was a zoonotic disease transmitted to humans in the early part of the 20th century, though it has now evolved into a separate human-only disease. Human infection with animal influenza viruses is rare, as they do not transmit easily to or among humans. However, avian and swine influenza viruses in particular possess high zoonotic potential, and these occasionally recombine with human strains of the flu and can cause pandemics such as the 2009 swine flu. Taenia solium infection is one of the neglected tropical diseases with public health and veterinary concern in endemic regions. Zoonoses can be caused by a range of disease pathogens such as emergent viruses, bacteria, fungi and parasites; of 1,415 pathogens known to infect humans, 61% were zoonotic. Most human diseases originated in non-humans; however, only diseases that routinely involve non-human to human transmission, such as rabies, are considered direct zoonoses.
Zoonoses have different modes of transmission. In direct zoonosis the disease is directly transmitted from non-humans to humans through media such as air (influenza) or bites and saliva (rabies). In contrast, transmission can also occur via an intermediate species (referred to as a vector), which carry the disease pathogen without getting sick. When humans infect non-humans, it is called reverse zoonosis or anthroponosis. The term is from Ancient Greek: ζῷον zoon "animal" and νόσος nosos "sickness".
Host genetics plays an important role in determining which non-human viruses will be able to make copies of themselves in the human body. Dangerous non-human viruses are those that require few mutations to begin replicating themselves in human cells. These viruses are dangerous since the required combinations of mutations might randomly arise in the natural reservoir.
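As a back-of-the-envelope illustration of why "few required mutations" matters (the numbers here are hypothetical and not taken from any study), if each of $k$ required substitutions is independently present in a fraction $p$ of viral genomes circulating in the reservoir, the chance that a single genome already carries all of them scales as $p^k$:

\[
P(\text{all } k \text{ mutations co-occur}) \approx p^{k},
\qquad
p = 10^{-3}: \quad p^{2} = 10^{-6}, \quad p^{5} = 10^{-15},
\]

so a virus needing only two adaptive changes is vastly more likely to stumble upon a human-compatible genotype in its reservoir than one needing five.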
Causes
The emergence of zoonotic diseases originated with the domestication of animals. Zoonotic transmission can occur in any context in which there is contact with or consumption of animals, animal products, or animal derivatives. This can occur in a companionistic (pets), economic (farming, trade, butchering, etc.), predatory (hunting, butchering, or consuming wild game), or research context.
Recently, there has been a rise in frequency of appearance of new zoonotic diseases. "Approximately 1.67 million undescribed viruses are thought to exist in mammals and birds, up to half of which are estimated to have the potential to spill over into humans", says a study led by researchers at the University of California, Davis. According to a report from the United Nations Environment Programme and International Livestock Research Institute a large part of the causes are environmental like climate change, unsustainable agriculture, exploitation of wildlife, and land use change. Others are linked to changes in human society such as an increase in mobility. The organizations propose a set of measures to stop the rise.
Contamination of food or water supply
The most significant zoonotic pathogens causing foodborne diseases are Escherichia coli O157:H7, Campylobacter, Caliciviridae, and Salmonella.
In 2006 a conference held in Berlin focused on the issue of zoonotic pathogen effects on food safety, urging government intervention and public vigilance against the risks of catching food-borne diseases from farm-to-table dining.
Many food-borne outbreaks can be linked to zoonotic pathogens. Many different types of food that have an animal origin can become contaminated. Some common food items linked to zoonotic contaminations include eggs, seafood, meat, dairy, and even some vegetables.
Outbreaks involving contaminated food should be handled in preparedness plans to prevent widespread outbreaks and to efficiently and effectively contain outbreaks.
Farming, ranching and animal husbandry
Contact with farm animals can lead to disease in farmers or others that come into contact with infected farm animals. Glanders primarily affects those who work closely with horses and donkeys. Close contact with cattle can lead to cutaneous anthrax infection, whereas inhalation anthrax infection is more common for workers in slaughterhouses, tanneries, and wool mills. Close contact with sheep that have recently given birth can lead to infection with the bacterium Chlamydia psittaci, causing chlamydiosis (and enzootic abortion in pregnant women), as well as an increased risk of Q fever, toxoplasmosis, and listeriosis in those who are pregnant or otherwise immunocompromised. Echinococcosis is caused by a tapeworm, which can spread from infected sheep by food or water contaminated by feces or wool. Avian influenza is common in chickens and, while it is rare in humans, the main public health worry is that a strain of avian influenza will recombine with a human influenza virus and cause a pandemic like the 1918 Spanish flu. In 2017, free-range chickens in the UK were temporarily ordered to remain inside due to the threat of avian influenza. Cattle are an important reservoir of cryptosporidiosis, which mainly affects the immunocompromised. Reports have shown mink can also become infected. In Western countries, hepatitis E burden is largely dependent on exposure to animal products, and pork is a significant source of infection in this respect. Similarly, the human coronavirus OC43, a cause of the common cold, can use the pig as a zoonotic reservoir, constantly reinfecting the human population.
Veterinarians are exposed to unique occupational hazards when it comes to zoonotic disease. In the US, studies have highlighted an increased risk of injuries and a lack of veterinary awareness of these hazards. Research has demonstrated the importance of continued clinical veterinary education on occupational risks associated with musculoskeletal injuries, animal bites, needle-sticks, and cuts.
A July 2020 report by the United Nations Environment Programme stated that the increase in zoonotic pandemics is directly attributable to anthropogenic destruction of nature and the increased global demand for meat and that the industrial farming of pigs and chickens in particular will be a primary risk factor for the spillover of zoonotic diseases in the future. Habitat loss of viral reservoir species has been identified as a significant source in at least one spillover event.
Wildlife trade or animal attacks
The wildlife trade may increase spillover risk because it directly increases the number of interactions across animal species, sometimes in small spaces. The origin of the COVID-19 pandemic has been traced to wet markets in China.
Zoonotic disease emergence is demonstrably linked to the consumption of wildlife meat, exacerbated by human encroachment into natural habitats and amplified by the unsanitary conditions of wildlife markets. These markets, where diverse species converge, facilitate the mixing and transmission of pathogens, including those responsible for outbreaks of HIV-1, Ebola, and mpox, and potentially even the COVID-19 pandemic. Notably, small mammals often harbor a vast array of zoonotic bacteria and viruses, yet endemic bacterial transmission among wildlife remains largely unexplored. Therefore, accurately determining the pathogenic landscape of traded wildlife is crucial for guiding effective measures to combat zoonotic diseases and documenting the societal and environmental costs associated with this practice.
Rabies
Insect vectors
African sleeping sickness
Dirofilariasis
Eastern equine encephalitis
Japanese encephalitis
Saint Louis encephalitis
Scrub typhus
Tularemia
Venezuelan equine encephalitis
West Nile fever
Western equine encephalitis
Zika fever
Pets
Pets can transmit a number of diseases. Dogs and cats are routinely vaccinated against rabies. Pets can also transmit ringworm and Giardia, which are endemic in both animal and human populations. Toxoplasmosis is a common infection of cats; in humans it is a mild disease although it can be dangerous to pregnant women. Dirofilariasis is caused by Dirofilaria immitis through mosquitoes infected by mammals like dogs and cats. Cat-scratch disease is caused by Bartonella henselae and Bartonella quintana, which are transmitted by fleas that are endemic to cats. Toxocariasis is the infection of humans by any of the Toxocara species of roundworm, including the species specific to dogs (Toxocara canis) or cats (Toxocara cati). Cryptosporidiosis can be spread to humans from pet lizards, such as the leopard gecko. Encephalitozoon cuniculi is a microsporidial parasite carried by many mammals, including rabbits, and is an important opportunistic pathogen in people immunocompromised by HIV/AIDS, organ transplantation, or CD4+ T-lymphocyte deficiency.
Pets may also serve as a reservoir of viral disease and contribute to the chronic presence of certain viral diseases in the human population. For instance, approximately 20% of domestic dogs, cats, and horses carry anti-hepatitis E virus antibodies, and thus these animals probably contribute to the human hepatitis E burden as well. For non-vulnerable populations (e.g., people who are not immunocompromised), the associated disease burden is, however, small. Furthermore, the trade of non-domestic animals, such as wild animals kept as pets, can also increase the risk of zoonosis spread.
Exhibition
Outbreaks of zoonoses have been traced to human interaction with, and exposure to, other animals at fairs, live animal markets, petting zoos, and other settings. In 2005, the Centers for Disease Control and Prevention (CDC) issued an updated list of recommendations for preventing zoonosis transmission in public settings. The recommendations, developed in conjunction with the National Association of State Public Health Veterinarians, include educational responsibilities of venue operators, limiting public animal contact, and animal care and management.
Hunting and bushmeat
Hunting involves humans tracking, chasing, and capturing wild animals, primarily for food or materials such as fur, though other motives such as pest control or managing wildlife populations also exist. Transmission of zoonotic diseases, those that jump from animals to humans, can occur through various routes: direct physical contact, airborne droplets or particles, bites or vector transport by insects, oral ingestion, or contact with contaminated environments. Wildlife activities like hunting and trade bring humans closer to dangerous zoonotic pathogens, threatening global health.
According to the Centers for Disease Control and Prevention (CDC), hunting and consuming wild animal meat ("bushmeat") in regions like Africa can expose people to infectious diseases because of the types of animals involved, such as bats and primates. Common preservation methods like smoking or drying are not enough to eliminate these risks. Although bushmeat provides protein and income for many, the practice is intricately linked to numerous emerging infectious diseases such as Ebola, HIV, and SARS, raising critical public health concerns.
A review published in 2022 found evidence that zoonotic spillover linked to wild meat consumption has been reported across all continents.
Deforestation, biodiversity loss and environmental degradation
Kate Jones, Chair of Ecology and Biodiversity at University College London, says zoonotic diseases are increasingly linked to environmental change and human behavior. The disruption of pristine forests driven by logging, mining, road building through remote places, rapid urbanization, and population growth is bringing people into closer contact with animal species they may never have been near before. The resulting transmission of disease from wildlife to humans, she says, is now "a hidden cost of human economic development". In a guest article, published by IPBES, President of the EcoHealth Alliance and zoologist Peter Daszak, along with three co-chairs of the 2019 Global Assessment Report on Biodiversity and Ecosystem Services, Josef Settele, Sandra Díaz, and Eduardo Brondizio, wrote that "rampant deforestation, uncontrolled expansion of agriculture, intensive farming, mining and infrastructure development, as well as the exploitation of wild species have created a 'perfect storm' for the spillover of diseases from wildlife to people."
Joshua Moon, Clare Wenham, and Sophie Harman said that there is evidence that decreased biodiversity has an effect on the diversity of hosts and frequency of human-animal interactions with potential for pathogenic spillover.
An April 2020 study, published in Proceedings of the Royal Society B, found that increased virus spillover events from animals to humans can be linked to biodiversity loss and environmental degradation: as humans encroach further on wildlands to engage in agriculture, hunting, and resource extraction, they become exposed to pathogens that would normally remain in those areas. Such spillover events have been tripling every decade since 1980. An August 2020 study, published in Nature, concludes that the anthropogenic destruction of ecosystems for the purpose of expanding agriculture and human settlements reduces biodiversity and allows smaller animals such as bats and rats, which are more adaptable to human pressures and also carry the most zoonotic diseases, to proliferate. This in turn can result in more pandemics.
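To illustrate the scale implied by a tripling per decade, the short Python sketch below compounds a relative spillover frequency from a 1980 baseline. This is a minimal illustration only: the baseline value of 1.0 and the assumption of a constant threefold increase per decade are chosen for clarity and are not figures taken from the study.

```python
# Illustrative sketch: relative frequency of spillover events, assuming the
# reported pattern of a threefold increase per decade since 1980.
# The 1980 baseline of 1.0 is an arbitrary reference value, not real data.

def relative_spillover_frequency(year, baseline_year=1980, baseline=1.0, factor=3.0):
    """Return the relative spillover frequency for a given year under the assumed trend."""
    decades = (year - baseline_year) / 10
    return baseline * factor ** decades

for year in (1980, 1990, 2000, 2010, 2020):
    print(year, round(relative_spillover_frequency(year), 1))
# 1980 1.0, 1990 3.0, 2000 9.0, 2010 27.0, 2020 81.0
```

Under this assumed trend, events would be roughly eighty times more frequent after four decades, which is why even modest per-decade growth rates are treated as a serious warning sign.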
In October 2020, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services published its report on the 'era of pandemics' by 22 experts in a variety of fields and concluded that anthropogenic destruction of biodiversity is paving the way to the pandemic era and could result in as many as 850,000 viruses being transmitted from animals – in particular birds and mammals – to humans. The increased pressure on ecosystems is being driven by the "exponential rise" in consumption and trade of commodities such as meat, palm oil, and metals, largely facilitated by developed nations, and by a growing human population. According to Peter Daszak, the chair of the group who produced the report, "there is no great mystery about the cause of the Covid-19 pandemic, or of any modern pandemic. The same human activities that drive climate change and biodiversity loss also drive pandemic risk through their impacts on our environment."
Climate change
According to a report from the United Nations Environment Programme and the International Livestock Research Institute, entitled "Preventing the next pandemic – Zoonotic diseases and how to break the chain of transmission", climate change is one of seven human-related causes of the increase in the number of zoonotic diseases. A University of Sydney study published in March 2021 examined factors increasing the likelihood of epidemics and pandemics like the COVID-19 pandemic; the researchers found that "pressure on ecosystems, climate change and economic development are key factors" in increasing that likelihood. More zoonotic diseases were found in high-income countries.
A 2022 study dedicated to the link between climate change and zoonosis found a strong link between climate change and epidemic emergence in the last 15 years, as it has caused a massive migration of species to new areas, and consequently contact between species that do not normally come in contact with one another. Even in a scenario with weak climatic changes, there will be around 15,000 spillovers of viruses to new hosts in the coming decades. The areas with the most possibilities for spillover are the mountainous tropical regions of Africa and Southeast Asia. Southeast Asia is especially vulnerable, as it has a large number of bat species that generally do not mix but could easily do so if climate change forced them to begin migrating.
A 2021 study found possible links between climate change and transmission of COVID-19 through bats. The authors suggest that climate-driven changes in the distribution and robustness of bat species harboring coronaviruses may have occurred in eastern Asian hotspots (southern China, Myanmar, and Laos), constituting a driver behind the evolution and spread of the virus.
Secondary transmission
Zoonotic diseases contribute significantly to the burden on public health systems, as vulnerable groups such as the elderly, children, childbearing women, and immunocompromised individuals are at risk. According to the World Health Organization (WHO), any disease or infection that is naturally transmissible from vertebrate animals to humans, or from humans to animals, is classified as a zoonosis. Factors such as climate change, urbanization, animal migration and trade, travel and tourism, vector biology, anthropogenic factors, and natural factors have greatly influenced the emergence, re-emergence, distribution, and patterns of zoonoses.
Zoonotic diseases generally refer to diseases of animal origin in which direct or vector mediated animal-to-human transmission is the usual source of human infection. Animal populations are the principal reservoir of the pathogen and horizontal infection in humans is rare. A few examples in this category include lyssavirus infections, Lyme borreliosis, plague, tularemia, leptospirosis, ehrlichiosis, Nipah virus, West Nile virus (WNV) and hantavirus infections. Secondary transmission encompasses a category of diseases of animal origin in which the actual transmission to humans is a rare event but, once it has occurred, human-to-human transmission maintains the infection cycle for some period of time. Some examples include human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS), certain influenza A strains, Ebola virus and severe acute respiratory syndrome (SARS).
One example is Ebola, which spreads to humans through direct contact when handling bushmeat (wild animals hunted for food) and through contact with infected bats or close contact with infected animals, including chimpanzees, fruit bats, and forest antelope. Secondary transmission then occurs from human to human by direct contact with the blood, bodily fluids, or skin of patients with, or who have died of, Ebola virus disease. Recent infections with these emerging and re-emerging zoonotic pathogens have occurred as a result of many ecological and sociological changes globally.
History
During most of human prehistory groups of hunter-gatherers were probably very small. Such groups probably made contact with other such bands only rarely. Such isolation would have caused epidemic diseases to be restricted to any given local population, because propagation and expansion of epidemics depend on frequent contact with other individuals who have not yet developed an adequate immune response. To persist in such a population, a pathogen either had to be a chronic infection, staying present and potentially infectious in the infected host for long periods, or it had to have other additional species as reservoir where it can maintain itself until further susceptible hosts are contacted and infected. In fact, for many "human" diseases, the human is actually better viewed as an accidental or incidental victim and a dead-end host. Examples include rabies, anthrax, tularemia, and West Nile fever. Thus, much of human exposure to infectious disease has been zoonotic.
Many diseases, even epidemic ones, have zoonotic origin and measles, smallpox, influenza, HIV, and diphtheria are particular examples. Various forms of the common cold and tuberculosis also are adaptations of strains originating in other species. Some experts have suggested that all human viral infections were originally zoonotic.
Zoonoses are of interest because they are often previously unrecognized diseases or have increased virulence in populations lacking immunity. The West Nile virus first appeared in the United States in 1999, in the New York City area. Bubonic plague is a zoonotic disease, as are salmonellosis, Rocky Mountain spotted fever, and Lyme disease.
A major factor contributing to the appearance of new zoonotic pathogens in human populations is increased contact between humans and wildlife. This can be caused either by encroachment of human activity into wilderness areas or by movement of wild animals into areas of human activity. An example of this is the outbreak of Nipah virus in peninsular Malaysia, in 1999, when intensive pig farming began within the habitat of infected fruit bats. The unidentified infection of these pigs amplified the force of infection, transmitting the virus to farmers, and eventually causing 105 human deaths.
Similarly, in recent times avian influenza and West Nile virus have spilled over into human populations probably due to interactions between the carrier host and domestic animals. Highly mobile animals, such as bats and birds, may present a greater risk of zoonotic transmission than other animals due to the ease with which they can move into areas of human habitation.
Because they depend on the human host for part of their life-cycle, diseases such as African schistosomiasis, river blindness, and elephantiasis are not defined as zoonotic, even though they may depend on transmission by insects or other vectors.
Use in vaccines
The first vaccine against smallpox, developed by Edward Jenner in 1796, worked by infection with a zoonotic bovine virus that caused a disease called cowpox. Jenner had noticed that milkmaids were resistant to smallpox: they contracted a milder version of the disease from infected cows, which conferred cross-immunity to the human disease. Jenner prepared an infectious preparation of cowpox and subsequently used it to inoculate persons against smallpox. As a result of vaccination, smallpox has been eradicated globally, and mass inoculation against this disease ceased in 1981. There are a variety of vaccine types, including traditional inactivated pathogen vaccines, subunit vaccines, and live attenuated vaccines. There are also newer vaccine technologies, such as viral vector vaccines and DNA/RNA vaccines, which include many of the COVID-19 vaccines.
Lists of diseases
See also
References
Bibliography
H. Krauss, A. Weber, M. Appel, B. Enders, A. v. Graevenitz, H. D. Isenberg, H. G. Schiefer, W. Slenczka, H. Zahner: Zoonoses: Infectious Diseases Transmissible from Animals to Humans. 3rd edition, 456 pages. ASM Press, American Society for Microbiology, Washington, D.C., 2003.
External links
AVMA Collections: Zoonosis Updates
WHO tropical diseases and zoonoses
Detection and Forensic Analysis of Wildlife and Zoonotic Disease
Publications in Zoonotics and Wildlife Disease
A message from nature: coronavirus. United Nations Environment Programme
Animal diseases
Disease ecology
Infectious diseases
Typhus
Typhus, also known as typhus fever, is a group of infectious diseases that include epidemic typhus, scrub typhus, and murine typhus. Common symptoms include fever, headache, and a rash. Typically these begin one to two weeks after exposure.
The diseases are caused by specific types of bacterial infection. Epidemic typhus is caused by Rickettsia prowazekii spread by body lice, scrub typhus is caused by Orientia tsutsugamushi spread by chiggers, and murine typhus is caused by Rickettsia typhi spread by fleas.
Vaccines have been developed, but none are commercially available. Prevention is achieved by reducing exposure to the organisms that spread the disease. Treatment is with the antibiotic doxycycline. Epidemic typhus generally occurs in outbreaks when poor sanitary conditions and crowding are present. While once common, it is now rare. Scrub typhus occurs in Southeast Asia, Japan, and northern Australia. Murine typhus occurs in tropical and subtropical areas of the world.
Typhus has been described since at least 1528. The name comes from the Greek typhos (τῦφος), meaning 'hazy' or 'smoky', which was commonly used as a word for delusion, describing the state of mind of those infected. While typhoid means 'typhus-like', typhus and typhoid fever are distinct diseases caused by different types of bacteria, the latter by specific strains of Salmonella typhi. However, in some languages such as German, the term does mean 'typhoid fever', and the typhus described here is called by another name, such as the language's equivalent of 'lice fever'.
Signs and symptoms
These signs and symptoms refer to epidemic typhus, as it is the most important of the typhus group of diseases.
Signs and symptoms begin with sudden onset of fever and other flu-like symptoms about one to two weeks after being infected. Five to nine days after the symptoms have started, a rash typically begins on the trunk and spreads to the extremities. This rash eventually spreads over most of the body, sparing the face, palms, and soles. Signs of meningoencephalitis begin with the rash and continue into the second or third week. Other signs of meningoencephalitis include sensitivity to light (photophobia), altered mental status (delirium), or coma. Untreated cases are often fatal.
Signs and symptoms of scrub typhus usually start within 1 to 2 weeks after being infected. These symptoms include fever, headaches, chills, swollen lymph nodes, nausea/vomiting, and a rash at the site of infection called an eschar. More severe symptoms may damage the lungs, brain, kidney, meninges, and heart.
Causes
Multiple diseases include the word "typhus" in their names; the main types are epidemic typhus, scrub typhus, and murine typhus, described above.
Diagnosis
The main method of diagnosing typhus of all types is laboratory testing, most commonly an indirect immunofluorescence antibody (IFA) test, which checks a sample for the antibodies associated with typhus. For all types except scrub typhus, diagnosis can also be made with immunohistochemistry (IHC) or polymerase chain reaction (PCR) tests. Scrub typhus is not tested with IHC or PCR but is instead diagnosed with the IFA test as well as indirect immunoperoxidase (IIP) assays.
Prevention
As of 2024, no vaccine is commercially available, although a vaccine for scrub typhus has been in development.
Scrub typhus
Scrub typhus is spread by mites (chiggers), so avoiding outdoor areas with scrub vegetation where mites are common reduces the risk of infection. Treating clothing with permethrin helps prevent mite bites, and insect repellent keeps mites away. Children and infants should additionally wear clothing that covers their limbs, and a mosquito net placed over a stroller also protects babies from mites.
Epidemic typhus
Epidemic typhus is spread by body lice and thrives in overcrowded areas. Avoiding crowded conditions reduces exposure to lice, and regular washing of the body and of clothing helps kill them; the same applies to items such as bedding and towels. Fabric items should not be shared with anyone who has lice or typhus, and treating clothing with permethrin also helps kill lice.
Murine typhus
Murine typhus is spread by flea bites, so preventing flea exposure is the main protective measure. This includes checking pets for fleas and treating them if infested, staying away from wild animals, using insect repellent, wearing gloves when handling sick or dead animals, and keeping rodents and other wildlife out of the home.
Treatment
The American Public Health Association recommends treatment based upon clinical findings and before culturing confirms the diagnosis. Without treatment, death may occur in 10% to 60% of people with epidemic typhus, with people over age 50 having the highest risk of death. In the antibiotic era, death is uncommon if doxycycline is given. In one study of 60 people hospitalized with epidemic typhus, no one died when given doxycycline or chloramphenicol.
Epidemiology
According to the World Health Organization, in 2010 the death rate from typhus was about one of every 5,000,000 people per year.
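As a rough illustration of what a rate of one death per 5,000,000 people per year implies, the short sketch below multiplies that rate by an assumed population. The world-population figure of about 6.9 billion for 2010 is an approximation used purely for illustration, not a number taken from the WHO report.

```python
# Illustrative arithmetic only: converting a per-capita rate into an expected count.
# The population figure is an approximation, not data from the cited WHO source.
rate_per_person_year = 1 / 5_000_000      # about one death per 5,000,000 people per year
world_population_2010 = 6.9e9             # assumed approximate world population in 2010

expected_deaths_per_year = rate_per_person_year * world_population_2010
print(round(expected_deaths_per_year))    # roughly 1,380 deaths per year worldwide
```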
Only a few areas of epidemic typhus exist today. Since the late 20th century, cases have been reported in Burundi, Rwanda, Ethiopia, Algeria, and a few areas in South and Central America.
Except for two cases, all instances of epidemic typhus in the United States have occurred east of the Mississippi River. An examination of a cluster of cases in Pennsylvania concluded the source of the infection was flying squirrels. Sylvatic epidemic typhus (transmitted from wild animals) remains uncommon in the US; the Centers for Disease Control and Prevention documented only 47 cases from 1976 to 2010. An outbreak of flea-borne murine typhus was identified in downtown Los Angeles, California, in October 2018.
History
Middle Ages
The first reliable description of typhus appears in 1489 AD during the Spanish siege of Baza against the Moors during the War of Granada (1482–1492). These accounts include descriptions of fever; red spots over arms, back, and chest; attention deficit, progressing to delirium; and gangrenous sores and the associated smell of rotting flesh. During the siege, the Spaniards lost 3,000 men to enemy action, but an additional 17,000 died of typhus.
In historical times, "jail fever" or "gaol fever" was common in English prisons, and is believed by modern authorities to have been typhus. It often occurred when prisoners were crowded together into dark, filthy rooms where lice spread easily. Thus, "imprisonment until the next term of court" was often equivalent to a death sentence. Prisoners brought before the court sometimes infected members of the court. The Black Assize of Exeter 1586 was another notable outbreak. During the Lent assizes court held at Taunton in 1730, gaol fever caused the death of the Lord Chief Baron, as well as the High Sheriff, the sergeant, and hundreds of others. During a time when persons were executed for capital offenses, more prisoners died from 'gaol fever' than were put to death by all the public executioners in the British realm. In 1759, an English authority estimated that each year, a quarter of the prisoners had died from gaol fever. In London, gaol fever frequently broke out among the ill-kept prisoners of Newgate Prison and then moved into the general city population. In May 1750, the Lord Mayor of London, Sir Samuel Pennant, and many court personnel were fatally infected in the courtroom of the Old Bailey, which adjoined Newgate Prison.
Early modern epidemics
Epidemics occurred routinely throughout Europe from the 16th to the 19th centuries, including during the English Civil War, the Thirty Years' War, and the Napoleonic Wars. Pestilence of several kinds raged among combatants and civilians in Germany and surrounding lands from 1618 to 1648. According to Joseph Patrick Byrne, "By war's end, typhus may have killed more than 10 percent of the total German population, and disease in general accounted for 90 percent of Europe's casualties."
19th century
During Napoleon's retreat from Moscow in 1812, more French soldiers died of typhus than were killed by the Russians.
A major epidemic occurred in Ireland between 1816 and 1819, during the famine caused by a worldwide reduction in temperature known as the Year Without a Summer. An estimated 100,000 people perished. Typhus appeared again in the late 1830s, and yet another major typhus epidemic occurred during the Great Irish Famine between 1846 and 1849. The typhus outbreak along with typhoid fever is said to be responsible for 400,000 deaths. The Irish typhus spread to England, where it was sometimes called "Irish fever" and was noted for its virulence. It killed people of all social classes, as lice were endemic and inescapable, but it hit particularly hard in the lower or "unwashed" social strata.
In the United States, a typhus epidemic broke out in Philadelphia in 1837 and killed the son of Franklin Pierce (14th President of the United States) in Concord, New Hampshire, in 1843. Several epidemics occurred in Baltimore, Memphis, and Washington, DC, between 1865 and 1873. Typhus was also a significant killer during the US Civil War, although typhoid fever was the more prevalent cause of US Civil War "camp fever". Typhoid fever is caused by the bacterium Salmonella enterica serovar Typhi.
In Canada alone, the typhus epidemic of 1847 killed more than 20,000 people from 1847 to 1848, mainly Irish immigrants in fever sheds and other forms of quarantine, who had contracted the disease aboard the crowded coffin ships in fleeing the Great Irish Famine. Officials neither knew how to provide sufficient sanitation under conditions of the time nor understood how the disease spread.
20th century
Typhus was endemic in Poland and several neighboring countries prior to World War I (1914–1918), but became epidemic during the war. Delousing stations were established for troops on the Western Front during World War I, but typhus ravaged the armies of the Eastern Front, where over 150,000 died in Serbia alone. Fatalities were generally between 10% and 40% of those infected and the disease was a major cause of death for those nursing the sick.
In 1922, the typhus epidemic reached its peak in Soviet territory, with some 20 to 30 million cases in Russia. Although typhus had ravaged Poland with some 4 million cases reported, efforts to stem the spread of disease in that country had largely succeeded by 1921 through the efforts of public health pioneers such as Hélène Sparrow and Rudolf Weigl. In Russia, during the civil war between the White and Red Armies, epidemic typhus killed 2–3 million people, many of whom were civilians. In 1937 and 1938 there was a typhus epidemic in Chile. On 6 March 1939, Prime Minister of France Édouard Daladier stated to the French parliament that he would return 300,000 of the Spanish refugees who had fled the Spanish Civil War; among the reasons given were the spread of typhus in the French refugee camps and France's recognition of Francisco Franco's government.
During World War II, many German POWs after the loss at Stalingrad died of typhus. Typhus epidemics killed those confined to POW camps, ghettos, and Nazi concentration camps who were held in unhygienic conditions. Pictures of mass graves including people who died from typhus can be seen in footage shot at Bergen-Belsen concentration camp. Among thousands of prisoners in concentration camps such as Theresienstadt and Bergen-Belsen who died of typhus were Anne Frank, age 15, and her sister Margot, age 19, in the latter camp.
The first typhus vaccine was developed by the Polish zoologist Rudolf Weigl in the interwar period; the vaccine did not prevent the disease but reduced its mortality.
21st century
Beginning in 2018, a typhus outbreak spread through Los Angeles County, primarily affecting homeless people. In 2019, city attorney Elizabeth Greenwood revealed that she, too, had been infected with typhus as a result of a flea bite at her office in Los Angeles City Hall. Pasadena also experienced a sudden uptick in typhus, with 22 cases in 2018, but without being able to attribute them to one location, the Pasadena Public Health Department did not identify the cases as an "outbreak". Murine typhus cases have also been rising over the past decade, peaking at 171 cases in 2022.
References
Rickettsioses
Vaccine-preventable diseases
Self-preservation
Self-preservation is a behavior or set of behaviors that ensures the survival of an organism. It is thought to be universal among all living organisms. For sentient organisms, pain and fear are integral parts of this mechanism. Pain motivates the individual to withdraw from damaging situations, to protect a damaged body part while it heals, and to avoid similar experiences in the future. Most pain resolves promptly once the painful stimulus is removed and the body has healed, but sometimes pain persists despite removal of the stimulus and apparent healing of the body; and sometimes pain arises in the absence of any detectable stimulus, damage or disease. Fear causes the organism to seek safety and may cause a release of adrenaline, which has the effect of increased strength and heightened senses such as hearing, smell, and sight. Self-preservation may also be interpreted figuratively, in regard to the coping mechanisms one needs to prevent emotional trauma from distorting the mind (see Defence mechanisms).
Even the most simple of living organisms (for example, the single-celled bacteria) are typically under intense selective pressure to evolve a response that would help avoid a damaging environment, if such an environment exists. Organisms also evolve while adapting to, and even thriving in, a benign environment (for example, a marine sponge modifies its structure in response to current changes, in order to better absorb and process nutrients). Self-preservation is therefore an almost universal hallmark of life. However, when introduced to a novel threat, many species will have a self-preservation response either too specialised, or not specialised enough, to cope with that particular threat. An example is the dodo, which evolved in the absence of natural predators and hence lacked an appropriate, general self-preservation response to heavy predation by humans and rats, showing no fear of them.
Self-preservation is essentially the process of an organism preventing itself from being harmed or killed and is considered a basic instinct in most organisms; it is commonly called a "survival instinct". Self-preservation is thought to be tied to an organism's reproductive fitness and can be more or less present according to perceived reproduction potential. If perceived reproductive potential is low enough, self-destructive behavior (i.e., the opposite) is not uncommon in social species. Self-preservation is also thought by some to be the basis of rational and logical thought and behavior.
Overview
An organism's fitness is measured by its ability to pass on its genes. The most straightforward way to accomplish this is to survive to a reproductive age, mate, and then have offspring. These offspring will hold at least a portion of their parent's genes, up to all of the parent's genes in asexual organisms. But in order for this to happen, an organism must first survive long enough to reproduce, and this would mainly consist of adopting selfish behaviors that would allow organisms to maximize their own chances for survival.
Self-destructive behavior
Animals in a social group (of kin) often work cooperatively in order to survive, but when one member perceives itself as a burden for an extended period of time, it may commit self-destructive behavior. This allows its relatives to have a better chance at survival, and if enough close relatives survive, then its genes get indirectly passed on. This behavior works in the exact opposite direction of the survival instinct and could be considered a highly altruistic behavior evolved from a cooperative group. Self-destructive behavior is not the same as risk-taking behavior (see below in Social implications), although risk-taking behavior could turn into destructive behavior.
Social implications
The desire for self-preservation has led to countless laws and regulations surrounding a culture of safety in society. Seat belt laws, speed limits, texting regulations, and the "stranger danger" campaign are examples of societal guides and regulations to enhance survival, and these laws are heavily influenced by the pursuit of self-preservation.
Economic impacts
Self-preservation urges animals to collect energy and resources required to prolong life as well as resources that increase chances of survival. Basic needs are available to most humans (roughly 7 out of 8 people), and usually rather cheaply. The instinct that drives humans to gather resources now drives them to over-consumption or to patterns of collection and possession that essentially make hoarding resources the priority.
Cellular self preservation
Self-preservation is not just limited to individual organisms; it can be scaled up or down to other levels of life. Narula and Young indicate that cardiac myocytes have an acute sense of self-preservation. They are able to duck, dart, and dodge foreign substances that may harm the cell. In addition, when a myocardial infarction (a heart attack) occurs, the cardiac myocytes enter a state of hibernation in an attempt to wait out the lack of resources. While this is ultimately deadly to the organism, it prolongs the cell's survival as long as possible for hopeful resuscitation.
Group self preservation
When scaled in the opposite direction, Hughes-Jones makes the argument that "social groups that fight each other are self‐sustaining, self‐replicating wholes containing interdependent parts" indicating that the group as a whole can have self-preservation with the individuals acting as the cells.
He draws an analogy between survival practices, such as hygiene and ritual within small human groups or nations that engage in religious warfare, and the complex survival mechanisms of multicellular organisms, which evolved from the cooperative association of single-celled organisms in order to better protect themselves.
See also
Antipredator adaptation
Collective intelligence
Conatus
Dear enemy recognition
Death – result of failure to survive
Outline of death – topic tree of the subjects related to the end of life
Fight-or-flight response
Outline of self
Self-defense
Will to live
References
Ethology
Self-care
Evolutionary biology
Survival
Isolation (health care)
In health care facilities, isolation represents one of several measures that can be taken to implement infection control: the prevention of communicable diseases from being transmitted from a patient to other patients, health care workers, and visitors, or from outsiders to a particular patient (reverse isolation). Various forms of isolation exist; in some, contact procedures are modified, and in others the patient is kept away from all other people. In a system devised, and periodically revised, by the U.S. Centers for Disease Control and Prevention (CDC), various levels of patient isolation comprise application of one or more formally described "precautions".
Isolation is most commonly used when a patient is known to have a contagious (transmissible from person-to-person) viral or bacterial illness. Special equipment is used in the management of patients in the various forms of isolation. These most commonly include items of personal protective equipment (gowns, masks, and gloves) and engineering controls (positive pressure rooms, negative pressure rooms, laminar air flow equipment, and various mechanical and structural barriers). Dedicated isolation wards may be pre-built into hospitals, or isolation units may be temporarily designated in facilities in the midst of an epidemic emergency.
Isolation should not be confused with quarantine or biocontainment. Quarantine is the compulsory separation and confinement, with restriction of movement, of individuals or groups who have potentially been exposed to an infectious microorganism, to prevent further infections, should infection occur. Biocontainment refers to laboratory biosafety in microbiology laboratories in which the physical containment (BSL-3, BSL-4) of highly pathogenic organisms is accomplished through built-in engineering controls.
When isolation is applied to a community or a geographic area it is known as a cordon sanitaire. Reverse isolation of a community, to protect its inhabitants from coming into contact with an infectious disease, is known as protective sequestration.
Importance
Contagious diseases can spread to others through various forms. Four types of infectious disease transmission can occur:
contact transmission, which can be through direct physical contact, indirect contact through fomites, or droplet contact in which airborne infections spread short distances,
vehicular transmission, which involves contaminated objects,
airborne transmission, which involves spread of infectious particles through air,
vector transmission, which is spread through insects or animals.
Depending on the contagious disease, transmission can occur within a person's home, school, worksite, health care facility, and other shared spaces within the community. Even if a person takes all necessary precautions to protect oneself from disease, such as being up-to-date with vaccines and practicing good hygiene, he or she can still get sick. Some people may not be able to protect themselves from diseases and may develop serious complications if they contract the disease. Therefore, disease isolation is an important infection prevention and control practice used to protect others from disease. Disease isolation can prevent healthcare-associated infections, also called hospital-acquired infections (HCAIs), reduce threats of antibiotic-resistant infections, and respond to new and emerging infectious disease threats globally.
Types of precautions
The U.S. Centers for Disease Control and Prevention (CDC) created various levels of disease isolation (also described "precaution"). These precautions are also reviewed and revised by the CDC.
Universal/standard
Universal precautions refer to the practice, in medicine, of avoiding contact with patients' bodily fluids, by means of the wearing of nonporous articles such as medical gloves, goggles, and face shields. The practice was widely introduced in 1985–88. In 1987, the practice of universal precautions was adjusted by a set of rules known as body substance isolation. In 1996, both practices were replaced by the latest approach known as standard precautions. Use of personal protective equipment is now recommended in all health settings.
One of the most standard practices for all medical professionals to reduce the spread of disease is hand hygiene, or removing microorganisms from the hands. Frequent hand hygiene is essential for protecting healthcare workers and patients from hospital-acquired infection. Hospitals have specific approved disinfectants and approved methods for hand washing. As defined by the American Nurses Association (ANA) and the American Association of Nurse Anesthetists (AANA), proper hand washing with soap and water involves wetting the hands, applying antiseptic soap, and scrubbing for at least 20 seconds; approved hand hygiene with alcohol-based sanitizers involves applying sanitizer to the palm and rubbing the hands together, covering all surfaces and fingernails, until dry, without touching anything.
Transmission-based
Transmission-based precautions are additional infection control precautions – over and above universal/standard precautions – and the latest routine infection prevention and control practices applied for patients who are known or suspected to be infected or colonized with infectious agents, including certain epidemiologically important pathogens. The latter require additional control measures to effectively prevent transmission. There are three types of transmission-based precautions:
Contact precautions are intended to prevent transmission of infectious agents, including epidemiologically important microorganisms, which are spread by direct or indirect contact with the patient or the patient's environment.
Droplet precautions are intended to prevent transmission of pathogens spread through close respiratory or mucous membrane contact with respiratory secretions.
Preventative measures such as personal protective equipment can be worn to prevent direct contact with mucous membranes and respiratory secretions. Several measures can be applied to stop the spread of disease. Gloves should be worn, along with gowns, which must provide correct coverage, be tied tightly around the back, and be disposed of in the proper receptacles before the gloves are removed. Eye protection, hair coverings, and surgical masks are also required; all such equipment must be properly fitted to the face, covering the eyes, nose, hair, and mouth, be pre-tested to ensure the correct size, and be sanitized or disposed of after contact with the patient.
Airborne precautions prevent transmission of infectious agents that remain infectious over long distances when suspended in the air (e.g., rubeola virus [measles], varicella virus [chickenpox], M. tuberculosis, and possibly SARS-CoV).
Airborne pathogens can remain in the air and on objects for long periods of time; one of the easiest ways to prevent this spread is through disinfection and sterilization. The American Nurses Association and the American Association of Nurse Anesthesiology set guidelines for sterilization and disinfection based on the Spaulding Disinfection and Sterilization Classification Scheme (SDSCS). The SDSCS classifies sterilization techniques into three categories: critical, semi-critical, and non-critical. For critical situations, or situations involving contact with sterile tissue or the vascular system, devices are sterilized with sterilants that destroy all bacteria, rinsed with sterile water, and treated with chemical germicides. In semi-critical situations, or situations with contact with mucous membranes or non-intact skin, high-level disinfectants are required: cleaning and disinfecting devices with high-level disinfectants, rinsing with sterile water, and drying all equipment surfaces to prevent microorganism growth are methods nurses and doctors must follow. For non-critical situations, or situations involving electronic devices, stethoscopes, blood pressure cuffs, beds, monitors and other general hospital equipment, intermediate-level disinfection is required: cleaning all equipment between patients with alcohol, using protective coverings for non-critical surfaces that are difficult to clean, and using hydrogen peroxide gas for reusable items that are difficult to clean.
Isolation
According to the CDC, isolation is the act of separating a sick individual with a contagious disease from healthy individuals without that contagious disease in order to protect the general public from exposure of a contagious disease.
Special equipment is used in the management of patients in the various forms of isolation. These most commonly include items of personal protective equipment (gowns, masks, and gloves) and engineering controls (positive pressure rooms, negative pressure rooms, laminar air flow equipment, and various mechanical and structural barriers). Dedicated isolation wards may be pre-built into hospitals, or isolation units may be temporarily designated in facilities in the midst of an epidemic emergency.
Many forms of isolation exist.
Contact isolation is used to prevent the spread of diseases that can be spread through contact with open wounds. Health care workers making contact with a patient on contact isolation are required to wear gloves, and in some cases, a gown.
Respiratory isolation is used for diseases that are spread through particles that are exhaled. Those having contact with or exposure to such a patient are required to wear a mask.
Reverse isolation is a way to prevent a patient in a compromised health situation from being contaminated by other people or objects. It often involves the use of laminar air flow and mechanical barriers (to avoid physical contact with others) to isolate the patient from any harmful pathogens present in the external environment.
High isolation is used to prevent the spread of unusually highly contagious, or high consequence, infectious diseases (e.g., smallpox, Ebola virus). It stipulates mandatory use of: (1) gloves (or double gloves if appropriate), (2) protective eyewear (goggles or face shield), (3) a waterproof gown (or total body Tyvek suit, if appropriate), and (4) a respirator (at least FFP2 or N95 NIOSH equivalent), not simply a surgical mask. Sometimes negative pressure rooms or powered air-purifying respirators (PAPRs) are also used.
Strict isolation is used for diseases spread through the air and in some cases by contact. Patients must be placed in isolation to prevent the spread of infectious diseases. Those who are kept in strict isolation are often kept in a special room at the facility designed for that purpose. Such rooms are equipped with a special lavatory and caregiving equipment, and a sink and waste disposal are provided for workers upon leaving the area.
Self-isolation
Self-isolation, seclusion or home isolation is the act of quarantining oneself to prevent infection of oneself or others, either voluntarily or to comply with relevant regulations or guidance. The practice became notable during the COVID-19 pandemic. Key features are:
staying at home
separating oneself from other people, for example by trying not to be in the same room as other people at the same time
asking friends, family members or delivery services to carry out errands, such as getting groceries, medicines or other shopping
asking delivery drivers to leave items outside for collection.
The Irish Health Service Executive recommends regularly monitoring symptoms and not disposing of rubbish until the self-isolation ends, warning also that "self-isolation can be boring or frustrating. It may affect your mood and feelings. You may feel low, worried or have problems sleeping. You may find it helps to stay in touch with friends or relatives by phone or on social media."
The UK Government states that anyone who is self-isolating should "not go to work, school, or public areas, and do not use public transport or taxis. Nobody should go out even to buy food or other essentials, and any exercise must be taken within your home". As of March 2020, UK employers may provide sick pay to support self-isolation. Citizens Advice says that people on zero-hours contracts can also receive sick pay. For the purposes of people who have traveled to the UK, "self-isolate" and "self-isolation" are legally defined terms whose meaning is set out in the Health Protection (Coronavirus, International Travel) (England) Regulations 2020.
Isolation of health care workers
Disease isolation is relevant to the work and safety of health care workers. Health care workers may be regularly exposed to various types of illnesses and are at risk of getting sick. Disease spread can occur between a patient and a health care worker, even if the health care workers take all necessary precautions to minimize transmission, including proper hygiene and being up-to-date with vaccines. If a health care worker gets sick with a communicable disease, the possible spread may occur to other health care workers or susceptible patients within the health care facility. This can include patients with a weakened immune system and may be at risk for serious complications.
Health care workers who become infected with certain contagious agents may not be permitted to work with patients for a period of time. The Occupational Safety and Health Administration (OSHA) has implemented several standards and directives applicable to protecting health care workers from the spread of infectious agents. These include bloodborne pathogens, personal protective equipment, and respiratory protections. The CDC has also released resource for health care facilities to assist in assessing and reducing risk for occupational exposure to infectious diseases. The purpose of these standards and guidelines is to prevent the spread of disease to others in a health care facility.
Consequences
Disease isolation is rarely disputed for its importance in protecting others from disease. However, it is important to consider the consequences disease isolation may have on an individual. For instance, patients may not be able to receive visitors, and in turn, become lonely. Patients may experience depression, anxiety, and anger. Small children may feel their isolation is a punishment. Staff may need to spend more time with patients. Patients may not be able to receive certain types of care due to the risk that other patients may become contaminated. This includes forms of care that involve use of equipment common to all patients at the facility, or that involve transporting the patient to an area of the facility common to all patients. Given the impact of isolation on patients, social and emotional support may be needed.
Although a majority of health care professionals advocate for disease isolation as an effective means of reducing disease transmission, some health care professionals are concerned about implementing such control protocols given the possible negative consequences for patients. Patients isolated with methicillin-resistant Staphylococcus aureus (MRSA) can also be negatively impacted by receiving fewer documented bedside visits from attending physicians and residents.
Ethics
Disease isolation serves as an important method to protect the general community from disease, especially in a hospital or community-wide outbreak. However, this intervention poses an ethical question on rights of the individual versus rights of the general community.
In cases of disease outbreaks, isolation can be argued to be an ethical and necessary precaution for protecting the community from further disease transmission, as seen during the 2014 Disneyland measles outbreak and the 2014 Ebola outbreak. Such measures can be justified using felicific calculus to weigh the predicted outcomes (consequences) for individual rights against those for the rights of the general public during disease isolation; by this reasoning, disease isolation is most likely to result in the greatest amount of positive outcomes for the largest number of people.
Disease isolation can also be justified as a morally legitimate ethical practice in public health based on the reciprocal relationship between the individual and the state. The individual is obligated to protect others by preventing further spread of disease, respect the instructions from public health authorities and sequester themselves in their homes and not attend public gatherings, and act as a first responder (if a healthcare professional) by providing services to protect and restore public health. The state, on the other hand, is obligated to provide support to individuals burdened as a result of restrictive measures (e.g. compensation for missed work, providing access to food and other necessities for those medically isolated, assistance for first responders to balance personal/professional obligations), ensure several legal protections are in place for those subjected to restrictive measures and communicate all relevant information regarding the necessity of restriction.
The United Nations and the Siracusa Principles
Guidance on when and how human rights can be restricted to prevent the spread of infectious disease is found in the Siracusa Principles, a non-binding document developed by the Siracusa International Institute for Criminal Justice and Human Rights and adopted by the United Nations Economic and Social Council in 1984. The Siracusa Principles state that restrictions on human rights under the International Covenant on Civil and Political Rights must meet standards of legality, evidence-based necessity, proportionality, and gradualism, noting that public health can be used as grounds for limiting certain rights if the state needs to take measures "aimed at preventing disease or injury or providing care for the sick and injured." Limitations on rights (such as medical isolation) must be "strictly necessary," meaning that they must:
respond to a pressing public or social need (health)
proportionately pursue a legitimate aim (prevent the spread of infectious disease)
be the least restrictive means required for achieving the purpose of the limitation
be provided for and carried out in accordance with the law
be neither arbitrary nor discriminatory
only limit rights that are within the jurisdiction of the state seeking to impose the limitation.
In addition, when medical isolation is imposed, public health ethics specify that:
all restrictive actions must be well-supported by data and scientific evidence
all information must be made available to the public
all actions must be explained clearly to those whose rights are restricted and to the public
all actions must be subject to regular review and reconsideration.
Finally, the state is ethically obligated to guarantee that:
infected people will not be threatened or abused
basic needs such as food, water, medical care, and preventive care will be provided
communication with loved ones and with caretakers will be permitted
constraints on freedom will be applied equally, regardless of social considerations
patients will be compensated fairly for economic and material losses, including salary.
See also
Seclusion
Quarantine
Social distancing
Barrier nursing
Body substance isolation
Support bubble
References
Further reading
Ahlawat, A., Mishra, S. K., Birks, J. W., Costabile, F., & Wiedensohler, A. (2020). "Preventing airborne transmission of SARS-CoV-2 in hospitals and nursing homes." International Journal of Environmental Research and Public Health, 17(22), 8553.
Faria, M. A. (2002, June 1). Medical history - hygiene and sanitation. Hacienda Publishing
Bowdle, A., Jelacic, S., Shishido, S., & Munoz-Price, L. S. (2020). Infection prevention precautions for routine anesthesia care during the SARS-CoV-2 pandemic. Anesthesia and Analgesia, 131(5), 1342–1354.
"Infection Prevention and Control Guidelines For Anesthesia Care." (2015, February 15). Park Ridge, Illinois: American Association of Nurse Anesthesiology.
External links
Chart showing recommendations for various forms of isolation
Infection-control measures
Nursing
Medical hygiene
Common cold
The common cold or the cold is a viral infectious disease of the upper respiratory tract that primarily affects the respiratory mucosa of the nose, throat, sinuses, and larynx. Signs and symptoms may appear in as little as two days after exposure to the virus. These may include coughing, sore throat, runny nose, sneezing, headache, and fever. People usually recover in seven to ten days, but some symptoms may last up to three weeks. Occasionally, those with other health problems may develop pneumonia.
Well over 200 virus strains are implicated in causing the common cold, with rhinoviruses, coronaviruses, adenoviruses and enteroviruses being the most common. They spread through the air or indirectly through contact with objects in the environment, followed by transfer to the mouth or nose. Risk factors include going to child care facilities, not sleeping well, and psychological stress. The symptoms are mostly due to the body's immune response to the infection rather than to tissue destruction by the viruses themselves. The symptoms of influenza are similar to those of a cold, although usually more severe and less likely to include a runny nose.
There is no vaccine for the common cold. The primary methods of prevention are hand washing; not touching the eyes, nose or mouth with unwashed hands; and staying away from sick people. Some evidence supports the use of face masks. There is also no cure, but the symptoms can be treated. Zinc may reduce the duration and severity of symptoms if started shortly after the onset of symptoms. Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen may help with pain. Antibiotics, however, should not be used, as all colds are caused by viruses, and there is no good evidence that cough medicines are effective.
The common cold is the most frequent infectious disease in humans. Under normal circumstances, the average adult gets two to three colds a year, while the average child may get six to eight. Infections occur more commonly during the winter. These infections have existed throughout human history.
Signs and symptoms
The typical symptoms of a cold include cough, runny nose, sneezing, nasal congestion, and a sore throat, sometimes accompanied by muscle ache, fatigue, headache, and loss of appetite. A sore throat is present in about 40% of cases, a cough in about 50%, and muscle aches in about 50%. In adults, a fever is generally not present but it is common in infants and young children. The cough is usually mild compared to that accompanying influenza. While a cough and a fever indicate a higher likelihood of influenza in adults, a great deal of similarity exists between these two conditions. A number of the viruses that cause the common cold may also result in asymptomatic infections.
The color of the mucus or nasal secretion may vary from clear to yellow to green and does not indicate the class of agent causing the infection.
Progression
A cold usually begins with fatigue, a feeling of being chilled, sneezing, and a headache, followed in a couple of days by a runny nose and cough. Symptoms may begin within sixteen hours of exposure and typically peak two to four days after onset. They usually resolve in seven to ten days, but some can last for up to three weeks. The average duration of cough is eighteen days and in some cases people develop a post-viral cough which can linger after the infection is gone. In children, the cough lasts for more than ten days in 35–40% of cases and continues for more than 25 days in 10%.
Causes
Viruses
The common cold is an infection of the upper respiratory tract which can be caused by many different viruses. The most commonly implicated is a rhinovirus (30–80%), a type of picornavirus with 99 known serotypes. Other commonly implicated viruses include adenoviruses, enteroviruses, parainfluenza and RSV. Frequently more than one virus is present. In total, more than 200 viral types are associated with colds. The viral cause of some common colds (20–30%) is unknown.
Transmission
The common cold virus is typically transmitted via airborne droplets, direct contact with infected nasal secretions, or fomites (contaminated objects). Which of these routes is of primary importance has not been determined. As with all respiratory pathogens once presumed to transmit via respiratory droplets, it is highly likely to be carried by the aerosols generated during routine breathing, talking, and singing. The viruses may survive for prolonged periods in the environment (over 18 hours for rhinoviruses) and can be picked up by people's hands and subsequently carried to their eyes or noses where infection occurs. Transmission from animals is considered highly unlikely; an outbreak documented at a British scientific base on Adelaide Island after seventeen weeks of isolation was thought to have been caused by transmission from a contaminated object or an asymptomatic human carrier, rather than from the husky dogs which were also present at the base.
Transmission is common in daycare and schools due to the proximity of many children with little immunity and poor hygiene. These infections are then brought home to other members of the family. There is no evidence that recirculated air during commercial flight is a method of transmission. People sitting close to each other appear to be at greater risk of infection.
Other
Herd immunity, generated from previous exposure to cold viruses, plays an important role in limiting viral spread, as seen with younger populations that have greater rates of respiratory infections. Poor immune function is a risk factor for disease. Insufficient sleep and malnutrition have been associated with a greater risk of developing infection following rhinovirus exposure; this is believed to be due to their effects on immune function. Breast feeding decreases the risk of acute otitis media and lower respiratory tract infections among other diseases, and it is recommended that breast feeding be continued when an infant has a cold. In the developed world breast feeding may not be protective against the common cold in and of itself.
Pathophysiology
The symptoms of the common cold are believed to be primarily related to the immune response to the virus. The mechanism of this immune response is virus-specific. For example, the rhinovirus is typically acquired by direct contact; it binds to humans via ICAM-1 receptors and the CDHR3 receptor through unknown mechanisms to trigger the release of inflammatory mediators. These inflammatory mediators then produce the symptoms. It does not generally cause damage to the nasal epithelium. The respiratory syncytial virus (RSV), on the other hand, is contracted by direct contact and airborne droplets. It then replicates in the nose and throat before spreading to the lower respiratory tract. RSV does cause epithelium damage. Human parainfluenza virus typically results in inflammation of the nose, throat, and bronchi. In young children, when it affects the trachea, it may produce the symptoms of croup, due to the small size of their airways.
Diagnosis
The distinction between viral upper respiratory tract infections is loosely based on the location of symptoms, with the common cold affecting primarily the nose (rhinitis), throat (pharyngitis), and lungs (bronchitis). There can be significant overlap, and more than one area can be affected. Self-diagnosis is frequent. Isolation of the viral agent involved is rarely performed, and it is generally not possible to identify the virus type through symptoms.
Prevention
The only useful ways to reduce the spread of cold viruses are physical and engineering measures such as using correct hand washing technique, respirators, and improvement of indoor air. In the healthcare environment, gowns and disposable gloves are also used. Droplet precautions cannot reliably protect against inhalation of virus-laden aerosols. Instead, airborne precautions such as respirators, ventilation, and HEPA or high-MERV filters are the only reliable protection against such aerosols. Isolation or quarantine is not used as the disease is so widespread and symptoms are non-specific. There is no vaccine to protect against the common cold. Vaccination has proven difficult as there are so many viruses involved and because they mutate rapidly. Creation of a broadly effective vaccine is, therefore, highly improbable.
Regular hand washing appears to be effective in reducing the transmission of cold viruses, especially among children. Whether the addition of antivirals or antibacterials to normal hand washing provides greater benefit is unknown. Wearing face masks when around people who are infected may be beneficial; however, there is insufficient evidence for maintaining a greater social distance.
It is unclear whether zinc supplements affect the likelihood of contracting a cold. Routine vitamin C supplements do not reduce the risk or severity of the common cold, though they may reduce its duration.
Management
Treatments of the common cold primarily involve medications and other therapies for symptomatic relief. Getting plenty of rest, drinking fluids to maintain hydration, and gargling with warm salt water are reasonable conservative measures. Much of the benefit from symptomatic treatment is, however, attributed to the placebo effect. No medications or herbal remedies have been conclusively demonstrated to shorten the duration of infection.
Symptomatic
Treatments that may help with symptoms include pain medication and medications for fevers such as ibuprofen and acetaminophen (paracetamol). However, it is not clear whether acetaminophen helps with symptoms. It is not known if over-the-counter cough medications are effective for treating an acute cough. Cough medicines are not recommended for use in children due to a lack of evidence supporting effectiveness and the potential for harm. In 2009, Canada restricted the use of over-the-counter cough and cold medication in children six years and under due to concerns regarding risks and unproven benefits. The misuse of dextromethorphan (an over-the-counter cough medicine) has led to its ban in a number of countries. Intranasal corticosteroids have not been found to be useful.
In adults, short term use of nasal decongestants may have a small benefit. Antihistamines may improve symptoms in the first day or two; however, there is no longer-term benefit and they have adverse effects such as drowsiness. Other decongestants such as pseudoephedrine appear effective in adults. Combined oral analgesics, antihistaminics, and decongestants are generally effective for older children and adults. Ipratropium nasal spray may reduce the symptoms of a runny nose but has little effect on stuffiness. Ipratropium may also help with coughs in adults. The safety and effectiveness of nasal decongestant use in children is unclear.
Due to lack of studies, it is not known whether increased fluid intake improves symptoms or shortens respiratory illness. As of 2017, heated and humidified air, such as via RhinoTherm, is of unclear benefit. One study has found chest vapor rub to provide some relief of nocturnal cough, congestion, and sleep difficulty.
Some experts advise against physical exercise if there are symptoms such as fever, widespread muscle aches or fatigue. It is regarded as safe to perform moderate exercise if the symptoms are confined to the head, including runny nose, nasal congestion, sneezing, or a minor sore throat. There is a popular belief that having a hot drink can help with cold symptoms, but evidence to support this is very limited.
Antibiotics and antivirals
Antibiotics have no effect against viral infections, including the common cold. Due to their side effects, antibiotics cause overall harm but nevertheless are still frequently prescribed. Some of the reasons that antibiotics are so commonly prescribed include people's expectations for them, physicians' desire to help, and the difficulty in excluding complications that may be amenable to antibiotics. There are no effective antiviral drugs for the common cold even though some preliminary research has shown benefits.
Zinc
Zinc supplements may shorten the duration of colds by up to 33% and reduce the severity of symptoms if supplementation begins within 24 hours of the onset of symptoms. Some zinc remedies applied directly to the inside of the nose have led to the loss of the sense of smell. One 2017 review did not recommend the use of zinc for the common cold for various reasons, whereas reviews from 2017 and 2018 recommended its use but also advocated further research on the topic.
Alternative medicine
While there are many alternative medicines and Chinese herbal medicines supposed to treat the common cold, there is insufficient scientific evidence to support their use. As of 2015, there is weak evidence to support nasal irrigation with saline. There is no firm evidence that Echinacea products or garlic provide any meaningful benefit in treating or preventing colds.
Vitamins C and D
Vitamin C supplementation does not affect the incidence of the common cold, but may reduce its duration. There is no conclusive evidence that vitamin D supplementation is efficacious in the prevention or treatment of respiratory tract infections.
Prognosis
The common cold is generally mild and self-limiting with most symptoms generally improving in a week. In children, half of cases resolve in 10 days and 90% in 15 days. Severe complications, if they occur, are usually in the very old, the very young, or those who are immunosuppressed. Secondary bacterial infections may occur resulting in sinusitis, pharyngitis, or an ear infection. It is estimated that sinusitis occurs in 8% and ear infection in 30% of cases.
Epidemiology
The common cold is the most common human disease and affects people all over the globe. Adults typically have two to three infections annually, and children may have six to ten colds a year (and up to twelve colds a year for school children). Rates of symptomatic infections increase in the elderly due to declining immunity.
Weather
A common misconception is that one can "catch a cold" merely through prolonged exposure to cold weather. Although it is now known that colds are viral infections, the prevalence of many such viruses is indeed seasonal, occurring more frequently during cold weather. The reason for the seasonality has not been conclusively determined. Possible explanations include cold temperature-induced changes in the respiratory system, decreased immune response, and low humidity causing an increase in viral transmission rates, perhaps because dry air allows small viral droplets to disperse farther and stay in the air longer.
The apparent seasonality may also be due to social factors, such as people spending more time indoors near infected people, and especially children at school. Although normal exposure to cold does not increase one's risk of infection, severe exposure leading to a significant reduction of body temperature (hypothermia) may put one at greater risk for the common cold; while the evidence is controversial, the majority of it suggests that such exposure may increase susceptibility to infection.
History
While the cause of the common cold was identified in the 1950s, the disease appears to have been with humanity since its early history. Its symptoms and treatment are described in the Egyptian Ebers papyrus, the oldest existing medical text, written before the 16th century BCE. The name "cold" came into use in the 16th century, due to the similarity between its symptoms and those of exposure to cold weather.
In the United Kingdom, the Common Cold Unit (CCU) was set up by the Medical Research Council in 1946 and it was where the rhinovirus was discovered in 1956. In the 1970s, the CCU demonstrated that treatment with interferon during the incubation phase of rhinovirus infection protects somewhat against the disease, but no practical treatment could be developed. The unit was closed in 1989, two years after it completed research of zinc gluconate lozenges in the prevention and treatment of rhinovirus colds, the only successful treatment in the history of the unit.
Research directions
Antivirals have been tested for effectiveness in the common cold; as of 2009, none had been both found effective and licensed for use. There are trials of the antiviral drug pleconaril, which shows promise against picornaviruses, as well as trials of BTA-798. The oral form of pleconaril had safety issues and an aerosol form is being studied. The genomes of all known human rhinovirus strains have been sequenced.
Societal impact
The economic impact of the common cold is not well understood in much of the world. In the United States, the common cold leads to 75–100 million physician visits annually at a conservative cost estimate of $7.7 billion per year. Americans spend $2.9 billion on over-the-counter drugs and another $400 million on prescription medicines for symptom relief. More than one-third of people who saw a doctor received an antibiotic prescription, which has implications for antibiotic resistance. An estimated 22–189 million school days are missed annually due to a cold. As a result, parents missed 126 million workdays to stay home to care for their children. When added to the 150 million workdays missed by employees who have a cold, the total economic impact of cold-related work loss exceeds $20 billion per year. This accounts for 40% of time lost from work in the United States.
Mold
A mold or mould is one of the structures that certain fungi can form. The dust-like, colored appearance of molds is due to the formation of spores containing fungal secondary metabolites. The spores are the dispersal units of the fungi. Not all fungi form molds. Some fungi form mushrooms; others grow as single cells and are called microfungi (for example yeasts).
A large and taxonomically diverse number of fungal species form molds. The growth of hyphae results in discoloration and a fuzzy appearance, especially on food. The network of these tubular branching hyphae, called a mycelium, is considered a single organism. The hyphae are generally transparent, so the mycelium appears like very fine, fluffy white threads over the surface. Cross-walls (septa) may delimit connected compartments along the hyphae, each containing one or multiple, genetically identical nuclei. The dusty texture of many molds is caused by profuse production of asexual spores (conidia) formed by differentiation at the ends of hyphae. The mode of formation and shape of these spores is traditionally used to classify molds. Many of these spores are colored, making the fungus much more obvious to the human eye at this stage in its life-cycle.
Molds are considered to be microbes and do not form a specific taxonomic or phylogenetic grouping, but can be found in the divisions Zygomycota and Ascomycota. In the past, most molds were classified within the Deuteromycota. The name mold has also been used for groups now recognized as non-fungal, such as water molds and slime molds, which were once considered fungi.
Molds cause biodegradation of natural materials, which can be unwanted when it becomes food spoilage or damage to property. They also play important roles in biotechnology and food science in the production of various pigments, foods, beverages, antibiotics, pharmaceuticals and enzymes. Some diseases of animals and humans can be caused by certain molds: disease may result from allergic sensitivity to mold spores, from growth of pathogenic molds within the body, or from the effects of ingested or inhaled toxic compounds (mycotoxins) produced by molds.
Biology
There are thousands of known species of mold fungi with diverse life-styles including saprotrophs, mesophiles, psychrophiles and thermophiles, and a very few opportunistic pathogens of humans. They all require moisture for growth and some live in aquatic environments. Like all fungi, molds derive energy not through photosynthesis but from the organic matter on which they live, utilizing heterotrophy. Typically, molds secrete hydrolytic enzymes, mainly from the hyphal tips. These enzymes degrade complex biopolymers such as starch, cellulose and lignin into simpler substances which can be absorbed by the hyphae. In this way, molds play a major role in causing decomposition of organic material, enabling the recycling of nutrients throughout ecosystems. Many molds also synthesize mycotoxins and siderophores which, together with lytic enzymes, inhibit the growth of competing microorganisms. Molds can also grow on stored food for animals and humans, making the food unpalatable or toxic, and are thus a major source of food losses and illness. Many food preservation strategies (salting, pickling, jam-making, bottling, freezing, drying) work by preventing or slowing the growth of molds as well as of other microbes.
Molds reproduce by producing large numbers of small spores, which may contain a single nucleus or be multinucleate. Mold spores can be asexual (the products of mitosis) or sexual (the products of meiosis); many species can produce both types. Some molds produce small, hydrophobic spores that are adapted for wind dispersal and may remain airborne for long periods; in some the cell walls are darkly pigmented, providing resistance to damage by ultraviolet radiation. Other mold spores have slimy sheaths and are more suited to water dispersal. Mold spores are often spherical or ovoid single cells, but can be multicellular and variously shaped. Spores may cling to clothing or fur; some are able to survive extremes of temperature and pressure.
Although molds can grow on dead organic matter everywhere in nature, their presence is visible to the unaided eye only when they form large colonies. A mold colony does not consist of discrete organisms but is an interconnected network of hyphae called a mycelium. All growth occurs at hyphal tips, with cytoplasm and organelles flowing forwards as the hyphae advance over or through new food sources. Nutrients are absorbed at the hyphal tip. In artificial environments such as buildings, humidity and temperature are often stable enough to foster the growth of mold colonies, commonly seen as a downy or furry coating growing on food or other surfaces.
Few molds can begin growing at temperatures of 4 °C (39 °F) or below, so food is typically refrigerated at this temperature. When conditions do not enable growth to take place, molds may remain alive in a dormant state, depending on the species, within a large range of temperatures. The many different mold species vary enormously in their tolerance to temperature and humidity extremes. Certain molds can survive harsh conditions such as the snow-covered soils of Antarctica, refrigeration, highly acidic solvents, anti-bacterial soap and even petroleum products such as jet fuel.
Xerophilic molds are able to grow in relatively dry, salty, or sugary environments, where water activity (aw) is less than 0.85; other molds need more moisture.
Common molds
Common genera of molds include:
Acremonium
Alternaria
Aspergillus
Cladosporium
Fusarium
Mucor
Penicillium
Rhizopus
Stachybotrys
Trichoderma
Trichophyton
Food production
The Kōji molds are a group of Aspergillus species, notably Aspergillus oryzae, and secondarily A. sojae, that have been cultured in eastern Asia for many centuries. They are used to ferment a soybean and wheat mixture to make soybean paste and soy sauce. Koji molds break down the starch in rice, barley, sweet potatoes, etc., a process called saccharification, in the production of sake, shōchū and other distilled spirits. Koji molds are also used in the preparation of Katsuobushi.
Red rice yeast is a product of the mold Monascus purpureus grown on rice, and is common in Asian diets. The yeast contains several compounds collectively known as monacolins, which are known to inhibit cholesterol synthesis. A study has shown that red rice yeast used as a dietary supplement, combined with fish oil and healthy lifestyle changes, may help reduce "bad" cholesterol as effectively as certain commercial statin drugs. Nonetheless, other work has shown it may not be reliable (perhaps due to non-standardization) and even toxic to liver and kidneys.
Some sausages, such as salami, incorporate starter cultures of molds to improve flavor and reduce bacterial spoilage during curing. Penicillium nalgiovense, for example, may appear as a powdery white coating on some varieties of dry-cured sausage.
Other molds that have been used in food production include:
Fusarium venenatum – Quorn
Geotrichum candidum – cheese
Neurospora sitophila – oncom
Penicillium spp. – various cheeses including Brie and Blue cheese
Rhizomucor miehei – microbial rennet for making vegetarian and other cheeses
Rhizopus oligosporus – tempeh
Rhizopus oryzae – tempeh, jiuqu for jiuniang or precursor for making Chinese rice wine
Pharmaceuticals from molds
Alexander Fleming's accidental discovery of the antibiotic penicillin involved a Penicillium mold called Penicillium rubrum (although the species was later established to be Penicillium rubens). Fleming continued to investigate penicillin, showing that it could inhibit various types of bacteria found in infections and other ailments, but he was unable to produce the compound in the large amounts necessary for production of a medicine. His work was expanded by Clutterbuck, Lovell, and Raistrick, who began to work on the problem in 1931. This team was also unable to produce the pure compound in any large amount, and found that the purification process diminished its effectiveness and negated the anti-bacterial properties it had.
Howard Florey, Ernst Chain, Norman Heatley, and Edward Abraham, all at Oxford University, continued the work. They enhanced and developed the concentration technique by using organic solutions rather than water, and created the "Oxford Unit" to measure penicillin concentration within a solution. They managed to purify the solution, increasing its concentration by 45–50 times, but found that a higher concentration was possible. Experiments were conducted and the results published in 1941, though the quantities of penicillin produced were not always high enough for the treatments required. As this was during the Second World War, Florey sought US government involvement. With research teams in the UK and some in the US, industrial-scale production of crystallized penicillin was developed during 1941–1944 by the USDA and by Pfizer.
Several statin cholesterol-lowering drugs (such as lovastatin, from Aspergillus terreus) are derived from molds.
The immunosuppressant drug cyclosporine, used to suppress the rejection of transplanted organs, is derived from the mold Tolypocladium inflatum.
Health effects
Molds are ubiquitous, and mold spores are a common component of household and workplace dust; however, when mold spores are present in large quantities, they can pose a health hazard to humans, potentially causing allergic reactions and respiratory problems.
Some molds also produce mycotoxins that can pose serious health risks to humans and animals. Some studies claim that exposure to high levels of mycotoxins can lead to neurological problems and, in some cases, death. Prolonged exposure, e.g. daily home exposure, may be particularly harmful. Research on the health impacts of mold has not been conclusive. The term "toxic mold" refers to molds that produce mycotoxins, such as Stachybotrys chartarum, and not to all molds in general.
Mold in the home can usually be found in damp, dark or steamy areas, e.g. bathrooms, kitchens, cluttered storage areas, recently flooded areas, basement areas, plumbing spaces, areas with poor ventilation and outdoors in humid environments. Symptoms caused by mold allergy are: watery, itchy eyes; a chronic cough; headaches or migraines; difficulty breathing; rashes; tiredness; sinus problems; nasal blockage and frequent sneezing.
Molds can also pose a hazard to human and animal health when they are consumed following the growth of certain mold species in stored food. Some species produce toxic secondary metabolites, collectively termed mycotoxins, including aflatoxins, ochratoxins, fumonisins, trichothecenes, citrinin, and patulin. These toxic properties may be used for the benefit of humans when the toxicity is directed against other organisms; for example, penicillin adversely affects the growth of Gram-positive bacteria (e.g. Clostridium species), certain spirochetes and certain fungi.
Growth in buildings and homes
Mold growth in buildings generally occurs as fungi colonize porous building materials, such as wood. Many building products commonly incorporate paper, wood products, or solid wood members, such as paper-covered drywall, wood cabinets, and insulation. Interior mold colonization can lead to a variety of health problems as microscopic airborne reproductive spores, analogous to tree pollen, are inhaled by building occupants. High quantities of indoor airborne spores as compared to exterior conditions are strongly suggestive of indoor mold growth. Determination of airborne spore counts is accomplished by way of an air sample, in which a specialized pump with a known flow rate is operated for a known period of time. To account for background levels, air samples should be drawn from the affected area, a control area, and the exterior.
The air sampler pump draws in air and deposits microscopic airborne particles on a culture medium. The medium is cultured in a laboratory and the fungal genus and species are determined by visual microscopic observation. Laboratory results also quantify fungal growth by way of a spore count for comparison among samples. The pump operation time is recorded and when multiplied by pump flow rate results in a specific volume of air obtained. Although a small volume of air is actually analyzed, common laboratory reports extrapolate the spore count data to estimate spores that would be present in a cubic meter of air.
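As a rough illustration of the extrapolation described above, the sketch below (in Python) converts a laboratory spore count into an estimated concentration per cubic meter of air; the pump flow rate, run time, and count are hypothetical example values, not prescribed by any particular laboratory method.

```python
def spores_per_cubic_meter(spore_count, flow_rate_lpm, sample_minutes):
    """Extrapolate a laboratory spore count to spores per cubic meter of air."""
    sampled_liters = flow_rate_lpm * sample_minutes   # pump flow rate x run time
    sampled_m3 = sampled_liters / 1000.0              # 1 cubic meter = 1000 liters
    return spore_count / sampled_m3

# Hypothetical example: 150 spores counted from a pump run at 15 L/min for
# 10 minutes samples 0.15 m^3 of air, giving an estimate of 1000 spores/m^3.
print(spores_per_cubic_meter(150, 15, 10))
```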
Mold growth requires specific environmental conditions, and spores will usually only develop into a full-blown outbreak if those conditions are met. Various practices can be followed to mitigate mold issues in buildings, the most important of which is to reduce moisture levels that can facilitate mold growth. Air filtration reduces the number of spores available for germination, especially when a High Efficiency Particulate Air (HEPA) filter is used. A properly functioning AC unit also reduces the relative humidity in rooms. The United States Environmental Protection Agency (EPA) currently recommends that relative humidity be maintained below 60%, ideally between 30% and 50%, to inhibit mold growth.
Eliminating the moisture source is the first step at fungal remediation. Removal of affected materials may also be necessary for remediation, if materials are easily replaceable and not part of the load-bearing structure. Professional drying of concealed wall cavities and enclosed spaces such as cabinet toekick spaces may be required. Post-remediation verification of moisture content and fungal growth is required for successful remediation. Many contractors perform post-remediation verification themselves, but property owners may benefit from independent verification. Left untreated, mold can potentially cause serious cosmetic and structural damage to a property.
Use in art
Various artists have incorporated mold into their work. Daniele Del Nero, for example, constructs scale models of houses and office buildings and then induces mold to grow on them, giving them an unsettling, reclaimed-by-nature look. Stacy Levy sandblasts enlarged images of mold onto glass, then allows mold to grow in the crevices she has made, creating a macro-micro portrait. Sam Taylor-Johnson, director of Nowhere Boy and Fifty Shades of Grey (among others), has made a number of time-lapse films capturing the gradual decay of classically arranged still lifes.
See also
Slime mold
Water mold
External links
The EPA's guide to mold
Mucous membrane
A mucous membrane or mucosa is a membrane that lines various cavities in the body of an organism and covers the surface of internal organs. It consists of one or more layers of epithelial cells overlying a layer of loose connective tissue. It is mostly of endodermal origin and is continuous with the skin at body openings such as the eyes, eyelids, ears, inside the nose, inside the mouth, lips, the genital areas, the urethral opening and the anus. Some mucous membranes secrete mucus, a thick protective fluid. The function of the membrane is to stop pathogens and dirt from entering the body and to prevent bodily tissues from becoming dehydrated.
Structure
The mucosa is composed of one or more layers of epithelial cells that secrete mucus, and an underlying lamina propria of loose connective tissue. The type of cells and type of mucus secreted vary from organ to organ and each can differ along a given tract.
Mucous membranes line the digestive, respiratory and reproductive tracts and are the primary barrier between the external world and the interior of the body; in an adult human the total surface area of the mucosa is about 400 square meters while the surface area of the skin is about 2 square meters. Along with providing a physical barrier, they also contain key parts of the immune system and serve as the interface between the body proper and the microbiome.
Examples
Some examples include:
Endometrium: the mucosa of the uterus
Gastric mucosa
Intestinal mucosa
Nasal mucosa
Olfactory mucosa
Oral mucosa
Penile mucosa
Respiratory mucosa
Vaginal mucosa
Frenulum of tongue
Anal canal
Conjunctiva
Development
Developmentally, the majority of mucous membranes are of endodermal origin. Exceptions include the palate, cheeks, floor of the mouth, gums, lips and the portion of the anal canal below the pectinate line, which are all ectodermal in origin.
Function
One of its functions is to keep the tissue moist (for example in the respiratory tract, including the mouth and nose). It also plays a role in absorbing and transforming nutrients. Mucous membranes also protect the body from itself. For instance, mucosa in the stomach protects it from stomach acid, and mucosa lining the bladder protects the underlying tissue from urine. In the uterus, the mucous membrane is called the endometrium, and it swells each month and is then eliminated during menstruation.
Nutrition
Niacin and vitamin A are essential nutrients that help maintain mucous membranes.
See also
Alkaline mucus
Mucin
Mucociliary clearance
Mucocutaneous boundary
Mucosal immunology
Mucosal-associated invariant T cell
Mucosal melanoma
Rete pegs
Occupational hygiene
Occupational hygiene or industrial hygiene (IH) is the anticipation, recognition, evaluation, control, and confirmation (ARECC) of protection from risks associated with exposures to hazards in, or arising from, the workplace that may result in injury, illness, impairment, or affect the well-being of workers and members of the community. These hazards or stressors are typically divided into the categories biological, chemical, physical, ergonomic and psychosocial. The risk of a health effect from a given stressor is a function of the hazard multiplied by the exposure to the individual or group. For chemicals, the hazard can be understood by the dose response profile most often based on toxicological studies or models. Occupational hygienists work closely with toxicologists (see Toxicology) for understanding chemical hazards, physicists (see Physics) for physical hazards, and physicians and microbiologists for biological hazards (see Microbiology, Tropical medicine, Infection). Environmental and occupational hygienists are considered experts in exposure science and exposure risk management. Depending on an individual's type of job, a hygienist will apply their exposure science expertise for the protection of workers, consumers and/or communities.
The profession of occupational hygienist
The British Occupational Hygiene Society (BOHS) states that "occupational hygiene is about the prevention of ill-health from work, through recognizing, evaluating and controlling the risks". The International Occupational Hygiene Association (IOHA) refers to occupational hygiene as the discipline of anticipating, recognizing, evaluating and controlling health hazards in the working environment with the objective of protecting worker health and well-being and safeguarding the community at large. The term occupational hygiene (used in the UK and Commonwealth countries as well as much of Europe) is synonymous with industrial hygiene (used in the US, Latin America, and other countries that received initial technical support or training from US sources). The term industrial hygiene traditionally stems from industries such as construction, mining or manufacturing, while occupational hygiene refers to all types of industry, including those listed for industrial hygiene as well as financial and support services industries, and covers "work", "workplace" and "place of work" in general. Environmental hygiene addresses similar issues to occupational hygiene but is likely to be about broad industry or broad issues affecting the local community, broader society, region or country.
The profession of occupational hygiene uses strict and rigorous scientific methodology and often requires professional judgment based on experience and education in determining the potential for hazardous exposure risks in workplace and environmental studies. These aspects of occupational hygiene are often referred to as the "art" of occupational hygiene, a term used in a similar sense to the "art" of medicine. In fact, "occupational hygiene" is an aspect of preventive medicine, and in particular of occupational medicine, in that its goal is to prevent industrial disease using the science of risk management, exposure assessment and industrial safety. Ultimately, professionals seek to implement "safe" systems, procedures or methods to be applied in the workplace or to the environment. Prevention of exposure to long working hours has been identified as a focus for occupational hygiene after a landmark United Nations study estimated that this occupational hazard causes about 745,000 occupational fatalities per year worldwide, the largest burden of disease attributed to any single occupational hazard.
Industrial hygiene refers to the science of anticipating, recognizing, evaluating, and controlling workplaces to prevent illness or injuries to the workers. Industrial hygienists use various environmental monitoring and analytical methods to establish how workers are exposed. In turn, they employ techniques such as engineering and work practice controls to control any potential health hazards.
Anticipation involves identifying potential hazards in the workplace before they are introduced. Anticipated health hazards range from those that are reasonably expected to those that are merely speculative. Anticipation therefore requires the industrial hygienist to understand the nature of changes in the processes, products, environments, and workforces of the workplaces and how they can affect workers' well-being.
Recognition involves identifying hazards that are present in the workplace. Timely recognition of hazards allows workers' exposure to be minimized by removing or reducing the hazard at its source or isolating the workers from it; engineering, work practice, and administrative controls are the primary means of achieving this.
Evaluation of a worksite is a significant step that helps the industrial hygienist identify jobs and worksites that are potential sources of problems. During the evaluation, the industrial hygienist measures and identifies problem tasks and exposures. The most effective worksite assessments include all jobs, work activities, and operations. The industrial hygienist also reviews research and evaluations of how given physical or chemical hazards affect workers' health. If the workplace contains a health hazard, the industrial hygienist recommends appropriate corrective actions.
Control measures include removing toxic chemicals and replacing harmful materials with less hazardous ones. They also involve confining work operations, enclosing work processes, and installing general and local ventilation systems. Work practice controls change how a task is performed; basic examples include following established procedures to reduce exposures in the workplace, inspecting and maintaining processes regularly, and implementing sensible workplace routines.
History
The industrial hygiene profession gained respectability in 1700 when Bernardino Ramazzini published a comprehensive book on industrial medicine. The book was written in Latin and was known as De Morbis Artificum Diatriba, meaning "The Diseases of Workmen". The book detailed accurate descriptions of the occupational diseases from which the workers of his time suffered. Ramazzini was crucial to the industrial hygiene profession's future because he asserted that occupational diseases should be studied in the workplace environment and not in hospital wards.
Industrial hygiene in the United States started taking shape in the early 20th century. Before then, many workers risked their lives daily working in industrial settings such as manufacturing, mills, construction, and mines. Currently, statistics on work safety are usually measured by the number of injuries and deaths each year. Before the 20th century, these kinds of statistics were hard to come by because it appeared no one cared enough to make tracking of job injuries and deaths a priority.
Industrial hygiene received another boost in the early 20th century when Alice Hamilton led an effort to improve industrial hygiene. She began by observing industrial conditions first-hand and then startled mine owners, factory managers, and state officials with evidence that there was a correlation between workers' illnesses and their exposure to chemical toxins. She presented definitive proposals for eliminating unhealthful working conditions. As a result, the US federal government also began investigating health conditions in industry. In 1911, the first state workers' compensation laws were passed.
The social role of occupational hygiene
Occupational hygienists have been involved historically in changing society's perception of the nature and extent of hazards and in preventing exposures in workplaces and communities. Many occupational hygienists work day-to-day with industrial situations that require control or improvement of the workplace situation. However, larger social issues affecting whole industries have also occurred; for example, asbestos exposures since 1900 have affected the lives of tens of thousands of people. Occupational hygienists have become more engaged in understanding and managing exposure risks to consumers from products, with regulations such as REACh (Registration, Evaluation, Authorisation and Restriction of Chemicals) enacted in 2006.
More recent issues affecting broader society include Legionnaires' disease (legionellosis), first recognized in 1976; radon in the 1990s; and, in the 2000s, the effects of mold arising from indoor air quality problems in the home and at work. In the later part of the 2000s, concern was also raised about the health effects of nanoparticles.
Many of these issues have required the coordination of medical and paraprofessionals in detecting and then characterizing the nature of the issue, both in terms of the hazard and in terms of the risk to the workplace and ultimately to society. This has involved occupational hygienists in research, collection of data and development of suitable and satisfactory control methodologies.
General activities
The occupational hygienist may be involved with the assessment and control of physical, chemical, biological or environmental hazards in the workplace or community that could cause injury or disease. Physical hazards may include noise, temperature extremes, illumination extremes, ionizing or non-ionizing radiation, and ergonomics. Chemical hazards related to dangerous goods or hazardous substances are frequently investigated by occupational hygienists. Other related areas including indoor air quality (IAQ) and safety may also receive the attention of the occupational hygienist. Biological hazards may stem from the potential for Legionella exposure at work; biological injuries or effects at work, such as dermatitis, may also be investigated.
As part of the investigation process, the occupational hygienist may be called upon to communicate effectively regarding the nature of the hazard, the potential for risk, and the appropriate methods of control. Appropriate controls are selected from the hierarchy of control: elimination, substitution, engineering, administration and personal protective equipment (PPE), to control the hazard or eliminate the risk. Such controls may range from recommendations as simple as appropriate PPE, such as a 'basic' particulate dust mask, to designing dust extraction ventilation systems, workplaces or management systems to manage people and programs for the preservation of health and well-being of those who enter a workplace.
Examples of occupational hygiene include:
Analysis of physical hazards such as noise, which may require use of hearing protection earplugs and/or earmuffs to prevent hearing loss.
Developing plans and procedures to protect against infectious disease exposure in the event of a flu pandemic.
Monitoring the air for hazardous contaminants which may potentially lead to worker illness or death.
Workplace assessment methods
Although there are many aspects to occupational hygiene work, the best known and most sought after is determining or estimating potential or actual exposures to hazards. For many chemicals and physical hazards, occupational exposure limits have been derived using toxicological, epidemiological and medical data, allowing hygienists to reduce the risks of health effects by implementing the "Hierarchy of Hazard Controls". Several methods can be applied in assessing the workplace or environment for exposure to a known or suspected hazard. Occupational hygienists do not rely simply on the accuracy of the equipment or method used, but on knowing with certainty and precision the limits of the equipment or method being used and the error or variance introduced by using that particular equipment or method. Well known methods for performing occupational exposure assessments can be found in the book A Strategy for Assessing and Managing Occupational Exposures, published by AIHA Press.
The main steps outlined for assessing and managing occupational exposures:
Basic Characterization (identify agents, hazards, people potentially exposed and existing exposure controls)
Exposure Assessment (select occupational exposure limits, hazard bands, relevant toxicological data to determine if exposures are "acceptable", "unacceptable" or "uncertain")
Exposure Controls (for "unacceptable" or "uncertain" exposures)
Further Information Gathering (for "uncertain" exposures)
Hazard Communication (for all exposures)
Reassessment (as needed) / Management of Change
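As a minimal sketch of the exposure-judgment arithmetic in the assessment step above, the following compares a measured exposure with an occupational exposure limit (OEL). The 50%-of-OEL decision band is an illustrative convention only, not a value taken from the AIHA strategy or any regulation, and real programs use statistical criteria rather than a single measurement.

```python
def categorize_exposure(measured, oel, decision_band=0.5):
    """Classify an exposure relative to its occupational exposure limit (OEL).

    Returns "acceptable", "uncertain" or "unacceptable". The decision_band
    fraction is illustrative only; practical assessments typically judge the
    estimated upper percentile of the exposure distribution against the OEL.
    """
    ratio = measured / oel
    if ratio < decision_band:
        return "acceptable"
    if ratio <= 1.0:
        return "uncertain"       # gather further information or apply controls
    return "unacceptable"        # exposure controls required

# Hypothetical example: a measurement of 0.03 mg/m^3 against an OEL of 0.1 mg/m^3
print(categorize_exposure(0.03, 0.1))   # -> "acceptable"
```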
Basic characterization, hazard identification and walk-through surveys
The first step in understanding health risks related to exposures requires the collection of "basic characterization" information from available sources. A walk-through survey is a traditional method applied by occupational hygienists to initially survey a workplace or environment and to determine both the types of hazards present (e.g. noise, chemicals, radiation) and the possible exposures arising from them. The walk-through survey can be targeted or limited to particular hazards such as silica dust, or noise, to focus attention on control of all hazards to workers. A full walk-through survey is frequently used to provide information on establishing a framework for future investigations, prioritizing hazards, determining the requirements for measurement and establishing some immediate control of potential exposures. The Health Hazard Evaluation Program from the National Institute for Occupational Safety and Health is an example of an industrial hygiene walk-through survey. Other sources of basic characterization information include worker interviews, observing exposure tasks, material safety data sheets, workforce scheduling, production data, equipment and maintenance schedules to identify potential exposure agents and people possibly exposed.
The information that needs to be gathered from sources should apply to the specific type of work from which the hazards can come from. As mentioned previously, examples of these sources include interviews with people who have worked in the field of the hazard, history and analysis of past incidents, and official reports of work and the hazards encountered. Of these, the personnel interviews may be the most critical in identifying undocumented practices, events, releases, hazards and other relevant information. Once the information is gathered from a collection of sources, it is recommended for these to be digitally archived (to allow for quick searching) and to have a physical set of the same information in order for it to be more accessible. One innovative way to display the complex historical hazard information is with a historical hazards identification map, which distills the hazard information into an easy to use graphical format.
Sampling
An occupational hygienist may use one or a number of commercially available electronic measuring devices to measure noise, vibration, ionizing and non-ionizing radiation, dust, solvents, gases, and so on. Each device is often specifically designed to measure a specific or particular type of contaminant. Electronic devices need to be calibrated before and after use to ensure the accuracy of the measurements taken and often require a system of certifying the precision of the instrument.
Collecting occupational exposure data is resource- and time-intensive, and can be used for different purposes, including evaluating compliance with government regulations and for planning preventive interventions. The usability of occupational exposure data is influenced by these factors:
Data storage (e.g. use of electronic and centralized databases with retention of all records)
Standardization of data collection
Collaboration between researchers, safety and health professionals and insurers
In 2018, in an effort to standardize industrial hygiene data collection among workers compensation insurers and to determine the feasibility of pooling collected IH data, IH air and noise survey forms were collected. Data fields were evaluated for importance and a study list of core fields was developed, and submitted to an expert panel for review before finalization. The final core study list was compared to recommendations published by the American Conference of Governmental Industrial Hygienists (ACGIH) and the American Industrial Hygiene Association (AIHA). Data fields essential to standardizing IH data collection were identified and verified. The "essential" data fields are available and could contribute to improved data quality and its management if incorporated into IH data management systems.
Canada and several European countries have been working to establish occupational exposure databases with standardized data elements and improved data quality. These databases include MEGA, COLCHIC, and CWED.
Dust sampling
Nuisance dust is considered to be the total dust in air including inhalable and respirable fractions.
Various dust sampling methods exist that are internationally recognised. Inhalable dust is determined using the modern equivalent of the Institute of Occupational Medicine (IOM) MRE 113A monitor. Inhalable dust is considered to be dust of less than 100 micrometers aerodynamic equivalent diameter (AED) that enters through the nose and/or mouth.
Respirable dust is sampled using a cyclone dust sampler designed to sample a specific fraction of dust AED at a set flow rate. The respirable dust fraction is dust that enters the 'deep lung' and is considered to be less than 10 micrometers AED.
Nuisance, inhalable and respirable dust fractions are all sampled using a constant volumetric pump for a specific sampling period. By knowing the mass of the sample collected and the volume of air sampled, a concentration for the fraction sampled can be given in milligrams (mg) per cubic meter (m3). From such samples, the amount of inhalable or respirable dust can be determined and compared to the relevant occupational exposure limits.
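The concentration arithmetic described above can be sketched as follows; the flow rate, run time, and filter mass gain are hypothetical example values.

```python
def dust_concentration_mg_m3(filter_mass_gain_mg, flow_rate_lpm, sample_minutes):
    """Concentration of the sampled dust fraction in mg per cubic meter.

    filter_mass_gain_mg -- post-sampling filter weight minus pre-weight, in mg
    flow_rate_lpm       -- constant volumetric pump flow rate, liters per minute
    sample_minutes      -- sampling period in minutes
    """
    sampled_m3 = (flow_rate_lpm * sample_minutes) / 1000.0
    return filter_mass_gain_mg / sampled_m3

# Hypothetical example: a cyclone run at 2.2 L/min for 480 minutes (1.056 m^3)
# collecting 0.12 mg of respirable dust gives roughly 0.11 mg/m^3.
print(round(dust_concentration_mg_m3(0.12, 2.2, 480), 3))
```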
By use of an inhalable, respirable or other suitable sampler (7-hole, 5-hole, etc.), these dust sampling methods can also be used to determine metal exposure in the air. This requires collection of the sample on a methyl cellulose ester (MCE) filter and acid digestion of the collection media in the laboratory followed by measuring metal concentration through atomic absorption spectroscopy or atomic emission spectroscopy. Both the UK Health and Safety Laboratory and the NIOSH Manual of Analytical Methods have specific methodologies for a broad range of metals in air found in industrial processing (smelting, foundries, etc.).
A further method exists for the determination of asbestos, fiberglass, synthetic mineral fiber and ceramic mineral fiber dust in air. This is the membrane filter method (MFM) and requires the collection of the dust on a gridded filter for estimation of exposure by the counting of 'conforming' fibers in 100 fields through a microscope. Results are quantified on the basis of number of fibers per milliliter of air (f/mL). Many countries strictly regulate the methodology applied to the MFM.
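The general form of the membrane filter calculation can be sketched as below: a fibre density per unit filter area is estimated from the counted fields, scaled up to the whole effective filter area, and divided by the sampled air volume. The field area, filter area and counts used in the example are hypothetical, and any real count must follow the nationally regulated version of the method.

```python
def fibers_per_ml(fibers_counted, fields_counted, field_area_mm2,
                  filter_area_mm2, sampled_liters):
    """Estimate an airborne fibre concentration (fibres/mL) from a filter count."""
    fiber_density = fibers_counted / (fields_counted * field_area_mm2)  # fibres per mm^2
    total_fibers = fiber_density * filter_area_mm2    # fibres on the whole filter
    sampled_ml = sampled_liters * 1000.0              # 1 liter = 1000 mL
    return total_fibers / sampled_ml

# Hypothetical example: 25 conforming fibres over 100 graticule fields of
# 0.00785 mm^2 each, a 385 mm^2 effective filter area and 480 L of air sampled.
print(round(fibers_per_ml(25, 100, 0.00785, 385, 480), 3))
```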
Chemical sampling
Two types of chemically absorbent tubes are used to sample for a wide range of chemical substances. Traditionally a chemical absorbent 'tube' (a glass or stainless steel tube of between 2 and 10 mm internal diameter) filled with very fine absorbent silica (hydrophilic) or carbon, such as coconut charcoal (lipophilic), is used in a sampling line where air is drawn through the absorbent material for a period of between four hours (minimum workplace sample) and 24 hours (environmental sample). The hydrophilic material readily absorbs water-soluble chemicals and the lipophilic material absorbs non-water-soluble materials. The absorbent material is then chemically or physically extracted and measurements performed using various gas chromatography or mass spectrometry methods. These absorbent tube methods have the advantage of being usable for a wide range of potential contaminants. However, they are relatively expensive methods, are time-consuming and require significant expertise in sampling and chemical analysis. A frequent complaint of workers is in having to wear the sampling pump (up to 1 kg) for several days of work to provide adequate data for determining the exposure with the required statistical certainty.
In the last few decades, advances have been made in 'passive' badge technology. These samplers can now be purchased to measure one chemical (e.g. formaldehyde) or a chemical type (e.g. ketones) or a broad spectrum of chemicals (e.g. solvents). They are relatively easy to set up and use. However, considerable cost can still be incurred in analysis of the 'badge'. They weigh 20 to 30 grams and workers do not complain about their presence. Unfortunately 'badges' may not exist for all types of workplace sampling that may be required, and the charcoal or silica method may sometimes have to be applied.
From the sampling method, results are expressed in milligrams per cubic meter (mg/m3) or parts per million (PPM) and compared to the relevant occupational exposure limits.
It is a critical part of the exposure determination that the method of sampling for the specific contaminant exposure is directly linked to the exposure standard used. Many countries regulate both the exposure standard, the method used to determine the exposure and the methods to be used for chemical or other analysis of the samples collected.
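For vapours and gases, the conversion between mg/m3 and ppm depends on the molar volume of an ideal gas and the substance's molecular weight. The sketch below assumes 25 °C and 1 atmosphere (molar volume approximately 24.45 L/mol); toluene is used purely as a hypothetical example.

```python
MOLAR_VOLUME_L = 24.45  # liters per mole of an ideal gas at 25 degrees C and 1 atm

def mg_m3_to_ppm(concentration_mg_m3, molecular_weight_g_mol):
    """Convert a gas or vapour concentration from mg/m^3 to parts per million."""
    return concentration_mg_m3 * MOLAR_VOLUME_L / molecular_weight_g_mol

def ppm_to_mg_m3(concentration_ppm, molecular_weight_g_mol):
    """Convert parts per million back to mg/m^3 under the same conditions."""
    return concentration_ppm * molecular_weight_g_mol / MOLAR_VOLUME_L

# Hypothetical example: toluene (molecular weight about 92.1 g/mol);
# 188 mg/m^3 corresponds to roughly 50 ppm at 25 degrees C and 1 atm.
print(round(mg_m3_to_ppm(188, 92.1), 1))
```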
Noise sampling
Two types of noise are environmental noise, which is unwanted sound that occurs outdoors, and occupational noise, the sound that is received by employees while they are in the workplace. Environmental noise can originate from various sources depending on the activity, location, and time. Environmental noise can be generated from transportation such as road, rail, and air traffic, or construction and building services, and even domestic and leisure activities.
There are limits on noise exposure: the limit for environmental noise is 70 dB(A) averaged over 24 hours of exposure, while the limit for occupational noise is 85 dB(A) per NIOSH, or 90 dB(A) per OSHA, for an 8-hour work period. To assess exposure against these limits, noise can be measured with instruments including the sound level meter (SLM), sound level meter app, integrating sound level meter (ISLM), impulse sound level meter (impulse SLM), noise dosimeter, and personal sound exposure meter (PSEM).
Sound level meter (SLM): measures the sound level at a single point of time and consequently requires multiple measurements to be taken at different times of the day. The SLM is primarily used for measuring relatively stable sound levels; there is increased difficulty in measuring the average sound exposure if the noise levels vary greatly.
Sound Level Meter App is a program that can be downloaded to a mobile device. It receives noise through the phone's built-in or external microphone and displays the sound level measurement from the app's sound level meters and noise dosimeters.
Integrating sound level meter (ISLM): measures the equivalent sound levels within the measurement period. Because the ISLM measures noise in a particular area, it is difficult to measure a worker's personal exposure as they move throughout a workspace.
Impulse sound level meter (Impulse SLM): measures the peak of each sound impulse. The most optimal conditions to measure the peaks occur when there is little background noise.
Noise dosimeter: collects the sound level for a given point in time, as well as different sound levels across time. The noise dosimeter can measure personal exposure levels and can be used in the areas with a high risk of fire.
Personal sound exposure meter (PSEM): worn by employees while they work. The advantage of the PSEM is that it eliminates the need for noise assessors to follow up with workers when the assessors measure the noise levels of the work areas.
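As a rough illustration of how the limits above translate into a daily noise dose, the sketch below accumulates exposure segments the way a dosimeter or PSEM does. The default criterion (90 dB(A)) and 5 dB exchange rate follow the OSHA convention; the NIOSH recommendation uses 85 dB(A) and a 3 dB exchange rate. The example shift is hypothetical.

```python
def allowed_hours(level_dba, criterion=90.0, exchange_rate=5.0):
    """Permissible exposure duration (hours) at a given A-weighted sound level."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

def noise_dose_percent(exposures, criterion=90.0, exchange_rate=5.0):
    """Daily noise dose, in percent, from (level_dBA, hours) exposure segments."""
    return 100.0 * sum(hours / allowed_hours(level, criterion, exchange_rate)
                       for level, hours in exposures)

# Hypothetical shift: 4 hours at 95 dB(A) plus 4 hours at 85 dB(A).
shift = [(95.0, 4.0), (85.0, 4.0)]
print(round(noise_dose_percent(shift), 1))                                      # 125.0 on the OSHA basis
print(round(noise_dose_percent(shift, criterion=85.0, exchange_rate=3.0), 1))   # about 554 on the NIOSH basis
```

A dose above 100% indicates that the applicable 8-hour limit has been exceeded.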
Excessive noise can lead to occupational hearing loss. 12% of workers report having hearing difficulties, making this the third most common chronic disease in the U.S. Among these workers, 24% have hearing difficulties caused by occupational noise, with 8% affected by tinnitus, and 4% having both hearing difficulties and tinnitus.
Ototoxic chemicals including solvents, metals, compounds, asphyxiants, nitriles, and pharmaceuticals, may contribute further to hearing loss.
Exposure management and controls
The hierarchy of control defines the approach used to reduce exposure risks protecting workers and communities. These methods include elimination, substitution, engineering controls (isolation or ventilation), administrative controls and personal protective equipment. Occupational hygienists, engineers, maintenance, management and employees should all be consulted for selecting and designing the most effective and efficient controls based on the hierarchy of control.
Professional societies
The development of industrial hygiene societies originated in the United States, beginning with the first convening of members for the American Conference of Governmental Industrial Hygienists in 1938, and the formation of the American Industrial Hygiene Association in 1939. In the United Kingdom, the British Occupational Hygiene Society started in 1953. Through the years, professional occupational societies have formed in many different countries, leading to the formation of the International Occupational Hygiene Association in 1987, in order to promote and develop occupational hygiene worldwide through the member organizations. The IOHA has grown to 29 member organizations, representing over 20,000 occupational hygienists worldwide, with representation from countries present in every continent.
Peer-reviewed literature
There are several academic journals specifically focused on publishing studies and research in the occupational health field. The Journal of Occupational and Environmental Hygiene (JOEH) has been published jointly since 2004 by the American Industrial Hygiene Association and the American Conference of Governmental Industrial Hygienists, replacing the former American Industrial Hygiene Association Journal and Applied Occupational & Environmental Hygiene journals. Another seminal occupational hygiene journal would be The Annals of Occupational Hygiene, published by the British Occupational Hygiene Society since 1958. Further, NIOSH maintains a searchable bibliographic database (NIOSHTIC-2) of occupational safety and health publications, documents, grant reports, and other communication products.
Occupational hygiene as a career
Examples of occupational hygiene careers include:
Compliance officer on behalf of regulatory agency
Professional working on behalf of company for the protection of the workforce
Consultant working on behalf of companies
Researcher performing laboratory or field occupational hygiene work
Education
The technical knowledge base of occupational hygiene comes from competent training in the following areas of science and management:
Basic sciences (biology, chemistry, mathematics (statistics), physics)
Occupational diseases (illness, injury and health surveillance (biostatistics, epidemiology, toxicology))
Health hazards (biological, chemical and physical hazards, ergonomics and human factors)
Working environments (mining, industrial, manufacturing, transport and storage, service industries and offices)
Programme management principles (professional and business ethics, work site and incident investigation methods, exposure guidelines, occupational exposure limits, jurisdictional based regulations, hazard identification, risk assessment and risk communication, data management, fire evacuation and other emergency responses)
Sampling, measurement and evaluation practices (instrumentation, sampling protocols, methods or techniques, analytical chemistry)
Hazard controls (elimination, substitution, engineering, administrative, PPE and air conditioning and extraction ventilation)
Environment (air pollution, hazardous waste)
However, it is not rote knowledge that identifies a competent occupational hygienist. There is an "art" to applying the technical principles in a manner that provides a reasonable solution for workplace and environmental issues. In effect, an experienced mentor is required to show a new occupational hygienist how to apply the learned scientific and management knowledge in the workplace and to environmental issues in order to resolve the problem satisfactorily.
To be a professional occupational hygienist, experience in as wide a practice as possible is required to demonstrate knowledge in areas of occupational hygiene. This is difficult for "specialists" or those who practice in narrow subject areas. Limiting experience to an individual subject such as asbestos remediation, confined spaces, indoor air quality, or lead abatement, or learning only through a textbook or "review course", can be a disadvantage when required to demonstrate competence in other areas of occupational hygiene.
The information presented here can be considered only an outline of the requirements for professional occupational hygiene training, because the actual requirements in any country, state, or region may vary due to the educational resources available, industry demand, or regulatory mandates.
During 2010, the Occupational Hygiene Training Association (OHTA), through sponsorship provided by the IOHA, initiated a training scheme for those with an interest in, or a need for, training in occupational hygiene. These training modules can be downloaded and used freely. The available subject modules (Basic Principles in Occupational Hygiene, Health Effects of Hazardous Substances, Measurement of Hazardous Substances, Thermal Environment, Noise, Asbestos, Control, Ergonomics) are aimed at the ‘foundation’ and ‘intermediate’ levels in occupational hygiene. Although the modules can be used freely without supervision, attendance at an accredited training course is encouraged. These training modules are available from ohtatraining.org.
Academic programs offering industrial hygiene bachelor's or master's degrees in the United States may apply to the Accreditation Board for Engineering and Technology (ABET) to have their programs accredited. As of October 1, 2006, 27 institutions had accredited industrial hygiene programs. Accreditation is not available for doctoral programs.
In the U.S., the training of IH professionals is supported by NIOSH through their NIOSH Education and Research Centers.
Professional credentials
Australia
In 2005, the Australian Institute of Occupational Hygiene (AIOH) accredited professional occupational hygienists through a certification scheme. Occupational Hygienists in Australia certified through this scheme are entitled to use the phrase Certified Occupational Hygienist (COH) as part of their qualifications.
Hong Kong
The Registered Professional Hygienist Registration & Examination Board (RPH R&EB) was set up by the Council of the Hong Kong Institute of Occupational & Environmental Hygiene (HKIOEH) to enhance the professional development of occupational hygienists and to provide a path for persons who reach professional maturity in the field of occupational hygiene to obtain a qualification recognised by peer professionals. Under the HKIOEH, the RPH R&EB operates the registration program for the Registered Professional Hygienist (RPH) and a qualifying examination to a standard meeting the practice recognised by the National Accreditation Recognition (NAR) Committee of the International Occupational Hygiene Association (IOHA).
Saudi Arabia
The Saudi Arabian Ministry of Health's Occupational Health Directorate and Labor Office are the government agencies responsible for decisions and surveillance related to occupational hygiene. Professional occupational hygiene and safety education programs overseen by these offices are available through Saudi Arabian colleges.
United States
Practitioners who successfully meet specific education and work-experience requirements and pass a written examination administered by the Board for Global EHS Credentialing (BGC) are authorized to use the term Certified Industrial Hygienist (CIH) or the discontinued Certified Associate Industrial Hygienist (CAIH). Both of these terms have been codified into law in many states in the United States to identify minimum qualifications of individuals having oversight over certain activities that may affect employee and general public health.
After the initial certification, the CIH or CAIH maintains their certification by meeting on-going requirements for ethical behavior, education, and professional activities (e.g., active practice, technical committees, publishing, teaching).
Certification examinations are offered during a spring and fall testing window each year worldwide.
The CIH designation is the best-known and most widely recognized industrial hygiene designation throughout the world. There are approximately 6800 CIHs in the world, making BGC the largest industrial hygiene certification organization. The CAIH certification program was discontinued in 2006. Those who were certified as a CAIH retain their certification through ongoing certification maintenance. People who are currently certified by BGC can be found in a public roster.
The BGC is a certification board recognized by the International Occupational Hygiene Association (IOHA). The CIH certification has been accredited internationally under the International Organization for Standardization/International Electrotechnical Commission standard ISO/IEC 17024. In the United States, the CIH has been accredited by the Council of Engineering and Scientific Specialty Boards (CESB).
Canada
In Canada, a practitioner who successfully completes a written test and an interview administered by the Canadian Registration Board of Occupational Hygienists can be recognized as a Registered Occupational Hygienist (ROH) or Registered Occupational Hygiene Technician (ROHT). There is also a separate designation, the Canadian Registered Safety Professional (CRSP).
United Kingdom
The Faculty of Occupational Hygiene, part of the British Occupational Hygiene Society, represents the interests of professional occupational hygienists.
Membership of the Faculty of Occupational Hygiene is confined to BOHS members who hold a recognized professional qualification in occupational hygiene.
There are three grades of Faculty membership:
Licentiate (LFOH) holders will have obtained the BOHS Certificate of Operational Competence in Occupational Hygiene and have at least three years’ practical experience in the field.
Members (MFOH) are normally holders of the Diploma of Professional Competence in Occupational Hygiene and have at least five years’ experience at a senior level.
Fellows (FFOH) are senior members of the profession who have made a distinct contribution to the advancement of occupational hygiene.
All Faculty members participate in a Continuous Professional Development (CPD) scheme designed to maintain a high level of current awareness and knowledge in occupational hygiene.
India
The Indian Society of Industrial Hygiene was formed in 1981 at Chennai, India. Subsequently, its secretariat was shifted to Kanpur. The society has registered about 400 members, about 90 of whom are life members. The society publishes a newsletter, "Industrial Hygiene Link".
See also
References
Further reading
World Health Organization Occupational Health Publications
International Labour Organization Encyclopaedia of Occupational Health and Safety
UK HSEline
EPA Indoor Air Quality on-line educator
Canada hazard information
A list of MSDS sites (Partly commercial)
(US) NIOSH Pocket Guide
(US) Agency for Toxic Substances and Disease Registry
(US) National Library of Medicine Toxicology Data Network
(US) National Toxicology Program
International Agency for Research on Cancer
RTECS (by subscription only)
Chemfinder
Inchem
Many larger businesses maintain their own product and chemical information.
There are also many subscription services available (CHEMINFO, OSH, CHEMpendium, Chem Alert, Chemwatch, Infosafe, RightAnswer.com's TOMES Plus, OSH Update, OSH-ROM, et cetera).
External links
OSHA standards on exposure to hexavalent chromium - Hexavalent Chromium National Emphasis Program
American Conference of Governmental Industrial Hygienists (ACGIH)
American Industrial Hygiene Association
Government of Hong Kong Occupational Safety and Health Council, Air Contaminants in the Workplace
View a PowerPoint Presentation Explaining What Industrial Hygiene Is - developed and made available by AIHA
The National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM)
UK Health and Safety Executive, Health and Safety Laboratory, Methods for the Determination of Hazardous Substances (MDHS)
International Organization for Standardization (ISO)
International Occupational Hygiene Association (IOHA)
Workplace Health Without Borders (WHWB)
Industrial hygiene
Occupational safety and health
Asphyxia
Asphyxia or asphyxiation is a condition of deficient supply of oxygen to the body which arises from abnormal breathing. Asphyxia causes generalized hypoxia, which affects all the tissues and organs, some more rapidly than others. There are many circumstances that can induce asphyxia, all of which are characterized by the inability of a person to acquire sufficient oxygen through breathing for an extended period of time. Asphyxia can cause coma or death.
In 2015, about 9.8 million cases of unintentional suffocation occurred, which resulted in 35,600 deaths. The word asphyxia is from Ancient Greek a- ("without") and sphyxis ("squeeze", i.e. throb of heart).
Causes
Situations that can cause asphyxia include, but are not limited to: airway obstruction (the constriction or blocking of airways, such as from asthma, laryngospasm, or simple blockage from the presence of foreign materials); being in environments where oxygen is not readily accessible, such as underwater, in a low-oxygen atmosphere, or in a vacuum; and environments where sufficiently oxygenated air is present but cannot be adequately breathed because of air contamination such as excessive smoke.
Other causes of oxygen deficiency include, but are not limited to:
Acute respiratory distress syndrome
Carbon monoxide inhalation, such as that from a car exhaust and the smoke produced by a lit cigarette: carbon monoxide has a higher affinity than oxygen for the hemoglobin in the blood's red blood corpuscles, bonding with it tenaciously and, in the process, displacing oxygen and preventing the blood from transporting oxygen around the body
Contact with certain chemicals, including pulmonary agents (such as phosgene) and blood agents (such as hydrogen cyanide)
Choking by obstruction of the airway by a foreign body (for example, when eating)
Drowning
Drug overdose
Exposure to extreme low pressure or vacuum from spacesuit damage (see space exposure)
Hanging, whether suspension or short drop hanging
Self-induced hypocapnia by hyperventilation, as in shallow water or deep water blackout and the choking game
Inert gas asphyxiation
Congenital central hypoventilation syndrome, or primary alveolar hypoventilation, a disorder of the autonomic nervous system in which a patient must consciously breathe; although it is often said that people with this disease will die if they fall asleep, this is not usually the case.
Respiratory diseases
Sleep apnea
A seizure which stops breathing activity
Strangling
Breaking the windpipe
Prolonged exposure to chlorine gas
Smothering
Smothering is a mechanical obstruction of the flow of air from the environment into the mouth and/or nostrils, for instance, by covering the mouth and nose with a hand, pillow, or a plastic bag. Smothering can be either partial or complete, where partial indicates that the person being smothered is able to inhale some air, although less than required. In a normal situation, smothering requires at least partial obstruction of both the nasal cavities and the mouth to lead to asphyxia. Smothering with the hands or chest is used in some combat sports to distract the opponent, and create openings for transitions, as the opponent is forced to react to the smothering.
In some cases, when performing certain routines, smothering is combined with simultaneous compressive asphyxia. One example is overlay, in which an adult accidentally rolls over onto an infant during co-sleeping, an accident that often goes unnoticed and is mistakenly thought to be sudden infant death syndrome.
Other accidents involving a similar mechanism are cave-ins, or when an individual is buried in sand, snow, dirt, or grain.
In homicidal cases, the term burking is often ascribed to a killing method that involves simultaneous smothering and compression of the torso. The term "burking" comes from the method William Burke and William Hare used to kill their victims during the West Port murders. They killed the usually intoxicated victims by sitting on their chests and suffocating them by putting a hand over their nose and mouth, while using the other hand to push the victim's jaw up. The corpses had no visible injuries, and were supplied to medical schools for money.
Compressive asphyxia
Compressive asphyxia (also called chest compression) is mechanically limiting expansion of the lungs by compressing the torso, preventing breathing. "Traumatic asphyxia" or "crush asphyxia" usually refers to compressive asphyxia resulting from being crushed or pinned under a large weight or force, or in a crowd crush. An example of traumatic asphyxia is a person who jacks up a car to work on it from below, and is crushed by the vehicle when the jack fails. Constrictor snakes such as boa constrictors kill through slow compressive asphyxia, tightening their coils every time the prey breathes out rather than squeezing forcefully. In cases of an adult co-sleeping with an infant ("overlay"), the heavy sleeping adult may move on top of the infant, causing compression asphyxia.
In fatal crowd disasters, compressive asphyxia from being crushed against the crowd causes all or nearly all deaths, rather than blunt trauma from trampling. This is what occurred at the Ibrox disaster in 1971, where 66 Rangers fans died; the 1979 The Who concert disaster where 11 died; the Luzhniki disaster in 1982, when 66 FC Spartak Moscow fans died; the Hillsborough disaster in 1989, where 97 Liverpool fans were crushed to death in an overcrowded terrace, 95 of the 97 dying from compressive asphyxia, either directly or through related complications; the 2021 Meron crowd crush where 45 died; the Astroworld Festival crowd crush in 2021, where 10 died; and the Seoul Halloween crowd crush in 2022, where at least 159 died during Halloween celebrations.
In confined spaces, people are forced to push against each other; evidence from bent steel railings in several fatal crowd accidents has shown horizontal forces over 4500 N (equivalent to a weight of approximately 450 kg or 1000 lbs). In cases where people have stacked up on each other in a human pile, it has been estimated that those at the bottom are subjected to around 380 kg (840 lbs) of compressive weight.
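As a rough consistency check on the figures above (an illustrative calculation, not taken from the cited evidence), converting force to the equivalent supported weight with g ≈ 9.81 m/s² gives

$$m = \frac{F}{g} = \frac{4500~\text{N}}{9.81~\text{m/s}^2} \approx 459~\text{kg} \approx 1010~\text{lb},$$

which agrees with the quoted "approximately 450 kg or 1000 lbs".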
"Positional" or "restraint" asphyxia is when a person is restrained and left alone prone, such as in a police vehicle, and is unable to reposition themself in order to breathe. The death can be in the vehicle, or following loss of consciousness to be followed by death while in a coma, having presented with anoxic brain damage. The asphyxia can be caused by facial compression, neck compression, or chest compression. This occurs mostly during restraint and handcuffing situations by law enforcement, including psychiatric incidents. The weight of the restraint(s) doing the compression may contribute to what is attributed to positional asphyxia. Therefore, passive deaths following custody restraint that are presumed to be the result of positional asphyxia may actually be examples of asphyxia occurring during the restraint process.
Chest compression is a technique used in various grappling combat sports, where it is sometimes called wringing, either to tire the opponent or as complementary or distractive moves in combination with pinning holds, or sometimes even as submission holds. Examples of chest compression include the knee-on-stomach position; or techniques such as leg scissors (also referred to as body scissors and in budō referred to as do-jime; 胴絞, "trunk strangle" or "body triangle") where a participant wraps his or her legs around the opponent's midsection and squeezes them together.
Pressing is a form of torture or execution using compressive asphyxia.
Perinatal asphyxia
Perinatal asphyxia is the medical condition resulting from deprivation of oxygen (hypoxia) to a newborn infant long enough to cause apparent harm. It results most commonly from a drop in maternal blood pressure or interference during delivery with blood flow to the infant's brain. This can occur as a result of inadequate circulation or perfusion, impaired respiratory effort, or inadequate ventilation. There has long been a scientific debate over whether newborn infants with asphyxia should be resuscitated with 100% oxygen or normal air. It has been demonstrated that high concentrations of oxygen lead to generation of oxygen free radicals, which have a role in reperfusion injury after asphyxia. Research by Ola Didrik Saugstad and others led to new international guidelines on newborn resuscitation in 2010, recommending the use of normal air instead of 100% oxygen.
Mechanical asphyxia
Classifications of different forms of asphyxia vary among literature, with differences in defining the concept of mechanical asphyxia being the most obvious.
In DiMaio and DiMaio's 2001 textbook on forensic pathology, mechanical asphyxia is caused by pressure from outside the body restricting respiration. Similar narrow definitions of mechanical asphyxia have occurred in Azmak's 2006 literature review of asphyxial deaths and Oehmichen and Auer's 2005 book on forensic neuropathology. According to DiMaio and DiMaio, mechanical asphyxia encompasses positional asphyxia, traumatic asphyxia, and "human pile" deaths.
In Shkrum and Ramsay's 2007 textbook on forensic pathology, mechanical asphyxia occurs when any mechanical means cause interference with the exchange of oxygen and carbon dioxide in the body. Similar broad definitions of mechanical asphyxia have occurred in Saukko and Knight's 2004 book on asphyxia, and Dolinak and Matshes' 2005 book on forensic pathology. According to Shkrum and Ramsay, mechanical asphyxia encompasses smothering, choking, positional asphyxia, traumatic asphyxia, wedging, strangulation and drowning.
Sauvageau and Boghossian propose in 2010 that mechanical asphyxia should be officially defined as caused by "restriction of respiratory movements, either by the position of the body or by external chest compression", thus encompassing only positional asphyxia and traumatic asphyxia.
First aid
If there are signs of mechanical asphyxia, emergency medical services should be called. In some countries, such as the US, there may also be organized groups of volunteer first responders who have been trained in first aid. While waiting for professional help, first aid for mechanical asphyxia can be provided by a bystander.
First aid for choking on food
In case of choking on a foreign body:
Stand behind the affected person and wrap your arms around him/her.
Make a fist with one hand and place it just above the person's navel; grasp the fist with your other hand and push inwards and upwards under the ribs with a sudden movement.
If these actions are not effective, repeat them until the foreign body is dislodged and the affected person's airway is clear.
See also
References
Further reading
External links
Cross-side to chest compression choke
Mucus
Mucus is a slippery aqueous secretion produced by, and covering, mucous membranes. It is typically produced from cells found in mucous glands, although it may also originate from mixed glands, which contain both serous and mucous cells. It is a viscous colloid containing inorganic salts, antimicrobial enzymes (such as lysozymes), immunoglobulins (especially IgA), and glycoproteins such as lactoferrin and mucins, which are produced by goblet cells in the mucous membranes and submucosal glands. Mucus serves to protect epithelial cells in the linings of the respiratory, digestive, and urogenital systems, and structures in the visual and auditory systems from pathogenic fungi, bacteria and viruses. Most of the mucus in the body is produced in the gastrointestinal tract.
Amphibians, fish, snails, slugs, and some other invertebrates also produce external mucus from their epidermis as protection against pathogens, to help in movement, and to line fish gills. Plants produce a similar substance called mucilage that is also produced by some microorganisms.
Respiratory system
In the human respiratory system, mucus is part of the airway surface liquid (ASL), also known as epithelial lining fluid (ELF), that lines most of the respiratory tract. The airway surface liquid consists of a sol layer termed the periciliary liquid layer and an overlying gel layer termed the mucus layer. The periciliary liquid layer is so named as it surrounds the cilia and lies on top of the surface epithelium. The periciliary liquid layer surrounding the cilia consists of a gel meshwork of cell-tethered mucins and polysaccharides. The mucus blanket aids in the protection of the lungs by trapping foreign particles before they can enter them, in particular through the nose during normal breathing.
Mucus is made up of a fluid component of around 95% water, the mucin secretions from the goblet cells, and the submucosal glands (2–3% glycoproteins), proteoglycans (0.1–0.5%), lipids (0.3–0.5%), proteins, and DNA. The major secreted mucins, MUC5AC and MUC5B, are large polymers that give the mucus its rheologic or viscoelastic properties. MUC5AC is the main gel-forming mucin secreted by goblet cells, in the form of threads and thin sheets. MUC5B is a polymeric protein secreted from submucosal glands and some goblet cells, and this is in the form of strands.
In the airways—the trachea, bronchi, and bronchioles—the lining of mucus is produced by specialized airway epithelial cells called goblet cells, and submucosal glands. Small particles such as dust, particulate pollutants, and allergens, as well as infectious agents and bacteria are caught in the viscous nasal or airway mucus and prevented from entering the system. This process, together with the continual movement of the cilia on the respiratory epithelium toward the oropharynx (mucociliary clearance), helps prevent foreign objects from entering the lungs during breathing. This explains why coughing often occurs in those who smoke cigarettes. The body's natural reaction is to increase mucus production. In addition, mucus aids in moisturizing the inhaled air and prevents tissues such as the nasal and airway epithelia from drying out.
Mucus is produced continuously in the respiratory tract. Mucociliary action carries it down from the nasal passages and up from the rest of the tract to the pharynx, with most of it being swallowed subconsciously. Sometimes in times of respiratory illness or inflammation, mucus can become thickened with cell debris, bacteria, and inflammatory cells. It is then known as phlegm which may be coughed up as sputum to clear the airway.
Respiratory tract
Increased mucus production in the upper respiratory tract is a symptom of many common ailments, such as the common cold, and influenza. Nasal mucus may be removed by blowing the nose or by using nasal irrigation. Excess nasal mucus, as with a cold or allergies, due to vascular engorgement associated with vasodilation and increased capillary permeability caused by histamines, may be treated cautiously with decongestant medications. Thickening of mucus as a "rebound" effect following overuse of decongestants may produce nasal or sinus drainage problems and circumstances that promote infection.
During cold, dry seasons, the mucus lining nasal passages tends to dry out, meaning that mucous membranes must work harder, producing more mucus to keep the cavity lined. As a result, the nasal cavity can fill up with mucus. At the same time, when air is exhaled, water vapor in breath condenses as the warm air meets the colder outside temperature near the nostrils. This causes an excess amount of water to build up inside nasal cavities. In these cases, the excess fluid usually spills out externally through the nostrils.
In the lower respiratory tract impaired mucociliary clearance due to conditions such as primary ciliary dyskinesia may result in mucus accumulation in the bronchi. The dysregulation of mucus homeostasis is the fundamental characteristic of cystic fibrosis, an inherited disease caused by mutations in the CFTR gene, which encodes a chloride channel. This defect leads to the altered electrolyte composition of mucus, which triggers its hyperabsorption and dehydration. Such low-volume, viscous, acidic mucus has a reduced antimicrobial function, which facilitates bacterial colonisation. The thinning of the mucus layer ultimately affects the periciliary liquid layer, which becomes dehydrated, compromising ciliary function, and impairing mucociliary clearance. A respiratory therapist can recommend airway clearance therapy which uses a number of clearance techniques to help with the clearance of mucus.
Mucus hypersecretion
In the lower respiratory tract excessive mucus production in the bronchi and bronchioles is known as mucus hypersecretion. Chronic mucus hypersecretion results in the chronic productive cough of chronic bronchitis, and is generally synonymous with this. Excessive mucus can narrow the airways, limit airflow, and accelerate a decline in lung function.
Digestive system
In the human digestive system, mucus is used as a lubricant for materials that must pass over membranes, e.g., food passing down the esophagus. Mucus is extremely important in the gastrointestinal tract. It forms an essential layer in the colon and in the small intestine that helps reduce intestinal inflammation by decreasing bacterial interaction with intestinal epithelial cells. The layer of mucus of the gastric mucosa lining the stomach is vital to protect the stomach lining from the highly acidic environment within it.
Reproductive system
In the human female reproductive system, cervical mucus prevents infection and provides lubrication during sexual intercourse. The consistency of cervical mucus varies depending on the stage of a woman's menstrual cycle. At ovulation cervical mucus is clear, runny, and conducive to sperm; post-ovulation, mucus becomes thicker and is more likely to block sperm. Several fertility awareness methods rely on observation of cervical mucus, as one of three primary fertility signs, to identify a woman's fertile time at the mid-point of the cycle. Awareness of the woman's fertile time allows a couple to time intercourse to improve the odds of pregnancy. It is also proposed as a method to avoid pregnancy.
Clinical significance
In general, nasal mucus is clear and thin, serving to filter air during inhalation. During times of infection, mucus can change color to yellow or green either as a result of trapped bacteria or due to the body's reaction to viral infection. For example, Staphylococcus aureus infection may turn the mucus yellow. The green color of mucus comes from the heme group in the iron-containing enzyme myeloperoxidase secreted by white blood cells as a cytotoxic defense during a respiratory burst.
In the case of bacterial infection, the bacterium becomes trapped in already-clogged sinuses, breeding in the moist, nutrient-rich environment. Sinusitis is an uncomfortable condition that may include congestion of mucus. A bacterial infection in sinusitis will cause discolored mucus and would respond to antibiotic treatment; viral infections typically resolve without treatment. Almost all sinusitis infections are viral and antibiotics are ineffective and not recommended for treating typical cases.
In the case of a viral infection such as cold or flu, the first stage and also the last stage of the infection cause the production of a clear, thin mucus in the nose or back of the throat. As the body begins to react to the virus (generally one to three days), mucus thickens and may turn yellow or green. Viral infections cannot be treated with antibiotics, and are a major avenue for their misuse. Treatment is generally symptom-based; often it is sufficient to allow the immune system to fight off the virus over time.
Obstructive lung diseases often result from impaired mucociliary clearance that can be associated with mucus hypersecretion, and these are sometimes referred to as mucoobstructive lung diseases. Techniques of airway clearance therapy can help to clear secretions, maintain respiratory health, and prevent inflammation in the airways.
A unique umbilical cord lining epithelial stem cell that expresses MUC1, termed CLEC-muc, has been shown to have good potential in the regeneration of the cornea.
Properties of mucus
Tunable swelling capacity
Mucus is able to absorb water or dehydrate through pH variations. The swelling capacity of mucus stems from the bottlebrush structure of mucin, within which hydrophilic segments provide a large surface area for water absorption. Moreover, the tunability of the swelling effect is controlled by the polyelectrolyte effect.
Polyelectrolyte effect in mucus
Polymers bearing charged groups are called polyelectrolytes. Mucins, a kind of polyelectrolyte proteoglycan, are the main component of mucus and provide the polyelectrolyte effect in mucus. The process of inducing this effect comprises two steps: attraction of counter-ions and water compensation. When exposed to a physiological ionic solution, the charged groups in the polyelectrolytes attract counter-ions with opposite charges, thereby leading to a solute concentration gradient. An osmotic pressure is introduced to equalize the concentration of solute throughout the system by driving water to flow from low-concentration areas to high-concentration areas. In short, the influx and outflux of water within mucus, managed by the polyelectrolyte effect, contribute to mucus' tunable swelling capacity.
Mechanism of pH-tunable swelling
The ionic charges of mucin are mainly provided by acidic amino acids, including aspartic acid (pKa = 3.9) and glutamic acid (pKa = 4.2). The charges of acidic amino acids change with the environmental pH due to acid dissociation and association. Aspartic acid, for example, has a negatively charged side chain when the pH is above 3.9, while the side chain becomes neutral as the pH drops below 3.9. Thus, the number of negative charges in mucus is influenced by the pH of the surrounding environment. That is, the polyelectrolyte effect of mucus is largely affected by the pH of the solution, owing to the charge variation of acidic amino acid residues on the mucin backbone. For instance, these charged residues on mucin are protonated at the normal pH of the stomach, approximately pH 2. In this case, there is scarcely any polyelectrolyte effect, resulting in compact mucus with little swelling capacity. However, the bacterium Helicobacter pylori produces base to raise the pH in the stomach, leading to the deprotonation of aspartic and glutamic acid residues, i.e., from neutral to negatively charged. The negative charges in the mucus greatly increase, thus inducing the polyelectrolyte effect and the swelling of the mucus. This swelling increases the pore size of the mucus and decreases its viscosity, which allows the bacteria to penetrate and migrate into the mucus and cause disease.
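To put numbers on the protonation argument above, a standard Henderson–Hasselbalch estimate (a textbook relation, not taken from this article) gives the fraction of acidic side chains that are deprotonated, and hence negatively charged, at a given pH:

$$f_{\text{deprot}} = \frac{1}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}}.$$

For aspartic acid (pKa = 3.9) this is about 1/(1 + 10^{1.9}) ≈ 1% at gastric pH 2, but roughly 99% if the local pH is raised to 6, consistent with the large gain in negative charge, polyelectrolyte effect, and swelling described above.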
Charge selectivity
The high selective permeability of mucus plays a crucial role in the healthy state of human beings by limiting the penetration of molecules, nutrients, pathogens, and drugs. The charge distribution within mucus serves as a charge selective diffusion barrier, thus significantly affecting the transportation of agents. Among particles with various surface zeta potentials, cationic particles tend to have a low depth of penetration, neutral ones possess medium penetration, and anionic ones have the largest penetration depth. Furthermore, the effect of charge selectivity changes when the status of the mucus varies, i.e., native mucus has a threefold higher potential to limit agent penetration than purified mucus.
Other animals
Mucus is also produced by a number of other animals. All fish are covered in mucus secreted from glands all over their bodies. Invertebrates such as snails and slugs secrete mucus called snail slime to enable movement, and to prevent their bodies from drying out. Their reproductive systems also make use of mucus for example in the covering of their eggs. In the unique mating ritual of Limax maximus the mating slugs lower themselves from elevated locations by a mucus thread. Mucus is an essential constituent of hagfish slime used to deter predators. Mucus is produced by the endostyle in some tunicates and larval lampreys to help in filter feeding.
See also
Alkaline mucus
Empty nose syndrome
Feces
Lung flute
Mucoadhesion
Mucophagy
Sniffle
Spinnbarkeit
References
Body fluids
Excretion
Exocrine system
Symptoms and signs: Respiratory system
Malignant hyperthermia
Malignant hyperthermia (MH) is a type of severe reaction that occurs in response to particular medications used during general anesthesia, among those who are susceptible. Symptoms include muscle rigidity, fever, and a fast heart rate. Complications can include muscle breakdown and high blood potassium. Most people who are susceptible to MH are generally unaffected when not exposed to triggering agents.
Exposure to triggering agents (certain volatile anesthetic agents or succinylcholine) can lead to the development of MH in those who are susceptible. Susceptibility can occur due to at least six genetic mutations, with the most common one being of the RYR1 gene. These genetic variations are often inherited in an autosomal dominant manner. The condition may also occur as a new mutation or be associated with a number of inherited muscle diseases, such as central core disease.
In susceptible individuals, the medications induce the release of stored calcium ions within muscle cells. The resulting increase in calcium concentrations within the cells cause the muscle fibers to contract. This generates excessive heat and results in metabolic acidosis. Diagnosis is based on symptoms in the appropriate situation. Family members may be tested to see if they are susceptible by muscle biopsy or genetic testing.
Treatment is with dantrolene and rapid cooling along with other supportive measures. The avoidance of potential triggers is recommended in susceptible people. The condition affects one in 5,000 to 50,000 cases where people are given anesthetic gases. Males are more often affected than females. The risk of death with proper treatment is about 5% while without it is around 75%. While cases that appear similar to MH have been documented since the early 20th century, the condition was only formally recognized in 1960.
Signs and symptoms
The typical signs of malignant hyperthermia are due to a hypercatabolic state, which presents as a very high temperature, an increased heart rate and abnormally rapid breathing, increased carbon dioxide production, increased oxygen consumption, mixed acidosis, rigid muscles, and rhabdomyolysis. These signs can develop any time during the administration of the anesthetic triggering agents. Rarely, signs may develop up to 40 minutes after the end of anaesthesia.
Causes
Malignant hyperthermia is a disorder that can be considered a gene–environment interaction. Most people with malignant hyperthermia susceptibility have few or no symptoms unless they are exposed to a triggering agent. The most common triggering agents are volatile anesthetic gases, such as halothane, sevoflurane, desflurane, isoflurane, enflurane, or the depolarizing muscle relaxants suxamethonium and decamethonium used primarily in general anesthesia. In rare cases, the biological stresses of physical exercise or heat may be the trigger. In fact, malignant hyperthermia susceptibility (MHS), predisposed by mutations in the skeletal muscle calcium release channel (RYR1), is one of the most severe heat-related illnesses. The MHS-associated heat susceptibilities predominantly affect children and metabolically active young adults, often leading to life-threatening hypermetabolic responses to heat.
Other anesthetic drugs do not trigger malignant hyperthermia. Some examples of drugs that don't cause MH include local anesthetics (lidocaine, bupivacaine, mepivacaine), opiates (morphine, fentanyl), ketamine, barbiturates, nitrous oxide, propofol, etomidate, and benzodiazepines. The nondepolarizing muscle relaxants pancuronium, cisatracurium, atracurium, mivacurium, vecuronium and rocuronium also do not cause MH.
There is mounting evidence that some individuals with malignant hyperthermia susceptibility may develop MH with exercise and/or on exposure to hot environments.
Genetics
Malignant hyperthermia's inheritance is autosomal dominant with variable penetrance. The defect is typically located on the long arm of chromosome 19 (19q13.2), involving the ryanodine receptor. More than 25 different mutations in this gene are linked with malignant hyperthermia. These mutations tend to cluster in one of three domains within the protein, designated MH1-3. MH1 and MH2 are located in the N-terminus of the protein, which interacts with L-type calcium channels and Ca2+. MH3 is located in the transmembrane-forming C-terminus. This region is important for allowing Ca2+ passage through the protein following opening.
Chromosome 7q and chromosome 17 have also been implicated. It has also been postulated that MH and central core disease may be allelic and thus can be co-inherited.
Pathophysiology
Disease mechanism
In a large proportion (50–70%) of cases, the propensity for malignant hyperthermia is due to a mutation of the ryanodine receptor (type 1), located on the sarcoplasmic reticulum (SR), the organelle within skeletal muscle cells that stores calcium. RYR1 opens in response to conformational changes in the L-type calcium channels following membrane depolarisation, thereby resulting in a drastic increase in intracellular calcium levels and muscle contraction. RYR1 has two sites believed to be important for reacting to changing Ca2+ concentrations: the A-site and the I-site. The A-site is a high-affinity Ca2+ binding site that mediates RYR1 opening. The I-site is a lower-affinity site that mediates the protein's closing. Caffeine, halothane, and other triggering agents act by drastically increasing the affinity of the A-site for Ca2+ and concomitantly decreasing the affinity of the I-site in mutant proteins. Mg2+ also affects RYR1 activity, causing the protein to close by acting at either the A- or I-sites. In MH mutant proteins, the affinity for Mg2+ at either one of these sites is greatly reduced. The result of these alterations is greatly increased Ca2+ release due to a lowered activation and heightened deactivation threshold. The process of sequestering this excess Ca2+ consumes large amounts of adenosine triphosphate (ATP), the main cellular energy carrier, and generates the excessive heat (hyperthermia) that is the hallmark of the disease. The muscle cell is damaged by the depletion of ATP and possibly the high temperatures, and cellular constituents "leak" into the circulation, including potassium, myoglobin, creatine, phosphate and creatine kinase.
The other known causative gene for MH is CACNA1S, which encodes an L-type voltage-gated calcium channel α-subunit. There are two known mutations in this protein, both affecting the same residue, R1086. This residue is located in the large intracellular loop connecting domains 3 and 4, a domain possibly involved in negatively regulating RYR1 activity. When these mutant channels are expressed in human embryonic kidney (HEK 293) cells, the resulting channels are five times more sensitive to activation by caffeine (and presumably halothane) and activate at 5–10 mV more hyperpolarized potentials. Furthermore, cells expressing these channels have an increased basal cytosolic Ca2+ concentration. As these channels interact with and activate RYR1, these alterations result in a drastic increase of intracellular Ca2+ and, thereby, muscle excitability.
Other mutations causing MH have been identified, although in most cases the relevant gene remains to be identified.
Animal model
Research into malignant hyperthermia was limited until the discovery of "porcine stress syndrome" (PSS) in Danish Landrace and other pig breeds selected for muscling, a condition in which stressed pigs develop "pale, soft, exudative" flesh (a manifestation of the effects of malignant hyperthermia) rendering their meat less marketable at slaughter. This "awake triggering" was not observed in humans, and initially cast doubts on the value of the animal model, but subsequently, susceptible humans were discovered to "awake trigger" (develop malignant hyperthermia) in stressful situations. This supported the use of the pig model for research. Pig farmers use halothane cones in swine yards to expose piglets to halothane. Those that die are MH-susceptible, thus saving the farmer the expense of raising a pig whose meat he would not be able to market. This also reduced the use of breeding stock carrying the genes for PSS. The condition in swine is also due to a defect in ryanodine receptors.
Gillard et al. discovered the causative mutation in humans only after similar mutations had first been described in pigs.
Horses also develop malignant hyperthermia. A causative mutated allele in the ryanodine receptor 1 gene (RyR1), at nucleotide C7360G and generating an R2454G amino acid substitution, has been identified in the American Quarter Horse and breeds with Quarter Horse ancestry, inherited as an autosomal dominant trait. It can be caused by overwork, anesthesia, or stress.
An MH mouse has been constructed, bearing the R163C mutation prevalent in humans. These mice display signs similar to human MH patients, including sensitivity to halothane (increased respiration, body temperature, and death). Blockade of RYR1 by dantrolene prevents adverse reaction to halothane in these mice, as with humans. Muscle from these mice also shows increased K+-induced depolarization and an increased caffeine sensitivity.
Diagnosis
During an attack
The earliest signs may include: masseter muscle contracture following administration of succinylcholine, a rise in end-tidal carbon dioxide concentration (despite increased minute ventilation), unexplained tachycardia, and muscle rigidity. Despite the name, elevation of body temperature is often a late sign, but may appear early in severe cases. Respiratory acidosis is universally present and many patients have developed metabolic acidosis at the time of diagnosis. A fast rate of breathing (in a spontaneously breathing patient), cyanosis, hypertension, abnormal heart rhythms, and high blood potassium may also be seen. Core body temperatures should be measured in any patient undergoing general anesthesia longer than 30 minutes.
Malignant hyperthermia is diagnosed on clinical grounds, but various laboratory investigations may prove confirmatory. These include a raised creatine kinase level, elevated potassium, increased phosphate (leading to decreased calcium) and—if determined—raised myoglobin; this is the result of damage to muscle cells. Severe rhabdomyolysis may lead to acute kidney failure, so kidney function is generally measured on a frequent basis. Patients may also experience premature ventricular contractions due to the increased levels of potassium released from the muscles during episodes.
Susceptibility testing
Muscle testing
The main candidates for testing are those with a close relative who has had an episode of MH or have been shown to be susceptible. The standard procedure is the "caffeine-halothane contracture test", CHCT. A muscle biopsy is carried out at an approved research center, under local anesthesia. The fresh biopsy is bathed in solutions containing caffeine or halothane and observed for contraction; under good conditions, the sensitivity is 97% and the specificity 78%. Negative biopsies are not definitive, so any patient who is suspected of MH by their medical history or that of blood relatives is generally treated with non-triggering anesthetics, even if the biopsy was negative. Some researchers advocate the use of the "calcium-induced calcium release" test in addition to the CHCT to make the test more specific.
Less invasive diagnostic techniques have been proposed. Intramuscular injection of halothane 6 vol% has been shown to result in higher than normal increases in local pCO2 among patients with known malignant hyperthermia susceptibility. The sensitivity was 100% and specificity was 75%. For patients at similar risk to those in this study, this leads to a positive predictive value of 80% and negative predictive value of 100%. This method may provide a suitable alternative to more invasive techniques.
A 2002 study examined another possible metabolic test. In this test, intramuscular injection of caffeine was followed by local measurement of the pCO2; those with known MH susceptibility had a significantly higher pCO2 (63 versus 44 mmHg). The authors propose larger studies to assess the test's suitability for determining MH risk.
Genetic testing
Genetic testing is being performed in a limited fashion to determine susceptibility to MH. In people with a family history of MH, analysis for RYR1 mutations may be useful.
Criteria
A 1994 consensus conference led to the formulation of a set of diagnostic criteria. The higher the score (above 6), the more likely a reaction constituted MH:
Respiratory acidosis (end-tidal CO2 above 55 mmHg/7.32 kPa or arterial CO2 above 60 mmHg/7.98 kPa)
Heart involvement (unexplained sinus tachycardia, ventricular tachycardia or ventricular fibrillation)
Metabolic acidosis (base excess lower than -8, pH <7.25)
Muscle rigidity (generalized rigidity including severe masseter muscle rigidity)
Muscle breakdown (CK >20,000 U/L, cola-colored urine or excess myoglobin in urine or serum, potassium above 6 mmol/L)
Temperature increase (rapidly increasing temperature, T >38.8 °C)
Other (rapid reversal of MH signs with dantrolene, elevated resting serum CK levels)
Family history (autosomal dominant pattern)
Prevention
In the past, the prophylactic use of dantrolene was recommended for MH-susceptible patients undergoing general anesthesia. However, multiple retrospective studies have demonstrated the safety of trigger-free general anesthesia in these patients in the absence of prophylactic dantrolene administration. The largest of these studies looked at the charts of 2214 patients who underwent general or regional anesthesia for an elective muscle biopsy. About half (1082) of the patients were muscle biopsy positive for MH. Only five of these patients exhibited signs consistent with MH, four of which were treated successfully with parenteral dantrolene, and the remaining one recovered with only symptomatic therapy. After weighing its questionable benefits against its possible adverse effects (including nausea, vomiting, muscle weakness and prolonged duration of action of nondepolarizing neuromuscular blocking agents), experts no longer recommend the use of prophylactic dantrolene prior to trigger-free general anesthesia in MH-susceptible patients.
Anesthesia machine preparation
Anesthesia for people with known MH susceptibility requires avoidance of triggering agent concentrations above 5 parts per million (all volatile anesthetic agents and succinylcholine). Most other drugs are safe (including nitrous oxide), as are regional anesthetic techniques. Where general anesthesia is planned, it can be provided safely by either flushing the machine or using charcoal filters.
To flush the machine, first remove or disable the vaporizers and then flush the machine with 10 L/min or greater fresh gas flow rate for at least 20 minutes. While flushing the machine the ventilator should be set to periodically ventilate a new breathing circuit. The soda lime should also be replaced. After machine preparation, anesthesia should be induced and maintained with non-triggering agents. The time required to flush a machine varies for different machines and volatile anesthetics. This prevention technique was optimized to prepare older generation anesthesia machines. Modern anesthetic machines have more rubber and plastic components which provide a reservoir for volatile anesthetics, and should be flushed for 60 minutes.
Charcoal filters can be used to prepare an anesthesia machine in less than 60 seconds for people at risk of malignant hyperthermia. These filters prevent residual anesthetic from triggering malignant hyperthermia for up to 12 hours, even at low fresh gas flows. Prior to placing the charcoal filters, the machine should be flushed with fresh gas flows greater than 10 L/min for 90 seconds.
Treatment
The current treatment of choice is the intravenous administration of dantrolene, the only known antidote, discontinuation of triggering agents, and supportive therapy directed at correcting hyperthermia, acidosis, and organ dysfunction. Treatment must be instituted rapidly on clinical suspicion of the onset of malignant hyperthermia.
Dantrolene
Dantrolene is a muscle relaxant that appears to work directly on the ryanodine receptor to prevent the release of calcium. After the widespread introduction of treatment with dantrolene, the mortality of malignant hyperthermia fell from 80% in the 1960s to less than 5%. Dantrolene remains the only drug known to be effective in the treatment of MH. The recommended dose of dantrolene is 2.5 mg/kg, repeated as necessary. It is recommended that each hospital keeps a minimum stock of 36 dantrolene vials (720 mg), sufficient for four doses in a 70-kg person.
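The stocking recommendation above is consistent with simple dose arithmetic for a 70-kg person (the 20 mg-per-vial figure is implied by 720 mg spread over 36 vials):

$$2.5~\text{mg/kg} \times 70~\text{kg} = 175~\text{mg per dose}, \qquad 4 \times 175~\text{mg} = 700~\text{mg} \leq 36 \times 20~\text{mg} = 720~\text{mg}.$$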
Training
Fast recognition and treatment of MH relies on skills and procedures that are used with low frequency and carry high risk. Conducting MH crisis training for perioperative teams can identify system failures as well as improve the response to these events. Simulation techniques, including the use of cognitive aids, have also been shown to improve communication in the clinical treatment of MH.
Prognosis
Prognosis is poor if this condition is not aggressively treated. In the 1970s, mortality was greater than 80%; however, with the current management mortality is now less than 5%.
Epidemiology
It occurs in between 1 in 5,000 and 1 in 100,000 procedures involving general anaesthesia. This disorder occurs worldwide and affects all racial groups.
In the Manawatu region of New Zealand, up to 1 in 200 people are at high risk of the condition.
History
The syndrome was first recognized in Royal Melbourne Hospital, Australia in an affected family by Denborough et al. in 1962. Denborough did much of his subsequent work on the condition at the Royal Canberra Hospital. Similar reactions were found in pigs. The efficacy of dantrolene as a treatment was discovered by South African anesthesiologist Gaisford Harrison and reported in a 1975 article published in the British Journal of Anaesthesia. After further animal studies corroborated the possible benefit from dantrolene, a 1982 study confirmed its usefulness in humans.
In 1981, the Malignant Hyperthermia Association of the United States (MHAUS) hotline was established to provide telephone support to clinical teams treating patients with suspected malignant hyperthermia. The hotline became active in 1982 and since that time MHAUS has provided continuous access to board-certified anesthesiologists to assist teams in treatment.
Other animals
Other animals, including certain pig breeds, dogs, and horses, are susceptible to malignant hyperthermia.
In dogs its inheritance is autosomal dominant. The syndrome has been reported in Pointers, Greyhounds, Labrador Retrievers, Saint Bernards, Springer Spaniels, Bichon Frises, Golden Retrievers, and Border Collies.
In pigs its inheritance is autosomal recessive.
In horses its inheritance is autosomal dominant, and most associated with the American Quarter Horse although it can occur in other breeds.
Research
Azumolene is a 30-fold more water-soluble analog of dantrolene that also works to decrease the release of intracellular calcium by its action on the ryanodine receptor. In MH-susceptible swine, azumolene was as potent as dantrolene. It has yet to be studied in vivo in humans, but may present a suitable alternative to dantrolene in the treatment of MH.
References
External links
GeneReview/NIH/UW entry on Malignant Hyperthermia Susceptibility
Anesthesia
Channelopathies
Rare diseases
Complications of surgical and medical care
Muscular disorders
Human-to-human transmission
Human-to-human transmission (HHT) is an epidemiologic vector, especially when the disease is carried by individuals known as superspreaders. In such cases, the basic reproduction number of the virus (the average number of additional people that a single case will infect without any preventative measures) can be as high as 203.9. Interhuman transmission is a synonym for HHT.
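As an illustration of what a reproduction number implies (using an arbitrary illustrative value of R0 = 3 rather than any figure from this article), the expected number of new cases in generation n of uncontrolled spread in a simple branching model grows geometrically:

$$N_n = R_0^{\,n}, \qquad R_0 = 3 \;\Rightarrow\; N_1 = 3,\; N_2 = 9,\; N_3 = 27,$$

so even a moderate reproduction number produces rapid growth, and a superspreader with a much higher individual reproduction number accelerates it further.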
The World Health Organization designation of a pandemic hinges on the demonstrable fact that there is sustained HHT in two regions of the world.
Synopsis
Relevant microbes may be viruses, bacteria, or fungi. They may be spread through breathing, talking, coughing, sneezing, the spraying of liquids, toilet flushing, or any activities that generate aerosol particles or droplets, or that generate fomites, such as the raising of dust.
Transfer efficiency depends not only on the surface but also on the pathogen type. For example, avian influenza survives on both porous and non-porous materials for 144 hours.
The microbes may also be transmitted by poor use of cutlery or improper sanitation of dishes or bed linen. Particularly problematic are poor toilet practices, which lead to transmission by the fecal–oral route. Sexually transmitted diseases are, by definition, spread from human to human.
List of HHT diseases
Examples of some HHT diseases are listed below.
measles: vaccine available
mumps: vaccine available
chickenpox: vaccine available
smallpox
bubonic plague: slim but non-zero risk of person-to-person spread
pneumonic plague: e.g. the 1910–11 Manchurian plague
tuberculosis
Norovirus
monkeypox
SARS-CoV-1
SARS-CoV-2: vaccine available
MERS
Avian flu
Sexually transmitted infections (STIs) or sexually transmitted diseases (STDs):
Syphilis, aka French pox
References
Sources
Epidemiology
Parasitology
Infectious diseases
Sanitation
Hygiene
Global health
Epidemics | 0.793823 | 0.961133 | 0.762969 |
Healthy diet | A healthy diet is a diet that maintains or improves overall health. A healthy diet provides the body with essential nutrition: fluid, macronutrients such as protein, micronutrients such as vitamins, and adequate fibre and food energy.
A healthy diet may contain fruits, vegetables, and whole grains, and may include little to no ultra-processed foods or sweetened beverages. The requirements for a healthy diet can be met from a variety of plant-based and animal-based foods, although additional sources of vitamin B12 are needed for those following a vegan diet. Various nutrition guides are published by medical and governmental institutions to educate individuals on what they should be eating to be healthy. Nutrition facts labels are also mandatory in some countries to allow consumers to choose between foods based on the components relevant to health.
Recommendations
World Health Organization
The World Health Organization (WHO) makes the following five recommendations with respect to both populations and individuals (a simple numeric check of these thresholds is sketched after the list):
Maintain a healthy weight by eating roughly the same number of calories that your body is using.
Limit intake of fats to no more than 30% of total caloric intake, preferring unsaturated fats to saturated fats. Avoid trans fats.
Eat at least 400 grams of fruits and vegetables per day (not counting potatoes, sweet potatoes, cassava, and other starchy roots). A healthy diet also contains legumes (e.g. lentils, beans), whole grains, and nuts.
Limit the intake of simple sugars to less than 10% of caloric intake (below 5% of calories or 25 grams may be even better).
Limit salt/sodium from all sources and ensure that salt is iodized. Less than 5 grams of salt per day can reduce the risk of cardiovascular disease.
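The quantitative limits in the list above lend themselves to a simple numeric check. The sketch below is a minimal illustration only, not an official WHO tool; the function name, field names and the example values (a 2,000 kcal day) are assumptions introduced here.

<syntaxhighlight lang="python">
def check_who_limits(total_kcal, fat_kcal, sugar_kcal, fruit_veg_g, salt_g):
    """Check one day's intake against the WHO limits quoted above."""
    return {
        "fat_under_30_percent_of_energy": fat_kcal <= 0.30 * total_kcal,
        "free_sugars_under_10_percent": sugar_kcal <= 0.10 * total_kcal,
        "fruit_and_veg_at_least_400_g": fruit_veg_g >= 400,
        "salt_under_5_g": salt_g < 5,
    }

# Illustrative example: a 2,000 kcal day with 550 kcal from fat,
# 180 kcal from free sugars, 450 g of fruit and vegetables and
# 4 g of salt satisfies all four checks.
print(check_who_limits(2000, 550, 180, 450, 4))
</syntaxhighlight>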
The WHO has stated that insufficient intake of fruit and vegetables is the cause of 2.8% of deaths worldwide.
Other WHO recommendations include:
ensuring that the foods chosen have sufficient vitamins and certain minerals;
avoiding directly poisonous (e.g. heavy metals) and carcinogenic (e.g. benzene) substances;
avoiding foods contaminated by human pathogens (e.g. E. coli, tapeworm eggs);
and replacing saturated fats with polyunsaturated fats in the diet, which can reduce the risk of coronary artery disease and diabetes.
United States Department of Agriculture
The Dietary Guidelines for Americans by the United States Department of Agriculture (USDA) recommends three healthy patterns of diet, summarized in the table below, for a 2000 kcal diet. These guidelines are increasingly adopted by various groups and institutions for recipe and meal plan development.
The guidelines emphasize both health and environmental sustainability and a flexible approach. The committee that drafted it wrote: "The major findings regarding sustainable diets were that a diet higher in plant-based foods, such as vegetables, fruits, whole grains, legumes, nuts, and seeds, and lower in calories and animal-based foods is more health promoting and is associated with less environmental impact than is the current U.S. diet. This pattern of eating can be achieved through a variety of dietary patterns, including the "Healthy U.S.-style Pattern", the "Healthy Vegetarian Pattern" and the "Healthy Mediterranean-style Pattern". Food group amounts are per day, unless noted per week.
American Heart Association / World Cancer Research Fund / American Institute for Cancer Research
The American Heart Association, World Cancer Research Fund, and American Institute for Cancer Research recommend a diet that consists mostly of unprocessed plant foods, with emphasis on a wide range of whole grains, legumes, and non-starchy vegetables and fruits. This healthy diet includes a wide range of non-starchy vegetables and fruits which provide different colors including red, green, yellow, white, purple, and orange. The recommendations note that tomato cooked with oil, allium vegetables like garlic, and cruciferous vegetables like cauliflower, provide some protection against cancer. This healthy diet is low in energy density, which may protect against weight gain and associated diseases. Finally, limiting consumption of sugary drinks, limiting energy-rich foods, including "fast foods" and red meat, and avoiding processed meats improves health and longevity. Overall, researchers and medical policymakers conclude that this healthy diet can reduce the risk of chronic disease and cancer.
It is recommended that children consume 25 grams or less of added sugar (100 calories) per day. Other recommendations include no extra sugars in those under two years old and less than one soft drink per week. As of 2017, decreasing total fat is no longer recommended, but instead, the recommendation to lower risk of cardiovascular disease is to increase consumption of monounsaturated fats and polyunsaturated fats, while decreasing consumption of saturated fats.
Harvard School of Public Health
The Nutrition Source of Harvard School of Public Health (HSPH) makes the following dietary recommendations:
Eat healthy fats: healthy fats are necessary and beneficial for health. HSPH "recommends the opposite of the low-fat message promoted for decades by the USDA" and "does not set a maximum on the percentage of calories people should get each day from healthy sources of fat." Healthy fats include polyunsaturated and monounsaturated fats, found in vegetable oils, nuts, seeds, and fish. Foods containing trans fats are to be avoided, while foods high in saturated fats like red meat, butter, cheese, ice cream, coconut and palm oil negatively impact health and should be limited.
Eat healthy protein: the majority of protein should come from plant sources when possible: lentils, beans, nuts, seeds, whole grains; avoid processed meats like bacon.
Eat mostly vegetables, fruit, and whole grains.
Drink water. Consume sugary beverages, juices, and milk only in moderation. Artificially sweetened beverages contribute to weight gain because sweet drinks cause cravings. 100% fruit juice is high in calories. The ideal amount of milk and calcium is not known today.
Pay attention to salt intake from commercially prepared foods: most of the dietary salt comes from processed foods, "not from salt added to cooking at home or even from salt added at the table before eating."
Vitamins and minerals: must be obtained from food because they are not produced in our body. They are provided by a diet containing healthy fats, healthy protein, vegetables, fruit, milk and whole grains.
Pay attention to the carbohydrates package: the type of carbohydrates in the diet is more important than the amount of carbohydrates. Good sources for carbohydrates are vegetables, fruits, beans, and whole grains. Avoid sugared sodas, 100% fruit juice, artificially sweetened drinks, and other highly processed food.
Other than nutrition, the guide recommends staying active and maintaining a healthy body weight.
Others
David L. Katz, who reviewed the most prevalent popular diets in 2014, noted:
The weight of evidence strongly supports a theme of healthful eating while allowing for variations on that theme. A diet of minimally processed foods close to nature, predominantly plants, is decisively associated with health promotion and disease prevention and is consistent with the salient components of seemingly distinct dietary approaches.
Efforts to improve public health through diet are forestalled not for want of knowledge about the optimal feeding of Homo sapiens but for distractions associated with exaggerated claims, and our failure to convert what we reliably know into what we routinely do. Knowledge in this case is not, as of yet, power; would that it were so.
Marion Nestle expresses the mainstream view among scientists who study nutrition:
The basic principles of good diets are so simple that I can summarize them in just ten words: eat less, move more, eat lots of fruits and vegetables. For additional clarification, a five-word modifier helps: go easy on junk foods. Follow these precepts and you will go a long way toward preventing the major diseases of our overfed society—coronary heart disease, certain cancers, diabetes, stroke, osteoporosis, and a host of others.... These precepts constitute the bottom line of what seem to be the far more complicated dietary recommendations of many health organizations and national and international governments—the forty-one "key recommendations" of the 2005 Dietary Guidelines, for example. ... Although you may feel as though advice about nutrition is constantly changing, the basic ideas behind my four precepts have not changed in half a century. And they leave plenty of room for enjoying the pleasures of food.
Historically, a healthy diet was defined as one comprising more than 55% carbohydrates, less than 30% fat, and about 15% protein. This view is currently shifting towards a more comprehensive framing of dietary needs as a global need of various nutrients with complex interactions, instead of per-nutrient-type needs.
Specific conditions
Diabetes
A healthy diet in combination with being active can help those with diabetes keep their blood sugar in check. The US CDC advises individuals with diabetes to plan for regular, balanced meals and to include more nonstarchy vegetables, reduce added sugars and refined grains, and focus on whole foods instead of highly processed foods. Generally, people with diabetes and those at risk are encouraged to increase their fiber intake.
Hypertension
A low-sodium diet is beneficial for people with high blood pressure. A 2008 Cochrane review concluded that a long-term (more than four weeks) low-sodium diet lowers blood pressure, both in people with hypertension (high blood pressure) and in those with normal blood pressure.
The DASH diet (Dietary Approaches to Stop Hypertension) is a diet promoted by the National Heart, Lung, and Blood Institute (part of the NIH, a United States government organization) to control hypertension. A major feature of the plan is limiting intake of sodium, and the diet also generally encourages the consumption of nuts, whole grains, fish, poultry, fruits, and vegetables while lowering the consumption of red meats, sweets, and sugar. It is also "rich in potassium, magnesium, and calcium, as well as protein".
The Mediterranean diet, which includes limiting consumption of red meat and using olive oil in cooking, has also been shown to improve cardiovascular outcomes.
Obesity
Healthy diets in combination with physical exercise can be used by people who are overweight or obese to lose weight, although this approach is not by itself an effective long-term treatment for obesity and is primarily effective for only a short period (up to one year), after which some of the weight is typically regained. A meta-analysis found no difference between diet types (low-fat, low-carbohydrate, and low-calorie) in the modest weight loss achieved. This level of weight loss is by itself insufficient to move a person from an 'obese' body mass index (BMI) category to a 'normal' BMI.
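To see why a modest weight loss may leave the BMI category unchanged, recall that BMI is weight in kilograms divided by the square of height in metres, with a BMI of 30 or more conventionally classed as obese. The sketch below is purely illustrative; the example weights and height are assumptions, not data from the studies cited here.

<syntaxhighlight lang="python">
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Illustrative figures only: for a 1.75 m adult, losing 4 kg
# (100 kg -> 96 kg) moves BMI from about 32.7 to about 31.3,
# which is still within the conventional 'obese' range (BMI >= 30).
print(round(bmi(100, 1.75), 1), round(bmi(96, 1.75), 1))
</syntaxhighlight>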
Gluten-related disorders
Gluten, a mixture of proteins found in wheat and related grains including barley, rye, oat, and all their species and hybrids (such as spelt, kamut, and triticale), causes health problems for those with gluten-related disorders, including celiac disease, non-celiac gluten sensitivity, gluten ataxia, dermatitis herpetiformis, and wheat allergy. In these people, the gluten-free diet is the only available treatment.
Epilepsy
The ketogenic diet is a treatment to reduce epileptic seizures for adults and children when managed by a health care team.
Research
Preliminary research indicated that a diet high in fruit and vegetables may decrease the risk of cardiovascular disease and death, but not cancer. Eating a healthy diet and getting enough exercise can maintain body weight within the normal range and reduce the risk of obesity in most people. A 2021 scientific review of evidence on diets for lowering the risk of atherosclerosis found that:
low consumption of salt and foods of animal origin, and increased intake of plant-based foods—whole grains, fruits, vegetables, legumes, and nuts—are linked with reduced atherosclerosis risk. The same applies for the replacement of butter and other animal/tropical fats with olive oil and other unsaturated-fat-rich oil. [...] With regard to meat, new evidence differentiates processed and red meat—both associated with increased CVD risk—from poultry, showing a neutral relationship with CVD for moderate intakes. [...] New data endorse the replacement of most high glycemic index (GI) foods with both whole grain and low GI cereal foods.
Scientific research is also investigating impacts of nutrition on health- and lifespans beyond any specific range of diseases.
Moreover, not only the components of diets but also the total caloric content and eating patterns may impact health; dietary restriction, such as caloric restriction, is considered potentially beneficial to include in eating patterns in various ways in terms of health- and lifespan.
Unhealthy diets
An unhealthy diet is a major risk factor for a number of chronic diseases including: high blood pressure, high cholesterol, diabetes, abnormal blood lipids, overweight/obesity, cardiovascular diseases, and cancer. The World Health Organization has estimated that 2.7 million deaths each year are attributable to a diet low in fruit and vegetables during the 21st century. Globally, such diets are estimated to cause about 19% of gastrointestinal cancer, 31% of ischaemic heart disease, and 11% of strokes, thus making it one of the leading preventable causes of death worldwide, and the 4th leading risk factor for any disease. As an example, the Western pattern diet is "rich in red meat, dairy products, processed and artificially sweetened foods, and salt, with minimal intake of fruits, vegetables, fish, legumes, and whole grains," contrasted by the Mediterranean diet which is associated with less morbidity and mortality.
Dietary patterns that lead to non-communicable diseases generate productivity losses. A true cost accounting (TCA) assessment on the hidden impacts of agrifood systems estimated that unhealthy dietary patterns generate more than USD 9 trillion in health-related hidden costs in 2020, which is 73 percent of the total quantified hidden costs of global agrifood systems (USD 12.7 trillion). Globally, the average productivity losses per person from dietary intake is equivalent to 7 percent of GDP purchasing power parity (PPP) in 2020; low-income countries report the lowest value (4 percent), while other income categories report 7 percent or higher.
Fad diet
Some publicized diets, often referred to as fad diets, make exaggerated claims of fast weight loss or other health advantages, such as longer life or detoxification without clinical evidence; many fad diets are based on highly restrictive or unusual food choices. Celebrity endorsements (including celebrity doctors) are frequently associated with such diets, and the individuals who develop and promote these programs often profit considerably.
Public health
Consumers are generally aware of the elements of a healthy diet, but find nutrition labels and diet advice in popular media confusing.
Vending machines are criticized for being avenues of entry into schools for junk food promoters, but there is little in the way of regulation and it is difficult for most people to properly analyze the real merits of a company referring to itself as "healthy." The Committee of Advertising Practice in the United Kingdom launched a proposal to limit media advertising for food and soft drink products high in fat, salt, or sugar. The British Heart Foundation released its own government-funded advertisements, labeled "Food4Thought", which were targeted at children and adults to discourage unhealthy habits of consuming junk food.
From a psychological and cultural perspective, a healthier diet may be difficult to achieve for people with poor eating habits. This may be due to tastes acquired in childhood and preferences for sugary, salty, and fatty foods. In 2018, the UK chief medical officer recommended that sugar and salt be taxed to discourage consumption. The UK government 2020 Obesity Strategy encourages healthier choices by restricting point-of-sale promotions of less-healthy foods and drinks.
Effective population-level health interventions have included food pricing strategies, mass media campaigns and worksite wellness programs. A one-peso-per-liter price intervention on sugar-sweetened beverages (SSB) implemented in Mexico produced a 12% reduction in SSB purchases. Mass media campaigns in Pakistan and the USA aimed at increasing vegetable and fruit consumption found positive changes in dietary behavior. Reviews of the effectiveness of worksite wellness interventions found evidence linking the programs to weight loss and increased fruit and vegetable consumption.
Other animals
Animals that are kept by humans also benefit from a healthy diet, but the requirements of such diets may be very different from the ideal human diet.
See also
Commercial determinants of health
Health food trends
Healthy eating pyramid
List of diets
Meals
Nutritionism
Nutrition scale
Nutritional rating systems
Planetary Health Diet
Plant-based diet
Table of food nutrients
References
External links
WHO fact sheet on healthy diet
Diet, Nutrition, and the Prevention of Chronic Diseases, by a Joint WHO/FAO Expert consultation (2003)
Dietetics
Diets
Nutrition guides | 0.764504 | 0.997956 | 0.762942 |
Occupational medicine | Occupational and Environmental Medicine (OEM), previously called industrial medicine, is a board-certified medical specialty under the American Board of Preventive Medicine that specializes in the prevention and treatment of work-related illnesses and injuries.
OEM physicians are trained in both clinical medicine and public health. They may work in a clinical capacity providing direct patient care to workers through workers' compensation programs or employee health programs and performing medical screening services for employers. Corporate medical directors are typically occupational medicine physicians who often have specialized training in the hazards relevant to their industry. OEM physicians are employed by the US military in light of the significant and unique exposures faced by this population of workers. Public health departments, the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) commonly employ physicians specialized in occupational medicine. They often advise international bodies, governmental and state agencies, organizations, and trade unions.
The specialty of occupational medicine rose in prominence following the industrial revolution. Factory workers and laborers in a broad range of emerging industries at the time were becoming profoundly ill and often dying from workplace exposures, which prompted formal efforts to better understand, recognize, treat and prevent occupational injury and disease.
More recently, occupational medicine gained visibility during the COVID-19 pandemic, as spread of the illness was closely linked to the workplace, necessitating dramatic adjustments in workplace health, safety and surveillance practices.
In the United States, the American Board of Preventive Medicine oversees board certification of physicians in Occupational and Environmental Medicine.
Mission
Occupational medicine aims to prevent diseases and promote wellness among workers. Occupational health physicians must:
Have knowledge of potential hazards in the workplace including toxic properties of materials used.
Be able to evaluate employee fitness for work.
Be able to diagnose and treat occupational disease and injury.
Know about rehabilitation methods, health education, and government laws and regulations concerning workplace and environmental health.
Be able to manage health service delivery.
OM can be described as:
work that combines clinical medicine, research, and advocacy for people who need the assistance of health professionals to obtain some measure of justice and health care for illnesses they suffer as a result of companies pursuing the biggest profits they can make, no matter what the effect on workers or the communities they operate in.
History
The first textbook of occupational medicine, De Morbis Artificum Diatriba (Diseases of Workers), was written by Italian physician Bernardino Ramazzini in 1700.
Notable occupational medicine physicians
Dr. Alice Hamilton
Dr. Stephen M Levin
Dr. Archibald Cochrane (Preventative Medicine)
Governmental bodies
United States
National Institute for Occupational Safety and Health (NIOSH)
Occupational Safety and Health Administration (OSHA)
Russian Federation
Research Institute of Occupational Medicine of the Russian Academy of Sciences (Moscow)
Non-governmental organizations
International
International Commission on Occupational Health (ICOH)
Institute of Occupational Medicine (IOM)
Canadian
Occupational Medicine Specialists of Canada
Japan
Japan Society of Occupational Health
United Kingdom
Faculty of Occupational Medicine
United States
American College of Occupational and Environmental Medicine (ACOEM)
American Osteopathic College of Occupational & Preventive Medicine (AOCOPM)
Europe
European Society for Environmental and Occupational Medicine (EOM)
Australasia
ANZSOM Australia https://www.anzsom.org.au/
ANZSOM New Zealand https://anzsom.org.nz/
See also
American Board of Preventive Medicine
American Osteopathic Board of Preventive Medicine
Industrial and organizational psychology
National Occupational Research Agenda
Occupational disease
Occupational Health and Safety
Occupational health nursing
Occupational health psychology
Occupational Health Science (journal)
Occupational hygiene
Occupational Medicine (journal)
Trauma und Berufskrankheit
Notes
References
Occupational safety and health
Occupational diseases
Medical specialties
Occupational medicine | 0.771382 | 0.989059 | 0.762942 |
Sclerosis (medicine) | Sclerosis is the stiffening of a tissue or anatomical feature, usually caused by a replacement of the normal organ-specific tissue with connective tissue. The structure may be said to have undergone sclerotic changes or display sclerotic lesions, which refers to the process of sclerosis.
Common medical conditions whose pathology involves sclerosis include:
Amyotrophic lateral sclerosis—also known as Lou Gehrig's disease or motor neurone disease—a progressive, incurable, usually fatal disease of motor neurons.
Atherosclerosis, a deposit of fatty materials, such as cholesterol, in the arteries which causes hardening.
Focal segmental glomerulosclerosis is a disease that attacks the kidney's filtering system (glomeruli) causing serious scarring and thus a cause of nephrotic syndrome in children and adolescents, as well as an important cause of kidney failure in adults.
Hippocampal sclerosis, a brain damage often seen in individuals with temporal lobe epilepsy.
Lichen sclerosus, an inflammatory skin disease that most often affects the vulva and the penis.
Multiple sclerosis, or focal sclerosis, is a central nervous system disease which affects coordination.
Osteosclerosis, a condition where the bone density is significantly increased, resulting in decreased lucency on radiographs.
Otosclerosis, a disease of the ears.
Primary lateral sclerosis, progressive muscle weakness in the voluntary muscles.
Primary sclerosing cholangitis, a hardening of the bile duct by scarring and repeated inflammation.
Systemic sclerosis (progressive systemic scleroderma), a rare, chronic disease which affects the skin, and in some cases also blood vessels and internal organs.
Tuberous sclerosis, a rare genetic disease which affects multiple systems.
References
External links
Medical terminology | 0.765905 | 0.996128 | 0.762939 |
Torso | The torso or trunk is an anatomical term for the central part, or the core, of the body of many animals (including humans), from which the head, neck, limbs, tail and other appendages extend. The tetrapod torso — including that of a human — is usually divided into the thoracic segment (also known as the upper torso, where the forelimbs extend), the abdominal segment (also known as the "mid-section" or "midriff"), and the pelvic and perineal segments (sometimes known together with the abdomen as the lower torso, where the hindlimbs extend).
Anatomy
Major organs
In humans, most critical organs, with the notable exception of the brain, are housed within the torso. In the upper chest, the heart and lungs are protected by the rib cage, and the abdomen contains most of the organs responsible for digestion: the stomach, which breaks down partially digested food via gastric acid; the liver, which produces the bile necessary for digestion; the large and small intestines, which extract nutrients from food; the anus, from which fecal wastes are egested; the rectum, which stores feces; the gallbladder, which stores and concentrates bile; the kidneys, which produce urine; the ureters, which pass it to the bladder for storage; and the urethra, which excretes urine and, in males, also carries semen. Finally, the pelvic region houses both the male and female reproductive organs.
Major muscle groups
The torso also harbours many of the main groups of muscles in the tetrapod body, including the pectoral, abdominal, lateral and epaxial muscles.
Nerve supply
The organs, muscles, and other contents of the torso are supplied by nerves, which mainly originate as nerve roots from the thoracic and lumbar parts of the spinal cord. Some organs also receive a nerve supply from the vagus nerve. The sensation to the skin is provided by the lateral and dorsal cutaneous branches.
See also
Belly cast
Waist
Belvedere Torso
References | 0.764355 | 0.998132 | 0.762927 |
Alcohol intoxication | Alcohol intoxication, also known in overdose as alcohol poisoning, commonly described as drunkenness or inebriation, is the behavior and physical effects caused by a recent consumption of alcohol. In addition to the toxicity of ethanol, the main psychoactive component of alcoholic beverages, other physiological symptoms may arise from the activity of acetaldehyde, a metabolite of alcohol. These effects may not arise until hours after ingestion and may contribute to the condition colloquially known as a hangover. The term intoxication is commonly used when a large amount of alcohol is consumed and is accompanied by physical symptoms and deleterious health effects.
Symptoms of intoxication at lower doses may include mild sedation and poor coordination. At higher doses, there may be slurred speech, trouble walking, and vomiting. Extreme doses may result in a respiratory depression, coma, or death. Complications may include seizures, aspiration pneumonia, low blood sugar, and injuries or self-harm such as suicide. Alcohol intoxication can lead to alcohol-related crime with perpetrators more likely to be intoxicated than victims.
Alcohol intoxication typically begins after two or more alcoholic drinks. Alcohol has the potential for abuse. Risk factors include a social situation where heavy drinking is common and a person having an impulsive personality. Diagnosis is usually based on the history of events and physical examination. Verification of events by witnesses may be useful. Legally, alcohol intoxication is often defined as a blood alcohol concentration (BAC) of greater than 5.4–17.4 mmol/L (25–80 mg/dL or 0.025–0.080%). This can be measured by blood or breath testing. Alcohol is broken down in the human body at a rate of about 3.3 mmol/L (15 mg/dL) per hour, depending on an individual's metabolic rate (metabolism). The DSM-5 defines alcohol intoxication as at least one of the following symptoms that developed during or close after alcohol ingestion: slurred speech, incoordination, unsteady walking/movement, nystagmus (uncontrolled eye movement), attention or memory impairment, or near unconsciousness or coma.
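A rough worked example of the elimination rate quoted above: if blood alcohol concentration falls at a roughly constant (zero-order) rate of about 15 mg/dL per hour, the time needed to reach a lower target level is simply the difference divided by the rate. The sketch below is illustrative only; real elimination rates vary between individuals, and the function name and example figures are assumptions introduced here.

<syntaxhighlight lang="python">
def hours_to_reach(bac_mg_dl: float, target_mg_dl: float,
                   elimination_rate_mg_dl_per_hour: float = 15.0) -> float:
    """Hours for blood alcohol concentration to fall to a target level,
    assuming a constant (zero-order) elimination rate."""
    if bac_mg_dl <= target_mg_dl:
        return 0.0
    return (bac_mg_dl - target_mg_dl) / elimination_rate_mg_dl_per_hour

# Illustrative only: falling from 120 mg/dL to the 80 mg/dL threshold
# mentioned above takes roughly (120 - 80) / 15, i.e. about 2.7 hours.
print(round(hours_to_reach(120, 80), 1))
</syntaxhighlight>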
Management of alcohol intoxication involves supportive care. Typically this includes putting the person in the recovery position, keeping the person warm, and making sure breathing is sufficient. Gastric lavage and activated charcoal have not been found to be useful. Repeated assessments may be required to rule out other potential causes of a person's symptoms.
Acute intoxication has been documented throughout history, and alcohol remains one of the world's most widespread recreational drugs. Some religions, such as Islam, consider alcohol intoxication to be a sin. Others, such as Christianity, use it in rituals.
Symptoms
Vomiting
Slow breathing (fewer than eight breaths per minute)
Seizures
Blue, grey or pale skin
Hypothermia (low body temperature)
Feeling lethargic (trouble staying conscious)
Alcohol intoxication leads to negative health effects due to the recent drinking of a large amount of ethanol (alcohol). When severe it may become a medical emergency. Some effects of alcohol intoxication, such as euphoria and lowered social inhibition, are central to alcohol's desirability.
As drinking increases, people become sleepy or fall into a stupor. At very high blood alcohol concentrations, for example above 0.3%, the respiratory system becomes depressed and the person may stop breathing. Comatose patients may aspirate their vomit (resulting in vomitus in the lungs, which may cause "drowning" and later pneumonia if survived). CNS depression and impaired motor coordination along with poor judgment increase the likelihood of accidental injury occurring. It is estimated that about one-third of alcohol-related deaths are due to accidents and another 14% are from intentional injury.
In addition to respiratory failure and accidents caused by its effects on the central nervous system, alcohol causes significant metabolic derangements. Hypoglycaemia occurs due to ethanol's inhibition of gluconeogenesis, especially in children, and may cause lactic acidosis, ketoacidosis, and acute kidney injury. Metabolic acidosis is compounded by respiratory failure. Patients may also present with hypothermia.
Pathophysiology
Alcohol is metabolized by a normal liver at the rate of about 8 grams of pure ethanol per hour; 8 grams (about 10 ml) of pure ethanol is one British standard unit. An "abnormal" liver with conditions such as hepatitis, cirrhosis, gall bladder disease, and cancer is likely to result in a slower rate of metabolism.
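To make those figures concrete, the number of hours a normal liver needs to clear a drink can be approximated by converting the drink to grams of ethanol and dividing by the metabolic rate. The sketch below is a back-of-the-envelope illustration only; the constant names and the example of a 3-unit drink are assumptions, and individual metabolism varies.

<syntaxhighlight lang="python">
GRAMS_PER_UK_UNIT = 8.0        # one British standard unit, as noted above
METABOLISM_G_PER_HOUR = 8.0    # approximate rate for a normal liver

def hours_to_clear(units: float) -> float:
    """Approximate hours a normal liver needs to metabolize a drink
    expressed in British standard units (zero-order approximation)."""
    return units * GRAMS_PER_UK_UNIT / METABOLISM_G_PER_HOUR

# Illustrative: a drink containing 3 units (24 g of ethanol) takes a
# normal liver roughly 3 hours to metabolize at about 8 g per hour.
print(hours_to_clear(3))
</syntaxhighlight>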
Diagnosis
Alcohol intoxication is described as a mental and behavioural disorder by the International Classification of Diseases (ICD-10). Definitive diagnosis relies on a blood test for alcohol, usually performed as part of a toxicology screen. Law enforcement officers in the United States and other countries often use breathalyzer units and field sobriety tests as more convenient and rapid alternatives to blood tests. There are also various models of breathalyzer units that are available for consumer use. Because these may have varying reliability and may produce different results than the tests used for law-enforcement purposes, the results from such devices should be conservatively interpreted.
Many informal intoxication tests exist, which, in general, are unreliable and not recommended as deterrents to excessive intoxication or as indicators of the safety of activities such as motor vehicle driving, heavy equipment operation, machine tool use, etc.
For determining whether someone is intoxicated by alcohol by some means other than a blood-alcohol test, it is necessary to rule out other conditions such as hypoglycemia, stroke, usage of other intoxicants, mental health issues, and so on. It is best if their behavior has been observed while the subject is sober to establish a baseline. Several well-known criteria can be used to establish a probable diagnosis. For a physician in the acute-treatment setting, acute alcohol intoxication can mimic other acute neurological disorders or is frequently combined with other recreational drugs that complicate diagnosis and treatment.
Management
Acute alcohol poisoning is a medical emergency due to the risk of death from respiratory depression or aspiration of vomit if vomiting occurs while the person is unresponsive. Emergency treatment strives to stabilize and maintain an open airway and sufficient breathing while waiting for the alcohol to metabolize. This can be done by removal of any vomit or, if the person is unconscious or has impaired gag reflex, intubation of the trachea.
Other measures may include
Administer the vitamin thiamine to prevent Wernicke–Korsakoff syndrome, which can cause a seizure (more usually a treatment for chronic alcoholism, but in the acute context usually co-administered to ensure maximal benefit).
Hemodialysis if the blood concentration is very high at >130 mmol/L (>600 mg/dL)
Provide oxygen therapy as needed via nasal cannula or non-rebreather mask.
Administration of intravenous fluids in cases involving hypoglycemia and electrolyte imbalance.
While the medication metadoxine may speed the breakdown of alcohol, use in alcohol intoxication requires further study as of 2017. It is approved in a number of countries in Europe, as well as India and Brazil.
Additional medication may be indicated for treatment of nausea, tremor, and anxiety.
Clinical findings
Hospital admissions
Alcohol intoxication was found to be prevalent in clinical populations within the United States involving people treated for trauma and among people aged 18–24 years (in a study covering the years 1999–2004). In the United States during the years 2010–2012, acute intoxication was found to be the direct cause of an average of 2,221 deaths among those aged 15 years or older. The same mortality route is thought to cause indirectly more than 30,000 deaths per year.
Prognosis
A normal liver detoxifies the blood of alcohol over a period of time that depends on the initial level and the patient's overall physical condition. An abnormal liver will take longer but still succeeds, provided the alcohol does not cause liver failure.
People having drunk heavily for several days or weeks may have withdrawal symptoms after the acute intoxication has subsided.
A person consuming a dangerous amount of alcohol persistently can develop memory blackouts and idiosyncratic intoxication or pathological drunkenness symptoms. Long-term persistent consumption of excessive amounts of alcohol can cause liver damage and have other deleterious health effects.
Society and culture
Alcohol intoxication is a risk factor in some cases of catastrophic injury, in particular for unsupervised recreational activity. A study in the province of Ontario based on epidemiological data from 1986, 1989, 1992, and 1995 states that 79.2% of the 2,154 catastrophic injuries recorded for the study were preventable, of which 346 (17%) involved alcohol consumption. The activities most commonly associated with alcohol-related catastrophic injury were snowmobiling (124), fishing (41), diving (40), boating (31) and canoeing (7), swimming (31), riding an all-terrain vehicle (24), and cycling (23). These events are often associated with unsupervised young males, often inexperienced in the activity, and may result in drowning. Alcohol use is also associated with unsafe sex.
Legal issues
Laws on drunkenness vary. In the United States, it is a criminal offense for a person to be drunk while driving a motorized vehicle, except in Wisconsin, where it is only a fine for the first offense. It is also a criminal offense to fly an aircraft or (in some American states) to assemble or operate an amusement park ride while drunk. Similar laws also exist in the United Kingdom and most other countries.
In some jurisdictions, it is also an offense to serve alcohol to an already-intoxicated person, and, often, alcohol can only be sold by persons qualified to serve responsibly through alcohol server training.
The blood alcohol content (BAC) for legal operation of a vehicle is typically measured as a percentage of a unit volume of blood. This percentage ranges from 0.00% in Romania and the United Arab Emirates; to 0.05% in Australia, South Africa, Germany, Scotland, and New Zealand (0.00% for underage individuals); to 0.08% in England and Wales, the United States and Canada.
The United States Federal Aviation Administration prohibits crew members from performing their duties within eight hours of consuming an alcoholic beverage, while under the influence of alcohol, or with a BAC greater than 0.04%.
In the United States, the United Kingdom, and Australia, public intoxication is a crime (also known as "being drunk and disorderly" or "being drunk and incapable").
In some countries, there are special facilities, sometimes known as "drunk tanks", for the temporary detention of persons found to be drunk.
Religious views
Christianity
Some religious groups permit the consumption of alcohol; some permit consumption but prohibit intoxication; others prohibit any amount of alcohol consumption altogether. Many denominations of Christianity, such as Catholicism, Orthodoxy and Lutheranism, use wine as a part of the Eucharist and permit its consumption, but consider it sinful to become intoxicated.
Romans 13:13–14, 1 Corinthians 6:9–11, Galatians 5:19–21 and Ephesians 5:18 are among a number of other Bible passages that speak against intoxication.
Some Protestant Christian denominations prohibit the consumption of alcohol based upon biblical passages that condemn drunkenness, but others allow moderate consumption.
In the Church of Jesus Christ of Latter-day Saints, alcohol consumption is forbidden, and teetotalism has become a distinguishing feature of its members. Jehovah's Witnesses allow moderate alcohol consumption among its members.
Islam
In the Quran, there is a prohibition on the consumption of grape-based alcoholic beverages, and intoxication is considered an abomination in the hadith of Muhammad. The schools of thought of Islamic jurisprudence have interpreted this as a strict prohibition of the consumption of all types of alcohol and declared it to be haram, although other uses may be permitted.
Buddhism
In Buddhism, in general, the consumption of intoxicants is discouraged for both monastics and lay followers. Many Buddhists observe a basic code of ethics known as the five precepts, of which the fifth precept is an undertaking to refrain from the consumption of intoxicating substances (except for medical reasons). In the bodhisattva vows of the Brahmajala Sutra, observed by Mahayana Buddhist communities, distribution of intoxicants is likewise discouraged, as well as consumption.
Hinduism
In the Gaudiya Vaishnavism branch of Hinduism, one of the four regulative principles forbids the taking of intoxicants, including alcohol.
Judaism
In the Bible, the Book of Proverbs contains several chapters related to the negative effects of drunkenness and warns to stay away from intoxicating beverages. The Book of Genesis refers to the use of wine by Lot's daughters to rape him. The story of Samson in the Book of Judges tells of a man from the Israelite tribe of Dan who, as a Nazirite, is prohibited from cutting his hair and drinking wine. Proverbs 31:4 warns against kings and other rulers drinking wine and similar alcoholic beverages, while Proverbs 31:6–7 promotes giving such beverages to the perishing and wine to those whose lives are bitter, as a coping mechanism against the likes of poverty and other troubles.
In Judaism, in accordance with the biblical stance against drinking, drinking wine is restricted for priests. The biblical command to sanctify the Sabbath and other holidays has been interpreted as having three ceremonial meals with wine or grape juice, known as Kiddush. A number of Jewish marriage ceremonies end with the bride and groom drinking a shared cup of wine after reciting seven blessings; this occurs after a fasting day in some Ashkenazi traditions. It has been customary and in many cases even mandated to drink moderately so as to stay sober, and only after the prayers are over.
During the Seder on Passover, there is an obligation to drink four ceremonial cups of wine while reciting the Haggadah. This practice has been suggested as the source of the wine-drinking ritual at communion in some Christian groups. During Purim, there is an obligation to become intoxicated; however, as with many other decrees, this has been avoided in many communities by allowing sleep during the day as a replacement.
During the U.S. Prohibition era in the 1920s, a rabbi from the Reform Judaism movement proposed using grape juice for the ritual instead of wine. Although refuted at first, the practice became widely accepted by orthodox Jews as well.
Other animals
In the film Animals Are Beautiful People, an entire section was dedicated to showing many different animals including monkeys, elephants, hogs, giraffes, and ostriches, eating over-ripe marula tree fruit causing them to sway and lose their footing in a manner similar to human drunkenness. Birds may become intoxicated with fermented berries and some die colliding with hard objects when flying under the influence.
In elephant warfare, practiced by the Greeks during the Maccabean revolt and by Hannibal during the Punic wars, it has been recorded that the elephants would be given wine before the attack, and only then would they charge forward after being agitated by their driver.
It is a regular practice to give small amounts of beer to race horses in Ireland. Ruminant farm animals have natural fermentation occurring in their stomach, and adding alcoholic beverages in small amounts to their drink will generally do them no harm, and will not cause them to become drunk.
Alcoholic beverages are extremely harmful to dogs, and often for reasons of additives such as xylitol, an artificial sweetener in some mixers. Dogs can absorb ethyl alcohol in dangerous amounts through their skin as well as through drinking the liquid or consuming it in foods. Even fermenting bread dough can be dangerous to dogs. In 1999, one of the royal footmen for Britain's Queen Elizabeth II was demoted from Buckingham Palace due to his "party trick" of spiking the meals and drinks of the Queen's pet corgi dogs with alcohol which in turn would lead the dogs to run around drunk.
See also
A Night of Serious Drinking
Alcohol and sex
Alcohol enema
Alcohol flush reaction
Disulfiram-alcohol reaction
Driving under the influence
In vino veritas
Long-term effects of alcohol consumption
Low alcoholic drinks
Short-term effects of alcohol consumption
References
External links
Alcohol overdose: NIAAA
Alcohol poisoning: NHS Choices
Alcohol abuse
Drinking culture
Substance intoxication
Escherichia | Escherichia is a genus of Gram-negative, non-spore-forming, facultatively anaerobic, rod-shaped bacteria from the family Enterobacteriaceae. In those species which are inhabitants of the gastrointestinal tracts of warm-blooded animals, Escherichia species provide a portion of the microbially derived vitamin K for their host. A number of the species of Escherichia are pathogenic. The genus is named after Theodor Escherich, the discoverer of Escherichia coli. Escherichia are facultative aerobes, with both aerobic and anaerobic growth, and an optimum temperature of 37 °C. Escherichia are usually motile by flagella, produce gas from fermentable carbohydrates, and do not decarboxylate lysine or hydrolyze arginine. Species include E. albertii, E. fergusonii, E. hermannii, E. ruysiae, E. marmotae and most notably, the model organism and clinically relevant E. coli. Formerly, Shimwellia blattae and Pseudescherichia vulneris were also classified in this genus.
Pathogenesis
While many Escherichia are commensal members of the gut microbiota, certain strains of some species, most notably the pathogenic serotypes of E. coli, are human pathogens, and are the most common cause of urinary tract infections, significant sources of gastrointestinal disease, ranging from simple diarrhea to dysentery-like conditions, as well as a wide range of other pathogenic states classifiable in general as colonic escherichiosis. While E. coli is responsible for the vast majority of Escherichia-related pathogenesis, other members of the genus have also been implicated in human disease. Escherichia are associated with the imbalance of microbiota of the lower reproductive tract of women. These species are associated with inflammation.
See also
E. coli O157:H7
List of bacterial genera named after personal names
References
External links
Escherichia genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID
Gut flora bacteria
Gram-negative bacteria
Pathogenic bacteria
Bacteria genera | 0.771552 | 0.98878 | 0.762895 |
Meningism | Meningism is a set of symptoms similar to those of meningitis but not caused by meningitis. Whereas meningitis is inflammation of the meninges (membranes that cover the central nervous system), meningism is caused by nonmeningitic irritation of the meninges, usually associated with acute febrile illness, especially in children and adolescents. Meningism involves the triad (3-symptom syndrome) of nuchal rigidity (neck stiffness), photophobia (intolerance of bright light) and headache. It therefore requires differentiating from other CNS problems with similar symptoms, including meningitis and some types of intracranial hemorrhage. Related clinical signs include Kernig's sign and three signs all named Brudzinski's sign.
Although nosologic coding systems, such as ICD-10 and MeSH, define meningism/meningismus as meningitis-like but in fact not meningitis, many physicians use the term meningism in a loose sense clinically to refer to any meningitis-like set of symptoms before the cause is definitively known. In this sense, the word implies "suspected meningitis". The words meningeal symptoms can be used instead to avoid ambiguity, thus reserving the term meningism for its strict sense.
Signs and symptoms
The main clinical signs that indicate meningism are nuchal rigidity, Kernig's sign and Brudzinski's signs. None of the signs are particularly sensitive; in adults with meningitis, nuchal rigidity was present in 30% and Kernig's or Brudzinski's sign only in 5%.
Nuchal rigidity
Nuchal rigidity is the inability to flex the neck forward due to rigidity of the neck muscles; if flexion of the neck is painful but full range of motion is present, nuchal rigidity is absent.
Kernig's sign
Kernig's sign (after Waldemar Kernig (1840–1917), a Russian neurologist) is positive when the thigh is flexed at the hip and knee at 90 degree angles, and subsequent extension in the knee is painful (leading to resistance). This may indicate subarachnoid hemorrhage or meningitis. Patients may also show opisthotonus—spasm of the whole body that leads to legs and head being bent back and body bowed forward.
Brudzinski's signs
Jozef Brudzinski (1874–1917), a Polish pediatrician, is credited with several signs in meningitis. The most commonly used sign (Brudzinski's neck sign) is positive when the forced flexion of the neck elicits a reflex flexion of the hips, with the patient lying supine.
Other signs attributed to Brudzinski:
The symphyseal sign, in which pressure on the pubic symphysis leads to abduction of the leg and reflexive hip and knee flexion.
The cheek sign, in which pressure on the cheek below the zygoma leads to rising and flexion in the forearm.
Brudzinski's reflex, in which passive flexion of one knee into the abdomen leads to involuntary flexion in the opposite leg, and stretching of a limb that was flexed leads to contralateral extension.
See also
Meningitis
Meningoencephalitis
References
External links
FPnotebook page on meningeal signs
Image of Kernig's sign
Symptoms and signs: Nervous system | 0.771042 | 0.989366 | 0.762843 |
Achlorhydria | Achlorhydria and hypochlorhydria refer to states where the production of hydrochloric acid in gastric secretions of the stomach and other digestive organs is absent or low, respectively. It is associated with various other medical problems.
Signs and symptoms
Irrespective of the cause, achlorhydria can result in the known complications of bacterial overgrowth and intestinal metaplasia, and symptoms are often consistent with those diseases:
gastroesophageal reflux disease
abdominal discomfort
early satiety
weight loss
diarrhea
constipation
abdominal bloating
anemia
stomach infection
malabsorption of food
carcinoma of stomach
Since acidic pH facilitates the absorption of iron, achlorhydric patients often develop iron deficiency anemia. The acidic environment of the stomach also converts pepsinogen into pepsin, which is important in breaking down complex proteins into smaller components, such as simple peptides and amino acids, that are later absorbed by the gastrointestinal tract.
Bacterial overgrowth and B12 deficiency (pernicious anemia) can cause micronutrient deficiencies that result in various clinical neurological manifestations, including visual changes, paresthesias, ataxia, limb weakness, gait disturbance, memory defects, hallucinations and personality and mood changes.
Risk of particular infections, such as Vibrio vulnificus (commonly from seafood) is increased. Even without bacterial overgrowth, low stomach acid (high pH) can lead to nutritional deficiencies through decreased absorption of basic electrolytes (magnesium, zinc, etc.) and vitamins (including vitamin C, vitamin K, and the B complex of vitamins). Such deficiencies may be involved in the development of a wide range of pathologies, from fairly benign neuromuscular issues to life-threatening diseases.
Causes
The slowing of the body's basal metabolic rate associated with hypothyroidism.
Pernicious anemia where there is antibody production against parietal cells which normally produce gastric acid.
The use of antacids or drugs that decrease gastric acid production (such as H2-receptor antagonists) or transport (such as proton pump inhibitors).
A symptom of rare diseases such as mucolipidosis (type IV).
A symptom of Helicobacter pylori infection which neutralizes and decreases secretion of gastric acid to aid its survival in the stomach.
A symptom of atrophic gastritis or of stomach cancer.
Radiation therapy involving the stomach.
Gastric bypass procedures such as a duodenal switch and RNY, where the largest acid producing parts of the stomach are either removed or blinded.
VIPomas (vasoactive intestinal peptides) and somatostatinomas are both islet cell tumors of the pancreas.
Pellagra, caused by niacin deficiency.
Chloride, sodium, potassium, zinc and/or iodine deficiency, as these elements are needed to produce adequate levels of stomach acid (HCl).
Sjögren's syndrome, an autoimmune disorder that destroys many of the body's moisture-producing glands.
Ménétrier's disease, characterized by hyperplasia of mucous cells in the stomach also causing excess protein loss, leading to hypoalbuminemia (presents with abdominal pain and edema).
Risk factors
Prevalence
Achlorhydria is present in about 2.5% of the population under 60 years old and about 5% of the population over 60 years old. The incidence increases to around 12% in populations over 80 years old. The absence of hydrochloric acid becomes more common with advancing age, and a lack of hydrochloric acid produced by the stomach is one of the most common age-related causes of an impaired digestive system.
Among men and women, 27% experience a varying degree of achlorhydria. US researchers found that over 30% of women and men over the age of 60 have little to no acid secretion in the stomach. Additionally, 40% of postmenopausal women have been shown to have no basal gastric acid secretion in the stomach, with 39.8% occurring in females 80 to 89 years old.
Comorbidities
Autoimmune disorders are also linked to advancing age, specifically autoimmune gastritis, in which the body produces unwanted antibodies that cause inflammation of the stomach. Autoimmune disorders are also a cause of small bowel bacterial overgrowth and of vitamin B-12 deficiency, both of which have been identified as factors affecting acid secretion in the stomach. Autoimmune conditions can often be managed with various treatments; however, little is known about how or whether these treatments affect achlorhydria.
Thyroid hormones can contribute to changes in the level of hydrochloric acid in the stomach. Hypothyroidism is associated with a greater risk of developing achlorhydria.
Long-term usage of medications or drugs
Extended use of antacids, antibiotics, and other drugs can contribute to hypochlorhydria. Proton pump inhibitors (PPIs) are very commonly used to temporarily relieve symptoms of conditions such as gastroesophageal reflux and peptic ulcers. Risk increases as these drugs are taken over a longer time period, often many years, typically beyond the recommended therapeutic usage.
Stress can also be linked to symptoms associated with achlorhydria including constant belching, constipation, and abdominal pain.
Diagnosis
For practical purposes, gastric pH and endoscopy should be done in someone with suspected achlorhydria. Older testing methods using fluid aspiration through a nasogastric tube can be done, but these procedures can cause significant discomfort and are less efficient ways to obtain a diagnosis.
A complete 24-hour profile of gastric acid secretion is best obtained during an esophageal pH monitoring study.
Achlorhydria may also be documented by measurements of extremely low levels of pepsinogen A (PgA) in blood serum. The diagnosis may be supported by high serum gastrin levels.
The "Heidelberg test" is an alternative way to measure stomach acid and diagnose hypochlorhydria/achlorhydria.
A laboratory check can exclude deficiencies of iron, calcium, vitamin B-12, vitamin D, and thiamine, and can assess prothrombin time. Complete blood count with indices and peripheral smears can be examined to exclude anemia. Elevation of serum folate is suggestive of small bowel bacterial overgrowth, as bacterial folate can be absorbed into the circulation.
Once achlorhydria is confirmed, a hydrogen breath test can check for bacterial overgrowth.
Treatment
Treatment focuses on addressing the underlying cause of symptoms.
Treatment of gastritis that leads to pernicious anemia consists of parenteral vitamin B-12 injection. Associated immune-mediated conditions (e.g., insulin-dependent diabetes mellitus, autoimmune thyroiditis) should also be treated. However, treatment of these disorders has no known effect in the treatment of achlorhydria.
Achlorhydria associated with Helicobacter pylori infection may respond to H. pylori eradication therapy, although resumption of gastric acid secretion may only be partial and it may not always reverse the condition completely.
Antimicrobial agents, including metronidazole, amoxicillin/clavulanate potassium, ciprofloxacin, and rifaximin, can be used to treat bacterial overgrowth.
Achlorhydria resulting from long-term proton-pump inhibitor (PPI) use may be treated by dose reduction or withdrawal of the PPI.
Prognosis
Little is known on the prognosis of achlorhydria, although there have been reports of an increased risk of gastric cancer.
A 2007 review article noted that non-Helicobacter bacterial species can be cultured from achlorhydric (pH > 4.0) stomachs, whereas normal stomach pH only permits the growth of Helicobacter species. Bacterial overgrowth may cause false-positive H. pylori test results due to the change in pH from urease activity.
Small bowel bacterial overgrowth is a chronic condition. Retreatment may be necessary once every 1–6 months. Prudent use of antibacterials now calls for an antimicrobial stewardship policy to manage antibiotic resistance.
See also
Atrophic gastritis
Fundic gland polyposis
Hyperchlorhydria
Isopropamide
References
External links
Stomach disorders | 0.770278 | 0.990323 | 0.762824 |
Blood type distribution by country | This list concerns blood type distribution between countries and regions. Blood type (also called a blood group) is a classification of blood, based on the presence and absence of antibodies and inherited antigenic substances on the surface of red blood cells (RBCs). These antigens may be proteins, carbohydrates, glycoproteins, or glycolipids, depending on the blood group system.
ABO and Rh distribution by country
{| class="wikitable sortable" style="text-align:right;"
|+ ABO and Rh blood type distribution by country & dependency (population averages)
|-
! style="text-align:left;" | Country/Dependency
! Population
! O+ !! A+ !! B+ !! AB+ !! O− !! A− !! B− !! AB−
|-
! style="text-align:left" |
| 3,074,579 || style="color:#000000;background:#ffffcc"|34.1% || style="color:#000000;background:#ffffcc"|31.2% || style="color:#000000;background:#ccffff"|14.5% || style="color:#000000;background:#ddddff"|5.2% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|5.5% || 2.6% || 0.9%
|-
! style="text-align:left" |
| 43,576,691 || style="color:#000000;background:#ffcccc"|40.0% || style="color:#000000;background:#ffffcc"|30.0% || style="color:#000000;background:#ccffff"|15.0% || 4.25% || style="color:#000000;background:#ddddff"|6.6%|| 2.3%||1.1%|| 0.75%
|-
! style="text-align:left" |
| 45,479,118 || style="color:#000000;background:#ffccff"|50.34% || style="color:#000000;background:#ccffcc" |31.09% || style="color:#000000;background:#ddddff" |8.20% || 2.16% || 4.29%|| 2.98%||0.74%|| 0.20%
|-
! style="text-align:left" |
| 3,021,324 || style="color:#000000;background:#ccffcc"|29.0% || style="color:#000000;background:#ffcccc"|46.3% || style="color:#000000;background:#ccffff"|12.0% || style="color:#000000;background:#ddddff"|5.6% || 2.0% || 3.7% || 1.0% || 0.4%
|-
! style="text-align:left" |
| 25,466,459 || style="color:#000000;background:#ffcccc"|38.0% || style="color:#000000;background:#ffffcc"|32.0% || style="color:#000000;background:#ccffff"|12.0% || 4.0% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|6.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 8,859,449 || style="color:#000000;background:#ffffcc"|30.0% || style="color:#000000;background:#ffffcc"|37.0% || style="color:#000000;background:#ccffff"|12.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|7.0% || 2.0% || 0.45%
|-
! style="text-align:left" |
| 10,205,810 || style="color:#000000;background:#ccffcc"|29.8% || style="color:#000000;background:#ffffcc"|30.0% || style="color:#000000;background:#ccffcc"|21.1% || style="color:#000000;background:#ddddff"|9.0% || 3.3% || 3.4% || 2.4% || 1.0%
|-
! style="text-align:left" |
| 1,505,003 || style="color:#000000;background:#ffccff"|48.48% || style="color:#000000;background:#ccffff"|15.35% || style="color:#000000;background:#ccffcc"|22.61% || 3.67% || 3.27% || 1.33% || 1.04% || 0.25%
|-
! style="text-align:left" |
| 164,098,818 || style="color:#000000;background:#ccffcc"|29.21% || style="color:#000000;background:#ccffcc" |26.3% || style="color:#000000;background:#ffffcc"|33.12% || style="color:#000000;background:#ddddff"|9.59% ||0.53% ||0.48% ||0.6% ||0.17%
|-
! style="text-align:left" |
| 9,441,842 || style="color:#000000;background:#ffffcc"|32.3% || style="color:#000000;background:#ffffcc"|30.6% || style="color:#000000;background:#ccffff"|15.3% || style="color:#000000;background:#ddddff"|6.8% || style="color:#000000;background:#ddddff"|5.7% || style="color:#000000;background:#ddddff"|5.4% || 2.7% || 1.2%
|-
! style="text-align:left" |
| 11,720,716 || style="color:#000000;background:#ffcccc"|38.0% || style="color:#000000;background:#ffffcc"|34.0% || style="color:#000000;background:#ddddff"|8.5% || 4.0% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|6.0% || 1.5% || 1.0%
|-
! style="text-align:left" |
| 857,423 || style="color:#000000;background:#ffffcc"|38.15% || style="color:#000000;background:#ccffcc"|29.45% || style="color:#000000;background:#ccffcc"|23.86% || style="color:#000000;background:#ddddff"|8.41% || 0.06% || 0.04% || 0.04% || 0.01%
|-
! style="text-align:left" |
| 11,639,909 || style="color:#000000;background:#ffccff"|51.53% || style="color:#000000;background:#ccffcc"|20.45% || style="color:#000000;background:#ccffff"|10.11% || 1.15% || 4.39% || 2.73% || 0.54% || 0.1%
|-
! style="text-align:left" |
| 3,835,586 || style="color:#000000;background:#ffffcc"|31.0% || style="color:#000000;background:#ffffcc"|36.0% || style="color:#000000;background:#ccffff"|12.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|7.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 211,715,973 || style="color:#000000;background:#ffcccc"|36.0% || style="color:#000000;background:#ffffcc"|34.0% || style="color:#000000;background:#ddddff"|8.0% || 2.5% || style="color:#000000;background:#ddddff"|9.0% || style="color:#000000;background:#ddddff"|8.0% || 2.0% || 0.5%
|-
! style="text-align:left" |
| 6,966,899 || style="color:#000000;background:#ccffcc"|28.0% || style="color:#000000;background:#ffffcc"|37.4% || style="color:#000000;background:#ccffff"|12.8% || style="color:#000000;background:#ddddff"|6.8% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|6.6% || 2.2% || 1.2%
|-
! style="text-align:left" |
| 21,382,659 || style="color:#000000;background:#ffffcc"|39.94% || style="color:#000000;background:#ccffcc"|20.79% || style="color:#000000;background:#ccffcc"|26.34% || style="color:#000000;background:#ddddff"| 5.17% || 3.36% || 1.75% || 2.22% || 0.43%
|-
! style="text-align:left" |
| 16,926,984 || style="color:#000000;background:#ffcccc"|46.7% || style="color:#000000;background:#ccffcc"|27.2% || style="color:#000000;background:#ccffff"|18.5% || 4.9% || 1.3% || 0.8% || 0.5% || 0.1%
|-
! style="text-align:left" |
| 27,744,989 || style="color:#000000;background:#ffcccc"|46.83% || style="color:#000000;background:#ccffcc"|24.15% || style="color:#000000;background:#ccffcc"|21.06% || 4.29% || 1.79% || 0.92% || 0.8% || 0.16%
|-
! style="text-align:left" |
| 37,694,085 || style="color:#000000;background:#ffcccc"|39.0% || style="color:#000000;background:#ffffcc"|36.0% || style="color:#000000;background:#ddddff"|7.6% || 2.5% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|6.0% || 1.4% || 0.5%
|-
! style="text-align:left" |
| 18,186,770 || style="color:#000000;background:#ffccff"|55.01% || style="color:#000000;background:#ccffcc"|28.08% || style="color:#000000;background:#ddddff"|8.02% || 1.81% || 4.19% || 2.14% || 0.61% || 0.14%
|-
! style="text-align:left" |
| 49,084,841|| style="color:#000000;background:#ffccff"|61.3% || style="color:#000000;background:#ccffcc"|21.11% || style="color:#000000;background:#ddddff"|7.28% || 1.47% || style="color:#000000;background:#ddddff"|5.13%|| 2.7%|| 0.7%|| 0.31%
|-
! style="text-align:left" |
| 5,097,988 || style="color:#000000;background:#ffccff"|49.7% || style="color:#000000;background:#ccffcc"|28.5% || style="color:#000000;background:#ccffff"|12.4% || 3.0% || 3.4% || 1.9% || 0.9% || 0.2%
|-
! style="text-align:left" |
| 4,227,746 || style="color:#000000;background:#ccffcc"|29.0% || style="color:#000000;background:#ffffcc"|36.0% || style="color:#000000;background:#ccffff"|15.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|6.0% || 3.0% || 1.0%
|-
! style="text-align:left" |
| 11,059,062 || style="color:#000000;background:#ffccff"|45.8% || style="color:#000000;background:#ccffcc"|33.5% || style="color:#000000;background:#ccffff"|10.2% || 2.9% || 3.6% || 2.8% || 1.0% || 0.2%
|-
! style="text-align:left" |
| 1,266,676 || style="color:#000000;background:#ffffcc"|35.22% || style="color:#000000;background:#ffcccc"|40.35% || style="color:#000000;background:#ccffff"|11.11% || 4.72% || 3.85% || 3.48% || 0.87% || 0.40%
|-
! style="text-align:left" |
| 10,702,498 || style="color:#000000;background:#ccffcc"|27.0% || style="color:#000000;background:#ffffcc"|36.0% || style="color:#000000;background:#ccffff"|15.0% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|6.0% || 3.0% || 1.0%
|-
! style="text-align:left" |
| 101,780,263 || style="color:#000000;background:#ffccff"|59.5% || style="color:#000000;background:#ccffcc"|21.3% || style="color:#000000;background:#ccffff"|15.2% || 2.4% || 1.0% || 0.3% || 0.2% || 0.1%
|-
! style="text-align:left" |
| 5,869,410 || style="color:#000000;background:#ffffcc"|35.0% || style="color:#000000;background:#ffffcc"|37.0% || style="color:#000000;background:#ddddff"|8.0% || 4.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|7.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 10,499,707 || style="color:#000000;background:#ffcccc"|47.2% || style="color:#000000;background:#ccffcc"|26.4% || style="color:#000000;background:#ccffff"|16.9% || 2.1% || 3.7% || 2.1% || 1.4% || 0.2%
|-
! style="text-align:left" |
| 16,904,867 || style="color:#000000;background:#ffccff"|75.0% || style="color:#000000;background:#ccffff"|14.0% || style="color:#000000;background:#ddddff"|7.1% || 0.5% || 2.38% || 0.7% || 0.3% || 0.02%
|-
! style="text-align:left" |
| 104,124,440 || style="color:#000000;background:#ffffcc"|36.44% || style="color:#000000;background:#ffffcc"|34.94% || style="color:#000000;background:#ccffcc"|20.96% || style="color:#000000;background:#ddddff"|8.65% || n/a || n/a || n/a || n/a
|-
! style="text-align:left" |
| 6,481,102 || style="color:#000000;background:#ffccff"|62.0% || style="color:#000000;background:#ccffcc"|23.0% || style="color:#000000;background:#ccffff"|11.0% || 1.0% || 1.0% || 1.0% || 0.7% || 0.3%
|-
! style="text-align:left" |
| 1,228,624 || style="color:#000000;background:#ccffcc"|29.5% || style="color:#000000;background:#ffffcc"|30.8% || style="color:#000000;background:#ccffcc"|20.7% || style="color:#000000;background:#ddddff"|6.3%|| 4.3% || 4.5% || 3.0% || 0.9%
|-
! style="text-align:left" |
| 108,113,150 || style="color:#000000;background:#ffffcc"|39.0% || style="color:#000000;background:#ccffcc"|28.0% || style="color:#000000;background:#ccffcc"|21.0% || style="color:#000000;background:#ddddff"|5.0% || 3.0% || 2.0% || 1.0% || 1.0%
|-
! style="text-align:left" |
| 935,974 || style="color:#000000;background:#ffcccc"|43.0% || style="color:#000000;background:#ffffcc"|33.3% || style="color:#000000;background:#ccffff"|16.5% || 4.8% || 1.0%|| 0.7%|| 0.5%|| 0.2%
|-
! style="text-align:left" |
| 5,571,665 || style="color:#000000;background:#ccffcc"|28.0% || style="color:#000000;background:#ffffcc"|35.0% || style="color:#000000;background:#ccffff"|16.0% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|6.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 67,848,156 || style="color:#000000;background:#ffffcc"|36.5% || style="color:#000000;background:#ffffcc"|38.2% || style="color:#000000;background:#ddddff"|7.7% || 2.5%|| style="color:#000000;background:#ddddff"|6.5% || style="color:#000000;background:#ddddff"|6.8% || 1.4% || 0.4%
|-
! style="text-align:left" |
| 2,284,912 || style="color:#000000;background:#ffccff"|57.55% || style="color:#000000;background:#ccffcc"|20.52% || style="color:#000000;background:#ccffff"|17.19% || 2.54% || 1.35% || 0.48% || 0.41% || 0.06%
|-
! style="text-align:left" |
| 4,933,674 || style="color:#000000;background:#ffffcc"|34.8% || style="color:#000000;background:#ffffcc"|32.3% || style="color:#000000;background:#ccffff"|11.9% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|6.2% || style="color:#000000;background:#ddddff"|5.7% || 2.1% || 0.5%
|-
! style="text-align:left" |
| 80,159,662 || style="color:#000000;background:#ffffcc"|35.0% || style="color:#000000;background:#ffffcc"|37.0% || style="color:#000000;background:#ddddff"|9.0% || 4.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|6.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 29,340,248 || style="color:#000000;background:#ffccff"|53.8% || style="color:#000000;background:#ccffff"|17.6% || style="color:#000000;background:#ccffff"|18.3% || 2.8% || 4.5% || 1.3% || 1.3% || 0.2%
|-
! style="text-align:left" |
| 10,607,051 || style="color:#000000;background:#ffcccc"|37.8% || style="color:#000000;background:#ffffcc"|32.2% || style="color:#000000;background:#ccffff"|11.0% || 4.0% || style="color:#000000;background:#ddddff"|6.6% || style="color:#000000;background:#ddddff"|5.7% || 2.0% || 0.7%
|-
! style="text-align:left" |
| 12,527,440 || style="color:#000000;background:#ffcccc"|46.88% || style="color:#000000;background:#ccffcc"|21.64% || style="color:#000000;background:#ccffcc"|22.86% || 4.52% || 2.0% || 0.9% || 1.0% || 0.2%
|-
! style="text-align:left" |
| 9,235,340 || style="color:#000000;background:#ffccff"|57.5% || style="color:#000000;background:#ccffcc"|25.0% || style="color:#000000;background:#ddddff"|7.8% || 2.5% || 2.7% || 1.7% || 0.6% || 0.2%
|-
! style="text-align:left" |
| 7,249,907 || style="color:#000000;background:#ffcccc"|41.5% || style="color:#000000;background:#ccffcc"|26.13% || style="color:#000000;background:#ccffcc"|25.34% || style="color:#000000;background:#ddddff"|6.35% || 0.32% || 0.17% || 0.14% || 0.05%
|-
! style="text-align:left" |
| 9,771,827 || style="color:#000000;background:#ccffcc"|27.0% || style="color:#000000;background:#ffffcc"|33.0% || style="color:#000000;background:#ccffff"|16.0% || style="color:#000000;background:#ddddff"|8.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|7.0% || 3.0% || 1.0%
|-
! style="text-align:left" |
| 350,734 || style="color:#000000;background:#ffccff"|46.8% || style="color:#000000;background:#ccffcc"|27.2% || style="color:#000000;background:#ddddff"|9.0% || 2.0% || style="color:#000000;background:#ddddff"|8.2% || 4.8%|| 1.6%|| 0.4%
|-
! style="text-align:left" |
| 1,339,330,514 || style="color:#000000;background:#ffcccc"|32.53% || style="color:#000000;background:#ccffcc"|21.80% || style="color:#000000;background:#ffffcc"|32.10% || style="color:#000000;background:#ddddff"|7.70% || 2.03%|| 1.36%|| 2.00%|| 0.48%
|-
! style="text-align:left" |
| 267,026,366 || style="color:#000000;background:#ffffcc"|36.82% || style="color:#000000;background:#ccffcc"|25.87% || style="color:#000000;background:#ccffcc"|28.85% || style="color:#000000;background:#ddddff"|7.96% || 0.18% || 0.13% || 0.15% || 0.04%
|-
! style="text-align:left" |
| 84,923,314 || style="color:#000000;background:#ffcccc"|36.5% || style="color:#000000;background:#ccffcc"|27.0% || style="color:#000000;background:#ccffcc"|22.2% || style="color:#000000;background:#ddddff"|4.0% || style="color:#000000;background:#ddddff"|5.0% || 2.0% || 2.5% || 0.8%
|-
! style="text-align:left" |
| 38,872,655 || style="color:#000000;background:#ffffcc"|32.1% || style="color:#000000;background:#ccffcc"|25.0% || style="color:#000000;background:#ccffcc"|25.6% || style="color:#000000;background:#ddddff"|7.4% || 3.6% || 2.7% || 2.7% || 0.9%
|-
! style="text-align:left" |
| 5,176,569 || style="color:#000000;background:#ffccff"|47.0% || style="color:#000000;background:#ccffcc"|26.0% || style="color:#000000;background:#ddddff"|9.0% || 2.0% || style="color:#000000;background:#ddddff"|8.0% || style="color:#000000;background:#ddddff"|5.0% || 2.0%|| 1.0%
|-
! style="text-align:left" |
| 8,675,475 || style="color:#000000;background:#ffffcc"|32.0% || style="color:#000000;background:#ffffcc"|34.0% || style="color:#000000;background:#ccffff"|17.0% || style="color:#000000;background:#ddddff"|7.0% || 3.0% || 4.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 62,402,659 || style="color:#000000;background:#ffcccc"|39.0% || style="color:#000000;background:#ffffcc"|36.0% || style="color:#000000;background:#ddddff"|7.5% || 2.5% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|6.0% || 1.5% || 0.5%
|-
! style="text-align:left" |
| 28,088,455 || style="color:#000000;background:#ffcccc"|47.24% || style="color:#000000;background:#ccffcc"|20.19% || style="color:#000000;background:#ccffcc"|21.7% || 3.82% || 3.73% || 1.54% || 1.48% || 0.3%
|-
! style="text-align:left" |
| 2,808,570 || style="color:#000000;background:#ffccff"|51.1% || style="color:#000000;background:#ccffcc"|20.0% || style="color:#000000;background:#ccffcc"|20.0% || 1.9% || 3.5% || 2.0% || 1.0% || 0.5%
|-
! style="text-align: left" |
| 125,507,472 || style="color:#000000;background:#ccffcc"|29.9% || style="color:#000000;background:#ffffcc"|39.8% || style="color:#000000;background:#ccffff"|19.9% || style="color:#000000;background:#ddddff"|9.9% || 0.15% || 0.2% || 0.1% || 0.05%
|-
! style="text-align:left" |
| 10,909,567 || style="color:#000000;background:#ffffcc"|33.03% || style="color:#000000;background:#ffffcc"|32.86% || style="color:#000000;background:#ccffff"|16.56% || style="color:#000000;background:#ddddff"|6.28% || 4.4% || 3.97% || 2.06% || 0.04%
|-
! style="text-align:left" |
| 19,091,949 || style="color:#000000;background:#ffffcc"|30.7% || style="color:#000000;background:#ccffcc"|29.8% || style="color:#000000;background:#ccffcc"|24.2% || style="color:#000000;background:#ddddff"|8.3% || 2.3% || 2.2% || 1.8% || 0.4%
|-
! style="text-align:left" |
| 53,527,936 || style="color:#000000;background:#ffcccc"|45.6% || style="color:#000000;background:#ccffcc"|25.2% || style="color:#000000;background:#ccffcc"|21.28% || 4.2% || 1.8% || 1.0% || 0.9% || 0.02%
|-
! style="text-align:left" |
| 7,574,356 || style="color:#000000;background:#ffffcc"|37.52% || style="color:#000000;background:#ccffff"|19.73% || style="color:#000000;background:#ffffcc"|35.36% || style="color:#000000;background:#ddddff"|6.85% || 0.2% || 0.1% || 0.2% || 0.05%
|-
! style="text-align:left" |
| 1,881,232 || style="color:#000000;background:#ffffcc"|30.6% || style="color:#000000;background:#ffffcc"|31.0% || style="color:#000000;background:#ccffff"|17.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|5.4% || style="color:#000000;background:#ddddff"|6.0% || 3.0% || 1.0%
|-
! style="text-align:left" |
| 5,469,612 || style="color:#000000;background:#ffffcc"|38.4% || style="color:#000000;background:#ffffcc"|32.3% || style="color:#000000;background:#ddddff"|9.5% || 3.2% || style="color:#000000;background:#ddddff"|7.7% || style="color:#000000;background:#ddddff"|6.5% || 1.7% || 0.7%
|-
! style="text-align:left" |
| 6,890,535 || style="color:#000000;background:#ffccff"|42.64% || style="color:#000000;background:#ccffcc"|20.86% || style="color:#000000;background:#ccffff"|11.19% || 4.5% || style="color:#000000;background:#ddddff"|7.26% || 3.24% || 1.64% || 0.67%
|-
! style="text-align:left" |
| 39,137 || style="color:#000000;background:#ffffcc"|34.0% || style="color:#000000;background:#ffffcc"|37.0% || style="color:#000000;background:#ccffff"|10.0% || 4.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|6.5% || 1.8% || 0.7%
|-
! style="text-align:left" |
| 2,731,464 || style="color:#000000;background:#ffcccc"|36.0% || style="color:#000000;background:#ffffcc"|33.0% || style="color:#000000;background:#ccffff"|11.0% || 4.0% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|6.0% || 2.0% || 0.7%
|-
! style="text-align:left" |
| 628,381 || style="color:#000000;background:#ffffcc"|35.0% || style="color:#000000;background:#ffffcc"|37.0% || style="color:#000000;background:#ddddff"|9.0% || 4.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|6.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 614,458 || style="color:#000000;background:#ffcccc"|41.5% || style="color:#000000;background:#ccffcc"|26.1% || style="color:#000000;background:#ccffcc"|25.4% || style="color:#000000;background:#ddddff"|6.3% || 0.33% || 0.09% || 0.17% || 0.05%
|-
! style="text-align:left" |
| 32,652,083 || style="color:#000000;background:#ffffcc"|34.32% || style="color:#000000;background:#ffffcc"|30.35% || style="color:#000000;background:#ccffcc"|27.37% || style="color:#000000;background:#ddddff"|7.46% || 0.17% || 0.15% || 0.14% || 0.04%
|-
! style="text-align:left" |
| 457,267 || style="color:#000000;background:#ffffcc"|38.0% || style="color:#000000;background:#ffcccc"|41.0% || style="color:#000000;background:#ddddff"|7.0% || 3.0% || style="color:#000000;background:#ddddff"|5.0% || 4.5% || 1.0% || 0.5%
|-
! style="text-align:left" |
| 4,005,475 || style="color:#000000;background:#ffcccc"|46.3% || style="color:#000000;background:#ccffcc"|26.68% || style="color:#000000;background:#ccffff"|17.47% || 3.85% || 2.8% || 1.6% || 1.1% || 0.2%
|-
! style="text-align:left" |
| 1,379,365 || style="color:#000000;background:#ffffcc"|38.3% || style="color:#000000;background:#ccffcc"|26.0% || style="color:#000000;background:#ccffcc"|25.0% || style="color:#000000;background:#ddddff"|6.7% || 1.7% || 1.0% || 1.0% || 0.3%
|-
! style="text-align:left" |
| 128,649,565 || style="color:#000000;background:#ffccff"|59.09% || style="color:#000000;background:#ccffcc"|26.23% || style="color:#000000;background:#ddddff"|8.53% || 1.73% || 2.73% || 1.21% || 0.40% || 0.08%
|-
! style="text-align:left" |
| 3,364,496 || style="color:#000000;background:#ccffcc"|28.5% || style="color:#000000;background:#ffffcc"|31.8% || style="color:#000000;background:#ccffff"|17.6% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|6.0% || 3.0% || 1.1%
|-
! style="text-align:left" |
| 3,198,913 || style="color:#000000;background:#ffffcc"|36.4% || style="color:#000000;background:#ccffcc"|29.2% || style="color:#000000;background:#ccffcc"|8.1% || style="color:#000000;background:#ddddff"|% || style="color:#000000;background:#ccffff"| 13.3% || style="color:#000000;background:#ddddff"|8.0% || 2.0% || 0.01%
|-
! style="text-align:left" |
| 35,561,654 || style="color:#000000;background:#ffcccc"|42.3% || style="color:#000000;background:#ffffcc"|40.8% || style="color:#000000;background:#ccffff"|14.0% || 4.0% || 4.5% || 3.1% || 1.5% || 0.4%
|-
! style="text-align:left" |
| 56,590,071 || style="color:#000000;background:#ffffcc"|35.7% || style="color:#000000;background:#ccffcc"|23.8% || style="color:#000000;background:#ffffcc"|32.7% || style="color:#000000;background:#ddddff"|6.95% || 0.3%|| 0.2%|| 0.3%|| 0.05%
|-
! style="text-align:left" |
| 2,678,191 || style="color:#000000;background:#ffccff"|50.58% || style="color:#000000;background:#ccffcc"|20.49% || style="color:#000000;background:#ccffcc"|20.21% || 1.02% || 4.22% || 1.71% || 1.69% || 0.08%
|-
! style="text-align:left" |
| 30,327,877 || style="color:#000000;background:#ffffcc"|35.2% || style="color:#000000;background:#ffffcc"|36.3% || style="color:#000000;background:#ccffcc"|27.1% || 2.6% || 0.3% || 0.2% || 0.2% || 0.1%
|-
! style="text-align:left" |
| 17,280,397 || style="color:#000000;background:#ffcccc"|38.2% || style="color:#000000;background:#ffffcc" |36.6% || style="color:#000000;background:#ddddff" |7.7% || 2.5% || style="color:#000000;background:#ddddff"|6.8% || style="color:#000000;background:#ddddff"|6.4% || 1.3% || 0.5%
|-
! style="text-align:left" |
| 4,925,477 || style="color:#000000;background:#ffcccc"|39.0% || style="color:#000000;background:#ffffcc"|31.0% || style="color:#000000;background:#ccffff"|10.0% || 2.0% || style="color:#000000;background:#ccffff"|10.9% || style="color:#000000;background:#ddddff"|6.0% || 1.0% || 0.1%
|-
! style="text-align:left" |
| 6,243,931 || style="color:#000000;background:#ffccff"|62.0% || style="color:#000000;background:#ccffcc"|20.0% || style="color:#000000;background:#ccffff"| 11.0% || 4.0% || 1.0% || 1.0% || 0.7% || 0.3%
|-
! style="text-align:left" |
| 219,463,862 || style="color:#000000;background:#ffccff"|50.23% || style="color:#000000;background:#ccffcc"|21.61% || style="color:#000000;background:#ccffff"|19.59% || 3.47% || 2.7% || 1.16% || 1.05% || 0.19%
|-
! style="text-align:left" |
| 25,643,466 || style="color:#000000;background:#ccffcc"|27.15% || style="color:#000000;background:#ffffcc"|31.08% || style="color:#000000;background:#ffffcc"|30.15% || style="color:#000000;background:#ccffff"|11.32% || 0.08% || 0.1% || 0.1% || 0.03%
|-
! style="text-align:left" |
| 2,125,971 || style="color:#000000;background:#ffffcc"|30.0% || style="color:#000000;background:#ffffcc"|34.0% || style="color:#000000;background:#ccffff"|15.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|6.0% || 3.0% || 1.0%
|-
! style="text-align:left" |
| 5,467,439 || style="color:#000000;background:#ffffcc"|33.2% || style="color:#000000;background:#ffcccc"|41.6% || style="color:#000000;background:#ddddff"|6.8% || 3.4% || style="color:#000000;background:#ddddff"|5.8% || style="color:#000000;background:#ddddff"|7.4% || 1.2% || 0.6%
|-
! style="text-align:left" |
| 238,181,034 || style="color:#000000;background:#ffffcc"|30.04% || style="color:#000000;background:#ccffcc"|21.53% || style="color:#000000;background:#ffffcc"|30.24% || style="color:#000000;background:#ddddff"|8.83% ||3.1% ||2.22% ||3.13% ||0.91%
|-
! style="text-align:left" |
| 7,259,456 || style="color:#000000;background:#ffccff"|55.7% || style="color:#000000;background:#ffffcc"|32.2% || style="color:#000000;background:#ddddff"|9.6% || 2.1% || 1.8%|| 0.5%|| 0.2%|| 0.1%
|-
! style="text-align:left" |
| 7,272,639 || style="color:#000000;background:#ffccff"|63.07% || style="color:#000000;background:#ccffcc"|21.32% || 4.72% || 1.38% || style="color:#000000;background:#ddddff"|5.89% || 2.97% || 0.49% || 0.15%
|-
! style="text-align:left" |
| 31,914,989 || style="color:#000000;background:#ffccff"|70.0% || style="color:#000000;background:#ccffff"|18.4% || style="color:#000000;background:#ddddff"|7.8% || 1.6% || 1.4% || 0.5% || 0.28% || 0.02%
|-
! style="text-align:left" |
| 109,180,815 || style="color:#000000;background:#ffcccc"|45.9% || style="color:#000000;background:#ccffcc"|22.9% || style="color:#000000;background:#ccffcc"|24.9% || style="color:#000000;background:#ddddff"|5.97% || 0.1% || 0.1% || 0.1% || 0.03%
|-
! style="text-align:left" |
| 38,282,325 || style="color:#000000;background:#ffffcc"|31.0% || style="color:#000000;background:#ffffcc"|32.0% || style="color:#000000;background:#ccffff"|15.0% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|6.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 10,302,674 || style="color:#000000;background:#ffffcc"|36.2% || style="color:#000000;background:#ffffcc"|39.8% || style="color:#000000;background:#ddddff"|6.6% || 2.9% || style="color:#000000;background:#ddddff"|6.1% || style="color:#000000;background:#ddddff"|6.8% || 1.1% || 0.5%
|-
! style="text-align:left" |
| 1,397,897,720 || style="color:#000000;background:#ffcccc"|42.8% || style="color:#000000;background:#ccffcc" |25.5% || style="color:#000000;background:#ccffcc" |25.8% || style="color:#000000;background:#ccccff" |5.9% ||0.7% ||0.1% ||0.2% ||0.1%
|-
! style="text-align:left" |
| 21,230,362 || style="color:#000000;background:#ccffcc" |28.0% || style="color:#000000;background:#ffffcc" |37.0% || style="color:#000000;background:#ccffff"|14.0% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff" |5.0% || style="color:#000000;background:#ddddff" |6.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 145,478,097 || style="color:#000000;background:#ffffcc"|36.0% || style="color:#000000;background:#ffffcc"|31.0% || style="color:#000000;background:#ccffff"|19.0% || 2.1% || style="color:#000000;background:#ddddff"|6.0% || 4.0% || 1.0% || 0.9%
|-
! style="text-align:left" |
| 34,173,498 || style="color:#000000;background:#ffccff"|47.8% || style="color:#000000;background:#ccffff"|16.0% || style="color:#000000;background:#ccffff"|17.9% || 4.0% || 4.0% || 2.0% || 1.0% || 0.3%
|-
! style="text-align:left" |
| 7,012,165 || style="color:#000000;background:#ffffcc"|31.92% || style="color:#000000;background:#ffffcc"|35.28% || style="color:#000000;background:#ccffff"|12.6% || 4.2% || style="color:#000000;background:#ddddff"|6.08% || style="color:#000000;background:#ddddff"|6.72% || 2.4% || 0.8%
|-
! style="text-align:left" |
| 6,209,660 || style="color:#000000;background:#ffcccc"|44.7% || style="color:#000000;background:#ccffcc"|23.9% || style="color:#000000;background:#ccffcc"|24.5% || style="color:#000000;background:#ddddff"|5.6% || 0.6% || 0.3% || 0.3% || 0.1%
|-
! style="text-align:left" |
| 5,440,602 || style="color:#000000;background:#ccffcc"|27.2% || style="color:#000000;background:#ffffcc"|35.7% || style="color:#000000;background:#ccffff"|15.3% || style="color:#000000;background:#ddddff"|6.8% || 4.8% || style="color:#000000;background:#ddddff"|6.3% || 2.7% || 1.2%
|-
! style="text-align:left" |
| 2,102,678 || style="color:#000000;background:#ffffcc"|31.0% || style="color:#000000;background:#ffffcc"|33.0% || style="color:#000000;background:#ccffff"|12.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|7.0% || style="color:#000000;background:#ddddff"|7.0% || 3.0% || 1.0%
|-
! style="text-align:left" |
| 12,094,640 || style="color:#000000;background:#ffccff"|52.8% || style="color:#000000;background:#ccffff"|19.36% || style="color:#000000;background:#ccffff"|12.32% || 3.52% || style="color:#000000;background:#ddddff"|7.2% || 2.64% || 1.68% || 0.48%
|-
! style="text-align:left" |
| 56,463,617 || style="color:#000000;background:#ffffcc"|39.0% || style="color:#000000;background:#ffffcc"|32.0% || style="color:#000000;background:#ccffff"|12.0% || 3.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|5.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 51,835,110 || style="color:#000000;background:#ccffcc"|29% || style="color:#000000;background:#ffffcc"|32% || style="color:#000000;background:#ffffcc"|31% || style="color:#000000;background:#ddddff"|8% || 0.19% || 0.1% || 0.1% || 0.01%
|-
! style="text-align:left" |
| 23,044,123 || style="color:#000000;background:#ffccff"|43.42% || style="color:#000000;background:#ccffff"|21.0% || style="color:#000000;background:#ccffcc"|25.78% || style="color:#000000;background:#ddddff"|5.13% || 2.12% || 1.04% || 1.25% || 0.26%
|-
! style="text-align:left" |
| 50,015,792 || style="color:#000000;background:#ffcccc"|35.0% || style="color:#000000;background:#ffffcc"|36.0% || style="color:#000000;background:#ddddff"|8.0% || 2.5% || style="color:#000000;background:#ddddff"|9.0% || 7.0% || 2.0% || 0.5%
|-
! style="text-align:left" |
| 45,561,556 || style="color:#000000;background:#ffccff"|48.0% || style="color:#000000;background:#ccffcc"|27.7% || style="color:#000000;background:#ccffff"|15.2% || 2.3% || 3.5% || 1.8% || 0.8% || 0.2%
|-
! style="text-align:left" |
| 10,202,491 || style="color:#000000;background:#ffffcc"|32.0% || style="color:#000000;background:#ffffcc"|37.0% || style="color:#000000;background:#ccffff"|10.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|6.0% || style="color:#000000;background:#ddddff"|7.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 8,403,994 || style="color:#000000;background:#ffffcc"|35.0% || style="color:#000000;background:#ffffcc"|38.0% || style="color:#000000;background:#ddddff"|8.0% || 4.0% || style="color:#000000;background:#ddddff"|6.9% || style="color:#000000;background:#ddddff"|7.0% || 1.0% || 0.1%
|-
! style="text-align:left" |
| 19,398,448 || style="color:#000000;background:#ffccff"|43.0% || style="color:#000000;background:#ccffcc"|30.0% || style="color:#000000;background:#ccffff"|14.0% || 3.7% || style="color:#000000;background:#ddddff"|5.0%|| 3.0%|| 1.0%|| 0.3%
|-
! style="text-align:left" |
| 23,603,049 || style="color:#000000;background:#ffcccc"|43.9% || style="color:#000000;background:#ccffcc"|25.9% || style="color:#000000;background:#ccffcc"|23.9% || style="color:#000000;background:#ddddff"|6.0% || 0.28% || 0.01% || 0.01% || 0.01%
|-
! style="text-align:left" |
| 68,977,400 || style="color:#000000;background:#ffcccc"|40.8% || style="color:#000000;background:#ccffff"|16.9% || style="color:#000000;background:#ffffcc"|36.8% || 4.97% || 0.2%|| 0.1%|| 0.2%|| 0.03%
|-
! style="text-align:left" |
| 11,811,335 || style="color:#000000;background:#ffcccc"|41.86% || style="color:#000000;background:#ccffcc"|28.21% || style="color:#000000;background:#ccffff"|16.38% || 4.55% || 4.14% || 2.79% ||1.62% || 0.45%
|-
! style="text-align:left" |
| 82,017,514 || style="color:#000000;background:#ccffcc"|29.4% || style="color:#000000;background:#ffffcc"|38.3% || style="color:#000000;background:#ccffff"|13.2% || style="color:#000000;background:#ddddff"|6.4% || 4.4% || style="color:#000000;background:#ddddff"|5.5% || 2.1% || 0.7%
|-
! style="text-align:left" |
| 44,712,143 || style="color:#000000;background:#ffcccc"|49.29% || style="color:#000000;background:#ccffcc"|24.11% || style="color:#000000;background:#ccffcc"|20.29% || 4.41% || 1.01% || 0.49% || 0.41% || 0.09%
|-
! style="text-align:left" |
| 43,922,939 || style="color:#000000;background:#ffffcc"|32.0% || style="color:#000000;background:#ffffcc"|34.0% || style="color:#000000;background:#ccffff"|15.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|5.0% || style="color:#000000;background:#ddddff"|6.0% || 2.0% || 1.0%
|-
! style="text-align:left" |
| 9,992,083 || style="color:#000000;background:#ffccff"|44.1% || style="color:#000000;background:#ccffff"|21.9% || style="color:#000000;background:#ccffcc"|20.9% || 4.3% || 4.3% || 2.1% || 2.0% || 0.4%
|-
! style="text-align:left" |
| 66,971,395 || style="color:#000000;background:#ffcccc"|35%|| style="color:#000000;background:#ffffcc" |30% || style="color:#000000;background:#ddddff" |8% || 2% || style="color:#000000;background:#ccffff" |13%|| 5% || 2% || 1%
|-
! style="text-align:left" |
| 334,998,398 || style="color:#000000;background:#ffcccc"|37.4% || style="color:#000000;background:#ffffcc"|35.7% || style="color:#000000;background:#ddddff"|8.5% || 3.4% || style="color:#000000;background:#ddddff"|9.8% || 4.1% || 1.5% || 0.6%
|-
! style="text-align:left" |
| 30,842,796 || style="color:#000000;background:#ccffcc"|29.42% || style="color:#000000;background:#ffffcc"|30.93% || style="color:#000000;background:#ccffcc"|24.98% || style="color:#000000;background:#ddddff"|9.27% || 1.68% || 1.77% || 1.42% || 0.53%
|-
! style="text-align:left" |
| 28,644,603 || style="color:#000000;background:#ffccff"|58.3% || style="color:#000000;background:#ccffcc"|28.2% || style="color:#000000;background:#ddddff" |5.6% || 1.9% || 4.0%|| 1.5%|| 0.4%|| 0.1%
|-
! style="text-align:left" |
| 98,721,275|| style="color:#000000;background:#ffcccc"|41.7% || style="color:#000000;background:#ccffcc"|20.9% || style="color:#000000;background:#ffffcc"|30.8% || 4.98% || 0.3% || 0.1% || 0.2% || 0.02%
|-
! style="text-align:left" |
| 29,884,405 || style="color:#000000;background:#ffccff"|47.84% || style="color:#000000;background:#ccffcc"|27.5% || style="color:#000000;background:#ccffff"|15.32% || 2.14% || 3.66% || 2.1% || 1.17% || 0.16%
|-
! style="text-align:left" |
| 14,546,314 || style="color:#000000;background:#ffccff"|36.4% || style="color:#000000;background:#ccffff"|29.3% || style="color:#000000;background:#ddddff"|8.1% || 2.0% || style="color:#000000;background:#ccffff"| 14.1% || style="color:#000000;background:#ddddff"|8.1% || 2.0% || 0.01%
|- class="sortbottom"
! World
| 7,772,850,805 || style="color:#000000;background:#ffcccc"|38.4% || style="color:#000000;background:#ccffcc"|27.3% || style="color:#000000;background:#ddddff"|8.1% || style="color:#000000;background:#ccccff"| 2.0% || style="color:#000000;background:#ccffff"| 14.1% || style="color:#000000;background:#ddddff"|8.1% || 2.0% || 0.01%
|}
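The eight phenotype columns in each row are mutually exclusive and should sum to roughly 100%, allowing for rounding and missing data. As a minimal, purely illustrative sketch of how the columns can be combined (for example, to obtain a row's Rh-negative share or its overall ABO group frequencies), the following uses hypothetical placeholder percentages rather than figures for any particular country:

<syntaxhighlight lang="python">
# Illustrative only: combining the eight phenotype columns of one table row.
# The percentages below are hypothetical placeholders, not taken from the table.
row = {
    "O+": 38.0, "A+": 34.0, "B+": 9.0, "AB+": 4.0,
    "O-": 7.0, "A-": 6.0, "B-": 1.5, "AB-": 0.5,
}

# Rh-negative share: the sum of the four Rh-negative phenotypes.
rh_negative = sum(v for k, v in row.items() if k.endswith("-"))

# ABO group frequency ignores the Rh factor: positive plus negative of each group.
abo = {g: row[g + "+"] + row[g + "-"] for g in ("O", "A", "B", "AB")}

print(f"Rh-negative: {rh_negative:.1f}%")       # 15.0%
print(abo)                                      # {'O': 45.0, 'A': 40.0, 'B': 10.5, 'AB': 4.5}
print(f"Row total: {sum(row.values()):.1f}%")   # ~100%, up to rounding
</syntaxhighlight>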
Blood group B has its highest frequency in South Asia, where it accounts for a larger share of the population than anywhere else. Its share is also high in Southeast Asia, especially in Thailand and Indonesia, and secondarily in East Asia, Northern Asia and neighboring Central Asia, and its incidence diminishes both towards the east and the west, falling to single-digit percentages in the Netherlands, Norway, Portugal and Switzerland. It is believed to have been entirely absent from Native American and Australian Aboriginal populations prior to the arrival of Europeans in those areas.
Blood group A is associated with high frequencies in Europe, especially in Scandinavia and Central Europe, although its highest frequencies occur in some Australian Aboriginal populations and the Blackfoot Indians of Montana, US.
Maps of ABO alleles among native populations
In the ABO blood group system, there are three alleles: i, IA, and IB. As both IA and IB are dominant over i, only ii people have type O blood. Individuals with IAIA or IAi have type A blood, and individuals with IBIB or IBi have type B. Those with IAIB have type AB.
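The dominance relationships just described can be expressed as a simple genotype-to-phenotype mapping. The sketch below is illustrative only; the plain-text strings "i", "IA" and "IB" stand in for the allele symbols used above.

<syntaxhighlight lang="python">
# Illustrative sketch of the ABO dominance rules: IA and IB are codominant,
# and both are dominant over the recessive allele i.
def abo_phenotype(allele1: str, allele2: str) -> str:
    alleles = {allele1, allele2}
    if alleles == {"i"}:            # ii is the only genotype giving type O
        return "O"
    if alleles == {"IA", "IB"}:     # codominance of IA and IB gives type AB
        return "AB"
    if "IA" in alleles:             # IAIA or IAi gives type A
        return "A"
    return "B"                      # IBIB or IBi gives type B

# All six possible genotypes and their resulting blood types:
for pair in [("i", "i"), ("IA", "i"), ("IA", "IA"),
             ("IB", "i"), ("IB", "IB"), ("IA", "IB")]:
    print(pair, "->", abo_phenotype(*pair))
</syntaxhighlight>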
See also
ABO blood group system
Blood type
References
Blood
Human genetics | 0.763901 | 0.998562 | 0.762802 |
Extracellular fluid | In cell biology, extracellular fluid (ECF) denotes all body fluid outside the cells of any multicellular organism. Total body water in healthy adults is about 50–60% (range 45 to 75%) of total body weight; women and the obese typically have a lower percentage than lean men. Extracellular fluid makes up about one-third of body fluid; the remaining two-thirds is intracellular fluid within cells. The main component of the extracellular fluid is the interstitial fluid that surrounds cells.
Extracellular fluid is the internal environment of all multicellular animals, and in those animals with a blood circulatory system, a proportion of this fluid is blood plasma. Plasma and interstitial fluid are the two components that make up at least 97% of the ECF. Lymph makes up a small percentage of the interstitial fluid. The remaining small portion of the ECF includes the transcellular fluid (about 2.5%). The ECF can also be seen as having two components – plasma and lymph as a delivery system, and interstitial fluid for water and solute exchange with the cells.
The extracellular fluid, in particular the interstitial fluid, constitutes the body's internal environment that bathes all of the cells in the body. The ECF composition is therefore crucial for their normal functions, and is maintained by a number of homeostatic mechanisms involving negative feedback. Homeostasis regulates, among others, the pH, sodium, potassium, and calcium concentrations in the ECF. The volume of body fluid, blood glucose, oxygen, and carbon dioxide levels are also tightly homeostatically maintained.
The volume of extracellular fluid in a young adult male of 70 kg (154 lbs) is 20% of body weight – about fourteen liters. Eleven liters are interstitial fluid and the remaining three liters are plasma.
Components
The main component of the extracellular fluid (ECF) is the interstitial fluid, or tissue fluid, which surrounds the cells in the body. The other major component of the ECF is the intravascular fluid of the circulatory system called blood plasma. The remaining small percentage of ECF includes the transcellular fluid. These constituents are often called "fluid compartments". The volume of extracellular fluid in a young adult male of 70 kg is 20% of body weight – about fourteen liters.
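As a rough arithmetic check of the figures quoted in this article (total body water about 60% of body weight, ECF about 20% of body weight, of which about three liters is plasma), the following sketch is illustrative only; the round 60% figure and the assumption that one kilogram of body water occupies about one liter are simplifications.

<syntaxhighlight lang="python">
# Back-of-the-envelope check of the fluid compartment volumes quoted in the text.
# Assumes 1 kg of body water ~ 1 L and uses the article's round fractions.
body_weight_kg = 70

total_body_water = 0.60 * body_weight_kg   # ~42 L (about 60% of body weight)
ecf = 0.20 * body_weight_kg                # ~14 L (about 20% of body weight)
icf = total_body_water - ecf               # ~28 L, roughly two-thirds of body water

plasma = 3.0                               # ~3 L of the ECF is blood plasma
interstitial = ecf - plasma                # ~11 L is interstitial fluid

print(f"Total body water ~{total_body_water:.0f} L, "
      f"ECF ~{ecf:.0f} L ({ecf / total_body_water:.0%} of body water)")
print(f"Interstitial ~{interstitial:.0f} L, plasma ~{plasma:.0f} L")
</syntaxhighlight>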
Interstitial fluid
Interstitial fluid is essentially comparable to plasma. The interstitial fluid and plasma make up about 97% of the ECF, and a small percentage of this is lymph.
Interstitial fluid is the body fluid between blood vessels and cells, containing nutrients from capillaries by diffusion and holding waste products discharged by cells due to metabolism. 11 liters of the ECF are interstitial fluid and the remaining three liters are plasma. Plasma and interstitial fluid are very similar because water, ions, and small solutes are continuously exchanged between them across the walls of capillaries, through pores and capillary clefts.
Interstitial fluid consists of a water solvent containing sugars, salts, fatty acids, amino acids, coenzymes, hormones, neurotransmitters, white blood cells and cell waste-products. This solution accounts for 26% of the water in the human body. The composition of interstitial fluid depends upon the exchanges between the cells in the biological tissue and the blood. This means that tissue fluid has a different composition in different tissues and in different areas of the body.
The plasma that filters through the blood capillaries into the interstitial fluid does not contain red blood cells or platelets as they are too large to pass through but can contain some white blood cells to help the immune system.
Once the extracellular fluid collects into small vessels (lymph capillaries) it is considered to be lymph, and the vessels that carry it back to the blood are called the lymphatic vessels. The lymphatic system returns protein and excess interstitial fluid to the circulation.
The ionic composition of the interstitial fluid and blood plasma vary due to the Gibbs–Donnan effect. This causes a slight difference in the concentration of cations and anions between the two fluid compartments.
Transcellular fluid
Transcellular fluid is formed from the transport activities of cells, and is the smallest component of extracellular fluid. These fluids are contained within epithelial lined spaces. Examples of this fluid are cerebrospinal fluid, aqueous humor in the eye, serous fluid in the serous membranes lining body cavities, perilymph and endolymph in the inner ear, and joint fluid. Due to the varying locations of transcellular fluid, the composition changes dramatically. Some of the electrolytes present in the transcellular fluid are sodium ions, chloride ions, and bicarbonate ions.
Function
Extracellular fluid provides the medium for the exchange of substances between the ECF and the cells, and this can take place through dissolving, mixing and transporting in the fluid medium. Substances in the ECF include dissolved gases, nutrients, and electrolytes, all needed to maintain life. ECF also contains materials secreted from cells in soluble form, but which quickly coalesce into fibers (e.g. collagen, reticular, and elastic fibres) or precipitate out into a solid or semisolid form (e.g. proteoglycans which form the bulk of cartilage, and the components of bone). These and many other substances occur, especially in association with various proteoglycans, to form the extracellular matrix, or the "filler" substance, between the cells throughout the body. These substances occur in the extracellular space, and are therefore all bathed or soaked in ECF, without being part of it.
Oxygenation
One of the main roles of extracellular fluid is to facilitate the exchange of molecular oxygen from blood to tissue cells and for carbon dioxide, CO2, produced in cell mitochondria, back to the blood. Since carbon dioxide is about 20 times more soluble in water than oxygen, it can relatively easily diffuse in the aqueous fluid between cells and blood.
However, hydrophobic molecular oxygen has very poor water solubility and prefers hydrophobic lipid crystalline structures. As a result of this, plasma lipoproteins can carry significantly more O2 than in the surrounding aqueous medium.
If hemoglobin in erythrocytes is the main transporter of oxygen in the blood, plasma lipoproteins may be its only carrier in the ECF.
The oxygen-carrying capacity of lipoproteins decreases with ageing and inflammation. This alters ECF function, reduces tissue O2 supply, and contributes to the development of tissue hypoxia. These changes in lipoproteins are caused by oxidative or inflammatory damage.
Regulation
The internal environment is stabilised in the process of homeostasis. Complex homeostatic mechanisms operate to regulate and keep the composition of the ECF stable. Individual cells can also regulate their internal composition by various mechanisms.
There is a significant difference between the concentrations of sodium and potassium ions inside and outside the cell. The concentration of sodium ions is considerably higher in the extracellular fluid than in the intracellular fluid. The converse is true of the potassium ion concentrations inside and outside the cell. These differences cause all cell membranes to be electrically charged, with the positive charge on the outside of the cells and the negative charge on the inside. In a resting neuron (one not conducting an impulse) the membrane potential is known as the resting potential, and is about −70 mV across the membrane.
This potential is created by sodium–potassium pumps in the cell membrane, which pump sodium ions out of the cell, into the ECF, in return for potassium ions which enter the cell from the ECF. The maintenance of this difference in the concentration of ions between the inside of the cell and the outside, is critical to keep normal cell volumes stable, and also to enable some cells to generate action potentials.
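The contribution of a single ion gradient to the membrane potential can be estimated with the Nernst equation. The sketch below is not taken from this article: the extracellular concentrations come from the "Electrolytic constituents" section further down, while the intracellular concentrations (about 140 mM potassium and 12 mM sodium) are assumed, textbook-style figures, so the resulting numbers are only illustrative.

<syntaxhighlight lang="python">
import math

# Nernst equation: E = (R*T / (z*F)) * ln([ion]_outside / [ion]_inside)
R, F = 8.314, 96485.0      # gas constant J/(mol*K), Faraday constant C/mol
T = 310.0                  # body temperature (~37 degrees C) in kelvin

def nernst_mV(conc_out_mM, conc_in_mM, z=1):
    return 1000 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# ECF values as listed in this article; intracellular values are assumed.
print(f"E_K  ~ {nernst_mV(4.5, 140):6.1f} mV")   # about -92 mV
print(f"E_Na ~ {nernst_mV(140, 12):6.1f} mV")    # about +65 mV
# The resting potential (~-70 mV) lies between these two values, closest to E_K,
# because the resting membrane is far more permeable to potassium than to sodium.
</syntaxhighlight>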
In several cell types voltage-gated ion channels in the cell membrane can be temporarily opened under specific circumstances for a few microseconds at a time. This allows a brief inflow of sodium ions into the cell (driven in by the sodium ion concentration gradient that exists between the outside and inside of the cell). This causes the cell membrane to temporarily depolarize (lose its electrical charge) forming the basis of action potentials.
The sodium ions in the ECF also play an important role in the movement of water from one body compartment to the other. When tears are secreted, or saliva is formed, sodium ions are pumped from the ECF into the ducts in which these fluids are formed and collected. The water content of these solutions results from the fact that water follows the sodium ions (and accompanying anions) osmotically. The same principle applies to the formation of many other body fluids.
Calcium ions have a great propensity to bind to proteins. This changes the distribution of electrical charges on the protein, with the consequence that the 3D (or tertiary) structure of the protein is altered. The normal shape, and therefore function of very many of the extracellular proteins, as well as the extracellular portions of the cell membrane proteins, is dependent on a very precise ionized calcium concentration in the ECF. The proteins that are particularly sensitive to changes in the ECF ionized calcium concentration are several of the clotting factors in the blood plasma, which are functionless in the absence of calcium ions, but become fully functional on the addition of the correct concentration of calcium salts. The voltage gated sodium ion channels in the cell membranes of nerves and muscle have an even greater sensitivity to changes in the ECF ionized calcium concentration. Relatively small decreases in the plasma ionized calcium levels (hypocalcemia) cause these channels to leak sodium into the nerve cells or axons, making them hyper-excitable, thus causing spontaneous muscle spasms (tetany) and paraesthesia (the sensation of "pins and needles") of the extremities and round the mouth. When the plasma ionized calcium rises above normal (hypercalcemia) more calcium is bound to these sodium channels having the opposite effect, causing lethargy, muscle weakness, anorexia, constipation and labile emotions.
The tertiary structure of proteins is also affected by the pH of the bathing solution. In addition, the pH of the ECF affects the proportion of the total amount of calcium in the plasma which occurs in the free, or ionized form, as opposed to the fraction that is bound to protein and phosphate ions. A change in the pH of the ECF therefore alters the ionized calcium concentration of the ECF. Since the pH of the ECF is directly dependent on the partial pressure of carbon dioxide in the ECF, hyperventilation, which lowers the partial pressure of carbon dioxide in the ECF, produces symptoms that are almost indistinguishable from low plasma ionized calcium concentrations.
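The statement that the pH of the ECF depends directly on its partial pressure of carbon dioxide can be made concrete with the Henderson–Hasselbalch equation for the bicarbonate buffer. The sketch below is illustrative rather than part of the article: the pKa of 6.1, the CO2 solubility factor of 0.03 mmol/L per mmHg, and the example pCO2 values are conventional textbook figures, while the bicarbonate concentration falls within the range listed under "Electrolytic constituents" below.

<syntaxhighlight lang="python">
import math

def ecf_pH(bicarbonate_mM, pCO2_mmHg, pKa=6.1, co2_solubility=0.03):
    """Henderson-Hasselbalch relation for the bicarbonate buffer:
    pH = pKa + log10([HCO3-] / (0.03 * pCO2))."""
    return pKa + math.log10(bicarbonate_mM / (co2_solubility * pCO2_mmHg))

# With bicarbonate at 24 mM (within the 22-28 mM range listed below):
print(f"pCO2 40 mmHg -> pH {ecf_pH(24, 40):.2f}")   # ~7.40, a normal value
print(f"pCO2 20 mmHg -> pH {ecf_pH(24, 20):.2f}")   # ~7.70, hyperventilation raises pH
</syntaxhighlight>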
The extracellular fluid is constantly "stirred" by the circulatory system, which ensures that the watery environment which bathes the body's cells is virtually identical throughout the body. This means that nutrients can be secreted into the ECF in one place (e.g. the gut, liver, or fat cells) and will, within about a minute, be evenly distributed throughout the body. Hormones are similarly rapidly and evenly spread to every cell in the body, regardless of where they are secreted into the blood. Oxygen taken up by the lungs from the alveolar air is also evenly distributed at the correct partial pressure to all the cells of the body. Waste products are also uniformly spread to the whole of the ECF, and are removed from this general circulation at specific points (or organs), once again ensuring that there is generally no localized accumulation of unwanted compounds or excesses of otherwise essential substances (e.g. sodium ions, or any of the other constituents of the ECF). The only significant exception to this general principle is the plasma in the veins, where the concentrations of dissolved substances in individual veins differ, to varying degrees, from those in the rest of the ECF. However, this plasma is confined within the waterproof walls of the venous tubes, and therefore does not affect the interstitial fluid in which the body's cells live. When the blood from all the veins in the body mixes in the heart and lungs, the differing compositions cancel out (e.g. acidic blood from active muscles is neutralized by the alkaline blood homeostatically produced by the kidneys). From the left atrium onward, to every organ in the body, the normal, homeostatically regulated values of all of the ECF's components are therefore restored.
Interaction between the blood plasma, interstitial fluid and lymph
The arterial blood plasma, interstitial fluid and lymph interact at the level of the blood capillaries. The capillaries are permeable and water can move freely in and out. At the arteriolar end of the capillary the blood pressure is greater than the hydrostatic pressure in the tissues. Water will therefore seep out of the capillary into the interstitial fluid. The pores through which this water moves are large enough to allow all the smaller molecules (up to the size of small proteins such as insulin) to move freely through the capillary wall as well. This means that their concentrations across the capillary wall equalize, so these small solutes exert no net osmotic effect (the osmotic pressure caused by these small molecules and ions – called the crystalloid osmotic pressure to distinguish it from the osmotic effect of the larger molecules that cannot move across the capillary membrane – is the same on both sides of the capillary wall).
The movement of water out of the capillary at the arteriolar end causes the concentration of the substances that cannot cross the capillary wall to increase as the blood moves to the venular end of the capillary. The most important substances that are confined to the capillary tube are plasma albumin, the plasma globulins and fibrinogen. They, and particularly the plasma albumin, because of its molecular abundance in the plasma, are responsible for the so-called "oncotic" or "colloid" osmotic pressure which draws water back into the capillary, especially at the venular end.
The net effect of all of these processes is that water moves out of and back into the capillary, while the crystalloid substances in the capillary and interstitial fluids equilibrate. Since the capillary fluid is constantly and rapidly renewed by the flow of the blood, its composition dominates the equilibrium concentration that is achieved in the capillary bed. This ensures that the watery environment of the body's cells is always close to their ideal environment (set by the body's homeostats).
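The balance between hydrostatic and colloid osmotic (oncotic) pressure described in the preceding paragraphs is commonly summarized by the Starling relation. The sketch below is illustrative only: the pressure values and the simplifying assumptions (interstitial pressures taken as zero, reflection coefficient of 1) are textbook-style figures, not values given in this article.

<syntaxhighlight lang="python">
def net_filtration_pressure(P_cap, P_int, pi_cap, pi_int, sigma=1.0):
    """Starling relation: NFP = (P_cap - P_int) - sigma * (pi_cap - pi_int).
    Positive values drive fluid out of the capillary; negative values draw it back in."""
    return (P_cap - P_int) - sigma * (pi_cap - pi_int)

# Assumed illustrative pressures in mmHg: capillary hydrostatic pressure falls from
# the arteriolar to the venular end, while plasma oncotic pressure stays ~25 mmHg.
arteriolar_end = net_filtration_pressure(P_cap=35, P_int=0, pi_cap=25, pi_int=0)
venular_end = net_filtration_pressure(P_cap=15, P_int=0, pi_cap=25, pi_int=0)

print(f"Arteriolar end: {arteriolar_end:+.0f} mmHg (filtration out of the capillary)")
print(f"Venular end:    {venular_end:+.0f} mmHg (reabsorption back into the capillary)")
</syntaxhighlight>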
A small proportion of the solution that leaks out of the capillaries is not drawn back into the capillary by the colloid osmotic forces. This amounts to between 2–4 liters per day for the body as a whole. This water is collected by the lymphatic system and is ultimately discharged into the left subclavian vein, where it mixes with the venous blood coming from the left arm, on its way to the heart. The lymph flows through lymph capillaries to lymph nodes where bacteria and tissue debris are removed from the lymph, while various types of white blood cells (mainly lymphocytes) are added to the fluid. In addition the lymph which drains the small intestine contains fat droplets called chylomicrons after the ingestion of a fatty meal. This lymph is called chyle which has a milky appearance, and imparts the name lacteals (referring to the milky appearance of their contents) to the lymph vessels of the small intestine.
Extracellular fluid may be mechanically guided in this circulation by the vesicles between other structures. Collectively this forms the interstitium, which may be considered a newly identified biological structure in the body. However, there is some debate over whether the interstitium is an organ.
Electrolytic constituents
Main cations:
Sodium (Na+) 136–146 mM
Potassium (K+) 3.8–5.0 mM
Calcium (Ca2+) 1.0–1.4 mM
Main anions:
Chloride (Cl−) 103–112 mM
Bicarbonate (HCO3−) 22–28 mM
Phosphate (HPO42−) 0.8–1.4 mM
See also
Effective circulating volume (ECV)
Fluid compartments
References
External links
Britannica.com
Biology-online.org
Body fluids
Cell biology | 0.766427 | 0.995241 | 0.76278 |
Erethism | Erethism, also known as erethismus mercurialis, mad hatter disease, or mad hatter syndrome, is a neurological disorder which affects the whole central nervous system, as well as a symptom complex, derived from mercury poisoning. Erethism is characterized by behavioral changes such as irritability, low self-confidence, depression, apathy, shyness and timidity, and in some extreme cases with prolonged exposure to mercury vapors, by delirium, personality changes and memory loss. People with erethism often have difficulty with social interactions. Associated physical problems may include a decrease in physical strength, headaches, general pain, and tremors, as well as an irregular heartbeat.
Mercury is an element that is found worldwide in soil, rocks, and water. People who get erethism are often exposed to mercury through their jobs. Some of the higher risk jobs that can lead to occupational exposure of workers to mercury are working in a chlor-alkali, thermometer, glassblowing, or fluorescent light bulb factory, and working in construction, dental clinics, or in gold and silver mines. In factories, workers are exposed to mercury primarily through the base products and processes involved in making the final end consumer product. In dental clinics it is primarily through their interaction and installation of dental amalgams to treat dental caries. In the case of mining, mercury is used in the process to purify and completely extract the precious metals.
Some elemental and chemical forms of mercury (vapor, methylmercury, inorganic mercury) are more toxic than other forms. The human fetus and medically compromised people (for example, patients with lung or kidney problems) are the most susceptible to the toxic effects of mercury.
Mercury poisoning can also occur outside of occupational exposures including in the home. Inhalation of mercury vapor may stem from cultural and religious rituals where mercury is sprinkled on the floor of a home or car, burned in a candle, or mixed with perfume. Due to widespread use and popular concern, the risk of toxicity from dental amalgam has been exhaustively investigated. It has conclusively been shown to be safe although in 2020 the FDA issued new guidance for at-risk populations who should avoid mercury amalgam.
Historically, this was common among old England felt-hatmakers who had long-term exposure to vapors from the mercury they used to stabilize the wool in a process called felting, where hair was cut from a pelt of an animal such as a rabbit. The industrial workers were exposed to the mercury vapors, giving rise to the expression "mad as a hatter". Some believe that the character the Mad Hatter in Lewis Carroll's Alice in Wonderland is an example of someone with erethism, but the origin of this account is unclear. The character was almost certainly based on Theophilus Carter, an eccentric furniture dealer who was well known to Carroll.
Signs and symptoms
Acute mercury exposure has given rise to psychotic reactions such as delirium, hallucinations, and suicidal tendency. Occupational exposure has resulted in erethism, with irritability, excitability, excessive shyness, and insomnia as the principal features of a broad-ranging functional disturbance. With continuing exposure, a fine tremor develops, initially involving the hands and later spreading to the eyelids, lips, and tongue, causing violent muscular spasms in the most severe cases. The tremor is reflected in the handwriting which has a characteristic appearance. In milder cases, erethism and tremor regress slowly over a period of years following removal from exposure. Decreased nerve conduction velocity in mercury-exposed workers has been demonstrated. Long-term, low-level exposure has been found to be associated with less pronounced symptoms of erethism, characterized by fatigue, irritability, loss of memory, vivid dreams, and depression (WHO, 1976).
Effects of chronic occupational exposure to mercury, such as that commonly experienced by affected hatters, include mental confusion, emotional disturbances, and muscular weakness. Severe neurological damage and kidney damage can also occur. Signs and symptoms can include red fingers, red toes, red cheeks, sweating, loss of hearing, bleeding from the ears and mouth, loss of appendages such as teeth, hair, and nails, lack of coordination, poor memory, shyness, insomnia, nervousness, tremors, and dizziness. A survey of exposed U.S. hatters revealed predominantly neurological symptomatology, including intention tremor. After chronic exposure to the mercury vapours, hatters tended to develop characteristic psychological traits, such as pathological shyness and marked irritability. Such manifestations among hatters prompted several popular names for erethism, including "mad hatter disease", "mad hatter syndrome", "hatter's shakes" and "Danbury shakes".
Biomarkers of exposure
While hatters in the past were diagnosed with erethism through their symptoms, it was sometimes harder to prove that erethism was the result of mercury exposure, as seen in the case of the hatters of New Jersey below. Today, although erethism from the hat making industry is no longer an issue, it persists in other high-risk occupations. As a result, methods have been established to measure the mercury exposure of workers more accurately. They include the collection and testing of mercury levels in blood, hair, nails, and urine. Most of these biomarkers have a shorter half-life for mercury (e.g. in blood the half-life is usually only around 2–4 days), which makes some of them better for testing acute, high doses of mercury exposure. However, mercury in urine has a much longer half-life (measured in weeks to months), and unlike the other biomarkers is more representative of the total body burden of inorganic and elemental mercury. This makes it the ideal biomarker for measuring occupational exposure to mercury because it is suitable for measuring low, chronic exposure, and specifically exposure to inorganic and elemental mercury (i.e. mercury vapor), which are the two types most likely to be encountered in a higher risk occupation.
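As a rough illustration of why half-life matters for biomarker choice, the Python sketch below compares how much of an initial mercury signal would remain in blood versus urine after a month, using a simple first-order elimination model. The half-lives used are mid-range assumptions taken from the ranges quoted above ("2–4 days" and "weeks to months"), not measured clinical constants.

# Fraction of the original biomarker signal remaining after t days,
# assuming simple first-order elimination. Half-lives are illustrative
# mid-range assumptions drawn from the text above.
def fraction_remaining(days, half_life_days):
    return 0.5 ** (days / half_life_days)

BLOOD_HALF_LIFE_DAYS = 3.0    # assumed, from the "2-4 days" range
URINE_HALF_LIFE_DAYS = 60.0   # assumed, from "weeks to months"

for label, t_half in (("blood", BLOOD_HALF_LIFE_DAYS), ("urine", URINE_HALF_LIFE_DAYS)):
    print(f"{label}: {fraction_remaining(30, t_half):.1%} of the signal left after 30 days")
# blood: 0.1% of the signal left after 30 days
# urine: 70.7% of the signal left after 30 days

Under these assumptions, a blood measurement taken a month after exposure has essentially nothing left to detect, while a urine measurement still reflects most of the accumulated burden, which is consistent with the preference for urine in monitoring chronic occupational exposure.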
History among hatters
Especially in the 19th century, inorganic mercury in the form of mercuric nitrate was commonly used in the production of felt for hats. During a process called carroting, in which furs from small animals such as rabbits, hares or beavers were separated from their skins and matted together, an orange-colored solution containing mercuric nitrate was used as a smoothing agent. The resulting felt was then repeatedly shaped into large cones, shrunk in boiling water and dried. In treated felts, a slow reaction released volatile free mercury. Hatters (or milliners) who came into contact with vapours from the impregnated felt often worked in confined areas.
Use of mercury in hatmaking is thought to have been adopted by the Huguenots in 17th-century France, at a time when the dangers of mercury exposure were already known. This process was initially kept a trade secret in France, where hatmaking rapidly became a hazardous occupation. At the end of the 17th century the Huguenots carried the secret to England, following the revocation of the Edict of Nantes. During the Victorian era the hatters' malaise became proverbial, as reflected in popular expressions like "mad as a hatter" (see below) and "the hatters' shakes".
The first description of symptoms of mercury poisoning among hatters appears to have been made in St Petersburg, Russia, in 1829. In the United States, a thorough occupational description of mercury poisoning among New Jersey hatters was published locally by Addison Freeman in 1860. Adolph Kussmaul's definitive clinical description of mercury poisoning published in 1861 contained only passing references to hatmakers, including a case originally reported in 1845 of a 15-year-old Parisian girl, the severity of whose tremors following two years of carroting prompted opium treatment. In Britain, the toxicologist Alfred Swaine Taylor reported the disease in a hatmaker in 1864.
In 1869, the French Academy of Medicine demonstrated the health hazards posed to hatmakers. Alternatives to mercury use in hatmaking became available by 1874. In the United States, a hydrochloride-based process was patented in 1888 to obviate the use of mercury, but was ignored.
In 1898, legislation was passed in France to protect hatmakers from the risks of mercury exposure. By the turn of the 20th century, mercury poisoning among British hatters had become a rarity.
In the United States, the mercury-based process continued to be used until as late as 1941, when it was abandoned mainly due to the wartime need for the heavy metal in the manufacture of detonators. Thus, for much of the 20th century mercury poisoning remained common in the U.S. hatmaking industries, including those located in Danbury, Connecticut (giving rise to the expression the "Danbury shakes").
Another 20th-century cohort of affected hatmakers has been studied in Tuscany, Italy.
Hatters of New Jersey
The experience of hatmakers in New Jersey is well documented and has been reviewed by Richard Wedeen. In 1860, at a time when the hatmaking industry in towns such as Newark, Orange and Bloomfield was growing rapidly, a physician from Orange called J. Addison Freeman published an article titled "Mercurial Disease Among Hatters" in the Transactions of the Medical Society of New Jersey. This groundbreaking paper provided a clinical account of the effects of chronic mercury poisoning among the workforce, coupled with an occupational description of the use of mercuric nitrate during carroting and inhalation of mercury vapour later in the process (during finishing, forming and sizing). Freeman concluded that "A proper regard for the health of this class of citizens demands that mercury should not be used so extensively in the manufacture of hats, and that if its use is essential, that the hat finishers' room should be large, with a high ceiling, and well ventilated." Freeman's call for prevention went unheeded.
In 1878, an inspection of 25 firms around Newark conducted by Dr L. Dennis on behalf of the Essex County Medical Society revealed "mercurial disease" in 25% of 1,589 hatters. Dennis recognized that this prevalence figure was probably an underestimate, given the workers' fear of being fired if they admitted to being diseased. Although Dennis did recommend the use of fans in the workplace he attributed most of the hatters' health problems to excessive alcohol use (thus using the stigma of drunkenness in a mainly immigrant workforce to justify the unsanitary working conditions provided by employers).
Some voluntary reductions in mercury exposure were implemented after Lawrence T. Fell, a former journeyman hatter from Orange who had become a successful manufacturer, was appointed Inspector of Factories in 1883. In the late nineteenth century, a pressing health issue among hatters was tuberculosis. This deadly communicable disease was rife in the extremely unhygienic wet and steamy enclosed spaces in which the hatters were expected to work (in its annual report for 1889, the New Jersey Bureau of Labor and Industries expressed incredulity at the conditions). Two-thirds of the recorded deaths of hatters in Newark and Orange between 1873 and 1876 were caused by pulmonary disease, most often in men under 30 years of age, and elevated death rates from tuberculosis persisted into the twentieth century. Consequently, public health campaigns to prevent tuberculosis spreading from the hatters into the wider community tended to eclipse the issue of mercury poisoning. For instance, in 1886 J. W. Stickler, working on behalf of the New Jersey Board of Health, promoted prevention of tuberculosis among hatters, but deemed mercurialism "uncommon", despite having reported tremors in 15–50% of the workers he had surveyed.
While hatters seemed to regard the shakes as an inevitable price to pay for their work rather than a readily preventable disease, their employers professed ignorance of the problem. In a 1901 survey of 11 employers of over a thousand hatters in Newark and Orange, the head of the Bureau of Statistics of New Jersey, William Stainsby, found a lack of awareness of any disease peculiar to hatters apart from tuberculosis and rheumatism (though one employer remarked that "work at the trade develops an inordinate craving for strong drink").
By 1934 the U.S. Public Health Service estimated that 80% of American felt makers had mercurial tremors. Nevertheless, trade union campaigns (led by the United States Hat Finishers Association, originally formed in 1854) never addressed the issue and, unlike in France, no relevant legislation was ever adopted in the United States. Instead, it seems to have been the need for mercury in the war effort that eventually brought to an end the use of mercuric nitrate in U.S. hatmaking; in a meeting convened by the U.S. Public Health Service in 1941, the manufacturers voluntarily agreed to adopt a readily available alternative process using hydrogen peroxide.
"Mad as a hatter"
Although the expression "mad as a hatter" was associated with the syndrome, the origin of the phrase is uncertain.
Lewis Carroll's iconic Mad Hatter character in Alice's Adventures in Wonderland displays markedly eccentric behavior, which includes taking a bite out of a teacup. Carroll would have been familiar with the phenomenon of dementia among hatters, but the literary character is thought to be directly inspired by Theophilus Carter, an eccentric furniture dealer who did not show signs of mercury poisoning.
The actor Johnny Depp has said of his portrayal of a carrot-orange haired Mad Hatter in Tim Burton's 2010 film, Alice in Wonderland that the character "was poisoned ... and it was coming out through his hair, through his fingernails and eyes".
See also
Danbury Hatters' case
Minamata disease
Notes
References
Sources
Industrial hygiene
Mercury poisoning
Neurological disorders
Occupational diseases
Shyness | 0.765324 | 0.99666 | 0.762768 |
Pyoderma | Pyoderma means any skin disease that is pyogenic (has pus). These include superficial bacterial infections such as impetigo, impetigo contagiosa, ecthyma, folliculitis, Bockhart's impetigo, furuncle, carbuncle, tropical ulcer, etc. Autoimmune conditions include pyoderma gangrenosum. Pyoderma affects more than 111 million children worldwide, making it one of the three most common skin disorders in children along with scabies and tinea.
See also
List of cutaneous conditions
References
External links
Dermatologic terminology | 0.770611 | 0.989805 | 0.762754 |
Enterocolitis | Enterocolitis is an inflammation of the digestive tract, involving enteritis of the small intestine and colitis of the colon. It may be caused by various infections, with bacteria, viruses, fungi, parasites, or other causes. Common clinical manifestations of enterocolitis are frequent diarrheal defecations, with or without nausea, vomiting, abdominal pain, fever, chills, and alteration of general condition. General manifestations are given by the dissemination of the infectious agent or its toxins throughout the body, or – most frequently – by significant losses of water and minerals, the consequence of diarrhea and vomiting.
Signs and symptoms
Symptoms of enterocolitis include abdominal pain, diarrhea, nausea, vomiting, fever, and loss of appetite.
Cause
Among the causal agents of acute enterocolitis are:
bacteria: Salmonella, Shigella, Escherichia coli (E. coli), Campylobacter etc.
viruses: enteroviruses, rotaviruses, norovirus, adenoviruses
fungi: candidiasis, especially in patients who are immunosuppressed or who have previously received prolonged antibiotic treatment
parasites: Giardia lamblia (with a high frequency of infestation in the population, but not always with clinical manifestations), Balantidium coli, Blastocystis hominis, Cryptosporidium (diarrhea in people with immunosuppression), Entamoeba histolytica (which produces amebic dysentery, common in tropical areas).
Diagnosis
Types
Specific types of enterocolitis include:
necrotizing enterocolitis (most common in premature infants)
pseudomembranous enterocolitis (also called "Pseudomembranous colitis")
Treatment
Treatment depends on the aetiology, e.g. antibiotics such as metronidazole for bacterial infection, antiviral drug therapy for viral infection, and anti-helminthics for parasitic infections.
See also
Gastroenteritis
References
External links
Inflammations
Intestinal infectious diseases | 0.767022 | 0.994436 | 0.762754 |
Blood plasma | Blood plasma is a light amber-colored liquid component of blood in which blood cells are absent, but which contains proteins and other constituents of whole blood in suspension. It makes up about 55% of the body's total blood volume. It is the intravascular part of extracellular fluid (all body fluid outside cells). It is mostly water (up to 95% by volume), and contains important dissolved proteins (6–8%; e.g., serum albumins, globulins, and fibrinogen), glucose, clotting factors, electrolytes (, , , , , etc.), hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and oxygen. It plays a vital role in an intravascular osmotic effect that keeps electrolyte concentration balanced and protects the body from infection and other blood-related disorders.
Blood plasma can be separated from whole blood through blood fractionation, by adding an anticoagulant to a tube filled with blood, which is spun in a centrifuge until the blood cells fall to the bottom of the tube. The blood plasma is then poured or drawn off. For point-of-care testing applications, plasma can be extracted from whole blood via filtration or via agglutination to allow for rapid testing of specific biomarkers. Blood plasma has a density of approximately 1.025 g/mL (1025 kg/m3). Blood serum is blood plasma without clotting factors. Plasmapheresis is a medical therapy that involves blood plasma extraction, treatment, and reintegration.
Fresh frozen plasma is on the WHO Model List of Essential Medicines, the most important medications needed in a basic health system. It is of critical importance in the treatment of many types of trauma which result in blood loss, and is therefore kept stocked universally in all medical facilities capable of treating trauma (e.g., trauma centers, hospitals, and ambulances) or that pose a risk of patient blood loss such as surgical suite facilities.
Volume
Blood plasma volume may be expanded by or drained to extravascular fluid when there are changes in Starling forces across capillary walls. For example, when blood pressure drops in circulatory shock, Starling forces drive fluid into the interstitium, causing third spacing.
Standing still for a prolonged period will cause an increase in transcapillary hydrostatic pressure. As a result, approximately 12% of blood plasma volume will cross into the extravascular compartment. This plasma shift causes an increase in hematocrit, serum total protein, blood viscosity and, as a result of increased concentration of coagulation factors, it causes orthostatic hypercoagulability.
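A back-of-the-envelope calculation makes the size of this effect concrete. The Python sketch below assumes a round 5 L blood volume, the ~55% plasma fraction mentioned earlier in this article, and the ~12% plasma shift described above; the resulting starting hematocrit of 45% is a consequence of those round numbers, not a measured value.

# Effect of a ~12% plasma shift on hematocrit, using assumed round numbers.
total_blood_L = 5.0                  # assumed total blood volume
plasma_L = 0.55 * total_blood_L      # plasma is roughly 55% of whole blood
cells_L = total_blood_L - plasma_L   # cellular (mostly red-cell) volume

plasma_after_L = plasma_L * (1 - 0.12)   # ~12% of plasma leaves the vessels

hct_before = cells_L / (cells_L + plasma_L)
hct_after = cells_L / (cells_L + plasma_after_L)
print(f"hematocrit: {hct_before:.1%} -> {hct_after:.1%}")
# hematocrit: 45.0% -> 48.2%

The same concentrating effect applies to serum total protein and coagulation factors, which is why prolonged standing produces the orthostatic hypercoagulability described above.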
Plasma proteins
Albumins
Serum albumins are the most common plasma proteins, and they are responsible for maintaining the osmotic pressure of the blood. Without albumins, the consistency of blood would be closer to that of water. The colloid osmotic (oncotic) pressure generated by albumins keeps fluid from leaking out of the capillaries into the surrounding tissues. Albumins are produced in the liver, assuming the absence of a hepatocellular deficiency.
Globulins
The second most common type of protein in blood plasma is the globulins. Important globulins include the immunoglobulins, which are central to the immune system, and transport globulins, which carry hormones and other compounds around the body. There are three main types of globulins. Alpha-1 and alpha-2 globulins are formed in the liver and play an important role in mineral transport and the inhibition of blood coagulation. An example of a beta globulin found in blood plasma is low-density lipoprotein (LDL), which is responsible for transporting fat to cells for steroid and membrane synthesis. Gamma globulins, better known as immunoglobulins, are produced by plasma B cells and provide the human body with a defense system against invading pathogens and other foreign agents.
Fibrinogen
Fibrinogen proteins make up most of the remaining proteins in the blood. Fibrinogens are responsible for clotting blood to help prevent blood loss.
Color
Plasma is normally yellow due to bilirubin, carotenoids, hemoglobin, and transferrin. In abnormal cases, plasma can have varying shades of orange, green, or brown. The green color can be due to ceruloplasmin or sulfhemoglobin. The latter may form due to medicines that are able to form sulfonamides once ingested. A dark brown or reddish color can appear due to hemolysis, in which methemoglobin is released from broken blood cells. Plasma is normally relatively transparent, but sometimes it can be opaque. Opaqueness is typically due to elevated content of lipids like cholesterol and triglycerides.
Plasma vs. serum in medical diagnostics
Blood plasma and blood serum are often used in blood tests. Tests can be done on plasma, serum or both. In addition, some tests have to be done with whole blood, such as the determination of the amount of blood cells in blood via flow cytometry.
History
Plasma was already well known when described by William Harvey in De Motu Cordis in 1628, but knowledge of it probably dates as far back as Vesalius (1514–1564). The discovery of fibrinogen by William Hewson around 1770 made it easier to study plasma, as ordinarily, upon coming in contact with a foreign surface – something other than the vascular endothelium – clotting factors become activated and clotting proceeds rapidly, trapping RBCs etc. in the plasma and preventing separation of plasma from the blood. Adding citrate and other anticoagulants is a relatively recent advance. Upon the formation of a clot, the remaining clear fluid (if any) is blood serum, which is essentially plasma without the clotting factors.
The use of blood plasma as a substitute for whole blood and for transfusion purposes was proposed in March 1918, in the correspondence columns of the British Medical Journal, by Gordon R. Ward. "Dried plasmas" in powder or strips of material format were developed and first used in World War II. Prior to the United States' involvement in the war, liquid plasma and whole blood were used.
The origin of plasmapheresis
Dr. José Antonio Grifols Lucas, a scientist from Vilanova i la Geltrú, Spain, founded Laboratorios Grifols in 1940. Dr. Grifols pioneered a first-of-its-kind technique called plasmapheresis, where a donor's red blood cells would be returned to the donor's body almost immediately after the separation of the blood plasma. This technique is still in practice today, almost 80 years later. In 1945, Dr. Grifols opened the world's first plasma donation center.
Blood for Britain
The "Blood for Britain" program during the early 1940s was quite successful (and popular in the United States) based on Charles Drew's contribution. A large project began in August 1940 to collect blood in New York City hospitals for the export of plasma to Britain. Drew was appointed medical supervisor of the "Plasma for Britain" project. His notable contribution at this time was to transform the test tube methods of many blood researchers into the first successful mass production techniques.
Nevertheless, the decision was made to develop a dried plasma package for the armed forces, as it would reduce breakage and make transportation, packaging, and storage much simpler. The resulting dried plasma package came in two tin cans containing 400 cc bottles. One bottle contained enough distilled water to reconstitute the dried plasma contained within the other bottle. In about three minutes, the plasma would be ready to use and could stay fresh for around four hours. The Blood for Britain program operated successfully for five months, with almost 15,000 people donating blood and over 5,500 vials of blood plasma collected.
Following the Supplying Blood Plasma to England project, Drew was named director of the Red Cross blood bank and assistant director of the National Research Council, in charge of blood collection for the United States Army and Navy. Drew argued against the armed forces directive that blood/plasma was to be separated by the race of the donor. Drew insisted that there was no racial difference in human blood and that the policy would lead to needless deaths as soldiers and sailors were required to wait for "same race" blood.
By the end of the war the American Red Cross had provided enough blood for over six million plasma packages. Most of the surplus plasma was returned to the United States for civilian use. Serum albumin replaced dried plasma for combat use during the Korean War.
Plasma donation
Plasma as a blood product prepared from blood donations is used in blood transfusions, typically as fresh frozen plasma (FFP) or Plasma Frozen within 24 hours after phlebotomy (PF24). For whole blood or packed red blood cell (PRBC) transfusions, type O- is the most desirable and is considered a "universal donor," since it has neither A nor B antigens and can be safely transfused to most recipients. Type AB+ is the "universal recipient" type for PRBC donations. However, for plasma the situation is somewhat reversed. Blood donation centers will sometimes collect only plasma from AB donors through apheresis, as their plasma does not contain the antibodies that may cross-react with recipient antigens. As such, AB is often considered the "universal donor" for plasma. Special programs exist just to cater to the male AB plasma donor, because of concerns about transfusion-related acute lung injury (TRALI) and female donors who may have higher leukocyte antibodies. However, some studies show an increased risk of TRALI despite increased leukocyte antibodies in women who have been pregnant.
United Kingdom
Following fears of variant Creutzfeldt-Jakob disease (vCJD) being spread through the blood supply, the British government began to phase out blood plasma from U.K. donors and by the end of 1999 had imported all blood products made with plasma from the United States. In 2002, the British government purchased Life Resources Incorporated, an American blood supply company, to import plasma. The company became Plasma Resources UK (PRUK) which owned Bio Products Laboratory. In 2013, the British government sold an 80% stake in PRUK to American hedge fund Bain Capital, in a deal estimated to be worth £200 million. The sale was met with criticism in the UK. In 2009, the U.K. stopped importing plasma from the United States, as it was no longer a viable option due to regulatory and jurisdictional challenges.
At present (2024), blood donated in the United Kingdom is used by UK Blood Services for the manufacture of plasma blood components (Fresh Frozen Plasma (FFP) and cryoprecipitate). However, plasma from UK donors is still not used for the commercial manufacture of fractionated plasma medicines.
Synthetic blood plasma
Simulated body fluid (SBF) is a solution having a similar ion concentration to that of human blood plasma. SBF is normally used for the surface modification of metallic implants, and more recently in gene delivery application.
See also
Blood plasma fractionation
Chromatography in blood processing
Diag Human
Hypoxia preconditioned plasma
Intravascular volume status
References
Blood
Blood products
Body fluids
Hematology
Transfusion medicine | 0.763795 | 0.998587 | 0.762716 |
Magnesium in biology | Magnesium is an essential element in biological systems. Magnesium occurs typically as the Mg2+ ion. It is an essential mineral nutrient (i.e., element) for life and is present in every cell type in every organism. For example, adenosine triphosphate (ATP), the main source of energy in cells, must bind to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. As such, magnesium plays a role in the stability of all polyphosphate compounds in the cells, including those associated with the synthesis of DNA and RNA.
Over 300 enzymes require the presence of magnesium ions for their catalytic action, including all enzymes utilizing or synthesizing ATP, or those that use other nucleotides to synthesize DNA and RNA.
In plants, magnesium is necessary for synthesis of chlorophyll and photosynthesis.
Function
A balance of magnesium is vital to the well-being of all organisms. Magnesium is a relatively abundant ion in Earth's crust and mantle and is highly bioavailable in the hydrosphere. This availability, in combination with a useful and very unusual chemistry, may have led to its utilization in evolution as an ion for signaling, enzyme activation, and catalysis. However, the unusual nature of ionic magnesium has also led to a major challenge in the use of the ion in biological systems. Biological membranes are impermeable to magnesium (and other ions), so transport proteins must facilitate the flow of magnesium, both into and out of cells and intracellular compartments.
Human health
Inadequate magnesium intake frequently causes muscle spasms, and has been associated with cardiovascular disease, diabetes, high blood pressure, anxiety disorders, migraines, osteoporosis, and cerebral infarction. Acute deficiency (see hypomagnesemia) is rare, and is more common as a drug side-effect (such as chronic alcohol or diuretic use) than from low food intake per se, but it can occur in people fed intravenously for extended periods of time.
The most common symptom of excess oral magnesium intake is diarrhea. Supplements based on amino acid chelates (such as glycinate, lysinate etc.) are much better tolerated by the digestive system and do not have the side effects of the older compounds used, while sustained-release dietary supplements prevent the occurrence of diarrhea. Since the kidneys of adult humans excrete excess magnesium efficiently, oral magnesium poisoning in adults with normal renal function is very rare. Infants, who have less ability to excrete excess magnesium even when healthy, should not be given magnesium supplements, except under a physician's care.
Pharmaceutical preparations with magnesium are used to treat conditions including magnesium deficiency and hypomagnesemia, as well as eclampsia. Such preparations are usually in the form of magnesium sulfate or chloride when given parenterally. Magnesium is absorbed with reasonable efficiency (30% to 40%) by the body from any soluble magnesium salt, such as the chloride or citrate. Magnesium is similarly absorbed from Epsom salts, although the sulfate in these salts adds to their laxative effect at higher doses. Magnesium absorption from the insoluble oxide and hydroxide salts (milk of magnesia) is erratic and of poorer efficiency, since it depends on the neutralization and solution of the salt by the acid of the stomach, which may not be (and usually is not) complete.
Magnesium orotate may be used as adjuvant therapy in patients on optimal treatment for severe congestive heart failure, increasing survival rate and improving clinical symptoms and patient's quality of life.
In 2021, magnesium salts were the 211th most commonly prescribed medication in the United States, with more than 2 million prescriptions.
Nerve conduction
Magnesium can affect muscle relaxation through direct action on cell membranes. Mg2+ ions close certain types of calcium channels, which conduct positively charged calcium ions into neurons. With an excess of magnesium, more channels will be blocked and nerve cell activity will decrease.
Hypertension
Intravenous magnesium sulphate is used in treating pre-eclampsia. For hypertension not related to pregnancy, a meta-analysis of 22 clinical trials, with doses ranging from 120 to 973 mg/day and a mean dose of 410 mg, concluded that magnesium supplementation had a small but statistically significant effect, lowering systolic blood pressure by 3–4 mm Hg and diastolic blood pressure by 2–3 mm Hg. The effect was larger when the dose was more than 370 mg/day.
Diabetes and glucose tolerance
Higher dietary intakes of magnesium correspond to lower diabetes incidence. For people with diabetes or at high risk of diabetes, magnesium supplementation lowers fasting glucose.
Mitochondria
Magnesium is essential as part of the process that generates adenosine triphosphate.
Mitochondria are often referred to as the "powerhouses of the cell" because their primary role is generating energy for cellular processes. They achieve this by breaking down nutrients, primarily glucose, through a series of chemical reactions known as cellular respiration. This process ultimately produces adenosine triphosphate (ATP), the cell's main energy currency.
Vitamin D
Magnesium and vitamin D have a synergistic relationship in the body, meaning they work together to optimize each other's functions:
Magnesium activates vitamin D
Vitamin D influences magnesium absorption.
Bone health: They play crucial roles in calcium absorption and bone metabolism.
Muscle function: They contribute to muscle contraction and relaxation, impacting physical performance and overall well-being.
Immune function: They support a healthy immune system and may help reduce inflammation.
Overall, maintaining adequate levels of both magnesium and vitamin D is essential for optimal health and well-being.
Testosterone
It is theorized that the process of making testosterone from cholesterol needs magnesium to function properly.
Studies have shown that significant gains in testosterone occur after taking 10 mg magnesium/kg body weight/day (for example, about 700 mg/day for a 70 kg adult).
Dietary recommendations
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for magnesium in 1997. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. The current EARs for magnesium for women and men ages 31 and up are 265 mg/day and 350 mg/day, respectively. The RDAs are 320 and 420 mg/day. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. RDA for pregnancy is 350 to 400 mg/day depending on age of the woman. RDA for lactation ranges 310 to 360 mg/day for same reason. For children ages 1–13 years, the RDA increases with age from 65 to 200 mg/day. As for safety, the IOM also sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of magnesium the UL is set at 350 mg/day. The UL is specific to magnesium consumed as a dietary supplement, the reason being that too much magnesium consumed at one time can cause diarrhea. The UL does not apply to food-sourced magnesium. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes.
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men ages 18 and older, the AIs are set at 300 and 350 mg/day, respectively. AIs for pregnancy and lactation are also 300 mg/day. For children ages 1–17 years, the AIs increase with age from 170 to 250 mg/day. These AIs are lower than the U.S. RDAs. The European Food Safety Authority reviewed the same safety question and set its UL at 250 mg/day - lower than the U.S. value. The magnesium UL is unique in that it is lower than some of the RDAs. It applies to intake from a pharmacological agent or dietary supplement only and does not include intake from food and water.
Labeling
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of daily value (%DV). For magnesium labeling purposes, 100% of the daily value was 400 mg, but as of May 27, 2016, it was revised to 420 mg to bring it into agreement with the RDA. A table of the old and new adult Daily Values is provided at Reference Daily Intake.
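As a small worked example of the labeling rule, the Python sketch below computes the percent Daily Value from the 420 mg figure in effect since the 2016 revision; the serving amount used is hypothetical.

# %DV for magnesium on a U.S. label, using the 420 mg Daily Value noted above.
DAILY_VALUE_MG = 420

def percent_dv(mg_per_serving):
    return 100.0 * mg_per_serving / DAILY_VALUE_MG

print(f"{percent_dv(105):.0f}% DV")   # a (hypothetical) 105 mg serving labels as 25% DV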
Food sources
Green vegetables such as spinach provide magnesium because of the abundance of chlorophyll molecules, which contain the ion. Nuts (especially Brazil nuts, cashews and almonds), seeds (e.g., pumpkin seeds), dark chocolate, roasted soybeans, bran, and some whole grains are also good sources of magnesium.
Although many foods contain magnesium, it is usually found in low levels. As with most nutrients, daily needs for magnesium are unlikely to be met by one serving of any single food. Eating a wide variety of fruits, vegetables, and grains will help ensure adequate intake of magnesium.
Because magnesium readily dissolves in water, refined foods, which are often processed or cooked in water and dried, in general, are poor sources of the nutrient. For example, whole-wheat bread has twice as much magnesium as white bread because the magnesium-rich germ and bran are removed when white flour is processed. The table of food sources of magnesium suggests many dietary sources of magnesium.
"Hard" water can also provide magnesium, but "soft" water contains less of the ion. Dietary surveys do not assess magnesium intake from water, which may lead to underestimating total magnesium intake and its variability.
Too much magnesium may make it difficult for the body to absorb calcium. Not enough magnesium can lead to hypomagnesemia as described above, with irregular heartbeats, high blood pressure (a sign in humans but not some experimental animals such as rodents), insomnia, and muscle spasms (fasciculation). However, as noted, symptoms of low magnesium from pure dietary deficiency are thought to be rarely encountered.
Following are some foods and the amount of magnesium in them; a rough tally against the RDA is sketched after the list:
Pumpkin seeds, no hulls ( cup) = 303 mg
Chia seeds, ( cup) = 162 mg
Buckwheat flour ( cup) = 151 mg
Brazil nuts ( cup) = 125 mg
Oat bran, raw ( cup) = 110 mg
Cocoa powder ( cup) = 107 mg
Halibut (3 oz) = 103 mg
Almonds ( cup) = 99 mg
Cashews ( cup) = 89 mg
Whole wheat flour ( cup) = 83 mg
Spinach, boiled ( cup) = 79 mg
Swiss chard, boiled ( cup) = 75 mg
Chocolate, 70% cocoa (1 oz) = 73 mg
Tofu, firm ( cup) = 73 mg
Black beans, boiled ( cup) = 60 mg
Quinoa, cooked ( cup) = 59 mg
Peanut butter (2 tablespoons) = 50 mg
Walnuts ( cup) = 46 mg
Sunflower seeds, hulled ( cup) = 41 mg
Chickpeas, boiled ( cup) = 39 mg
Kale, boiled ( cup) = 37 mg
Lentils, boiled ( cup) = 36 mg
Oatmeal, cooked ( cup) = 32 mg
Fish sauce (1 Tbsp) = 32 mg
Milk, non fat (1 cup) = 27 mg
Coffee, espresso (1 oz) = 24 mg
Whole wheat bread (1 slice) = 23 mg
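The per-serving figures above can be summed against the adult male RDA of 420 mg/day cited earlier, as in the Python sketch below. The particular day's menu is invented and purely illustrative; the serving sizes follow the list entries.

# Rough daily magnesium tally from the per-serving figures listed above.
# The menu is hypothetical; the values come from the list entries.
foods_mg = {
    "almonds": 99,
    "spinach, boiled": 79,
    "black beans, boiled": 60,
    "whole wheat bread (2 slices)": 2 * 23,
    "milk, non-fat (1 cup)": 27,
    "chocolate, 70% cocoa (1 oz)": 73,
}

RDA_MG = 420   # adult male RDA from the dietary recommendations section
total = sum(foods_mg.values())
print(f"{total} mg, about {100 * total / RDA_MG:.0f}% of a 420 mg RDA")
# 384 mg, about 91% of a 420 mg RDA

This also illustrates the point made above that a single serving of any one food rarely meets the daily requirement, whereas a varied diet gets close.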
Biological range, distribution, and regulation
In animals, it has been shown that different cell types maintain different concentrations of magnesium. It seems likely that the same is true for plants. This suggests that different cell types may regulate influx and efflux of magnesium in different ways based on their unique metabolic needs. Interstitial and systemic concentrations of free magnesium must be delicately maintained by the combined processes of buffering (binding of ions to proteins and other molecules) and muffling (the transport of ions to storage or extracellular spaces).
In plants, and more recently in animals, magnesium has been recognized as an important signaling ion, both activating and mediating many biochemical reactions. The best example of this is perhaps the regulation of carbon fixation in chloroplasts in the Calvin cycle.
Magnesium is very important in cellular function. Deficiency of the nutrient causes disease of the affected organism. In single-cell organisms such as bacteria and yeast, low levels of magnesium manifest as greatly reduced growth rates. In magnesium transport knockout strains of bacteria, healthy growth rates are maintained only with exposure to very high external concentrations of the ion. In yeast, mitochondrial magnesium deficiency also leads to disease.
Plants deficient in magnesium show stress responses. The first observable signs of both magnesium starvation and overexposure in plants is a decrease in the rate of photosynthesis. This is due to the central position of the Mg2+ ion in the chlorophyll molecule. The later effects of magnesium deficiency on plants are a significant reduction in growth and reproductive viability. Magnesium can also be toxic to plants, although this is typically seen only in drought conditions.
In animals, magnesium deficiency (hypomagnesemia) is seen when the environmental availability of magnesium is low. In ruminant animals, particularly vulnerable to magnesium availability in pasture grasses, the condition is known as 'grass tetany'. Hypomagnesemia is identified by a loss of balance due to muscle weakness. A number of genetically attributable hypomagnesemia disorders have also been identified in humans.
Overexposure to magnesium may be toxic to individual cells, though these effects have been difficult to show experimentally. Hypermagnesemia, an overabundance of magnesium in the blood, is usually caused by loss of kidney function. Healthy animals rapidly excrete excess magnesium in the urine and stool. Urinary magnesium is called magnesuria. Characteristic concentrations of magnesium in model organisms are: in E. coli 30–100 mM (bound) and 0.01–1 mM (free), in budding yeast 50 mM, in mammalian cells 10 mM (bound) and 0.5 mM (free), and in blood plasma 1 mM.
Biological chemistry
Mg2+ is the fourth-most-abundant metal ion in cells (per moles) and the most abundant free divalent cation — as a result, it is deeply and intrinsically woven into cellular metabolism. Indeed, Mg2+-dependent enzymes appear in virtually every metabolic pathway: Specific binding of Mg2+ to biological membranes is frequently observed, Mg2+ is also used as a signalling molecule, and much of nucleic acid biochemistry requires Mg2+, including all reactions that require release of energy from ATP. In nucleotides, the triple-phosphate moiety of the compound is invariably stabilized by association with Mg2+ in all enzymatic processes.
Chlorophyll
In photosynthetic organisms, Mg2+ has the additional vital role of being the coordinating ion in the chlorophyll molecule. This role was discovered by Richard Willstätter, who received the 1915 Nobel Prize in Chemistry for his work on the purification and structure of chlorophyll.
Enzymes
The chemistry of the Mg2+ ion, as applied to enzymes, uses the full range of this ion's unusual reaction chemistry to fulfill a range of functions. Mg2+ interacts with substrates, enzymes, and occasionally both (Mg2+ may form part of the active site). In general, Mg2+ interacts with substrates through inner sphere coordination, stabilising anions or reactive intermediates, also including binding to ATP and activating the molecule to nucleophilic attack. When interacting with enzymes and other proteins, Mg2+ may bind using inner or outer sphere coordination, to either alter the conformation of the enzyme or take part in the chemistry of the catalytic reaction. In either case, because Mg2+ is only rarely fully dehydrated during ligand binding, it may be a water molecule associated with the Mg2+ that is important rather than the ion itself. The Lewis acidity of Mg2+ (pKa 11.4) is used to allow both hydrolysis and condensation reactions (most common ones being phosphate ester hydrolysis and phosphoryl transfer) that would otherwise require pH values greatly removed from physiological values.
Essential role in the biological activity of ATP
ATP (adenosine triphosphate), the main source of energy in cells, must be bound to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP.
Nucleic acids
Nucleic acids have an important range of interactions with Mg2+. The binding of Mg2+ to DNA and RNA stabilises structure; this can be observed in the increased melting temperature (Tm) of double-stranded DNA in the presence of Mg2+. In addition, ribosomes contain large amounts of Mg2+ and the stabilisation provided is essential to the complexation of this ribo-protein. A large number of enzymes involved in the biochemistry of nucleic acids bind Mg2+ for activity, using the ion for both activation and catalysis. Finally, the autocatalysis of many ribozymes (enzymes containing only RNA) is Mg2+ dependent (e.g. the yeast mitochondrial group II self splicing introns).
Magnesium ions can be critical in maintaining the positional integrity of closely clustered phosphate groups. These clusters appear in numerous and distinct parts of the cell nucleus and cytoplasm. For instance, hexahydrated Mg2+ ions bind in the deep major groove and at the outer mouth of A-form nucleic acid duplexes.
Cell membranes and walls
Biological cell membranes and cell walls are polyanionic surfaces. This has important implications for the transport of ions, in particular because it has been shown that different membranes preferentially bind different ions. Both Mg2+ and Ca2+ regularly stabilize membranes by the cross-linking of carboxylated and phosphorylated head groups of lipids. However, the envelope membrane of E. coli has also been shown to bind Na+, K+, Mn2+ and Fe3+. The transport of ions is dependent on both the concentration gradient of the ion and the electric potential (ΔΨ) across the membrane, which will be affected by the charge on the membrane surface. For example, the specific binding of Mg2+ to the chloroplast envelope has been implicated in a loss of photosynthetic efficiency by the blockage of K+ uptake and the subsequent acidification of the chloroplast stroma.
Proteins
The Mg2+ ion tends to bind only weakly to proteins (Ka ≤ 10^5) and this can be exploited by the cell to switch enzymatic activity on and off by changes in the local concentration of Mg2+. Although the concentration of free cytoplasmic Mg2+ is on the order of 1 mmol/L, the total Mg2+ content of animal cells is 30 mmol/L and in plants the content of leaf endodermal cells has been measured at values as high as 100 mmol/L (Stelzer et al., 1990), much of which is buffered in storage compartments. The cytoplasmic concentration of free Mg2+ is buffered by binding to chelators (e.g., ATP), but also, what is more important, it is buffered by storage of Mg2+ in intracellular compartments. The transport of Mg2+ between intracellular compartments may be a major part of regulating enzyme activity. The interaction of Mg2+ with proteins must also be considered for the transport of the ion across biological membranes.
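The idea that weak binding lets local Mg2+ changes act as a switch can be made concrete with the standard single-site binding relation, occupancy = [Mg2+] / (Kd + [Mg2+]). The Python sketch below assumes a hypothetical site with Kd = 1 mM (i.e. Ka = 10^3 M−1, comfortably below the ≤10^5 ceiling mentioned above) and swings around the roughly millimolar free Mg2+ level quoted in this section; the numbers illustrate the principle and do not describe any particular protein.

# Fractional occupancy of a weak, single Mg2+ binding site as free Mg2+
# varies around its resting level. Kd is an assumed value chosen to
# illustrate the "switch" behaviour, not data for a real enzyme.
KD_MM = 1.0   # assumed dissociation constant, in mmol/L

def occupancy(free_mg_mM, kd_mM=KD_MM):
    return free_mg_mM / (kd_mM + free_mg_mM)

for mg in (0.25, 0.5, 1.0, 2.0):   # plausible swings around ~1 mmol/L free Mg2+
    print(f"free Mg2+ = {mg:>4} mM -> site {occupancy(mg):.0%} occupied")
# 20%, 33%, 50%, 67% occupied: a two- to three-fold change in activity
# for a modest change in free Mg2+, which a tightly binding site would not show.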
Manganese
In biological systems, only manganese (Mn2+) is readily capable of replacing Mg2+, but only in a limited set of circumstances. Mn2+ is very similar to Mg2+ in terms of its chemical properties, including inner and outer shell complexation. Mn2+ effectively binds ATP and allows hydrolysis of the energy molecule by most ATPases. Mn2+ can also replace Mg2+ as the activating ion for a number of Mg2+-dependent enzymes, although some enzyme activity is usually lost. Sometimes such enzyme metal preferences vary among closely related species: For example, the reverse transcriptase enzyme of lentiviruses like HIV, SIV and FIV is typically dependent on Mg2+, whereas the analogous enzyme for other retroviruses prefers Mn2+.
Importance in drug binding
An article investigating the structural basis of interactions between clinically relevant antibiotics and the 50S ribosome appeared in Nature in October 2001. High-resolution X-ray crystallography established that these antibiotics associate only with the 23S rRNA of a ribosomal subunit, and no interactions are formed with a subunit's protein portion. The article stresses that the results show "the importance of putative Mg2+ ions for the binding of some drugs".
Measuring magnesium in biological samples
By radioactive isotopes
The use of radioactive tracer elements in ion uptake assays allows the calculation of km, Ki and Vmax and determines the initial change in the ion content of the cells. 28Mg decays by the emission of a high-energy beta or gamma particle, which can be measured using a scintillation counter. However, the radioactive half-life of 28Mg, the most stable of the radioactive magnesium isotopes, is only 21 hours. This severely restricts the experiments involving the nuclide. Also, since 1990, no facility has routinely produced 28Mg, and the price per mCi is now predicted to be approximately US$30,000. The chemical nature of Mg2+ is such that it is closely approximated by few other cations. However, Co2+, Mn2+ and Ni2+ have been used successfully to mimic the properties of Mg2+ in some enzyme reactions, and radioactive forms of these elements have been employed successfully in cation transport studies. The difficulty of using metal ion replacement in the study of enzyme function is that the relationship between the enzyme activities with the replacement ion compared to the original is very difficult to ascertain.
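To see why the 21-hour half-life is so restrictive, the Python sketch below estimates how long a 28Mg tracer stays usable under simple first-order decay. The 10% cut-off is an arbitrary illustration of when counting statistics become poor, not a standard threshold.

# Given the 21-hour half-life quoted above, estimate how long a 28Mg
# tracer remains usable. The 10% cut-off is an arbitrary illustration.
import math

HALF_LIFE_H = 21.0

def hours_until_fraction(fraction):
    """Time for the activity to decay to the given fraction of its starting value."""
    return HALF_LIFE_H * math.log2(1.0 / fraction)

print(f"{hours_until_fraction(0.10):.0f} h until 10% of the activity remains")
# about 70 h, i.e. roughly three days, after which little signal is left for uptake assays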
By fluorescent indicators
A number of chelators of divalent cations have different fluorescence spectra in the bound and unbound states. Chelators for Ca2+ are well established, have high affinity for the cation, and low interference from other ions. Mg2+ chelators lag behind and the major fluorescence dye for Mg2+ (mag-fura 2) actually has a higher affinity for Ca2+. This limits the application of this dye to cell types where the resting level of Ca2+ is < 1 μM and does not vary with the experimental conditions under which Mg2+ is to be measured. Recently, Otten et al. (2001) have described work into a new class of compounds that may prove more useful, having significantly better binding affinities for Mg2+. The use of the fluorescent dyes is limited to measuring the free Mg2+. If the ion concentration is buffered by the cell by chelation or removal to subcellular compartments, the measured rate of uptake will give only minimum values of km and Vmax.
By electrophysiology
First, ion-specific microelectrodes can be used to measure the internal free ion concentration of cells and organelles. The major advantages are that readings can be made from cells over relatively long periods of time, and that unlike dyes very little extra ion buffering capacity is added to the cells.
Second, the technique of two-electrode voltage-clamp allows the direct measurement of the ion flux across the membrane of a cell. The membrane is held at an electric potential and the responding current is measured. All ions passing across the membrane contribute to the measured current.
Third, the technique of patch-clamp uses isolated sections of natural or artificial membrane in much the same manner as voltage-clamp but without the secondary effects of a cellular system. Under ideal conditions the conductance of individual channels can be quantified. This methodology gives the most direct measurement of the action of ion channels.
By absorption spectroscopy
Flame atomic absorption spectroscopy (AAS) determines the total magnesium content of a biological sample. This method is destructive; biological samples must be broken down in concentrated acids to avoid clogging the fine nebulising apparatus. Beyond this, the only limitation is that samples must be in a volume of approximately 2 mL and at a concentration range of 0.1 – 0.4 μmol/L for optimum accuracy. As this technique cannot distinguish between Mg2+ already present in the cell and that taken up during the experiment, only the total magnesium content, not the uptake itself, can be quantified.
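In practice, a digested sample usually has to be diluted into the 0.1–0.4 μmol/L window quoted above. The Python sketch below picks a fold-dilution that lands near the middle of that window; the starting concentration is invented, and real protocols depend on the instrument and matrix.

# Choosing a dilution so that a digested sample falls inside the
# 0.1-0.4 umol/L window quoted above for flame AAS.
OPT_LOW, OPT_HIGH = 0.1, 0.4   # umol/L, optimum range from the text
TARGET = 0.25                  # aim for the middle of the window

def dilution_factor(sample_umol_per_L):
    """Fold-dilution needed to bring the sample near the target concentration."""
    return sample_umol_per_L / TARGET

sample = 12.0                  # umol/L after acid digestion (hypothetical)
factor = dilution_factor(sample)
diluted = sample / factor
assert OPT_LOW <= diluted <= OPT_HIGH   # confirm it lands in the optimal window
print(f"dilute {factor:.0f}-fold -> {diluted:.2f} umol/L")
# dilute 48-fold -> 0.25 umol/L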
Inductively coupled plasma (ICP) using either the mass spectrometry (MS) or atomic emission spectroscopy (AES) modifications also allows the determination of the total ion content of biological samples. These techniques are more sensitive than flame AAS and are capable of measuring the quantities of multiple ions simultaneously. However, they are also significantly more expensive.
Magnesium transport
The chemical and biochemical properties of Mg2+ present the cellular system with a significant challenge when transporting the ion across biological membranes. The dogma of ion transport states that the transporter recognises the ion then progressively removes the water of hydration, removing most or all of the water at a selective pore before releasing the ion on the far side of the membrane. Due to the properties of Mg2+, large volume change from hydrated to bare ion, high energy of hydration and very low rate of ligand exchange in the inner coordination sphere, these steps are probably more difficult than for most other ions. To date, only the ZntA protein of Paramecium has been shown to be a Mg2+ channel. The mechanisms of Mg2+ transport by the remaining proteins are beginning to be uncovered with the first three-dimensional structure of a Mg2+ transport complex being solved in 2004.
The hydration shell of the Mg2+ ion has a very tightly bound inner shell of six water molecules and a relatively tightly bound second shell containing 12–14 water molecules (Markham et al., 2002). Thus, it is presumed that recognition of the Mg2+ ion requires some mechanism to interact initially with the hydration shell of Mg2+, followed by a direct recognition/binding of the ion to the protein. Due to the strength of the inner sphere complexation between Mg2+ and any ligand, multiple simultaneous interactions with the transport protein at this level might significantly retard the ion in the transport pore. Hence, it is possible that much of the hydration water is retained during transport, allowing the weaker (but still specific) outer sphere coordination.
In spite of the mechanistic difficulty, Mg2+ must be transported across membranes, and a large number of Mg2+ fluxes across membranes from a variety of systems have been described. However, only a small selection of Mg2+ transporters have been characterised at the molecular level.
Ligand ion channel blockade
Magnesium ions (Mg2+) in cellular biology are usually, in almost all senses, opposite to Ca2+ ions: although magnesium is also divalent, the bare Mg2+ ion is smaller and has a higher charge density, so it holds its water of hydration more tightly, preventing passage through the channel. Thus, Mg2+ ions block Ca2+ channels such as NMDA channels and have been shown to affect gap junction channels forming electrical synapses.
Plant physiology of magnesium
The previous sections have dealt in detail with the chemical and biochemical aspects of Mg2+ and its transport across cellular membranes. This section will apply this knowledge to aspects of whole plant physiology, in an attempt to show how these processes interact with the larger and more complex environment of the multicellular organism.
Nutritional requirements and interactions
Mg2+ is essential for plant growth and is present in higher plants in amounts on the order of 80 μmol g−1 dry weight. The amounts of Mg2+ vary in different parts of the plant and are dependent upon nutritional status. In times of plenty, excess Mg2+ may be stored in vascular cells (Stelzer et al., 1990), and in times of starvation Mg2+ is redistributed, in many plants, from older to newer leaves.
Mg2+ is taken up into plants via the roots. Interactions with other cations in the rhizosphere can have a significant effect on the uptake of the ion (Kurvits and Kirkby, 1980). The structure of root cell walls is highly permeable to water and ions, and hence ion uptake into root cells can occur anywhere from the root hairs to cells located almost in the centre of the root (limited only by the Casparian strip). Plant cell walls and membranes carry a great number of negative charges, and the interactions of cations with these charges are key to the uptake of cations by root cells, allowing a local concentrating effect. Mg2+ binds relatively weakly to these charges, and can be displaced by other cations, impeding uptake and causing deficiency in the plant.
Within individual plant cells, the Mg2+ requirements are largely the same as for all cellular life; Mg2+ is used to stabilise membranes, is vital to the utilisation of ATP, is extensively involved in the nucleic acid biochemistry, and is a cofactor for many enzymes (including the ribosome). Also, Mg2+ is the coordinating ion in the chlorophyll molecule. It is the intracellular compartmentalisation of Mg2+ in plant cells that leads to additional complexity. Four compartments within the plant cell have reported interactions with Mg2+. Initially, Mg2+ will enter the cell into the cytoplasm (by an as yet unidentified system), but free Mg2+ concentrations in this compartment are tightly regulated at relatively low levels (≈2 mmol/L) and so any excess Mg2+ is either quickly exported or stored in the second intracellular compartment, the vacuole. The requirement for Mg2+ in mitochondria has been demonstrated in yeast and it seems highly likely that the same will apply in plants. The chloroplasts also require significant amounts of internal Mg2+, and low concentrations of cytoplasmic Mg2+. In addition, it seems likely that the other subcellular organelles (e.g., Golgi, endoplasmic reticulum, etc.) also require Mg2+.
Distributing magnesium ions within the plant
Once in the cytoplasmic space of root cells Mg2+, along with the other cations, is probably transported radially into the stele and the vascular tissue. From the cells surrounding the xylem the ions are released or pumped into the xylem and carried up through the plant. In the case of Mg2+, which is highly mobile in both the xylem and phloem, the ions will be transported to the top of the plant and back down again in a continuous cycle of replenishment. Hence, uptake and release from vascular cells is probably a key part of whole plant Mg2+ homeostasis. Figure 1 shows how few processes have been connected to their molecular mechanisms (only vacuolar uptake has been associated with a transport protein, AtMHX).
The diagram shows a schematic of a plant and the putative processes of Mg2+ transport at the root and leaf where Mg2+ is loaded and unloaded from the vascular tissues. Mg2+ is taken up into the root cell wall space (1) and interacts with the negative charges associated with the cell walls and membranes. Mg2+ may be taken up into cells immediately (symplastic pathway) or may travel as far as the Casparian band (4) before being absorbed into cells (apoplastic pathway; 2). The concentration of Mg2+ in the root cells is probably buffered by storage in root cell vacuoles (3). Note that cells in the root tip do not contain vacuoles. Once in the root cell cytoplasm, Mg2+ travels toward the centre of the root by plasmodesmata, where it is loaded into the xylem (5) for transport to the upper parts of the plant. When the Mg2+ reaches the leaves it is unloaded from the xylem into cells (6) and again is buffered in vacuoles (7). Whether cycling of Mg2+ into the phloem occurs via general cells in the leaf (8) or directly from xylem to phloem via transfer cells (9) is unknown. Mg2+ may return to the roots in the phloem sap.
When a Mg2+ ion has been absorbed by a cell requiring it for metabolic processes, it is generally assumed that the ion stays in that cell for as long as the cell is active. In vascular cells, this is not always the case; in times of plenty, Mg2+ is stored in the vacuole, takes no part in the day-to-day metabolic processes of the cell (Stelzer et al., 1990), and is released at need. But for most cells it is death by senescence or injury that releases Mg2+ and many of the other ionic constituents, recycling them into healthy parts of the plant. In addition, when Mg2+ in the environment is limiting, some species are able to mobilise Mg2+ from older tissues. These processes involve the release of Mg2+ from its bound and stored states and its transport back into the vascular tissue, where it can be distributed to the rest of the plant. In times of growth and development, Mg2+ is also remobilised within the plant as source and sink relationships change.
The homeostasis of Mg2+ within single plant cells is maintained by processes occurring at the plasma membrane and at the vacuole membrane (see Figure 2). The major driving force for the translocation of ions in plant cells is ΔpH. H+-ATPases pump H+ ions against their concentration gradient to maintain the pH differential that can be used for the transport of other ions and molecules. H+ ions are pumped out of the cytoplasm into the extracellular space or into the vacuole. The entry of Mg2+ into cells may occur through one of two pathways, via channels using the ΔΨ (negative inside) across this membrane or by symport with H+ ions. To transport the Mg2+ ion into the vacuole requires a Mg2+/H+ antiport transporter (such as AtMHX). The H+-ATPases are dependent on Mg2+ (bound to ATP) for activity, so that Mg2+ is required to maintain its own homeostasis.
A schematic of a plant cell is shown including the four major compartments currently recognised as interacting with Mg2+. H+-ATPases maintain a constant ΔpH across the plasma membrane and the vacuole membrane. Mg2+ is transported into the vacuole using the energy of ΔpH (in A. thaliana by AtMHX). Transport of Mg2+ into cells may use either the negative ΔΨ or the ΔpH. The transport of Mg2+ into mitochondria probably uses ΔΨ as in the mitochondria of yeast, and it is likely that chloroplasts take Mg2+ by a similar system. The mechanism and the molecular basis for the release of Mg2+ from vacuoles and from the cell is not known. Likewise, the light-regulated Mg2+ concentration changes in chloroplasts are not fully understood, but do require the transport of H+ ions across the thylakoid membrane.
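To make the energetics described above more concrete, the short sketch below estimates the free-energy change for moving Mg2+ from the apoplast into the cytoplasm through a channel, combining the concentration term with the membrane-potential term. Only the ~2 mmol/L figure for free cytoplasmic Mg2+ comes from the text; the external concentration and membrane potential are illustrative assumptions, and a negative result simply indicates that entry down the electrical gradient is thermodynamically favourable.

```python
# Minimal sketch: electrochemical driving force for Mg2+ entry across the plasma membrane.
# The external concentration and membrane potential are illustrative assumptions;
# the ~2 mmol/L free cytoplasmic Mg2+ is the figure quoted in the text.
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 298.15     # temperature, K
z = 2          # charge of the Mg2+ ion

def delta_g_entry(c_in_mM, c_out_mM, delta_psi_V):
    """Free-energy change (J/mol) for moving one mole of Mg2+ from outside to inside."""
    chemical = R * T * math.log(c_in_mM / c_out_mM)  # concentration (chemical) term
    electrical = z * F * delta_psi_V                 # membrane-potential term; inside-negative favours cation entry
    return chemical + electrical

# Illustrative values: 2 mM free Mg2+ inside, 0.5 mM outside, -120 mV across the plasma membrane.
dG = delta_g_entry(c_in_mM=2.0, c_out_mM=0.5, delta_psi_V=-0.120)
print(f"dG ~ {dG / 1000:.1f} kJ/mol")  # negative => passive entry through channels is favourable
```

By the same logic, moving Mg2+ into the vacuole against its gradient requires coupling to the H+ gradient maintained by the H+-ATPases, which is the role attributed above to the Mg2+/H+ antiporter AtMHX.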
Magnesium, chloroplasts and photosynthesis
Mg2+ is the coordinating metal ion in the chlorophyll molecule, and in plants where the ion is in plentiful supply, about 6% of the total Mg2+ is bound to chlorophyll. Thylakoid stacking is stabilised by Mg2+ and is important for the efficiency of photosynthesis, allowing phase transitions to occur.
Mg2+ is probably taken up into chloroplasts to the greatest extent during the light-induced development from proplastid to chloroplast or etioplast to chloroplast. At these times, the synthesis of chlorophyll and the biogenesis of the thylakoid membrane stacks absolutely require the divalent cation.
Whether Mg2+ is able to move into and out of chloroplasts after this initial developmental phase has been the subject of several conflicting reports. Deshaies et al. (1984) found that Mg2+ did move in and out of isolated chloroplasts from young pea plants, but Gupta and Berkowitz (1989) were unable to reproduce the result using older spinach chloroplasts. Deshaies et al. had stated in their paper that older pea chloroplasts showed less significant changes in Mg2+ content than those used to form their conclusions. The relative proportion of immature chloroplasts present in the preparations may explain these observations.
The metabolic state of the chloroplast changes considerably between night and day. During the day, the chloroplast is actively harvesting the energy of light and converting it into chemical energy. The activation of the metabolic pathways involved comes from the changes in the chemical nature of the stroma on the addition of light. H+ is pumped out of the stroma (into both the cytoplasm and the lumen) leading to an alkaline pH. Mg2+ (along with K+) is released from the lumen into the stroma, in an electroneutralisation process to balance the flow of H+. Finally, thiol groups on enzymes are reduced by a change in the redox state of the stroma. Examples of enzymes activated in response to these changes are fructose 1,6-bisphosphatase, sedoheptulose bisphosphatase and ribulose-1,5-bisphosphate carboxylase. During the dark period, if these enzymes were active a wasteful cycling of products and substrates would occur.
Two major classes of the enzymes that interact with Mg2+ in the stroma during the light phase can be identified. Firstly, enzymes in the glycolytic pathway most often interact with two Mg2+ ions: the first acts as an allosteric modulator of the enzyme's activity, while the second forms part of the active site and is directly involved in the catalytic reaction. The second class of enzymes includes those in which Mg2+ is complexed to nucleotide di- and triphosphates (ADP and ATP), with the chemical change involving phosphoryl transfer. Mg2+ may also serve in a structural maintenance role in these enzymes (e.g., enolase).
Magnesium stress
Plant stress responses can be observed in plants that are under- or over-supplied with Mg2+. The first observable sign of Mg2+ stress, for both starvation and toxicity, is a depression of the rate of photosynthesis, presumably because of the strong relationship between Mg2+ and chloroplasts/chlorophyll. In pine trees, even before the visible appearance of yellowing and necrotic spots, the photosynthetic efficiency of the needles drops markedly. In Mg2+ deficiency, reported secondary effects include carbohydrate immobility, loss of RNA transcription and loss of protein synthesis. However, due to the mobility of Mg2+ within the plant, the deficiency phenotype may be present only in the older parts of the plant. For example, in Pinus radiata starved of Mg2+, one of the earliest identifying signs is chlorosis in the needles on the lower branches of the tree, because Mg2+ has been recovered from these tissues and moved to growing (green) needles higher in the tree.
A Mg2+ deficit can be caused by the lack of the ion in the growth medium (soil), but more commonly comes from inhibition of its uptake. Mg2+ binds quite weakly to the negatively charged groups in the root cell walls, so that excesses of other cations such as K+, NH4+, Ca2+, and Mn2+ can all impede uptake (Kurvits and Kirkby, 1980). In acid soils, Al3+ is a particularly strong inhibitor of Mg2+ uptake. The inhibition by Al3+ and Mn2+ is more severe than can be explained by simple displacement, so it is possible that these ions bind to the Mg2+ uptake system directly. In bacteria and yeast, such binding by Mn2+ has already been observed. Stress responses in the plant develop as cellular processes halt due to a lack of Mg2+ (e.g. maintenance of ΔpH across the plasma and vacuole membranes). In Mg2+-starved plants under low light conditions, the percentage of Mg2+ bound to chlorophyll has been recorded at 50%; presumably, this imbalance has detrimental effects on other cellular processes.
Mg2+ toxicity stress is more difficult to induce. When Mg2+ is plentiful, the plants generally take up the ion and store it (Stelzer et al., 1990). However, if this is followed by drought, then ionic concentrations within the cell can increase dramatically. High cytoplasmic Mg2+ concentrations block a K+ channel in the inner envelope membrane of the chloroplast, in turn inhibiting the removal of H+ ions from the chloroplast stroma. This leads to an acidification of the stroma that inactivates key enzymes in carbon fixation, which in turn leads to the production of oxygen free radicals in the chloroplast and consequent oxidative damage.
See also
Biology and pharmacology of chemical elements
Magnesium deficiency (agriculture)
Notes
References
External links
Magnesium Deficiency
List of foods rich in Magnesium
The Magnesium Website- Includes full text papers and textbook chapters by leading magnesium authorities Mildred Seelig, Jean Durlach, Burton M. Altura and Bella T. Altura. Links to over 300 articles discussing magnesium and magnesium deficiency.
Dietary Reference Intake
Healing Thresholds - description of research studies regarding supplementation with magnesium and other therapies for autism
Military medicine
The term military medicine has a number of potential connotations. It may mean:
A medical specialty, specifically a branch of occupational medicine attending to the medical risks and needs (both preventive and interventional) of soldiers, sailors and other service members. This disparate arena has historically involved the prevention and treatment of infectious diseases (especially tropical diseases), and, in the 20th century, the ergonomics and health effects of operating military-specific machines and equipment such as submarines, tanks, helicopters and airplanes. Undersea and aviation medicine can be understood as subspecialties of military medicine, or in any case originated as such. Few countries certify or recognize "military medicine" as a formal speciality or subspeciality in its own right.
The planning and practice of the surgical management of mass battlefield casualties and the logistical and administrative considerations of establishing and operating combat support hospitals. This involves military medical hierarchies, especially the organization of structured medical command and administrative systems that interact with and support deployed combat units. (See Battlefield medicine.)
The administration and practice of health care for military service members and their dependents in non-deployed (peacetime) settings. This may (as in the United States) consist of a medical system paralleling all the medical specialties and sub-specialties that exist in the civilian sector. (See also Veterans Health Administration which serves U.S. veterans.)
Medical research and development specifically bearing upon problems of military medical interest. Historically, this encompasses all of the medical advances emerging from medical research efforts directed at addressing the problems encountered by deployed military forces (e.g., vaccines or drugs for soldiers, medical evacuation systems, drinking water chlorination, etc.) many of which ultimately prove important beyond the purely military considerations that inspired them.
Legal status
Military medical personnel engage in humanitarian work and are "protected persons" under international humanitarian law in accordance with the First and Second Geneva Conventions and their Additional Protocols, which established legally binding rules guaranteeing neutrality and protection for wounded soldiers, field or ship's medical personnel, and specific humanitarian institutions in an armed conflict. International humanitarian law makes no distinction between medical personnel who are members of the armed forces (and who hold military ranks) and those who are civilian volunteers. All medical personnel are considered non-combatants under international humanitarian law because of their humanitarian duties, and they may not be attacked and not be taken as prisoners of war; hospitals and other medical facilities and transports identified as such, whether they are military or civilian, may not be attacked either. The red cross, the red crescent and the red crystal are the protective signs recognised under international humanitarian law, and are used by military medical personnel and facilities for this purpose. Attacking military medical personnel, patients in their care, or medical facilities or transports legitimately marked as such is a war crime. Likewise, misusing these protective signs to mask military operations is the war crime of perfidy. Military medical personnel may be armed, usually with service pistols, for the purpose of self defense or the defense of patients.
Historical significance
The significance of military medicine for combat strength goes far beyond the treatment of battlefield injuries; in every major war fought until the late 19th century, disease claimed more soldiers' lives than did enemy action. During the American Civil War (1861–65), for example, about twice as many soldiers died of disease as were killed or mortally wounded in combat. The Franco-Prussian War (1870–71) is considered to have been the first conflict in which deaths from combat injury exceeded deaths from disease, at least in the German coalition army, which lost 3.47% of its average headcount to combat and only 1.82% to disease. In New World countries, such as Australia, New Zealand, the United States and Canada, military physicians and surgeons contributed significantly to the development of civilian health care.
Improvements in military medicine have increased survival rates in successive wars, due to advances in medical evacuation, battlefield medicine and trauma care. Similar improvements have been seen in trauma practices during the Iraq War. Some military trauma care practices are disseminated by citizen soldiers who return to civilian practice. One such practice is transferring major trauma patients to an operating theater as soon as possible to stop internal bleeding, which increases the survival rate. Within the United States, the survival rate for gunshot wounds has increased, leading to apparent declines in the gun death rate in states that have stable rates of gunshot hospitalizations.
Military medicine by country
North America
Canada
Royal Canadian Medical Service
Royal Canadian Dental Corps
Canadian Forces Health Services Group
Surgeon General (Canada)
National Defence Medical Centre
United States
Assistant Secretary of Defense for Health Affairs
Military Health System
Military Medicine, academic journal
TRICARE
United States Unified Medical Command
Uniformed Services University of the Health Sciences
Medical Education and Training Campus
Henry M. Jackson Foundation for the Advancement of Military Medicine
Defense Health Agency
National Center for Medical Intelligence
Health Professions Scholarship Program
Joint Task Force National Capital Region/Medical
Alexander T. Augusta Military Medical Center
Association of Military Surgeons of the United States
Tactical Combat Casualty Care
Armed Forces Institute of Pathology
Armed Forces Radiobiology Research Institute
Defense Health Program Budget Activity Group
Department of Defense Medical Examination Review Board
National Museum of Health and Medicine
Medicine in the American Civil War
National Museum of Civil War Medicine
U.S. Army
Surgeon General of the U.S. Army
Army Medical Department
Battalion Aid Station
Borden Institute
Combat Support Hospital
Fort Detrick
Fort Sam Houston
Forward Surgical Teams
United States Army Medical Corps
United States Army Nurse Corps
United States Army Veterinary Corps
Mobile Army Surgical Hospital
Portable Surgical Hospital
68W, the "combat medic"
Combat Medical Badge
Expert Field Medical Badge
Textbook of Military Medicine published by the U.S. Army
United States Army Medical Department Center and School
United States Army Medical Department Museum
U.S. Army Dental Command
U.S. Army Medical Command
United States Army Medical Research and Development Command
United States Army Medical Research Institute of Infectious Diseases
United States Army Medical Command, Vietnam
United States Army Medical Command, Europe
Walter Reed Army Medical Center
Walter Reed Army Institute of Research
U.S. Army Public Health Center
Army Medical School
United States Army Health Services Command
Army Medical Museum and Library
Army Medical Department regimental coat of arms
Combat lifesaver course
U.S. Navy
Surgeon General of the U.S. Navy
Bureau of Medicine and Surgery
United States Navy Health Care
U.S. Navy Medical Corps
U.S. Navy Dental Corps
U.S. Navy Nurse Corps
U.S. Navy Medical Service Corps
U.S. Navy Hospital Corpsman
United States Naval Hospital (disambiguation)
Special amphibious reconnaissance corpsman
Battalion Aid Station
Naval Hospital Corps School
Naval Medical Center San Diego
Naval Medical Center Portsmouth
National Naval Medical Center (Walter Reed National Military Medical Center)
Naval Hospital Yokosuka Japan
Naval Hospital Guam
Naval Health Clinic New England
Naval Health Clinic Cherry Point
Naval Medical Research Command
Naval Health Research Center
Naval Medical Forces Atlantic
Naval Medical Research Unit South
Naval Medical Research Unit Dayton
Naval Submarine Medical Research Laboratory
Charleston Naval Hospital Historic District
Old Naval Observatory
Hospital ship
USNS Mercy
USNS Comfort
Sick bay
Loblolly boy
Diving medicine
United States Navy staff corps
U.S. Air Force
Surgeon General of the U.S. Air Force
U.S. Air Force Medical Service (including Dental Corps, Medical Corps, Nursing Corps, and other corps)
United States Air Force Nurse Corps
United States Air Force Pararescue
United States Air Force School of Aerospace Medicine
Museum of Aerospace Medicine
Aeromedical evacuation
Critical Care Air Transport Team
Expeditionary Medical Support System
Aviation medicine
Europe
France
French Defence Health Service
École du service de santé des armées
Belgium
Belgian Medical Component
Germany
Bundeswehr Joint Medical Service
Bundeswehr Medical Academy
Luftwaffe Institute of Aviation Medicine
Naval Medical Institute
Generaloberstabsarzt
Generalstabsarzt
Generalarzt
Oberstarzt
Oberfeldarzt
Oberstabsarzt
Stabsarzt
Oberarzt (military)
Assistenzarzt (military)
Italy
Corpo sanitario dell'Esercito Italiano
Corpo sanitario militare marittimo
Corpo sanitario aeronautico
Servizio sanitario dell'Arma dei carabinieri
Russia
Main Military Medical Directorate
Kirov Military Medical Academy (founded in 1798)
Kuybyshev Military Medical Academy
Military Medical Business, academic journal
Russian Museum of Military Medicine
Serbia
Military Medical Academy
Sweden
Surgeon-General of the Swedish Armed Forces
Medical Corps of the Swedish Armed Forces
Swedish Armed Forces Centre for Defence Medicine
Surgeon-in-Chief of the Swedish Army
Surgeon-in-Chief of the Swedish Navy
Surgeon-in-Chief of the Swedish Air Force
Swedish Army Medical Corps
Swedish Naval Medical Officers' Corps
Swedish Armed Forces Diving and Naval Medicine Centre
Swedish Army Veterinary Corps
United Kingdom
Royal Navy Medical Service
Royal Naval Hospital
Queen Alexandra's Royal Naval Nursing Service
Medical Assistant (Royal Navy)
Institute of Naval Medicine
Naval surgeon
Surgeon's mate
Loblolly boy
Journal of the Royal Naval Medical Service
List of hospitals and hospital ships of the Royal Navy
Army Medical Services
Royal Army Medical Corps
Royal Army Dental Corps
Royal Army Veterinary Corps
Queen Alexandra's Royal Army Nursing Corps
Combat Medical Technician
Medical Support Officer
Regimental Aid Post
Territorial Force Nursing Service
Royal Army Medical College
Museum of Military Medicine
RAF Medical Services
Princess Mary's Royal Air Force Nursing Service
RAF Centre of Aviation Medicine
RAF Institute of Aviation Medicine
Surgeon-General (United Kingdom)
Defence Medical Services
Defence Medical Academy
Ministry of Defence Hospital Units
Defence CBRN Centre
Asia
India
Director General Armed Forces Medical Services (India)
Army Medical Corps (India)
Armed Forces Medical College
Command Hospital
Army Hospital Research and Referral
Military Hospitals
Israel
Logistics, Medical, and the Centers Directorate
Medical Corps (Israel)
Unit 669
Sri Lanka
Sri Lanka Army Medical Corps
Thailand
Phramongkutklao College of Medicine
Vietnam
Vietnam Military Medical University (Học Viện Quân Y) in Hanoi
Other regions
Australia
Joint Health Command (Australia)
Australian Army Medical Women's Service
Australian Army Medical Units, World War I
Australian Army Nursing Service
Royal Australian Army Medical Corps
Royal Australian Army Nursing Corps
Royal Australian Army Dental Corps
Australian Army Veterinary Corps
Australian Army Psychology Corps
Royal Australian Navy School of Underwater Medicine
RAAF Institute of Aviation Medicine
List of Australian hospital ships
South Africa
South African Medical Service
South African Military Health Service
International
International Committee of Military Medicine
Committee of Chiefs of Military Medical Services in NATO (COMEDS)
See also
Battlefield medicine
Casualty evacuation (CASEVAC)
Combat medic
Combat stress reaction
Disaster medicine
Field hospital
Flight nurse
Flight medic
Flight surgeon
Equipment of a combat medic
History of military nutrition in the United States
List of drugs used by militaries
Medical corps
Medical evacuation (MEDEVAC)
Medical Service Corps
Medical logistics
Military ambulance
Military medical ethics
Military hospital
Military nurse
Military psychiatrist
Military psychiatry
Military psychology
Triage
Stretcher bearer
References
Further reading
Bowlby, Sir Anthony and Colonel Cuthbert Wallace. "The Development of British Surgery at the Front." The British Medical Journal 1 (1917): 705–721.
Churchill, Edward D. "Healing by First Intention and with Suppuration: Studies in the History of Wound Healing." Journal of the History of Medicine and Allied Sciences 19 (1964): 193–214.
Churchill, Edward D. "The Surgical Management of the Wounded at the Time of the Fall of Rome." Annals of Surgery 120 (1944): 268–283.
Cowdrey, Albert E. Fighting for Life: American Military Medicine in World War II (1994), scholarly history, 400 pp
Cowdrey, Albert E. United States Army in the Korean War: The Medics War (1987), full-scale scholarly official history; online free
Fauntleroy, A.M. "The Surgical Lessons of the European War." Annals of Surgery 64 (1916): 136–150.
Fazal, Tanisha M. (2024). Military Medicine and the Hidden Costs of War. Oxford University Press.
Grissinger, Jay W. "The Development of Military Medicine." Bulletin of the New York Academy of Medicine 3 (1927): 301–356. online
Harrison, Mark. Medicine and victory: British military medicine in the Second World War (Oxford UP, 2004).
Whayne, Col. Tom F. and Colonel Joseph H. McNinch. "Fifty Years of Medical Progress: Medicine as a Social Instrument: Military Medicine." The New England Journal of Medicine 244 (1951): 591–601.
Wintermute, Bobby A. Public health and the US military: a history of the Army Medical Department, 1818–1917 (2010).
Primary sources
Kendrick, Douglas B. Memoirs of a Twentieth-Century Army Surgeon (Sunflower University Press, 1992), U.S. Army.
External links
U.S. military medicine
Military Medicine related links from USAF Air University
Association of Military Surgeons of the United States (AMSUS)
Military Medicine, the International Journal of AMSUS
Patriot Medicine, a vertical network for the military medical ecosystem
Military Medicine Through Time. Life and Death in the War Zone | NOVA | PBS
The Borden Institute Homepage
Military Medicine Documents at USU Archive
U.S. Army Preventive Medicine news archive from the 7th Infantry Division (Light) Panorama weekly newspaper 1988.
Virtual Naval Hospital – a digital library of military medicine and humanitarian medicine
http://www.ipernity.com/doc/57114/5652001/in/keyword/487917/self (military medical exams)
Australian military medicine
Australian Military Medicine Association
International Magazine for Military Medicine
MCIF (Medical Corps International Forum), International Magazine for Military Medicine
NATO Centre of Excellence for Military Medicine
Organophosphate poisoning
Organophosphate poisoning is poisoning due to organophosphates (OPs). Organophosphates are used as insecticides, medications, and nerve agents. Symptoms include increased saliva and tear production, diarrhea, vomiting, small pupils, sweating, muscle tremors, and confusion. While onset of symptoms is often within minutes to hours, some symptoms can take weeks to appear. Symptoms can last for days to weeks.
Organophosphate poisoning occurs most commonly as a suicide attempt in farming areas of the developing world and less commonly by accident. Exposure can be from drinking, breathing in the vapors, or skin exposure. The underlying mechanism involves the inhibition of acetylcholinesterase (AChE), leading to the buildup of acetylcholine (ACh) in the body. Diagnosis is typically based on the symptoms and can be confirmed by measuring butyrylcholinesterase activity in the blood. Carbamate poisoning can present similarly.
Prevention efforts include banning very toxic types of organophosphates. Among those who work with pesticides the use of protective clothing and showering before going home is also useful. In those who have organophosphate poisoning the primary treatments are atropine, oximes such as pralidoxime, and diazepam. General measures such as oxygen and intravenous fluids are also recommended. Attempts to decontaminate the stomach, with activated charcoal or other means, have not been shown to be useful. While there is a theoretical risk of health care workers taking care of a poisoned person becoming poisoned themselves, the degree of risk appears to be very small.
OPs are one of the most common causes of poisoning worldwide. There are nearly 3 million poisonings per year resulting in two hundred thousand deaths. Around 15% of people who are poisoned die as a result. Organophosphate poisoning has been reported at least since 1962.
Signs and symptoms
The symptoms of organophosphate poisoning include muscle weakness, fatigue, muscle cramps, fasciculation, and paralysis. Other symptoms include hypertension and hypoglycemia.
Overstimulation of nicotinic acetylcholine receptors in the central nervous system, due to accumulation of ACh, results in anxiety, headache, convulsions, ataxia, depression of respiration and circulation, tremor, general weakness, and potentially coma. When there is expression of muscarinic overstimulation due to excess acetylcholine at muscarinic acetylcholine receptors symptoms of visual disturbances, tightness in chest, wheezing due to bronchoconstriction, increased bronchial secretions, increased salivation, lacrimation, sweating, peristalsis, and urination can occur.
The effects of organophosphate poisoning on muscarinic receptors are recalled using the mnemonic SLUDGEM (salivation, lacrimation, urination, defecation, gastrointestinal motility, emesis, miosis). An additional mnemonic is MUDDLES: miosis, urination, diarrhea, diaphoresis, lacrimation, excitation, and salivation. These mnemonics do not take into account the critical CNS and nicotinic effects of organophosphates.
The onset and severity of symptoms, whether acute or chronic, depends upon the specific chemical, the route of exposure (skin, lungs, or GI tract), the dose, and the individual's ability to degrade the compound, which is influenced by the level of the PON1 enzyme.
Reproductive effects
Certain reproductive effects on fertility, growth, and development in males and females have been linked specifically to OP pesticide exposure. Most of the research on reproductive effects has been conducted on farmers working with pesticides and insecticides in rural areas. In males exposed to OP pesticides, poor semen and sperm quality have been seen, including reduced seminal volume and percentage motility, as well as a decrease in sperm count per ejaculate. In females, menstrual cycle disturbances, longer pregnancies, spontaneous abortions, stillbirths, and some developmental effects in offspring have been linked to OP pesticide exposure. Prenatal exposure has been linked to impaired fetal growth and development. The effects of OP exposure on infants and children are still being researched, and no conclusive finding has yet been reached.
Evidence of OP exposure in pregnant mothers is linked to several health effects in the fetus, including delayed mental development, pervasive developmental disorder (PDD), and morphological abnormalities of the cerebral surface.
Neurotoxic effects
Neurotoxic effects have also been linked to poisoning with OP pesticides causing four neurotoxic effects in humans: cholinergic syndrome, intermediate syndrome, organophosphate-induced delayed polyneuropathy (OPIDP), and chronic organophosphate-induced neuropsychiatric disorder (COPIND). These syndromes result after acute and chronic exposure to OP pesticides.
Cholinergic syndrome occurs in acute poisonings with OP pesticides and is directly related to levels of AChE activity. Symptoms include miosis, sweating, lacrimation, gastrointestinal symptoms, respiratory difficulties, shortness of breath, slowed heart rate, cyanosis, vomiting, diarrhea and trouble sleeping, as well as other symptoms. Central nervous system effects can also be seen, culminating in seizures, convulsions, coma, and respiratory failure. If the person survives the first day of poisoning, personality changes can occur, in addition to aggressive behavior, psychotic episodes, memory and attention disturbances, and other delayed effects. When death occurs, it is most commonly due to respiratory failure caused by paralysis of the respiratory muscles and depression of the central nervous system, which is responsible for respiration. For people affected by cholinergic syndrome, atropine sulfate combined with an oxime is used to combat the effects of the acute OP poisoning. Diazepam is sometimes also administered if convulsions or muscle fasciculations begin.
The intermediate syndrome (IMS) appears in the interval between the end of the cholinergic crisis and the onset of OPIDP. Symptoms associated with IMS manifest between 24 and 96 hours after exposure. The exact etiology, incidence, and risk factors associated with IMS are not well understood, but IMS is recognized as a disorder of neuromuscular junctions. IMS occurs when a person has a prolonged and severe inhibition of AChE. It has been linked to specific OP pesticides such as parathion, methylparathion, and dichlorvos. Patients generally present with increasing weakness in the facial, neck flexor, and respiratory muscles.
OPIDP occurs in a small percentage of cases, roughly two weeks after exposure, and is marked by temporary paralysis. This loss of function and ataxia of peripheral nerves and spinal cord is the phenomenon of OPIDP. Once the symptoms begin with shooting pains in both legs, they continue to worsen for 3–6 months. In the most severe cases quadriplegia has been observed. Treatment only affects sensory nerves, not motor neurons, which may permanently lose function. The aging and phosphorylation of more than 70% of functional NTE in peripheral nerves is one of the processes involved in OPIDP. Standard treatments for OP poisoning are ineffective for OPIDP.
COPIND occurs without cholinergic symptoms and is independent of AChE inhibition. COPIND appears with a delay and is long lasting. Symptoms associated with COPIND include cognitive deficit, mood changes, autonomic dysfunction, peripheral neuropathy, and extrapyramidal symptoms. The underlying mechanisms of COPIND have not been determined, but it is hypothesized that withdrawal of OP pesticides after chronic exposure or acute exposure could be a factor.
Pregnancy
Exposure to OP pesticides during gestation and the early postnatal period has been linked to neurodevelopmental effects in animals, specifically rats. Animals exposed in utero to chlorpyrifos exhibited decreased balance, poorer cliff avoidance, decreased locomotion, delays in maze performance, and increased gait abnormalities. Early gestation is believed to be a critical time period for the neurodevelopmental effects of pesticides. OPs affect the cholinergic system of fetuses, so exposure to chlorpyrifos during critical periods of brain development could potentially cause cellular, synaptic, and neurobehavioral abnormalities in animals. In rats exposed to methylparathion, studies found reduced AChE activity in all brain regions and subtle alterations in behaviors such as locomotor activity and impaired cage emergence. Organophosphates as a whole have been linked to decreases in the length of limbs, head circumference, and slower rates of postnatal weight gain in mice.
Cancer
The International Agency for Research on Cancer (IARC) found that organophosphate exposure may increase cancer risk. Tetrachlorvinphos and parathion were classified as "possibly carcinogenic", while malathion and diazinon were classified as "probably carcinogenic".
Cause
OP pesticide exposure occurs through inhalation, ingestion and dermal contact. Because OP pesticides degrade quickly when exposed to air and light, they have been considered relatively safe to consumers. However, OP residues may linger on fruits and vegetables. Certain OP pesticides have been banned for use on some crops; for example, methyl parathion is banned from use on some crops and permitted on others. Poisoning can also occur through the deliberate use of nerve agents such as sarin and tabun.
Examples
Insecticides including malathion, parathion, diazinon, fenthion, dichlorvos, chlorpyrifos, ethion, trichlorfon
Nerve agents including soman, sarin, tabun, VX
Herbicides including tribufos (DEF) and merphos, as well as tricresyl phosphate–containing industrial chemicals.
The U.S. Environmental Protection Agency maintains an extensive list of commercially sold organophosphate products for reference by anyone concerned about possible exposure.
Exposure to any of the above-listed organophosphates may occur through inhalation, skin absorption, and ingestion, most commonly of food that has been treated with an OP herbicide or insecticide. Exposure to these chemicals can occur at public buildings, schools, residential areas, and in agricultural areas. Chlorpyrifos and Malathion have been linked to reproductive effects, neurotoxicity, kidney/liver damage, and birth defects. Dichlorvos has also been linked to reproductive effects, neurotoxicity, and kidney/liver damage. It is also recognized to be a possible carcinogen.
Pathophysiology
The health effects associated with organophosphate poisoning are a result of excess acetylcholine (ACh) present at different nerve synapses and neuromuscular junctions across the body. Specifically, acetylcholinesterase (AChE), the enzyme that normally and constantly breaks down acetylcholine, is inhibited by the organophosphate substance. ACh accumulates in the parasympathetic nervous system, the central nervous system, and in nicotinic neuromuscular junctions.
Organophosphate inhibition of AChE may be reversible or irreversible, depending on whether covalent bonding (also called "aging" in this context) occurs. Chemically, organophosphates cause poisoning by phosphorylating the serine hydroxyl residue on AChE, which inactivates AChE. This causes disturbances across the cholinergic synapses and can only be reactivated very slowly, if at all. Paraoxonase 1 (PON1) is a key enzyme involved in organophosphate toxicity and has been found to be critical in determining an organism's sensitivity to organophosphate exposure.
Sensitivity
PON1 can inactivate some OPs through hydrolysis. PON1 hydrolyzes the active metabolites in several OP insecticides such as chlorpyrifos oxon and diazoxon, as well as nerve agents such as soman, sarin, and VX. PON1 hydrolyzes the metabolites, not the parent compounds of the insecticides. PON1 gene polymorphism can lead to a variety of different enzyme levels and catalytic efficiencies of this esterase, which in turn suggests that different individuals may be more or less susceptible to the toxic effect of OP exposure. Higher levels of PON1 plasma hydrolytic activity provide a greater degree of protection against OP pesticides. Rats injected with purified PON1 from rabbit serum were more resistant to acute cholinergic activity than the control rats. PON1 knockouts in mice are found to be more sensitive to the toxicity of pesticides, like chlorpyrifos. Animal experiments indicate that while PON1 plays a significant role in regulating the toxicity of OPs, the degree of protection it gives depends on the compound (i.e., chlorpyrifos oxon or diazoxon). The catalytic efficiency with which PON1 can degrade toxic OPs determines the degree of protection that PON1 can provide for the organism: the higher the concentration of PON1, the better the protection provided. PON1 activity is much lower in neonates, so neonates are more sensitive to OP exposure. In 2006, up to a 13-fold variation was seen in PON1 levels in adults, and related findings showed that biological sensitivity to diazoxon showed similar or even greater variance.
Diagnosis
A number of measurements exist to assess exposure and early biological effects of organophosphate poisoning. Measurements of OP metabolites in both blood and urine can be used to determine whether a person has been exposed to organophosphates. In the blood, the relevant measurements are the activities of cholinesterases: butyrylcholinesterase (BuChE) activity in plasma, neuropathy target esterase (NTE) activity in lymphocytes, and acetylcholinesterase (AChE) activity in red blood cells. Because both AChE and BuChE are the main targets of organophosphates, their measurement is widely used as an indication of exposure to an OP. The main restriction on this type of diagnosis is that, depending on the OP, the degree to which either AChE or BuChE is inhibited differs; therefore, measurements of metabolites in blood and urine do not specify which OP agent is responsible for the poisoning. However, for fast initial screening, determining AChE and BuChE activity in the blood is the most widely used procedure for confirming a diagnosis of OP poisoning. The most widely used portable testing device is the Test-mate ChE field test, which can be used to determine red blood cell (RBC) AChE and plasma (pseudo)cholinesterase (PChE) activity in the blood in about four minutes. This test has been shown to be just as effective as a regular laboratory test, and because of this, the portable ChE field test is frequently used by people who work with pesticides on a daily basis.
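Because AChE and BuChE results are usually interpreted against an individual's own pre-exposure baseline, a convenient way to express them is as percent depression of activity. The sketch below shows only that calculation; the example numbers and units are illustrative assumptions, and interpretation thresholds differ between laboratories and assays.

```python
# Minimal sketch: expressing a measured cholinesterase activity as percent depression
# from an individual's pre-exposure baseline. Numbers and units are illustrative only;
# interpretation thresholds vary between laboratories and assays.
def percent_depression(baseline_activity: float, measured_activity: float) -> float:
    """Percentage by which enzyme activity has fallen relative to the baseline value."""
    if baseline_activity <= 0:
        raise ValueError("baseline activity must be positive")
    return 100.0 * (baseline_activity - measured_activity) / baseline_activity

# Example: a worker's RBC AChE falls from a hypothetical baseline of 30 U/g Hb to 18 U/g Hb.
print(f"{percent_depression(30.0, 18.0):.0f}% depression")  # prints "40% depression"
```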
Treatment
Current antidotes for OP poisoning consist of a pretreatment with carbamates to protect AChE from inhibition by OP compounds and post-exposure treatment with anti-cholinergic drugs and oximes. Anti-cholinergic drugs work to counteract the effects of excess acetylcholine, while oximes reactivate AChE. Atropine can be used as an antidote in conjunction with pralidoxime or other pyridinium oximes (such as trimedoxime or obidoxime), though the use of "-oximes" has been found to be of no benefit, or to be possibly harmful, in at least two meta-analyses. Atropine is a muscarinic antagonist, and thus blocks the action of acetylcholine peripherally. These antidotes are effective at preventing lethality from OP poisoning, but current treatments lack the ability to prevent post-exposure incapacitation, performance deficits, or permanent brain damage. While the efficacy of atropine has been well established, clinical experience with pralidoxime has led to widespread doubt about its efficacy in the treatment of OP poisoning.
Enzyme bioscavengers are being developed as a pretreatment to sequester highly toxic OPs before they can reach their physiological targets and prevent the toxic effects from occurring. Significant advances with cholinesterases (ChEs), specifically human serum BChE (HuBChE), have been made. HuBChE can offer a broad range of protection for nerve agents including soman, sarin, tabun, and VX. HuBChE also possesses a very long retention time in the human circulatory system, and because it is from a human source it will not produce any antagonistic immunological responses. HuBChE is currently being assessed for inclusion into the protective regimen against OP nerve agent poisoning. Currently there is potential for PON1 to be used to treat sarin exposure, but recombinant PON1 variants would need to first be generated to increase its catalytic efficiency.
Another potential treatment being researched is the class III anti-arrhythmic agents. Hyperkalemia of the tissue is one of the symptoms associated with OP poisoning. While the cellular processes leading to cardiac toxicity are not well understood, the potassium current channels are believed to be involved. Class III anti-arrhythmic agents block the potassium membrane currents in cardiac cells, which makes them a candidate for becoming a therapeutic for OP poisoning.
There is insufficient evidence to support using plasma alkalinisation to treat a person with organophosphate poisoning.
Epidemiology
Organophosphate pesticides are one of the top causes of poisoning worldwide, with an annual incidence of poisonings among agricultural workers varying from 3-10% per country.
History
Ginger Jake
A striking example of OPIDN occurred during the 1930s Prohibition Era when thousands of men in the American South and Midwest developed arm and leg weakness and pain after drinking a "medicinal" alcohol substitute. The drink, called "Ginger Jake", contained an adulterated Jamaican ginger extract containing tri-ortho-cresyl phosphate (TOCP) which resulted in partially reversible neurological damage. The damage resulted in the limping "Jake Leg" or "Jake Walk" which were terms frequently used in the blues music of the period. Europe and Morocco both experienced outbreaks of TOCP poisoning from contaminated abortifacients and cooking oil, respectively.
Gulf War syndrome
Research has linked the neurological abnormalities found in Persian Gulf War veterans who have chronic multisymptom illnesses to exposure to wartime combinations of organophosphate chemical nerve agents. Before, it was believed that veterans had a psychologically based disorder or depression, most likely post-traumatic stress disorder (PTSD). Many veterans were given pyridostigmine bromide (PB) pills to protect against nerve gas agents such as sarin and soman. During the war veterans were exposed to combinations of organophosphate pesticides and nerve agents, which produced symptoms associated with chronic organophosphate-induced delayed polyneuropathy (OPIDP) syndrome. Similar symptoms found in the veterans were the same symptoms reported for individuals in occupational settings who were acutely poisoned by organophosphates, such as chlorpyrifos. Studies found veterans experienced deficits in intellectual and academic abilities, simple motor skills, memory impairment, and impaired emotional function. These symptoms indicate brain damage, not a psychologically based disorder.
Society and culture
United States
Under a 1988 amendment to the Federal Insecticide, Fungicide and Rodenticide Act (FIFRA), the U.S. Environmental Protection Agency (EPA) regulates organophosphate pesticides. Its focus was initially on registering pesticides for use on food crops. No pesticide can be sold in the United States before the EPA has reviewed the manufacturer's application for registration and determined that the use of the product will not present an unreasonable risk to the public or the environment. In 1996, with the passage of the Food Quality Protection Act, Congress required the EPA to reassess all existing pesticide tolerances with specific consideration for children. This resulted in a 10-year review process of the health and environmental effects of all pesticides, beginning with the organophosphates. As part of that process, in 1999 the EPA announced a ban on the use of the organophosphate pesticide methyl parathion and significant restrictions on the use of another OP, azinphos-methyl, in what they called "kid's food". The review process was concluded in 2006 and eliminated or modified thousands of other uses of pesticides. Other legislative action has been taken to protect children from the risks of organophosphates.
Many non-governmental and research groups, as well as the EPA's Office of Inspector General, have expressed concerns that the review did not take into account possible neurotoxic effects on developing fetuses and children, an area of developing research. A group of leading EPA scientists sent a letter to the chief administrator, Stephen Johnson, decrying the lack of developmental neurotoxicity data in the review process. New studies have shown toxicity to developing organisms during certain "critical periods" at doses much lower than those previously suspected to cause harm.
Even the restrictions which did successfully pass have been controversial. For example, in 1999 the EPA restricted the use of chlorpyrifos in households (under the commercial name Dursban). However, the EPA did not limit its use in agriculture, and chlorpyrifos remains one of the most widely used pesticides. This may change: on February 8, 2013, the EPA requested comment on a preliminary evaluation of the potential risks to children and other bystanders from volatilization of chlorpyrifos from treated crops.
Vulnerable groups
Some populations are more vulnerable to pesticide poisoning. In the United States, farmworkers can be exposed via direct spray, drift, spills, direct contact with treated crops or soil, or defective or missing protective equipment. Migrant workers may be at an especially high risk of chronic exposure as over the course of a growing season, they may work at multiple farms, thus increasing their exposure to pesticides. Farmworkers in more permanent positions may receive more safety training and/or more "consistent reinforcement of safety behaviors than seasonal farmworkers or day laborers." For migrant farmworkers, language barriers and/or education level could be a barrier to understanding posted warning signs, labels and safety warnings located on the pesticides, or understanding any safety training that is provided.
Other factors that may lead to greater exposure for the migrant farmworker population include: limited or no access to safety equipment, little to no control over pesticide use, cultural factors, and fear of job loss if they report potential hazards. Studies have also shown that there are some key beliefs by farmworkers that may exacerbate pesticide exposure, including the belief that "pesticides must be felt, seen, tasted, or smelled to be present; the skin blocks absorption and body openings facilitate it; exposure occurs only when a pesticide is wet;…and acute, not low-level chronic exposure is the primary danger."
This, coupled with the difficulty or uncertainty of recognizing and/or diagnosing chronic pesticide poisoning by the medical community, makes it difficult for exposed workers to receive an effective remedy. Migrant workers may also be hesitant to seek out medical care due to lack of health insurance, language barriers, immigration status, cost, cultural factors, lack of transportation, fear of job loss, and lack of awareness of workers' compensation benefits.
Sergei and Yulia Skripal
In March 2018, Sergei Skripal and his daughter were poisoned in Salisbury, England, with an organophosphate poison known as a Novichok agent. Both fell unconscious while sitting on a park bench. A first responder to the scene also became contaminated and had symptoms of organophosphate poisoning. All three survived after hospital treatment. Despite continually denying responsibility for the attack, Russia is suspected to be behind the poisonings.
Alexei Navalny
On 20 August 2020, Russian politician Alexei Navalny developed life-threatening acute poisoning symptoms on a flight. He was later transferred to Berlin, where poisoning by a cholinesterase inhibitor was diagnosed and confirmed by multiple tests in independent laboratories.
References
Adipose tissue
Adipose tissue (also known as body fat or simply fat) is a loose connective tissue composed mostly of adipocytes. It also contains the stromal vascular fraction (SVF) of cells including preadipocytes, fibroblasts, vascular endothelial cells and a variety of immune cells such as adipose tissue macrophages. Its main role is to store energy in the form of lipids, although it also cushions and insulates the body.
Previously treated as being hormonally inert, in recent years adipose tissue has been recognized as a major endocrine organ, as it produces hormones such as leptin, estrogen, resistin, and cytokines (especially TNFα). In obesity, adipose tissue is implicated in the chronic release of pro-inflammatory markers known as adipokines, which are responsible for the development of metabolic syndrome, a constellation of diseases including type 2 diabetes, cardiovascular disease and atherosclerosis.
Adipose tissue is derived from preadipocytes and its formation appears to be controlled in part by the adipose gene. The two types of adipose tissue are white adipose tissue (WAT), which stores energy, and brown adipose tissue (BAT), which generates body heat. Adipose tissue, more specifically brown adipose tissue, was first identified by the Swiss naturalist Conrad Gessner in 1551.
Anatomical features
In humans, adipose tissue is located: beneath the skin (subcutaneous fat), around internal organs (visceral fat), in bone marrow (yellow bone marrow), intermuscular (muscular system), and in the breast (breast tissue). Adipose tissue is found in specific locations, which are referred to as adipose depots. Apart from adipocytes, which comprise the highest percentage of cells within adipose tissue, other cell types are present, collectively termed stromal vascular fraction (SVF) of cells. SVF includes preadipocytes, fibroblasts, adipose tissue macrophages, and endothelial cells.
Adipose tissue contains many small blood vessels. In the integumentary system, which includes the skin, it accumulates in the deepest level, the subcutaneous layer, providing insulation from heat and cold. Around organs, it provides protective padding. However, its main function is to be a reserve of lipids, which can be oxidised to meet the energy needs of the body and to protect it from excess glucose by storing triglycerides produced by the liver from sugars, although some evidence suggests that most lipid synthesis from carbohydrates occurs in the adipose tissue itself. Adipose depots in different parts of the body have different biochemical profiles. Under normal conditions, it provides feedback for hunger and diet to the brain.
Mice
Mice have eight major adipose depots, four of which are within the abdominal cavity. The paired gonadal depots are attached to the uterus and ovaries in females and the epididymis and testes in males; the paired retroperitoneal depots are found along the dorsal wall of the abdomen, surrounding the kidney, and, when massive, extend into the pelvis. The mesenteric depot forms a glue-like web that supports the intestines, while the omental depot (which originates near the stomach and spleen) extends, when massive, into the ventral abdomen. Both the mesenteric and omental depots incorporate much lymphoid tissue, as lymph nodes and milky spots, respectively.
The two superficial depots are the paired inguinal depots, which are found anterior to the upper segment of the hind limbs (underneath the skin) and the subscapular depots, paired medial mixtures of brown adipose tissue adjacent to regions of white adipose tissue, which are found under the skin between the dorsal crests of the scapulae. The layer of brown adipose tissue in this depot is often covered by a "frosting" of white adipose tissue; sometimes these two types of fat (brown and white) are hard to distinguish. The inguinal depots enclose the inguinal group of lymph nodes. Minor depots include the pericardial, which surrounds the heart, and the paired popliteal depots, between the major muscles behind the knees, each containing one large lymph node. Of all the depots in the mouse, the gonadal depots are the largest and the most easily dissected, comprising about 30% of dissectible fat.
Obesity
In an obese person, excess adipose tissue hanging downward from the abdomen is referred to as a panniculus. A panniculus complicates surgery of the morbidly obese individual. It may remain as a literal "apron of skin" if a severely obese person loses large amounts of fat (a common result of gastric bypass surgery). Obesity is treated through exercise, diet, and behavioral therapy. Reconstructive surgery is one aspect of treatment.
Visceral fat
Visceral fat or abdominal fat (also known as organ fat or intra-abdominal fat) is located inside the abdominal cavity, packed between the organs (stomach, liver, intestines, kidneys, etc.). Visceral fat is different from subcutaneous fat underneath the skin, and intramuscular fat interspersed in skeletal muscles. Fat in the lower body, as in thighs and buttocks, is subcutaneous and is not consistently spaced tissue, whereas fat in the abdomen is mostly visceral and semi-fluid. Visceral fat is composed of several adipose depots, including mesenteric, epididymal white adipose tissue (EWAT), and perirenal depots. Visceral fat is often expressed in terms of its area in cm2 (VFA, visceral fat area).
An excess of visceral fat is known as abdominal obesity, or "belly fat", in which the abdomen protrudes excessively. New developments such as the Body Volume Index (BVI) are specifically designed to measure abdominal volume and abdominal fat. Excess visceral fat is also linked to type 2 diabetes, insulin resistance, inflammatory diseases, and other obesity-related diseases. Likewise, the accumulation of neck fat (or cervical adipose tissue) has been shown to be associated with mortality. Several studies have suggested that visceral fat can be predicted from simple anthropometric measures, and predicts mortality more accurately than body mass index or waist circumference.
Men are more likely to have fat stored in the abdomen due to sex hormone differences. Estrogen (female sex hormone) causes fat to be stored in the buttocks, thighs, and hips in women. When women reach menopause and the estrogen produced by the ovaries declines, fat migrates from the buttocks, hips and thighs to the waist; later fat is stored in the abdomen.
Visceral fat can be caused by excess cortisol levels. At least 10 MET-hours per week of aerobic exercise leads to visceral fat reduction in those without metabolic-related disorders. Resistance training and caloric restriction also reduce visceral fat, although their effect may not be cumulative. Both exercise and hypocaloric diet cause loss of visceral fat, but exercise has a larger effect on visceral fat versus total fat. High-intensity exercise is one way to effectively reduce total abdominal fat. An energy-restricted diet combined with exercise will reduce total body fat and the ratio of visceral adipose tissue to subcutaneous adipose tissue, suggesting a preferential mobilization for visceral fat over subcutaneous fat.
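For readers unfamiliar with the unit, MET-hours are simply the MET intensity of an activity multiplied by the hours spent on it, summed over the week. The sketch below totals a hypothetical weekly routine; the MET values and durations are illustrative approximations, not figures taken from the studies cited above.

```python
# Minimal sketch: weekly MET-hours = sum over activities of (MET value * hours performed).
# MET values and durations below are rough, illustrative figures.
activities = [
    ("brisk walking", 3.5, 2.0),       # (name, approximate METs, hours per week)
    ("cycling, moderate", 6.0, 0.75),
]

met_hours = sum(mets * hours for _name, mets, hours in activities)
print(f"{met_hours:.1f} MET-hours/week")  # 3.5*2.0 + 6.0*0.75 = 11.5, above the ~10 MET-hour figure cited above
```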
Epicardial fat
Epicardial adipose tissue (EAT) is a particular form of visceral fat deposited around the heart and found to be a metabolically active organ that generates various bioactive molecules, which might significantly affect cardiac function. Marked component differences have been observed in comparing EAT with subcutaneous fat, suggesting a location-specific impact of stored fatty acids on adipocyte function and metabolism.
Subcutaneous fat
Most of the remaining nonvisceral fat is found just below the skin in a region called the hypodermis. This subcutaneous fat is not related to many of the classic obesity-related pathologies, such as heart disease, cancer, and stroke, and some evidence even suggests it might be protective. The typically female (or gynecoid) pattern of body fat distribution around the hips, thighs, and buttocks is subcutaneous fat, and therefore poses less of a health risk compared to visceral fat.
Like all other fat organs, subcutaneous fat is an active part of the endocrine system, secreting the hormones leptin and resistin.
The relationship between the subcutaneous adipose layer and total body fat in a person is often modelled by using regression equations. The most popular of these equations was formed by Durnin and Womersley, who rigorously tested many types of skinfold and, as a result, created two formulae to calculate the body density of both men and women. These equations present an inverse correlation between skinfolds and body density: as the sum of skinfolds increases, the body density decreases.
Factors such as sex, age, population size or other variables may make the equations invalid and unusable, and Durnin and Womersley's equations remain only estimates of a person's true level of fatness. New formulae are still being created.
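To make the shape of such equations concrete, the sketch below follows the general Durnin and Womersley approach: body density is modelled as a linear function of the logarithm of the skinfold sum and then converted to a body-fat percentage with the Siri equation. The coefficients and the skinfold sum are illustrative placeholders rather than the published age- and sex-specific values.

```python
# Minimal sketch of a Durnin/Womersley-style skinfold regression. The coefficients c and m
# are illustrative placeholders; the published values differ by sex and age group.
import math

def body_density(sum_of_skinfolds_mm: float, c: float = 1.1631, m: float = 0.0632) -> float:
    """Estimate body density (g/cm^3); density falls as the skinfold sum rises."""
    return c - m * math.log10(sum_of_skinfolds_mm)

def siri_body_fat_percent(density: float) -> float:
    """Siri equation: convert body density to an estimated body-fat percentage."""
    return 495.0 / density - 450.0

skinfolds_mm = 45.0  # illustrative sum of four skinfold sites, in millimetres
d = body_density(skinfolds_mm)
print(f"density ~ {d:.4f} g/cm^3, body fat ~ {siri_body_fat_percent(d):.1f}%")
```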
Marrow fat
Marrow fat, also known as marrow adipose tissue (MAT), is a poorly understood adipose depot that resides in the bone and is interspersed with hematopoietic cells as well as bony elements. The adipocytes in this depot are derived from mesenchymal stem cells (MSC), which can give rise to fat cells, bone cells, and other cell types. The fact that MAT increases in the setting of calorie restriction/anorexia is a feature that distinguishes this depot from other fat depots. Exercise regulates MAT, decreasing MAT quantity and diminishing the size of marrow adipocytes. The exercise regulation of marrow fat suggests that it bears some physiologic similarity to other white adipose depots. Moreover, increased MAT in obesity further suggests a similarity to white fat depots.
Ectopic fat
Ectopic fat is the storage of triglycerides in tissues other than adipose tissue that normally contain only small amounts of fat, such as the liver, skeletal muscle, heart, and pancreas. This can interfere with cellular functions and hence organ function, and is associated with insulin resistance in type 2 diabetes. It is stored in relatively high amounts around the organs of the abdominal cavity, but is not to be confused with visceral fat.
The specific cause for the accumulation of ectopic fat is unknown. The cause is likely a combination of genetic, environmental, and behavioral factors that are involved in excess energy intake and decreased physical activity. Substantial weight loss can reduce ectopic fat stores in all organs and this is associated with an improvement of the function of those organs.
Non-invasive weight loss interventions such as diet or exercise can decrease ectopic fat (particularly in the heart and liver) in overweight or obese children and adults.
Physiology
Free fatty acids (FFAs) are liberated from lipoproteins by lipoprotein lipase (LPL) and enter the adipocyte, where they are reassembled into triglycerides by esterifying them onto glycerol. Human fat tissue contains from 61% to 94% lipids, with obese and lean individuals tending towards the high and low ends of this range, respectively.
There is a constant flux of FFAs entering and leaving adipose tissue. The net direction of this flux is controlled by insulin and leptin—if insulin is elevated, then there is a net inward flux of FFA, and only when insulin is low can FFA leave adipose tissue. Insulin secretion is stimulated by high blood sugar, which results from consuming carbohydrates.
In humans, lipolysis (hydrolysis of triglycerides into free fatty acids) is controlled through the balance between lipolytic β-adrenergic receptors and α2A-adrenergic receptor-mediated antilipolysis.
Fat cells have an important physiological role in maintaining triglyceride and free fatty acid levels, as well as determining insulin resistance. Abdominal fat has a different metabolic profile—being more prone to induce insulin resistance. This explains to a large degree why central obesity is a marker of impaired glucose tolerance and is an independent risk factor for cardiovascular disease (even in the absence of diabetes mellitus and hypertension). Studies of female monkeys at Wake Forest University (2009) discovered that individuals with higher stress have higher levels of visceral fat in their bodies. This suggests a possible cause-and-effect link between the two, wherein stress promotes the accumulation of visceral fat, which in turn causes hormonal and metabolic changes that contribute to heart disease and other health problems.
Recent advances in biotechnology have allowed for the harvesting of adult stem cells from adipose tissue, allowing stimulation of tissue regrowth using a patient's own cells. In addition, adipose-derived stem cells from both humans and animals reportedly can be efficiently reprogrammed into induced pluripotent stem cells without the need for feeder cells. The use of a patient's own cells reduces the chance of tissue rejection and avoids ethical issues associated with the use of human embryonic stem cells. A growing body of evidence also suggests that different fat depots (e.g. abdominal, omental, pericardial) yield adipose-derived stem cells with different characteristics. These depot-dependent features include proliferation rate, immunophenotype, differentiation potential, gene expression, and sensitivity to hypoxic culture conditions. Oxygen levels seem to play an important role in the metabolism and, more generally, the function of adipose-derived stem cells.
Adipose tissue is a major peripheral source of aromatase in both males and females, contributing to the production of estradiol.
Adipose derived hormones include:
Adiponectin
Resistin
Plasminogen activator inhibitor-1 (PAI-1)
TNFα
IL-6
Leptin
Estradiol (E2)
Adipose tissues also secrete a type of cytokines (cell-to-cell signalling proteins) called adipokines (adipose cytokines), which play a role in obesity-associated complications. Perivascular adipose tissue releases adipokines such as adiponectin that affect the contractile function of the vessels that they surround.
Brown fat
Brown fat or brown adipose tissue (BAT) is a specialized form of adipose tissue important for adaptive thermogenesis in humans and other mammals. BAT can generate heat by "uncoupling" the respiratory chain of oxidative phosphorylation within mitochondria through tissue-specific expression of uncoupling protein 1 (UCP1). BAT is primarily located around the neck and large blood vessels of the thorax, where it may effectively act in heat exchange. BAT is robustly activated upon cold exposure by the release of catecholamines from sympathetic nerves that results in UCP1 activation.
Nearly half of the nerves present in adipose tissue are sensory neurons connected to the dorsal root ganglia.
BAT activation may also occur in response to overfeeding. UCP1 activity is stimulated by long chain fatty acids that are produced subsequent to β-adrenergic receptor activation. UCP1 is proposed to function as a fatty acid proton symporter, although the exact mechanism has yet to be elucidated. In contrast, UCP1 is inhibited by ATP, ADP, and GTP.
Attempts to simulate this process pharmacologically have so far been unsuccessful. Techniques to manipulate the differentiation of "brown fat" could become a mechanism for weight loss therapy in the future, encouraging the growth of tissue with this specialized metabolism without inducing it in other organs. A review on the eventual therapeutic targeting of brown fat to treat human obesity was published by Samuelson and Vidal-Puig in 2020.
Until recently, brown adipose tissue in humans was thought to be primarily limited to infants, but new evidence has overturned that belief. Metabolically active tissue with temperature responses similar to brown adipose was first reported in the neck and trunk of some human adults in 2007, and the presence of brown adipose in human adults was later verified histologically in the same anatomical regions.
Beige fat and WAT browning
Browning of WAT, also referred to as "beiging", occurs when adipocytes within WAT depots develop features of BAT. Beige adipocytes take on a multilocular appearance (containing several lipid droplets) and increase expression of uncoupling protein 1 (UCP1). In doing so, these normally energy-storing adipocytes become energy-releasing adipocytes.
The calorie-burning capacity of brown and beige fat has been extensively studied as research efforts focus on therapies targeted to treat obesity and diabetes. The drug 2,4-dinitrophenol, which also acts as a chemical uncoupler similarly to UCP1, was used for weight loss in the 1930s. However, it was quickly discontinued when excessive dosing led to adverse side effects including hyperthermia and death. β3-adrenergic agonists, like CL316,243, have also been developed and tested in humans. However, the use of such drugs has proven largely unsuccessful due to several challenges, including varying species receptor specificity and poor oral bioavailability.
Cold is a primary regulator of BAT processes and induces WAT browning. Browning in response to chronic cold exposure has been well documented and is a reversible process. A study in mice demonstrated that cold-induced browning can be completely reversed in 21 days, with measurable decreases in UCP1 seen within a 24-hour period. A study by Rosenwald et al. revealed that when the animals are re-exposed to a cold environment, the same adipocytes will adopt a beige phenotype, suggesting that beige adipocytes are retained.
Transcriptional regulators, as well as a growing number of other factors, regulate the induction of beige fat. Four regulators of transcription are central to WAT browning and serve as targets for many of the molecules known to influence this process. These include peroxisome proliferator-activated receptor gamma (PPARγ), PRDM16, peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PGC-1α), and Early B-Cell Factor-2 (EBF2).
The list of molecules that influence browning has grown in direct proportion to the popularity of this topic and is constantly evolving as more knowledge is acquired. Among these molecules are irisin and fibroblast growth factor 21 (FGF21), which have been well-studied and are believed to be important regulators of browning. Irisin is secreted from muscle in response to exercise and has been shown to increase browning by acting on beige preadipocytes. FGF21, a hormone secreted mainly by the liver, has garnered a great deal of interest after being identified as a potent stimulator of glucose uptake and a browning regulator through its effects on PGC-1α. It is increased in BAT during cold exposure and is thought to aid in resistance to diet-induced obesity. FGF21 may also be secreted in response to exercise and a low-protein diet, although the latter has not been thoroughly investigated. Data from these studies suggest that environmental factors like diet and exercise may be important mediators of browning. In mice, it was found that beiging can occur through the production of methionine-enkephalin peptides by type 2 innate lymphoid cells in response to interleukin 33.
Genomics and bioinformatics tools to study browning
Due to the complex nature of adipose tissue and a growing list of browning regulatory molecules, great potential exists for the use of bioinformatics tools to improve study within this field. Studies of WAT browning have greatly benefited from advances in these techniques, as beige fat is rapidly gaining popularity as a therapeutic target for the treatment of obesity and diabetes.
DNA microarray is a bioinformatics tool used to quantify expression levels of various genes simultaneously, and has been used extensively in the study of adipose tissue. One such study used microarray analysis in conjunction with Ingenuity IPA software to look at changes in WAT and BAT gene expression when mice were exposed to temperatures of 28 and 6 °C. The most significantly up- and downregulated genes were then identified and used for analysis of differentially expressed pathways. It was discovered that many of the pathways upregulated in WAT after cold exposure are also highly expressed in BAT, such as oxidative phosphorylation, fatty acid metabolism, and pyruvate metabolism. This suggests that some of the adipocytes switched to a beige phenotype at 6 °C. Mössenböck et al. also used microarray analysis to demonstrate that insulin deficiency inhibits the differentiation of beige adipocytes but does not disturb their capacity for browning. These two studies demonstrate the potential for the use of microarray in the study of WAT browning.
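To make the microarray workflow described above concrete, the sketch below ranks genes by log2 fold change between warm- and cold-housed samples, the kind of first-pass step that precedes pathway analysis. The expression values are entirely hypothetical, and the sketch stands in for rather than reproduces the Ingenuity-based analysis cited above.

```python
import math

# Hypothetical data only: a first-pass ranking of genes by log2 fold change
# between warm-housed (28 °C) and cold-housed (6 °C) WAT samples. Real
# analyses add statistical testing and multiple-testing correction before
# any pathway analysis.
expression = {                 # gene: (mean signal at 28 °C, mean signal at 6 °C)
    "Ucp1":     (2.0, 64.0),
    "Ppargc1a": (5.0, 22.0),
    "Lep":      (40.0, 28.0),
    "Retn":     (18.0, 6.0),
}

def log2_fold_change(warm: float, cold: float) -> float:
    return math.log2(cold / warm)

ranked = sorted(expression.items(),
                key=lambda kv: abs(log2_fold_change(*kv[1])),
                reverse=True)
for gene, (warm, cold) in ranked:
    print(f"{gene}: log2FC = {log2_fold_change(warm, cold):+.2f}")
```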
RNA sequencing (RNA-Seq) is a powerful computational tool that allows for the quantification of RNA expression for all genes within a sample. Incorporating RNA-Seq into browning studies is of great value, as it offers better specificity, sensitivity, and a more comprehensive overview of gene expression than other methods. RNA-Seq has been used in both human and mouse studies in an attempt to characterize beige adipocytes according to their gene expression profiles and to identify potential therapeutic molecules that may induce the beige phenotype. One such study used RNA-Seq to compare gene expression profiles of WAT from wild-type (WT) mice and those overexpressing Early B-Cell Factor-2 (EBF2). WAT from the transgenic animals exhibited a brown fat gene program and had decreased WAT-specific gene expression compared to the WT mice. Thus, EBF2 has been identified as a potential therapeutic molecule to induce beiging.
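Before such expression profiles can be compared, raw RNA-Seq read counts are usually normalized. The sketch below shows transcripts-per-million (TPM) normalization, one standard approach; the counts and gene lengths are invented purely for illustration.

```python
# Transcripts-per-million (TPM) normalization, a standard way of making
# RNA-Seq read counts comparable across genes and samples. Counts and gene
# lengths below are invented purely for illustration.
counts = {"Ebf2": 300, "Ucp1": 1200, "Prdm16": 450}
lengths_kb = {"Ebf2": 2.1, "Ucp1": 1.5, "Prdm16": 4.0}

def tpm(counts: dict, lengths_kb: dict) -> dict:
    rates = {g: counts[g] / lengths_kb[g] for g in counts}  # reads per kilobase
    scale = sum(rates.values())
    return {g: rate / scale * 1e6 for g, rate in rates.items()}

print({gene: round(value, 1) for gene, value in tpm(counts, lengths_kb).items()})
```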
Chromatin immunoprecipitation with sequencing (ChIP-seq) is a method used to identify protein binding sites on DNA and assess histone modifications. This tool has enabled examination of epigenetic regulation of browning and helps elucidate the mechanisms by which protein-DNA interactions stimulate the differentiation of beige adipocytes. Studies observing the chromatin landscapes of beige adipocytes have found that adipogenesis of these cells results from the formation of cell specific chromatin landscapes, which regulate the transcriptional program and, ultimately, control differentiation. Using ChIP-seq in conjunction with other tools, recent studies have identified over 30 transcriptional and epigenetic factors that influence beige adipocyte development.
Genetics
The thrifty gene hypothesis (also called the famine hypothesis) states that in some populations the body would be more efficient at retaining fat in times of plenty, thereby endowing greater resistance to starvation in times of food scarcity. This hypothesis, originally advanced in the context of glucose metabolism and insulin resistance, has been discredited by physical anthropologists, physiologists, and the original proponent of the idea himself with respect to that context, although according to its developer it remains "as viable as when [it was] first advanced" in other contexts.
In 1995, Jeffrey Friedman, in his residency at the Rockefeller University, together with Rudolph Leibel, Douglas Coleman et al. discovered the protein leptin that the genetically obese mouse lacked. Leptin is produced in the white adipose tissue and signals to the hypothalamus. When leptin levels drop, the body interprets this as a loss of energy, and hunger increases. Mice lacking this protein eat until they are four times their normal size.
Leptin, however, plays a different role in diet-induced obesity in rodents and humans. Because adipocytes produce leptin, leptin levels are elevated in the obese. However, hunger remains, and—when leptin levels drop due to weight loss—hunger increases. The drop of leptin is better viewed as a starvation signal than the rise of leptin as a satiety signal. However, elevated leptin in obesity is known as leptin resistance. The changes that occur in the hypothalamus to result in leptin resistance in obesity are currently the focus of obesity research.
Gene defects in the leptin gene (ob) are rare in human obesity. Only 14 individuals from five families have been identified worldwide who carry a mutated ob gene (one of which was the first ever identified cause of genetic obesity in humans)—two families of Pakistani origin living in the UK, one family living in Turkey, one in Egypt, and one in Austria—and two other families have been found that carry a mutated ob receptor. Others have been identified as genetically partially deficient in leptin, and, in these individuals, leptin levels on the low end of the normal range can predict obesity.
Several mutations of genes involving the melanocortins (used in brain signaling associated with appetite) and their receptors have also been identified as causing obesity in a larger portion of the population than leptin mutations.
Physical properties
Adipose tissue has a density of ~0.9 g/ml. Thus, a person with more adipose tissue will float more easily than a person of the same weight with more muscular tissue, since muscular tissue has a density of 1.06 g/ml.
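A simplified two-compartment sketch of why this matters for buoyancy: whole-body density can be approximated as the mass-weighted harmonic mean of fat and lean tissue densities. The 1.1 g/ml lean-tissue value is an assumed composite figure common in two-compartment models, not one taken from this article.

```python
# Simplified two-compartment sketch: whole-body density approximated as the
# mass-weighted harmonic mean of fat and lean tissue densities. The lean
# value of 1.1 g/ml is an assumed composite (muscle, bone, water), not a
# figure from the text.
FAT_DENSITY = 0.9    # g/ml
LEAN_DENSITY = 1.1   # g/ml (assumption)

def whole_body_density(fat_fraction: float) -> float:
    lean_fraction = 1.0 - fat_fraction
    return 1.0 / (fat_fraction / FAT_DENSITY + lean_fraction / LEAN_DENSITY)

for f in (0.15, 0.30, 0.45):
    print(f"{f:.0%} body fat -> whole-body density ~{whole_body_density(f):.3f} g/ml")
# Lower whole-body density means greater buoyancy, which is why a person with
# more adipose tissue floats more easily than an equally heavy, more muscular one.
```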
Body fat meter
A body fat meter is a tool used to measure the body fat to weight ratio in the human body. Different meters use various methods to determine the ratio. They tend to under-read body fat percentage.
In contrast with clinical tools like DXA and underwater weighing, one relatively inexpensive type of body fat meter uses the principle of bioelectrical impedance analysis (BIA) in order to determine an individual's body fat percentage. To achieve this, the meter passes a small, harmless, electric current through the body and measures the resistance, then uses information on the person's weight, height, age, and sex to calculate an approximate value for the person's body fat percentage. The calculation measures the total volume of water in the body (lean tissue and muscle contain a higher percentage of water than fat), and estimates the percentage of fat based on this information. The result can fluctuate several percentage points depending on what has been eaten and how much water has been drunk before the analysis. This method is quick and readily accessible, but imprecise. Alternative methods are: skin fold methods using calipers, underwater weighing, whole body air displacement plethysmography (ADP) and DXA.
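A hedged sketch of the principle behind such meters: lean mass is assumed to track the "impedance index" (height squared divided by measured resistance), which is then converted to a fat percentage with population-specific regression coefficients. The coefficients below are placeholders for illustration, not those of any actual device or published study.

```python
# Bioelectrical impedance sketch: lean mass is assumed to scale with the
# impedance index height**2 / resistance. Coefficients a and b are
# illustrative placeholders, not values from any real meter or study.
def estimated_fat_percent(height_cm: float, resistance_ohm: float,
                          weight_kg: float, a: float = 0.8, b: float = 10.0) -> float:
    impedance_index = height_cm ** 2 / resistance_ohm
    lean_mass_kg = a * impedance_index + b        # placeholder regression
    fat_mass_kg = max(weight_kg - lean_mass_kg, 0.0)
    return 100.0 * fat_mass_kg / weight_kg

# Example: a 175 cm, 80 kg person with a measured resistance of 500 ohms.
print(round(estimated_fat_percent(175.0, 500.0, 80.0), 1))
```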
Animal studies
Within the fat (adipose) tissue of CCR2-deficient mice, there is an increased number of eosinophils, greater alternative macrophage activation, and a propensity towards type 2 cytokine expression. Furthermore, this effect was exaggerated when the mice became obese on a high-fat diet.
Gallery
See also
Adipose differentiation-related protein
Adipocytes
Apelin
Bioelectrical impedance analysis – a method to measure body fat percentage.
Blubber – an extra thick form of adipose tissue found in some marine mammals.
Body fat percentage
Body roundness index
Cellulite
Lipolysis
Lipodystrophy
Human fat used as pharmaceutical in traditional medicine
Obesity
Starvation
Steatosis (also called fatty change, fatty degeneration or adipose degeneration)
Stem cells
Subcutaneous fat
Bariatrics
Classification of obesity
Classification of childhood obesity
EPODE International Network, the world's largest obesity-prevention network
World Fit – a program of the United States Olympic Committee (USOC) and the United States Olympians and Paralympians Association (USOP)
Obesity and walking
Social stigma of obesity
References
Further reading
External links
Adipose tissue photomicrographs
Connective tissue
Endocrine system
Obesity | 0.764026 | 0.998224 | 0.762669 |
Dwarfism | Dwarfism is a condition of people and animals marked by unusually small size or short stature. In humans, it is sometimes defined as an adult height of less than , regardless of sex; the average adult height among people with dwarfism is . Disproportionate dwarfism is characterized by either short limbs or a short torso. In cases of proportionate dwarfism, both the limbs and torso are unusually small. Intelligence is usually normal, and most people with it have a nearly normal life expectancy. People with dwarfism can usually bear children, although there are additional risks to the mother and child depending upon the underlying condition.
The most common and recognizable form of dwarfism in humans (comprising 70% of cases) is achondroplasia, a genetic disorder in which the limbs are diminutive. Growth hormone deficiency is responsible for most other cases. There are many other, less common causes. Treatment of the condition depends on the underlying cause. Those with genetic disorders such as osteochondrodysplasia can sometimes be treated with surgery or physical therapy. Hormone disorders can also be treated with growth hormone therapy before the child's growth plates fuse. Individual accommodations, such as specialized furniture, are often used by people with dwarfism. Many support groups provide services to help individuals with dwarfism cope with the discrimination they may face.
In addition to the medical aspect of the condition there are social aspects. For a person with dwarfism, height discrimination can lead to ridicule in childhood and discrimination in adulthood. In the United Kingdom, United States, Canada, Australia, and other English-speaking countries, labels that some people with dwarfism accept include dwarf (plural: dwarfs), little person (LP), or person of short stature (see terminology). Historically, the term midget was used to describe dwarfs (primarily proportionate); however, some now consider this term offensive.
Signs and symptoms
A defining characteristic of dwarfism is an adult height below the 2.3rd percentile of the CDC standard growth charts. There is a wide range of physical characteristics. Variations in individuals are identified by diagnosing and monitoring the underlying disorders. Some individuals have no complications beyond those of adapting to their size. Short stature is commonly used in place of the term 'dwarfism', especially in a medical context. Short stature is clinically defined as a height within the lowest 2.3% of those in the general population. However, those with mild skeletal dysplasias may not be affected by dwarfism. In some cases of untreated hypochondroplasia, males grow to heights that are short in a relative context yet do not fall into the extreme ranges of the growth charts.
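The "lowest 2.3%" cutoff corresponds to roughly two standard deviations below the population mean on a normal growth curve. The sketch below makes that correspondence explicit; the adult mean and standard deviation used in the example are illustrative placeholders, not CDC reference values.

```python
from statistics import NormalDist

# The "lowest 2.3%" cutoff corresponds to roughly two standard deviations
# below the mean on a normal growth curve. The adult mean and SD used in the
# example are illustrative placeholders, not CDC reference values.
def height_percentile(height_cm: float, mean_cm: float, sd_cm: float) -> float:
    z = (height_cm - mean_cm) / sd_cm
    return 100.0 * NormalDist().cdf(z)

print(round(100.0 * NormalDist().cdf(-2.0), 1))        # ~2.3% of values fall below -2 SD
print(round(height_percentile(147.0, 162.0, 7.0), 1))  # illustrative adult example
```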
Disproportionate dwarfism is characterized by shortened limbs or a shortened torso. In achondroplasia one has an average-sized trunk with short limbs and a larger forehead. Facial features are often affected and individual body parts may have problems associated with them. Spinal stenosis, ear infection, and hydrocephalus are common. In case of spinal dysostosis, one has a small trunk, with average-sized limbs. Proportionate dwarfism is marked by a short torso with short limbs, thus leading to a height that is significantly below average. There may be long periods without any significant growth. Sexual development is often delayed or impaired into adulthood. This dwarfism type is caused by an endocrine disorder and not a skeletal dysplasia.
Physical effects of malformed bones vary according to the specific disease. Many involve joint pain caused by abnormal bone alignment, or from nerve compression. Early degenerative joint disease, exaggerated lordosis or scoliosis, and constriction of spinal cord or nerve roots can cause pain and disability. Reduced thoracic size can restrict lung growth and reduce pulmonary function. Some forms of dwarfism are associated with disordered function of other organs, such as the brain or liver, sometimes severely enough to be more of an impairment than the unusual bone growth. Mental effects also vary according to the specific underlying syndrome. In most cases of skeletal dysplasia, such as achondroplasia, mental function is not impaired. However, there are syndromes which can affect the cranial structure and growth of the brain, severely impairing mental capacity. Unless the brain is directly affected by the underlying disorder, there is little to no chance of mental impairment that can be attributed to dwarfism.
The psycho-social limitations of society may be more disabling than the physical symptoms, especially in childhood and adolescence, but people with dwarfism vary greatly in the degree to which social participation and emotional health are affected.
Social prejudice against extreme shortness may reduce social and marital opportunities.
Numerous studies have demonstrated reduced employment opportunities. Severe shortness is associated with lower income.
Self-esteem may decline and family relationships may be affected.
Extreme shortness can, if not accommodated for, interfere with activities of daily living, like driving or using countertops built for taller people. Other common attributes of dwarfism, such as bowed knees and unusually short fingers, can lead to back problems and difficulty in walking and handling objects.
Children with dwarfism are particularly vulnerable to teasing and ridicule from classmates. Because dwarfism is relatively uncommon, children may feel isolated from their peers.
Causes
Dwarfism can result from many medical conditions, each with its own separate symptoms and causes. Extreme shortness in humans with proportional body parts usually has a hormonal cause, such as growth hormone deficiency, once called pituitary dwarfism. Achondroplasia is responsible for the majority of human dwarfism cases, followed by spondyloepiphyseal dysplasia and diastrophic dysplasia.
Achondroplasia
The most recognizable and most common form of dwarfism in humans is achondroplasia, which accounts for 70% of dwarfism cases, and occurs in 4 to 15 out of 100,000 live births.
It produces rhizomelic short limbs, increased spinal curvature, and distortion of skull growth. In achondroplasia the body's limbs are proportionately shorter than the trunk (abdominal area), with a larger head than average and characteristic facial features. Achondroplasia is an autosomal dominant disorder caused by the presence of an altered allele in the genome. If two copies of the achondroplasia allele are present, the result is fatal, usually perinatally. Achondroplasia results from a mutation in the fibroblast growth factor receptor 3 (FGFR3) gene. In the context of achondroplasia, this mutation causes FGFR3 to become constitutively active, inhibiting bone growth.
Growth hormone deficiency
Growth hormone deficiency (GHD) is a medical condition in which the body produces insufficient growth hormone. Growth hormone, also called somatotropin, is a polypeptide hormone which stimulates growth and cell reproduction. If this hormone is lacking, stunted or even halted growth may become apparent. Children with this disorder may grow slowly and puberty may be delayed by several years or indefinitely. Growth hormone deficiency has no single definite cause. It can be caused by mutations of specific genes, damage to the pituitary gland, Turner's syndrome, poor nutrition, or even stress (leading to psychogenic dwarfism). Laron syndrome (growth hormone insensitivity) is another cause. Those with growth hormone issues tend to be proportionate.
Metatropic dysplasia
Metatropic means "changing form" and refers to this form of skeletal dysplasia as there is an abnormality in the growth plates. Skeletal changes continue over time and may need surgical intervention to help protect the lungs. Symptoms starting at birth may be mild or can be fatal. There are recognizable features in individuals with this genetic disorder. Some are short stature, narrow chest, "facial features such as a prominent forehead, underdevelopment of the upper jaw, cheekbones and eye sockets (midface hypoplasia), and a squared-off jaw." It is considered a more severe skeletal dysplasia, but is very rare, with the exact number of those affected unknown. Prognosis is largely on a case-by-case basis depending on the severity, and life expectancy may not be impacted unless there are respiratory complications.
Other
Other causes of dwarfism are spondyloepiphyseal dysplasia congenita, diastrophic dysplasia, pseudoachondroplasia, hypochondroplasia, Noonan syndrome, primordial dwarfism, Cockayne syndrome, Kniest dysplasia, Turner syndrome, osteogenesis imperfecta (OI), and hypothyroidism. Severe shortness with skeletal distortion also occurs in several of the mucopolysaccharidoses and other storage disorders. Hypogonadotropic hypogonadism may cause proportionate, yet temporary, dwarfism. A recently described form of disproportionate dwarfism is caused by mutations in the NPR2 gene.
Serious chronic illnesses may produce dwarfism as a side effect. Harsh environmental conditions, such as malnutrition, may also produce dwarfism. These types of dwarfism are indirect consequences of the generally unhealthy or malnourished condition of the individual, and not of any specific disease. The dwarfism often takes the form of simple short stature, without any deformities, thus leading to proportionate dwarfism. In societies where poor nutrition is widespread, the average height of the population may be reduced below its genetic potential by the lack of proper nutrition. Sometimes there is no definitive cause of short stature.
Diagnosis
Dwarfism is often diagnosed in childhood on the basis of visible symptoms. A physical examination can usually suffice to diagnose certain types of dwarfism, but genetic testing and diagnostic imaging may be used to determine the exact condition. In a person's youth, growth charts that track height can be used to diagnose subtle forms of dwarfism that have no other striking physical characteristics.
Short stature or stunted growth during youth is usually what brings the condition to medical attention. Skeletal dysplasia is usually suspected because of obvious physical features (e.g., unusual configuration of face or shape of skull), because of an obviously affected parent, or because body measurements (arm span, upper to lower segment ratio) indicate disproportion. Bone X-rays are often key to diagnosing a specific skeletal dysplasia, but are not the sole diagnostic tool. Most children with suspected skeletal dysplasias are referred to a genetics clinic for diagnostic confirmation and genetic counseling. Since about the year 2000, genetic tests for some of the specific disorders have become available.
During an initial medical evaluation of shortness, the absence of disproportion and other clues listed above usually indicates causes other than bone dysplasias.
Classification
In men and women, the sole requirement for being considered a dwarf is having an adult height under 147 cm (4 ft 10 in), and it is almost always sub-classified with respect to the underlying condition that is the cause of the short stature. Dwarfism is usually caused by a genetic variant; achondroplasia is caused by a mutation on chromosome 4. If dwarfism is caused by a medical disorder, the person is referred to by the underlying diagnosed disorder. Disorders causing dwarfism are often classified by proportionality. Disproportionate dwarfism describes disorders that cause unusual proportions of the body parts, while proportionate dwarfism results in a generally uniform stunting of the body.
Disorders that cause dwarfism may be classified according to one of hundreds of names, which are usually permutations of the following roots:
location
rhizomelic = root, i.e., bones of the upper arm or thigh
mesomelic = middle, i.e., bones of the forearm or lower leg
acromelic = end, i.e., bones of hands and feet.
micromelic = entire limbs are shortened
source
chondro = of cartilage
osteo = of bone
spondylo = of the vertebrae
plasia = form
trophy = growth
Examples include achondroplasia and chondrodystrophy.
Prevention
Many types of dwarfism are currently impossible to prevent because they are genetically caused. Genetic conditions that cause dwarfism may be identified with genetic testing, by screening for the specific variations that result in the condition. However, due to the number of causes of dwarfism, it may be impossible to determine definitively if a child will be born with dwarfism. Dwarfism resulting from malnutrition or a hormonal abnormality may be treated with an appropriate diet or hormonal therapy. Growth hormone deficiency may be remedied via injections of human growth hormone (HGH) during early life.
Management
Genetic mutations of most forms of dwarfism caused by bone dysplasia cannot be altered yet, so therapeutic interventions are typically aimed at preventing or reducing pain or physical disability, increasing adult height, or mitigating psychosocial stresses and enhancing social adaptation. Forms of dwarfism associated with the endocrine system may be treated using hormonal therapy. If the cause is prepubescent hyposecretion of growth hormone, supplemental growth hormone may correct the abnormality. If the receptor for growth hormone is itself affected, the condition may prove harder to treat. Hypothyroidism is another possible cause of dwarfism that can be treated through hormonal therapy. Injections of thyroid hormone can mitigate the effects of the condition, but lack of proportion may be permanent.
Pain and disability may be ameliorated by physical therapy, braces or other orthotic devices, or by surgical procedures. The only simple interventions that increase perceived adult height are dress enhancements, such as shoe lifts or hairstyle. Growth hormone is rarely used for shortness caused by bone dysplasias, since the height benefit is typically small (a few centimetres at most) and the cost high. The most effective means of increasing adult height by several inches is distraction osteogenesis, though availability is limited and the cost is high in terms of money, discomfort, and disruption of life. Most people with dwarfism do not choose this option, and it remains controversial. For other types of dwarfism, surgical treatment is not possible.
Society and culture
Terminology
The appropriate term for describing a person of particularly short stature (or with the genetic condition achondroplasia) has developed euphemistically.
The noun dwarf stems from an Old English word for a being from Germanic mythology—a dwarf—that dwells in mountains and in the earth, and is associated with wisdom, smithing, mining, and crafting. The etymology of the word dwarf is contested, and scholars have proposed varying theories about the origins of the being, including that dwarfs may have originated as nature spirits or as beings associated with death, or as a mixture of concepts. Competing etymologies trace the word to several different Indo-European roots, and comparisons have been made with the Old Indian dhvaras (a type of demonic being). The being may not have gained associations with small stature until a later period.
The terms "little person", "LP" and "person of short stature" are the preferred terms of those with this disorder, and while many are uncomfortable with "dwarf" it remains a common term in some areas. However, the plural "dwarfs" as opposed to "dwarves" is generally preferred in the medical context, possibly because the plural "dwarves" was popularized by author J. R. R. Tolkien, describing a race of characters in his The Lord of the Rings books resembling Norse dwarfs. "Midget", whose etymology indicates a "tiny biting insect", came into prominence in the mid-19th century after Harriet Beecher Stowe used it in her novels Sunny Memories of Foreign Lands and Oldtown Folks where she described children and an extremely short man, respectively. Later some people of short stature considered the word to be offensive because it was the descriptive term applied to P. T. Barnum's dwarfs used for public amusement during the freak show era. It is also not considered accurate as it is not a medical term or diagnosis, though it is sometimes used as a slang term to describe those who are particularly short, whether or not they have dwarfism.
Participation
Individuals with dwarfism are capable of actively participating in various aspects of society. They have access to education, sports, and can pursue careers, engaging in a wide range of professions.
Acceptance
Individuals with dwarfism often face prejudice and stereotypes. Research by Klein (2019) has demonstrated that awareness of the stigmatization of this group can promote full participation in society. The research by Green and Pinter (2018) in the field of humor and social psychology can provide insights to reduce stereotypes and promote a more objective perception.
Accommodation
In daily life, little people face numerous obstacles because the environment is tailored to average-sized individuals. Some little people can only use ATMs, kitchens, toilets, and sinks with aids. Low stools and step stools play a special role, as they can be used in various ways to bridge the height difference.
Stools are also useful as footrests while sitting, as the legs of short people dangle in the air when sitting on an average chair, which can be painful and uncomfortable in the long run and may hinder fine motor skills during work.
To be mobile, some individuals use customized scooters or bicycles, as it can be problematic, depending on the type of short stature, to walk longer distances. With specially adapted vehicles, most individuals of short stature can drive without further hindrances. Generally, pedal extensions and an individually adjusted seat at the correct height are required. Some little people are tall enough to drive without pedal extensions. Usually, patients with skeletal dysplasia with limited mobility can receive allowances or grants for vehicle assistance through governmental help or rehabilitation providers.
Dwarf sports
Dwarfs are supported in sport by a number of national and international organizations and compete both nationally and internationally. They are included in some events in athletics at the Summer Paralympics.
The Dwarf Athletic Association of America and the Dwarf Sports Association UK provide opportunities for dwarfs to compete nationally and internationally in the Americas and Europe, respectively. The World Dwarf Games (WDG) are a multi-sport event for athletes of short stature. The WDG have been held every four years since 1993 and are the world's largest sporting event exclusively for athletes with dwarfism. The Dwarf Sports Association UK organizes between 5 and 20 events per month for athletes with restricted growth conditions in the UK. For instance, swimming and bicycling are often recommended for people with skeletal dysplasias, since those activities put minimal pressure on the spine.
Since its early days, professional wrestling has had the involvement of dwarf athletes. "Midget wrestling" had its heyday in the 1950s–'70s, when wrestlers such as Little Beaver, Lord Littlebrook, and Fuzzy Cupid toured North America, and Sky Low Low was the first holder of the National Wrestling Alliance's World Midget Championship. In the next couple of decades, more wrestlers became prominent in North America including foreign wrestlers like Japan's Little Tokyo. Although the term is seen by some as pejorative, many past and current midget wrestlers including Hornswoggle said they take pride in the term due to its history in the industry and its marketability.
Art and media depictions
In art, literature, and movies, dwarfs are rarely depicted as ordinary people who are very short but rather as a species apart. Novelists, artists, and moviemakers may attach special moral or aesthetic significance to their "apartness" or misshapenness.
Artistic representations of dwarfism are found on Greek vases and other ancient artifacts, including ancient Egyptian art in which dwarfs are likely to have been seen as a divine manifestation, with records indicating that they were able to reach high positions in society at the time.
The ancient Hindu text Bhagavat Purana devotes nine chapters to the adventures of Vamana, a dwarf avatar of Lord Vishnu.
Depictions of dwarfism are also found in European paintings and many illustrations. Many European paintings (especially Spanish) of the 16th–19th centuries depict dwarfs by themselves or with others. In the Talmud, it is said that the second born son of the Egyptian Pharaoh of the Bible was a dwarf. Recent scholarship has suggested that ancient Egyptians held dwarfs in high esteem. Several important mythological figures of the North American Wyandot nation are portrayed as dwarfs.
As popular media has become more widespread, the number of works depicting dwarfs have increased dramatically. Dwarfism is depicted in many books, films, and TV series such as Willow, The Wild Wild West, The Man with the Golden Gun (and later parodied in Austin Powers), Gulliver's Travels by Jonathan Swift, The Wizard of Oz, Willy Wonka & the Chocolate Factory, Bad Santa, A Son of the Circus, Little People, Big World, The Little Couple, A Song of Ice and Fire (and its TV adaptation Game of Thrones), Seinfeld, The Orator, In Bruges, The Tin Drum by Günter Grass, the short-lived reality show The Littlest Groom, and the films The Station Agent and Zero.
The Animal Planet TV series Pit Boss features dwarf actor Shorty Rossi and his talent agency, "Shortywood Productions", which Rossi uses to provide funding for his pit bull rescue operation, "Shorty's Rescue". Rossi's three full-time employees, featured in the series, are little people and aspiring actors. In September 2014, Creative Business House along with Donnons Leur Une Chance, created the International Dwarf Fashion Show to raise awareness and boost self-confidence of people living with dwarfism. A number of reality television series on Lifetime, beginning with Little Women: LA in 2014, focused on showing the lives of women living with dwarfism in various cities around the United States.
See also
Dwarfs and pygmies in Ancient Egypt
Dwarf-tossing
Ellis–Van Creveld syndrome
Gigantism
Human height
Island dwarfism
Itabaianinha, a city in Brazil historically noted for its dwarf population
Kingdom of the Little People
Laron syndrome
List of people with dwarfism
List of dwarfism organisations
List of the verified shortest people
Midget
Mulibrey nanism
Phyletic dwarfism
Short stature
Pygmy peoples
Dwarf hamster (disambiguation)
Dwarf rabbit
References
External links
Little People of the World Organization [Hub for all International Organizations; services/advocacy/know your rights/support]
Little People of America (Includes a list of International support groups)
Little People of Canada (Includes a list of Canadian Provincial support groups)
Little People UK
Dwarf Sports Association UK
Restricted Growth Association UK
Growth disorders
Human height | 0.763033 | 0.999519 | 0.762666 |
Anastomosis | An anastomosis (, : anastomoses) is a connection or opening between two things (especially cavities or passages) that are normally diverging or branching, such as between blood vessels, leaf veins, or streams. Such a connection may be normal (such as the foramen ovale in a fetus' heart) or abnormal (such as the patent foramen ovale in an adult's heart); it may be acquired (such as an arteriovenous fistula) or innate (such as the arteriovenous shunt of a metarteriole); and it may be natural (such as the aforementioned examples) or artificial (such as a surgical anastomosis). The reestablishment of an anastomosis that had become blocked is called a reanastomosis. Anastomoses that are abnormal, whether congenital or acquired, are often called fistulas.
The term is used in medicine, biology, mycology, geology, and geography.
Etymology
Anastomosis: medical or Modern Latin, from Greek ἀναστόμωσις (anastomōsis, "outlet, opening"), from ana- "up, on, upon" and stoma "mouth"; literally "to furnish with a mouth". Thus the -stom- syllable is cognate with that of stoma in botany or stoma in medicine.
Medical anatomy
An anastomosis is the connection of two normally divergent structures. It refers to connections between blood vessels or between other tubular structures such as loops of intestine.
Circulatory
In circulatory anastomoses, many arteries naturally anastomose with each other; for example, the inferior epigastric artery and superior epigastric artery, or the anterior and/or posterior communicating arteries in the Circle of Willis in the brain. The circulatory anastomosis is further divided into arterial and venous anastomosis. Arterial anastomosis includes actual arterial anastomosis (e.g., palmar arch, plantar arch) and potential arterial anastomosis (e.g. coronary arteries and cortical branch of cerebral arteries). Anastomoses also form alternative routes around capillary beds in areas that do not need a large blood supply, thus helping regulate systemic blood flow.
Surgical
Surgical anastomosis occurs when segments of intestine, blood vessel, or any other structure are connected together surgically (anastomosed). Examples include arterial anastomosis in bypass surgery, intestinal anastomosis after a piece of intestine has been resected, Roux-en-Y anastomosis and ureteroureterostomy. Surgical anastomosis techniques include linear stapled anastomosis, hand-sewn anastomosis, and end-to-end anastomosis (EEA). Anastomosis can be performed by hand or with an anastomosis assist device. Studies have been performed comparing various anastomosis approaches taking into account surgical "time and cost, postoperative anastomotic bleeding, leakage, and stricture".
Pathological
Pathological anastomosis results from trauma or disease and may involve veins, arteries, or intestines. These are usually referred to as fistulas. In the cases of veins or arteries, traumatic fistulas usually occur between artery and vein. Traumatic intestinal fistulas usually occur between two loops of intestine (entero-enteric fistula) or intestine and skin (enterocutaneous fistula). Portacaval anastomosis, by contrast, is an anastomosis between a vein of the portal circulation and a vein of the systemic circulation, which allows blood to bypass the liver in patients with portal hypertension, often resulting in hemorrhoids, esophageal varices, or caput medusae.
Biology
Evolution
In evolution, anastomosis is a recombination of evolutionary lineage. Conventional accounts of evolutionary lineage present themselves as the branching out of species into novel forms. Under anastomosis, species might recombine after initial branching out, such as in the case of recent research that shows that ancestral populations along human and chimpanzee lineages may have interbred after an initial branching event. The concept of anastomosis also applies to the theory of symbiogenesis, in which new species emerge from the formation of novel symbiotic relationships.
Mycology
In mycology, anastomosis is the fusion between branches of the same or different hyphae. Hence the bifurcating fungal hyphae can form true reticulating networks. By sharing materials in the form of dissolved ions, hormones, and nucleotides, the fungus maintains bidirectional communication with itself. The fungal network might begin from several origins: several spores (i.e. by means of conidial anastomosis tubes), several points of penetration, each a spreading circumference of absorption and assimilation. Once the tip of one expanding, exploring hypha encounters that of another, the tips press against each other in pheromonal recognition, or by an unknown recognition system, and fuse to form a genetically single clonal colony, called a genet, that can cover hectares or only microscopic areas.
For fungi, anastomosis is also a component of reproduction. In some fungi, two different haploid mating types – if compatible – merge. Somatically, they form a morphologically similar mycelial wave front that continues to grow and explore. The significant difference is that each septated unit is binucleate, containing two unfused nuclei, i.e. one from each parent that eventually undergoes karyogamy and meiosis to complete the sexual cycle.
Also the term "anastomosing" is used for mushroom gills which interlink and separate to form a network.
Botany
The growth of a strangler fig around a host tree, with tendrils fusing together to form a mesh, is called anastomosing.
Geosciences
Geology
In geology, veins of quartz (or other) minerals can display anastomosis.
Ductile shear zones frequently show anastomosing geometries of highly-strained rocks around lozenges of less-deformed material.
Molten lava flows sometimes flow in anastomosed lava channels or lava tubes.
In cave systems, anastomosis is the splitting of cave passages that later reconnect.
Geography and hydrology
Anastomosing rivers and streams consist of multiple channels that divide and reconnect and are separated by semi-permanent banks formed of cohesive material, such that they are unlikely to migrate from one channel position to another. They can be confused with braided rivers based on their planforms alone, but braided rivers are much shallower and more dynamic than anastomosing rivers. Some definitions require that an anastomosing river be made up of interconnected channels that enclose floodbasins, again in contrast with braided rivers.
Rivers with anastomosed reaches include the Magdalena River in Colombia, the upper Columbia River in British Columbia, Canada, the Drumheller Channels of the Channeled Scablands of the state of Washington, US, and the upper Narew River in Poland. The term anabranch has been used for segments of anastomosing rivers.
Braided streams show anastomosing channels around channel bars of alluvium.
References
Angiology
Digestive system
Evolutionary biology
Petrology
Surgery | 0.765924 | 0.995732 | 0.762655 |
Glycation | Glycation (non-enzymatic glycosylation) is the covalent attachment of a sugar to a protein, lipid or nucleic acid molecule. Typical sugars that participate in glycation are glucose, fructose, and their derivatives. Glycation is the non-enzymatic process responsible for many (e.g. micro and macrovascular) complications in diabetes mellitus and is implicated in some diseases and in aging. Glycation end products are believed to play a causative role in the vascular complications of diabetes mellitus.
In contrast with glycation, glycosylation is the enzyme-mediated ATP-dependent attachment of sugars to a protein or lipid. Glycosylation occurs at defined sites on the target molecule. It is a common form of post-translational modification of proteins and is required for the functioning of the mature protein.
Biochemistry
Glycations occur mainly in the bloodstream to a small proportion of the absorbed simple sugars: glucose, fructose, and galactose. Fructose appears to have approximately ten times the glycation activity of glucose, the primary body fuel. Glycation can occur through Amadori reactions, Schiff base reactions, and Maillard reactions, which lead to advanced glycation end products (AGEs).
Biomedical implications
Red blood cells have a consistent lifespan of 120 days and are accessible for measurement of glycated hemoglobin. Measurement of HbA1c—the predominant form of glycated hemoglobin—enables medium-term blood sugar control to be monitored in diabetes.
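As an illustration of how HbA1c is read as a medium-term measure, it is often converted to an estimated average glucose (eAG) with the widely used ADAG regression sketched below; treat the conversion as a population-level approximation rather than an exact value for any individual.

```python
# ADAG-style regression relating HbA1c (%) to an estimated average glucose
# (eAG) in mg/dL; a population-level approximation, not an exact conversion
# for any individual.
def estimated_average_glucose_mgdl(hba1c_percent: float) -> float:
    return 28.7 * hba1c_percent - 46.7

for a1c in (5.0, 6.5, 8.0):
    eag = estimated_average_glucose_mgdl(a1c)
    print(f"HbA1c {a1c}% -> eAG ~{eag:.0f} mg/dL (~{eag / 18.0:.1f} mmol/L)")
```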
Some glycation products are implicated in many age-related chronic diseases, including cardiovascular diseases (the endothelium, fibrinogen, and collagen are damaged) and Alzheimer's disease (amyloid proteins are side-products of the reactions progressing to AGEs).
Long-lived cells (such as nerves and different types of brain cell), long-lasting proteins (such as crystallins of the lens and cornea), and DNA can sustain substantial glycation over time. Damage by glycation results in stiffening of the collagen in the blood vessel walls, leading to high blood pressure, especially in diabetes. Glycations also cause weakening of the collagen in the blood vessel walls, which may lead to micro- or macro-aneurysm; this may cause strokes if in the brain.
DNA glycation
The term DNA glycation applies to DNA damage induced by reactive carbonyls (principally methylglyoxal and glyoxal) that are present in cells as by-products of sugar metabolism. Glycation of DNA can cause mutation, breaks in DNA and cytotoxicity. Guanine in DNA is the base most susceptible to glycation. Glycated DNA, as a form of damage, appears to be as frequent as the more well studied oxidative DNA damage. A protein, designated DJ-1 (also known as PARK7), is employed in the repair of glycated DNA bases in humans, and homologs of this protein have also been identified in bacteria.
See also
Advanced glycation end-product
Alagebrium
Fructose
Galactose
Glucose
Glycosylation
Glycated hemoglobin
List of aging processes
Additional reading
References
Ageing processes
Carbohydrates
Post-translational modification
Protein metabolism | 0.768523 | 0.992364 | 0.762654 |
Iron deficiency | Iron deficiency, or sideropenia, is the state in which a body lacks enough iron to supply its needs. Iron is present in all cells in the human body and has several vital functions, such as carrying oxygen to the tissues from the lungs as a key component of the hemoglobin protein, acting as a transport medium for electrons within the cells in the form of cytochromes, and facilitating oxygen enzyme reactions in various tissues. Too little iron can interfere with these vital functions and lead to morbidity and death.
Total body iron averages approximately 3.8 g in men and 2.3 g in women. In blood plasma, iron is carried tightly bound to the protein transferrin. There are several mechanisms that control iron metabolism and safeguard against iron deficiency. The main regulatory mechanism is situated in the gastrointestinal tract. The majority of iron absorption occurs in the duodenum, the first section of the small intestine. A number of dietary factors may affect iron absorption. When loss of iron is not sufficiently compensated by intake of iron from the diet, a state of iron deficiency develops over time. When this state is uncorrected, it leads to iron-deficiency anemia, a common type of anemia. Before anemia occurs, the medical condition of iron deficiency without anemia is called latent iron deficiency (LID).
Anemia is a condition characterized by inadequate red blood cells (erythrocytes) or hemoglobin. When the body lacks sufficient amounts of iron, production of the protein hemoglobin is reduced. Hemoglobin binds to oxygen, enabling red blood cells to supply oxygenated blood throughout the body. Women of child-bearing age, children, and people with poor diet are most susceptible to the disease. A primary cause of iron deficiency in non-pregnant women is menstrual bleeding, which accounts for their comparatively higher risk than men. Most cases of iron-deficiency anemia are mild. Alongside physical symptoms such as dizziness and shortness of breath, women with iron deficiency may also experience anxiety, depression, and restless legs syndrome. If left untreated, iron deficiency can cause problems such as an irregular heartbeat, pregnancy complications, and delayed growth in infants and children, which could affect their cognitive development and behavior.
Signs and symptoms
Symptoms of iron deficiency can occur even before the condition has progressed to iron deficiency anemia.
Symptoms of iron deficiency are not unique to iron deficiency (i.e. not pathognomonic). Iron is needed for many enzymes to function normally, so a wide range of symptoms may eventually emerge, either as the secondary result of the anemia, or as other primary results of iron deficiency. Symptoms of iron deficiency include:
fatigue
dizziness/lightheadedness
pallor
hair loss
twitches
irritability
weakness
pica
brittle or grooved nails
hair thinning
Plummer–Vinson syndrome: painful atrophy of the mucous membrane covering the tongue, the pharynx and the esophagus
impaired immune function
pagophagia
restless legs syndrome
in chronic cases, increase in blood pressure
Continued iron deficiency may progress to anemia and worsening fatigue. Thrombocytosis, or an elevated platelet count, can also result. A lack of sufficient iron levels in the blood is one reason that some people cannot donate blood.
Signs and symptoms in children
pale skin
fatigue
slowed growth and development
poor appetite
decrease in the size of testes
behavioral problems
abnormal rapid breathing
frequent infection
Iron requirements in young children to teenagers
Causes
blood loss (hemoglobin contains iron)
donation
excessive menstrual bleeding
non-menstrual bleeding
bleeding from the gastrointestinal tract or anus (ulcers, hemorrhoids, ulcerative colitis, stomach or colon cancer, etc.)
rarely, laryngological bleeding or from the respiratory tract
inadequate intake
substances (in diet or drugs) interfering with iron absorption
Fluoroquinolone antibiotics
malabsorption syndromes
inflammation, in which limiting iron availability is adaptive because it restricts bacterial growth during infection, but which is also present in many other chronic diseases such as inflammatory bowel disease and rheumatoid arthritis
parasitic infection
Though genetic defects causing iron deficiency have been studied in rodents, there are no known genetic disorders of human iron metabolism that directly cause iron deficiency.
Athletics
Possible reasons that athletics may contribute to lower iron levels includes mechanical hemolysis (destruction of red blood cells from physical impact), loss of iron through sweat and urine, gastrointestinal blood loss, and haematuria (presence of blood in urine). Although small amounts of iron are excreted in sweat and urine, these losses can generally be seen as insignificant even with increased sweat and urine production, especially considering that athletes' bodies appear to become conditioned to retain iron better. Mechanical hemolysis is most likely to occur in high-impact sports, especially among long-distance runners who experience "foot-strike hemolysis" from the repeated impact of their feet with the ground. Exercise-induced gastrointestinal bleeding is most likely to occur in endurance athletes. Haematuria in athletes is most likely to occur in those that undergo repetitive impacts on the body, particularly affecting the feet (such as running on a hard road, or Kendo) and hands (e.g. Conga or Candombe drumming). Additionally, athletes in sports that emphasize weight loss (e.g. ballet, gymnastics, marathon running, and cycling) as well as sports that emphasize high-carbohydrate, low-fat diets, may be at an increased risk for iron deficiency.
Inadequate intake
A U.S. federal survey of food consumption determined that for women and men over the age of 19, average iron consumption from foods and beverages was 13.1 and 18.0 mg/day, respectively. For women, 16% in the age range 14–50 years consumed less than the Estimated Average Requirement (EAR), for men ages 19 and up, fewer than 3%. Consumption data were updated in a 2014 U.S. government survey and reported that for men and women ages 20 and older the average iron intakes were, respectively, 16.6 and 12.6 mg/day. People in the U.S. usually obtain adequate amounts of iron from their diets. However, subgroups like infants, young children, teenaged girls, pregnant women, and premenopausal women are at risk of obtaining less than the EAR. Socio-economic and racial differences further affect the rates of iron deficiency.
Bioavailability
Iron is needed for bacterial growth, making its bioavailability an important factor in controlling infection. Blood plasma therefore carries iron tightly bound to transferrin, which cells take up by endocytosis, keeping the iron out of reach of bacteria. Between 15 and 20 percent of the protein content in human milk consists of lactoferrin, which binds iron; in cow's milk, the figure is only 2 percent. As a result, breast-fed babies have fewer infections. Lactoferrin is also concentrated in tears, saliva and at wounds, where it binds iron to limit bacterial growth. Egg white contains about 12% conalbumin, which withholds iron from bacteria that get through the egg shell (for this reason, prior to antibiotics, egg white was used to treat infections).
To reduce bacterial growth, plasma concentrations of iron are lowered in a variety of systemic inflammatory states due to increased production of hepcidin which is mainly released by the liver in response to increased production of pro-inflammatory cytokines such as interleukin-6. This functional iron deficiency will resolve once the source of inflammation is rectified; however, if not resolved, it can progress to anaemia of chronic inflammation. The underlying inflammation can be caused by fever, inflammatory bowel disease, infections, chronic heart failure (CHF), carcinomas, or following surgery.
Reflecting this link between iron bioavailability and bacterial growth, the taking of oral iron supplements in excess of 200 mg/day causes a relative overabundance of iron that can alter the types of bacteria that are present within the gut. There have been concerns regarding parenteral iron being administered whilst bacteremia is present, although this has not been borne out in clinical practice. A moderate iron deficiency, in contrast, can provide protection against acute infection, especially against organisms that reside within hepatocytes and macrophages, such as malaria and tuberculosis. This is mainly beneficial in regions with a high prevalence of these diseases and where standard treatment is unavailable.
Diagnosis
A complete blood count can reveal microcytic anemia, although this is not always present even when iron deficiency progresses to iron-deficiency anemia.
Low serum ferritin (see below)
Low serum iron
High TIBC (total iron binding capacity); by contrast, in anemia of chronic inflammation TIBC is typically low or normal.
The fecal occult blood test may be positive if the iron deficiency is the result of gastrointestinal bleeding, although the test's limited sensitivity means it can be negative even when enteral blood loss is present.
As always, laboratory values have to be interpreted with the lab's reference values in mind and considering all aspects of the individual clinical situation.
Serum ferritin can be elevated in inflammatory conditions, so a normal serum ferritin may not always exclude iron deficiency; its utility is improved by measuring a concurrent C-reactive protein (CRP). The level of serum ferritin that is viewed as "high" depends on the condition: in inflammatory bowel disease, for example, the threshold is 100 μg/L, whereas in chronic heart failure (CHF) it is 200 μg/L.
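As an illustration only, the condition-specific cut-offs and the CRP check described above can be combined into a simple decision sketch. The Python sketch below is a minimal illustration, not a validated clinical algorithm; the 30 μg/L general cut-off, the 5 mg/L CRP limit, and the function name are assumptions introduced here for the example.

    # Minimal illustrative sketch; thresholds follow the examples discussed above
    # (serum ferritin in micrograms per litre). Not a validated clinical algorithm.
    def ferritin_suggests_iron_deficiency(ferritin_ug_l, crp_mg_l, condition=None):
        # Condition-specific ferritin cut-offs mentioned in the text (assumed units: ug/L).
        thresholds = {"IBD": 100, "CHF": 200}
        cutoff = thresholds.get(condition, 30)  # 30 ug/L is an assumed general cut-off
        if ferritin_ug_l < cutoff:
            return True       # low ferritin is consistent with iron deficiency
        if crp_mg_l > 5:      # raised CRP: inflammation may be masking iron deficiency
            return None       # indeterminate; ferritin may be falsely "normal"
        return False          # ferritin adequate with no sign of inflammation

    print(ferritin_suggests_iron_deficiency(80, 2))         # False
    print(ferritin_suggests_iron_deficiency(80, 20))        # None (indeterminate)
    print(ferritin_suggests_iron_deficiency(80, 2, "IBD"))  # True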
Treatment
Before commencing treatment, there should be a definitive diagnosis of the underlying cause of the iron deficiency. This is particularly the case in older patients, who are most susceptible to colorectal cancer and the gastrointestinal bleeding it often causes. In adults, 60% of patients with iron-deficiency anemia may have underlying gastrointestinal disorders leading to chronic blood loss.
It is likely that the cause of the iron deficiency will need treatment as well.
Upon diagnosis, the condition can be treated with iron supplements. The choice of supplement depends upon the severity of the condition, the required speed of improvement (e.g., if awaiting elective surgery), and the likelihood of treatment being effective (e.g., if the patient has underlying IBD, is undergoing dialysis, or is receiving ESA therapy).
Examples of oral iron that are often used are ferrous sulfate, ferrous gluconate, or amino acid chelate tablets. Recent research suggests the replacement dose of iron, at least in the elderly with iron deficiency, may be as little as 15 mg per day of elemental iron.
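Supplement labels usually state the mass of the iron salt rather than the elemental iron it supplies, so a target such as 15 mg/day of elemental iron has to be converted. The sketch below shows this conversion; the elemental-iron fractions are commonly quoted approximations and should always be checked against the specific product label.

    # Approximate elemental-iron fractions of common oral iron salts
    # (illustrative values; confirm against the product label).
    ELEMENTAL_FRACTION = {
        "ferrous sulfate (heptahydrate)": 0.20,
        "ferrous gluconate": 0.12,
        "ferrous fumarate": 0.33,
    }

    def elemental_iron_mg(salt, tablet_mg):
        """Approximate elemental iron provided by one tablet of the given salt."""
        return tablet_mg * ELEMENTAL_FRACTION[salt]

    # A 325 mg ferrous sulfate tablet supplies roughly 65 mg of elemental iron,
    # well above a 15 mg/day replacement target.
    print(round(elemental_iron_mg("ferrous sulfate (heptahydrate)", 325)))  # ~65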
Low-certainty evidence suggests that treating IBD-related anemia with intravenous (IV) iron infusion may be more effective than oral iron therapy, with fewer people needing to stop treatment early due to adverse effects. The type of iron preparation may be an important determinant of clinical benefit. Moderate-certainty evidence suggests response to treatment may be higher when IV ferric carboxymaltose, rather than IV iron sucrose, is used, despite very-low-certainty evidence of increased adverse effects, including bleeding, in those receiving ferric carboxymaltose treatment.
Ferric maltol, marketed as Accrufer and Ferracru, is available in oral and IV preparations. When used as a treatment for IBD-related anemia, very-low-certainty evidence suggests a marked benefit with oral ferric maltol compared with placebo. However, it was unclear whether the IV preparation was more effective than oral ferric maltol.
A Cochrane review of controlled trials comparing intravenous (IV) iron therapy with oral iron supplements in people with chronic kidney disease found low-certainty evidence that people receiving IV iron were 1.71 times as likely to reach their target hemoglobin levels. Overall, hemoglobin was 0.71 g/dl higher in those treated with IV iron than in those treated with oral iron supplements. Iron stores in the liver, estimated by serum ferritin, were also 224.84 μg/L higher in those receiving IV iron. However, there was also low-certainty evidence that allergic reactions were more likely following IV iron therapy. It was unclear whether the type of iron therapy administration affects the risk of death from any cause, including cardiovascular death, or whether it may alter the number of people who require a blood transfusion or dialysis.
Food sources
Mild iron deficiency can be prevented or corrected by eating iron-rich foods and by cooking in an iron skillet. Because iron is a requirement for most plants and animals, a wide range of foods provide iron. The best sources of dietary iron contain heme iron, which is most easily absorbed and is not inhibited by medication or other dietary components; two examples are red meat and poultry. Non-heme sources do contain iron, though it has reduced bioavailability. Examples are lentils, beans, leafy vegetables, pistachios, tofu, fortified bread, and fortified breakfast cereals.
Iron from different foods is absorbed and processed differently by the body; for instance, iron in meat (heme iron source) is more easily absorbed than iron in grains and vegetables ("non-heme" iron sources). Minerals and chemicals in one type of food may also inhibit absorption of iron from another type of food eaten at the same time. For example, oxalates and phytic acid form insoluble complexes which bind iron in the gut before it can be absorbed.
Because iron from plant sources is less easily absorbed than the heme-bound iron of animal sources, vegetarians and vegans should have a somewhat higher total daily iron intake than those who eat meat, fish or poultry. Legumes and dark-green leafy vegetables like broccoli, kale and Asian greens are especially good sources of iron for vegetarians and vegans. However, spinach and Swiss chard contain oxalates which bind iron, making it almost entirely unavailable for absorption. Iron from non-heme sources is more readily absorbed if consumed with foods that contain either heme-bound iron or vitamin C; the enhancing effect of meat is attributed to a hypothesised "meat factor" that promotes iron absorption. The benefits of eating seasonings or condiments that have been fortified with iron for people with iron deficiencies are not clear. There is some evidence that iron-fortified condiments or seasonings may help reduce an iron deficiency; however, it is not clear whether this improves a person's health or prevents the person from developing anemia.
The iron content of foods, for both heme and non-heme sources, is commonly expressed as a percentage of the USDA Recommended Dietary Allowance (% RDA), which is 18 mg for women aged between 19 and 50, and 8 mg for men aged 19 and older as well as women aged 51 and older.
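As a worked illustration of how such a "% RDA" figure is derived, the short sketch below computes it for a hypothetical serving; only the 18 mg and 8 mg allowances come from the text, while the 2.7 mg serving value is invented for the example.

    # RDA values quoted above: 18 mg (women 19-50) and 8 mg (men 19+ and women 51+).
    RDA_MG = {"women_19_50": 18.0, "men_19_plus_women_51_plus": 8.0}

    def percent_rda(iron_mg_per_serving, group="women_19_50"):
        """Percentage of the daily iron RDA provided by one serving."""
        return 100.0 * iron_mg_per_serving / RDA_MG[group]

    # Hypothetical serving containing 2.7 mg of iron:
    print(round(percent_rda(2.7), 1))                              # 15.0 (% RDA, women 19-50)
    print(round(percent_rda(2.7, "men_19_plus_women_51_plus"), 1)) # 33.8 (% RDA, men 19+)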
Food recommendations for children
At around 6 months of age, children should begin eating solid foods that contain enough iron, which can come from both heme and non-heme sources.
Heme iron:
Red meat (for example, beef, pork, lamb, goat, or venison)
Fatty fish
Poultry (for example, chicken or turkey)
Eggs
Non-heme iron:
Iron-fortified infant cereals
Tofu
Beans and lentils
Dark green leafy vegetables
Iron deficiency can have serious health consequences that diet may not be able to quickly correct; hence, an iron supplement is often necessary if the iron deficiency has become symptomatic.
Blood transfusion
Blood transfusion is sometimes used to treat iron deficiency when there is hemodynamic instability. Transfusion is sometimes also considered for people who have chronic iron deficiency or who will soon go to surgery, but even if such people have low hemoglobin, they should be given oral treatment or intravenous iron.
Intravenous iron therapy for non-anaemic, iron-deficient adults
Current evidence is too limited to support any recommendation that intravenous iron therapy is beneficial for treating non-anaemic, iron-deficient adults. Further research in this area is needed, as the current body of evidence is of very low quality.
Cancer research
The presence of Helicobacter pylori in the stomach can cause inflammation and can lower the threshold for the development of gastric cancer. In the setting of iron deficiency, H. pylori causes more severe inflammation and the development of premalignant lesions. This inflammatory effect appears to be mediated, in part, through altered bile acid production including an increase in deoxycholic acid, a secondary bile acid implicated in colon cancer and other gastrointestinal cancers.
See also
Haemochromatosis - a condition in which the body stores too much iron
Bahima disease
CO2 fertilization effect
Further reading
Nutrition: Iron (2018). Centers for Disease Control and Prevention.
Occupational safety and health
Occupational safety and health (OSH) or occupational health and safety (OHS) is a multidisciplinary field concerned with the safety, health, and welfare of people at work (i.e., while performing duties required by one's occupation). OSH is related to the fields of occupational medicine and occupational hygiene and aligns with workplace health promotion initiatives. OSH also protects members of the general public who may be affected by the occupational environment.
According to the WHO/ILO Joint Estimates of the Work-related Burden of Disease and Injury, the official United Nations figures, almost 2 million people die each year due to exposure to occupational risk factors. Other widely cited estimates are higher: more than 2.78 million people die annually as a result of workplace-related accidents or diseases, corresponding to one death every fifteen seconds, with an additional 374 million non-fatal work-related injuries each year. The economic burden of occupational injury and death is estimated at nearly four per cent of the global gross domestic product each year. The human cost of this adversity is enormous.
In common-law jurisdictions, employers have the common law duty (also called duty of care) to take reasonable care of the safety of their employees. Statute law may, in addition, impose other general duties, introduce specific duties, and create government bodies with powers to regulate occupational safety issues. Details of this vary from jurisdiction to jurisdiction.
Prevention of workplace incidents and occupational diseases is addressed through the implementation of occupational safety and health programs at company level.
Definitions
The International Labour Organization (ILO) and the World Health Organization (WHO) share a common definition of occupational health. It was first adopted by the Joint ILO/WHO Committee on Occupational Health at its first session in 1950:
In 1995, a consensus statement was added:
An alternative definition for occupational health given by the WHO is: "occupational health deals with all aspects of health and safety in the workplace and has a strong focus on primary prevention of hazards."
The expression "occupational health", as originally adopted by the WHO and the ILO, refers to both short- and long-term adverse health effects. In more recent times, the expressions "occupational safety and health" and "occupational health and safety" have come into use (and have also been adopted in works by the ILO), based on the general understanding that occupational health refers to hazards associated to disease and long-term effects, while occupational safety hazards are those associated to work accidents causing injury and sudden severe conditions.
History
Research and regulation of occupational safety and health are a relatively recent phenomenon. As labor movements arose in response to worker concerns in the wake of the industrial revolution, workers' safety and health entered consideration as a labor-related issue.
Beginnings
Written works on occupational diseases began to appear by the end of the 15th century, when demand for gold and silver was rising due to the increase in trade, and iron, copper, and lead were also in demand from the nascent firearms market. Deeper mining became common as a consequence. In 1473, Ulrich Ellenbog, a German physician, wrote a short treatise, On the Poisonous Wicked Fumes and Smokes, focused on the coal, nitric acid, lead, and mercury fumes encountered by metal workers and goldsmiths. In 1587, the first work on the diseases of mine and smelter workers, by Paracelsus (1493–1541), was published posthumously; in it, he gave accounts of miners' "lung sickness". In 1556, Georgius Agricola's (1494–1553) De re metallica, a treatise on metallurgy, described accidents and diseases prevalent among miners and recommended practices to prevent them. Like Paracelsus, Agricola mentioned the dust that "eats away the lungs, and implants consumption."
The seeds of state intervention to correct social ills were sown during the reign of Elizabeth I by the Poor Laws, which originated in attempts to alleviate hardship arising from widespread poverty. While they were perhaps more to do with a need to contain unrest than morally motivated, they were significant in transferring responsibility for helping the needy from private hands to the state.
In 1713, Bernardino Ramazzini (1633–1714), often described as the father of occupational medicine and a precursor to occupational health, published his De morbis artificum diatriba (Dissertation on Workers' Diseases), which outlined the health hazards of chemicals, dust, metals, repetitive or violent motions, odd postures, and other disease-causative agents encountered by workers in more than fifty occupations. It was the first broad-ranging presentation of occupational diseases.
Percivall Pott (1714–1788), an English surgeon, described cancer in chimney sweeps (chimney sweeps' carcinoma), the first recognition of an occupational cancer in history.
The Industrial Revolution in Britain
The United Kingdom was the first nation to industrialize. Soon shocking evidence emerged of serious physical and moral harm suffered by children and young persons in the cotton textile mills, as a result of exploitation of cheap labor in the factory system. Responding to calls for remedial action from philanthropists and some of the more enlightened employers, in 1802 Sir Robert Peel, himself a mill owner, introduced a bill to parliament with the aim of improving their conditions. This would engender the Health and Morals of Apprentices Act 1802, generally believed to be the first attempt to regulate conditions of work in the United Kingdom. The act applied only to cotton textile mills and required employers to keep premises clean and healthy by twice yearly washings with quicklime, to ensure there were sufficient windows to admit fresh air, and to supply "apprentices" (i.e., pauper and orphan employees) with "sufficient and suitable" clothing and accommodation for sleeping. It was the first of the 19th century Factory Acts.
Charles Thackrah (1795–1833), another pioneer of occupational medicine, wrote a report on The State of Children Employed in Cotton Factories, which was sent to the Parliament in 1818. Thackrah recognized issues of inequalities of health in the workplace, with manufacturing in towns causing higher mortality than agriculture.
The Act of 1833 created a dedicated professional Factory Inspectorate. The initial remit of the Inspectorate was to police restrictions on the working hours in the textile industry of children and young persons (introduced to prevent chronic overwork, identified as leading directly to ill-health and deformation, and indirectly to a high accident rate).
In 1840 a Royal Commission published its findings on the state of conditions for the workers of the mining industry that documented the appallingly dangerous environment that they had to work in and the high frequency of accidents. The commission sparked public outrage which resulted in the Mines and Collieries Act of 1842. The act set up an inspectorate for mines and collieries which resulted in many prosecutions and safety improvements, and by 1850, inspectors were able to enter and inspect premises at their discretion.
On the urging of the Factory Inspectorate, a further act in 1844, giving similar restrictions on working hours for women in the textile industry, introduced a requirement for machinery guarding (but only in the textile industry, and only in areas that might be accessed by women or children). The 1844 act was the first to take a significant step toward improvement of workers' safety, as earlier legislation had focused on health aspects alone.
The first decennial British Registrar-General's mortality report was issued in 1851. Deaths were categorized by social classes, with class I corresponding to professionals and executives and class V representing unskilled workers. The report showed that mortality rates increased with the class number.
Continental Europe
Otto von Bismarck inaugurated the first social insurance legislation in 1883 and the first worker's compensation law in 1884 – the first of their kind in the Western world. Similar acts followed in other countries, partly in response to labor unrest.
United States
The United States was responsible for the first health program focusing on workplace conditions: the Marine Hospital Service, inaugurated in 1798 to provide care for merchant seamen. This was the beginning of what would become the US Public Health Service (USPHS).
The first worker compensation acts in the United States were passed in New York in 1910 and in Washington and Wisconsin in 1911. Later rulings included occupational diseases in the scope of the compensation, which was initially restricted to accidents.
In 1914 the USPHS set up the Office of Industrial Hygiene and Sanitation, the ancestor of the current National Institute for Occupational Safety and Health (NIOSH). In the early 20th century, workplace disasters were still common. For example, in 1911 a fire at the Triangle Shirtwaist Company in New York killed 146 workers, mostly women and immigrants. Most died trying to open exits that had been locked. Radium dial painter cancers, "phossy jaw", mercury and lead poisonings, silicosis, and other pneumoconioses were extremely common.
The enactment of the Federal Coal Mine Health and Safety Act of 1969 was quickly followed by the 1970 Occupational Safety and Health Act, which established the Occupational Safety and Health Administration (OSHA) and NIOSH in their current form.
Workplace hazards
A wide array of workplace hazards can damage the health and safety of people at work. These include, but are not limited to, "chemicals, biological agents, physical factors, adverse ergonomic conditions, allergens, a complex network of safety risks," as well as a broad range of psychosocial risk factors. Personal protective equipment can help protect against many of these hazards. A landmark study conducted by the World Health Organization and the International Labour Organization found that exposure to long working hours is the occupational risk factor with the largest attributable burden of disease, with an estimated 745,000 fatalities from ischemic heart disease and stroke events in 2016. This makes overwork the leading occupational health risk factor globally.
Physical hazards affect many people in the workplace. Occupational hearing loss is the most common work-related injury in the United States, with 22 million workers exposed to hazardous occupational noise levels at work and an estimated $242 million spent annually on worker's compensation for hearing loss disability. Falls are also a common cause of occupational injuries and fatalities, especially in construction, extraction, transportation, healthcare, and building cleaning and maintenance. Machines have moving parts, sharp edges, hot surfaces and other hazards with the potential to crush, burn, cut, shear, stab or otherwise strike or wound workers if used unsafely.
Biological hazards (biohazards) include infectious microorganisms such as viruses, bacteria and toxins produced by those organisms such as anthrax. Biohazards affect workers in many industries; influenza, for example, affects a broad population of workers. Outdoor workers, including farmers, landscapers, and construction workers, risk exposure to numerous biohazards, including animal bites and stings, urushiol from poisonous plants, and diseases transmitted through animals such as the West Nile virus and Lyme disease. Health care workers, including veterinary health workers, risk exposure to blood-borne pathogens and various infectious diseases, especially those that are emerging.
Dangerous chemicals can pose a chemical hazard in the workplace. There are many classifications of hazardous chemicals, including neurotoxins, immune agents, dermatologic agents, carcinogens, reproductive toxins, systemic toxins, asthmagens, pneumoconiotic agents, and sensitizers. Authorities such as regulatory agencies set occupational exposure limits to mitigate the risk of chemical hazards. International investigations are ongoing into the health effects of mixtures of chemicals, given that toxins can interact synergistically instead of merely additively. For example, there is some evidence that certain chemicals are harmful at low levels when mixed with one or more other chemicals. Such synergistic effects may be particularly important in causing cancer. Additionally, some substances (such as heavy metals and organohalogens) can accumulate in the body over time, thereby enabling small incremental daily exposures to eventually add up to dangerous levels with little overt warning.
Psychosocial hazards include risks to the mental and emotional well-being of workers, such as feelings of job insecurity, long work hours, and poor work-life balance. Research has documented the presence of psychological abuse in the workplace. A study by Gary Namie on workplace emotional abuse found that 31% of women and 21% of men who reported workplace emotional abuse exhibited three key symptoms of post-traumatic stress disorder (hypervigilance, intrusive imagery, and avoidance behaviors). Sexual harassment is another serious hazard found in workplaces.
By industry
Specific occupational safety and health risk factors vary depending on the specific sector and industry. Construction workers might be particularly at risk of falls, for instance, whereas fishermen might be particularly at risk of drowning. Similarly, psychosocial risks such as workplace violence are more pronounced for certain occupational groups such as health care employees, police, correctional officers and teachers.
Primary sector
Agriculture
Agriculture workers are often at risk of work-related injuries, lung disease, noise-induced hearing loss, skin disease, as well as certain cancers related to chemical use or prolonged sun exposure. On industrialized farms, injuries frequently involve the use of agricultural machinery. The most common cause of fatal agricultural injuries in the United States is tractor rollovers, which can be prevented by the use of roll over protection structures which limit the risk of injury in case a tractor rolls over. Pesticides and other chemicals used in farming can also be hazardous to worker health, and workers exposed to pesticides may experience illnesses or birth defects. As an industry in which family members, including children, commonly work alongside one another, agriculture is a common source of occupational injuries and illnesses among younger workers. Common causes of fatal injuries among young farm workers include drowning, machinery and motor vehicle-related accidents.
The 2010 NHIS-OHS found elevated prevalence rates of several occupational exposures in the agriculture, forestry, and fishing sector which may negatively impact health. These workers often worked long hours. The prevalence rate of working more than 48 hours a week among workers employed in these industries was 37%, and 24% worked more than 60 hours a week. Of all workers in these industries, 85% frequently worked outdoors compared to 25% of all US workers. Additionally, 53% were frequently exposed to vapors, gas, dust, or fumes, compared to 25% of all US workers.
Mining and oil and gas extraction
The mining industry still has one of the highest rates of fatalities of any industry. There are a range of hazards present in surface and underground mining operations. In surface mining, leading hazards include geological instability, contact with plant and equipment, rock blasting, thermal environments (heat and cold), respiratory hazards (black lung), etc. In underground mining, operational hazards include respiratory hazards, explosions and gas (particularly in coal mine operations), geological instability, electrical equipment, contact with plant and equipment, heat stress, inrush of bodies of water, falls from height, confined spaces, ionising radiation, etc.
According to data from the 2010 NHIS-OHS, workers employed in mining and oil and gas extraction industries had high prevalence rates of exposure to potentially harmful work organization characteristics and hazardous chemicals. Many of these workers worked long hours: 50% worked more than 48 hours a week and 25% worked more than 60 hours a week in 2010. Additionally, 42% worked non-standard shifts (not a regular day shift). These workers also had high prevalence of exposure to physical/chemical hazards. In 2010, 39% had frequent skin contact with chemicals. Among nonsmoking workers, 28% of those in mining and oil and gas extraction industries had frequent exposure to secondhand smoke at work. About two-thirds were frequently exposed to vapors, gas, dust, or fumes at work.
Secondary sector
Construction
Construction is one of the most dangerous occupations in the world, incurring more occupational fatalities than any other sector in both the United States and the European Union. In 2009, the fatal occupational injury rate among construction workers in the United States was nearly three times that for all workers. Falls are one of the most common causes of fatal and non-fatal injuries among construction workers. Proper safety equipment such as harnesses and guardrails, and procedures such as securing ladders and inspecting scaffolding, can curtail the risk of occupational injuries in the construction industry. Because accidents may have disastrous consequences for employees as well as organizations, it is of utmost importance to ensure the health and safety of workers and compliance with HSE construction requirements. Health and safety legislation in the construction industry involves many rules and regulations. For example, the requirement for a Construction (Design and Management) (CDM) coordinator has been aimed at improving health and safety on-site.
The 2010 National Health Interview Survey Occupational Health Supplement (NHIS-OHS) identified work organization factors and occupational psychosocial and chemical/physical exposures which may increase some health risks. Among all US workers in the construction sector, 44% had non-standard work arrangements (were not regular permanent employees) compared to 19% of all US workers, 15% had temporary employment compared to 7% of all US workers, and 55% experienced job insecurity compared to 32% of all US workers. Prevalence rates for exposure to physical/chemical hazards were especially high for the construction sector. Among nonsmoking workers, 24% of construction workers were exposed to secondhand smoke while only 10% of all US workers were exposed. Other physical/chemical hazards with high prevalence rates in the construction industry were frequently working outdoors (73%) and frequent exposure to vapors, gas, dust, or fumes (51%).
Tertiary sector
The service sector comprises diverse workplaces. Each type of workplace has its own health risks. While some occupations have become more mobile, others still require people to sit at desks. As the number of service sector jobs has risen in developed countries, more and more jobs have become sedentary, presenting an array of health problems that differ from health problems associated with manufacturing and the primary sector. Contemporary health problems include obesity. Some working conditions, such as occupational stress, workplace bullying, and overwork, have negative consequences for physical and mental health.
Tipped wage workers are at a higher risk of negative mental health outcomes like addiction or depression. "The higher prevalence of mental health problems may be linked to the precarious nature of service work, including lower and unpredictable wages, insufficient benefits, and a lack of control over work hours and assigned shifts." Close to 70% of tipped wage workers are women. Additionally, "almost 40 percent of people who work for tips are people of color: 18 percent are Latino, 10 percent are African American, and 9 percent are Asian. Immigrants are also overrepresented in the tipped workforce." According to data from the 2010 NHIS-OHS, hazardous physical/chemical exposures in the service sector were lower than national averages. On the other hand, potentially harmful work organization characteristics and psychosocial workplace exposures were relatively common in this sector. Among all workers in the service industry, 30% experienced job insecurity in 2010, 27% worked non-standard shifts (not a regular day shift), 21% had non-standard work arrangements (were not regular permanent employees).
On a per-employee basis, and owing to the manual labor involved, the US Postal Service, UPS and FedEx are the 4th, 5th and 7th most dangerous companies to work for in the US.
Healthcare and social assistance
Healthcare workers are exposed to many hazards that can adversely affect their health and well-being. Long hours, changing shifts, physically demanding tasks, violence, and exposures to infectious diseases and harmful chemicals are examples of hazards that put these workers at risk for illness and injury. Musculoskeletal injury (MSI) is the most common health hazard for healthcare workers and in workplaces overall. Injuries can be prevented by using proper body mechanics.
According to the Bureau of Labor statistics, US hospitals recorded 253,700 work-related injuries and illnesses in 2011, which is 6.8 work-related injuries and illnesses for every 100 full-time employees. The injury and illness rate in hospitals is higher than the rates in construction and manufacturing – two industries that are traditionally thought to be relatively hazardous.
Workplace fatality and injury statistics
Worldwide
An estimated 2.90 million work-related deaths occurred in 2019, an increase from 2.78 million deaths in 2015. About one-third of the total work-related deaths (31%) were due to circulatory diseases, while cancer contributed 29%, respiratory diseases 17%, and occupational injuries 11% (or about 319,000 fatalities). Other causes included work-related communicable diseases at 6%, neuropsychiatric conditions at 3%, and work-related digestive and genitourinary diseases at 1% each. The contribution of cancers and circulatory diseases to total work-related deaths increased from 2015, while deaths due to occupational injuries decreased. Although the rates of fatal and non-fatal work-related injuries were on a decreasing trend, the total numbers of deaths and non-fatal outcomes were on the rise. Cancers represented the most significant cause of mortality in high-income countries. The number of non-fatal occupational injuries for 2019 was estimated to be 402 million.
The mortality rate is unevenly distributed: the rate among employed males (108.3 per 100,000) is significantly higher than that among employed females (48.4 per 100,000). Occupational fatalities account for 6.7% of all deaths globally.
European Union
Certain EU member states admit to lacking quality control in occupational safety services, to situations in which risk analysis takes place without any on-site workplace visits, and to insufficient implementation of certain EU OSH directives. Disparities between member states result in differing impacts of occupational hazards on national economies. In the early 2000s, the total societal costs of work-related health problems and accidents varied from 2.6% to 3.8% of national GDP across the member states.
In 2021, in the EU-27 as a whole, 93% of deaths due to workplace injury were of males.
Russia
Under Stalin, the Soviet regime decided that the number of reported accidents and occupational diseases should be reduced to zero, and this declining tendency in the reported figures persisted in the Russian Federation into the early 21st century. However, as in previous years, data reporting and publication were incomplete and manipulated, so the actual numbers of work-related diseases and accidents are unknown. The ILO reports that, according to the information provided by the Russian government, there are 190,000 work-related fatalities each year, of which 15,000 are due to occupational accidents.
After the demise of the USSR, enterprises came to be owned by oligarchs who were not interested in upholding safe and healthy conditions in the workplace. Expenditure on equipment modernization was minimal and the share of harmful workplaces increased. The government did not interfere, and sometimes it assisted employers. At first, the increase in occupational diseases and accidents was slow, because in the 1990s it was offset by mass deindustrialization. In the 2000s, however, deindustrialization slowed and occupational diseases and injuries started to rise in earnest. In the 2010s the Ministry of Labor therefore introduced federal law no. 426-FZ. This piece of legislation has been described as ineffective and based on the superficial assumption that issuing personal protective equipment to employees amounts to a real improvement of working conditions. Meanwhile, the Ministry of Health made significant changes to the methods of risk assessment in the workplace. However, specialists from the Izmerov Research Institute of Occupational Health found that the apparent post-2014 decrease in the share of employees engaged in hazardous working conditions results from the change in definitions consequent to the Ministry of Health's decision and does not reflect actual improvements. This was most clearly shown in the results for the aluminum industry.
Further problems in the accounting of workplace fatalities arise from the fact that records are collected and published by multiple Russian federal entities rather than a single authority. In 2008 alone, 2,074 accidents at work may have gone unreported in official government sources.
United Kingdom
In the UK there were 135 fatal injuries at work in financial year 2022–2023, compared with 651 in 1974 (the year when the Health and Safety at Work Act was promulgated). The fatal injury rate declined from 2.1 fatalities per 100,000 workers in 1981 to 0.41 in financial year 2022–2023. Over recent decades, reductions in both fatal and non-fatal workplace injuries have been very significant. However, illness statistics have not uniformly improved: while musculoskeletal disorders have diminished, the rate of self-reported work-related stress, depression or anxiety has increased, and the rate of mesothelioma deaths has remained broadly flat (due to past asbestos exposures).
United States
The Occupational Safety and Health Statistics (OSHS) program in the Bureau of Labor Statistics of the United States Department of Labor compiles information about workplace fatalities and non-fatal injuries in the United States. The OSHS program produces three annual reports:
Counts and rates of nonfatal occupational injuries and illnesses by detailed industry and case type (SOII summary data)
Case circumstances and worker demographic data for nonfatal occupational injuries and illnesses resulting in days away from work (SOII case and demographic data)
Counts and rates of fatal occupational injuries (CFOI data)
The Bureau also uses tools like AgInjuryNews.org to identify and compile additional sources of fatality reports for their datasets.
Between 1913 and 2013, workplace fatalities dropped by approximately 80%. In 1970, an estimated 14,000 workers were killed on the job; by 2021, despite the workforce having more than doubled since then, workplace deaths were down to about 5,190. According to the Census of Fatal Occupational Injuries, 5,486 people died on the job in 2022, up from 5,190 in 2021. The fatal injury rate was 3.7 per 100,000 full-time equivalent workers. The decrease in the mortality rate is only partly (about 10–15%) explained by the deindustrialization of the US over the last 40 years.
About 3.5 million nonfatal workplace injuries and illnesses were reported by private industry employers in 2022, occurring at a rate of 3.0 cases per 100 full-time workers.
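The per-100-worker and per-100,000-FTE figures quoted above are normalized by hours worked. The sketch below illustrates the usual hours-based normalization, assuming the conventional 2,000 hours per full-time worker per year; the 148 million FTE employment figure is a rough assumption chosen only to reproduce the quoted 3.7 rate, not an official BLS input.

    # Hours-based rate normalization, assuming 2,000 hours = one full-time worker-year.
    HOURS_PER_FTE_YEAR = 2_000

    def nonfatal_rate_per_100_workers(cases, total_hours_worked):
        # recordable cases per 100 full-time equivalent workers
        return cases * (100 * HOURS_PER_FTE_YEAR) / total_hours_worked

    def fatal_rate_per_100k_workers(deaths, total_hours_worked):
        # deaths per 100,000 full-time equivalent workers
        return deaths * (100_000 * HOURS_PER_FTE_YEAR) / total_hours_worked

    # Rough example: 5,486 deaths spread over roughly 148 million FTE worker-years.
    hours = 148_000_000 * HOURS_PER_FTE_YEAR
    print(round(fatal_rate_per_100k_workers(5_486, hours), 1))  # ~3.7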
Management systems
Companies may adopt a safety and health management system (SMS), either voluntarily or because required by applicable regulations, to deal in a structured and systematic way with safety and health risks in their workplace. An SMS provides a systematic way to assess and improve prevention of workplace accidents and incidents based on structured management of workplace risks and hazards. It must be adaptable to changes in the organization's business and legislative requirements. It is usually based on the Deming cycle, or plan-do-check-act (PDCA) principle. An effective SMS should:
Define how the organization is set up to manage risk
Identify workplace hazards and implement suitable controls
Implement effective communication across all levels of the organization
Implement a process to identify and correct non-conformity and non-compliance issues
Implement a continual improvement process
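As a rough illustration of how the plan-do-check-act structure behind such a system might be represented in software, the sketch below runs one PDCA pass over a toy hazard register; the class name, phases and example hazard are assumptions for illustration and do not come from any particular standard.

    # Illustrative PDCA pass over a toy hazard register (not from any specific standard).
    from dataclasses import dataclass

    @dataclass
    class HazardRecord:
        description: str
        control: str = ""        # planned control measure
        verified: bool = False   # set during the "check" phase once the control is audited

    def pdca_pass(register):
        # Plan: flag hazards that still need a control decided by risk assessment.
        for record in register:
            if not record.control:
                record.control = "to be determined by risk assessment"
        # Do: implementing the controls happens outside this sketch.
        # Check: list controls that have not yet been verified as effective.
        unverified = [r for r in register if not r.verified]
        # Act: the unverified items feed back into the next planning round.
        return unverified

    register = [HazardRecord("manual handling of heavy loads", "mechanical lifting aids")]
    print(pdca_pass(register))  # hazards whose controls still await verification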
Management standards across a range of business functions such as environment, quality and safety are now being designed so that these traditionally disparate elements can be integrated and managed within a single business management system rather than as separate and stand-alone functions. Therefore, some organizations dovetail other management system functions, such as process safety, environmental resource management or quality management, together with safety management to meet regulatory requirements, industry sector requirements and their own internal and discretionary standard requirements.
Standards
International
The ILO published ILO-OSH 2001 on Guidelines on Occupational Safety and Health Management Systems to assist organizations with introducing OSH management systems. These guidelines encouraged continual improvement in employee health and safety, achieved via a constant process of policy; organization; planning and implementation; evaluation; and action for improvement, all supported by constant auditing to determine the success of OSH actions.
From 1999 to 2018, OHSAS 18001 was adopted as a British and Polish standard and widely used internationally. It was developed by a selection of trade bodies, international standards and certification bodies to address a gap where no third-party certifiable international standard existed. It was designed for integration with ISO 9001 and ISO 14001.
OHSAS 18001 was replaced by ISO 45001, which was published in March 2018 and implemented in March 2021.
National
National management system standards for occupational health and safety include AS/NZS 4801 for Australia and New Zealand (now superseded by ISO 45001), CSA Z1000:14 for Canada (which is due to be discontinued in favor of CSA Z45001:19, the Canadian adoption of ISO 45001) and ANSI/ASSP Z10 for the United States. In Germany, the Bavarian state government, in collaboration with trade associations and private companies, issued their OHRIS standard for occupational health and safety management systems. A new revision was issued in 2018. The Taiwan Occupational Safety and Health Management System (TOSHMS) was issued in 1997 under the auspices of Taiwan's Occupational Safety and Health Administration.
Identifying OSH hazards and assessing risk
Hazards, risks, outcomes
The terminology used in OSH varies between countries, but generally speaking:
A hazard is something that can cause harm if not controlled.
The outcome is the harm that results from an uncontrolled hazard.
A risk is a combination of the probability that a particular outcome may occur and the severity of the harm involved.
"Hazard", "risk", and "outcome" are used in other fields to describe e.g., environmental damage or damage to equipment. However, in the context of OSH, "harm" generally describes the direct or indirect degradation, temporary or permanent, of the physical, mental, or social well-being of workers. For example, repetitively carrying out manual handling of heavy objects is a hazard. The outcome could be a musculoskeletal disorder (MSD) or an acute back or joint injury. The risk can be expressed numerically (e.g., a 0.5 or 50/50 chance of the outcome occurring during a year), in relative terms (e.g., "high/medium/low"), or with a multi-dimensional classification scheme (e.g., situation-specific risks).
Hazard identification
Hazard identification is an important step in the overall risk assessment and risk management process. It is where individual work hazards are identified, assessed and controlled or eliminated as close to source (location of the hazard) as reasonably practicable. As technology, resources, social expectations or regulatory requirements change, hazard analysis focuses controls more closely toward the source of the hazard. Thus, hazard control is a dynamic program of prevention. Hazard-based programs also have the advantage of not assigning or implying that there are "acceptable risks" in the workplace. A hazard-based program may not be able to eliminate all risks, but neither does it accept "satisfactory" – but still risky – outcomes. Moreover, because those who calculate and manage the risk are usually managers while those exposed to the risks are a different group, a hazard-based approach can bypass a conflict inherent in a risk-based approach.
The information gathered from sources should apply to the specific type of work from which the hazards can arise. Examples of these sources include interviews with people who have worked in the field of the hazard, history and analysis of past incidents, and official reports of work and the hazards encountered. Of these, personnel interviews may be the most critical in identifying undocumented practices, events, releases, hazards and other relevant information. Once the information is gathered from a collection of sources, it is recommended that it be digitally archived (to allow for quick searching) and that a physical set of the same information be kept so that it remains accessible. One innovative way to display complex historical hazard information is with a historical hazards identification map, which distills the hazard information into an easy-to-use graphical format.
Risk assessment
Modern occupational safety and health legislation usually demands that a risk assessment be carried out prior to making an intervention. This assessment should:
Identify the hazards
Identify all affected by the hazard and how
Evaluate the risk
Identify and prioritize appropriate control measures.
The calculation of risk is based on the likelihood or probability of the harm being realized and the severity of the consequences. This can be expressed mathematically as a quantitative assessment (by assigning integer scores to low, medium and high likelihood and severity and multiplying them to obtain a risk factor), or qualitatively as a description of the circumstances by which the harm could arise.
The assessment should be recorded and reviewed periodically and whenever there is a significant change to work practices. The assessment should include practical recommendations to control the risk. Once recommended controls are implemented, the risk should be re-calculated to determine if it has been lowered to an acceptable level. Generally speaking, newly introduced controls should lower risk by one level, i.e., from high to medium or from medium to low.
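A minimal sketch of the quantitative approach just described is shown below: integer likelihood and severity scores are multiplied into a risk factor and banded into low, medium and high. The 3-point scales and the band boundaries are illustrative assumptions rather than values taken from any particular standard, and the example shows a control lowering the risk by one level, as suggested above.

    # Illustrative 3x3 risk matrix: scores run from 1 (low) to 3 (high); bands are assumptions.
    def risk_factor(likelihood, severity):
        return likelihood * severity  # product in the range 1..9

    def risk_band(factor):
        if factor >= 6:
            return "high"
        if factor >= 3:
            return "medium"
        return "low"

    # Hypothetical manual-handling example:
    before = risk_factor(likelihood=3, severity=2)  # frequent handling, moderate harm
    after = risk_factor(likelihood=2, severity=2)   # likelihood reduced by a new control
    print(risk_band(before), "->", risk_band(after))  # high -> medium (one level lower)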
National legislation and public organizations
Occupational safety and health practice vary among nations with different approaches to legislation, regulation, enforcement, and incentives for compliance. In the EU, for example, some member states promote OSH by providing public monies as subsidies, grants or financing, while others have created tax system incentives for OSH investments. A third group of EU member states has experimented with using workplace accident insurance premium discounts for companies or organizations with strong OSH records.
Australia
In Australia, four of the six states and both territories have enacted and administer harmonized work health and safety legislation in accordance with the Intergovernmental Agreement for Regulatory and Operational Reform in Occupational Health and Safety. Each of these jurisdictions has enacted work health and safety legislation and regulations based on the Commonwealth Work Health and Safety Act 2011 and common codes of practice developed by Safe Work Australia. Some jurisdictions have also included mine safety under the model approach. However, most have retained separate legislation for the time being. In August 2019, Western Australia committed to join nearly every other state and territory in implementing the harmonized Model WHS Act, Regulations and other subsidiary legislation. Victoria has retained its own regime, although the Model WHS laws themselves drew heavily on the Victorian approach.
Canada
In Canada, workers are covered by provincial or federal labor codes depending on the sector in which they work. Workers covered by federal legislation (including those in mining, transportation, and federal employment) are covered by the Canada Labour Code; all other workers are covered by the health and safety legislation of the province in which they work. The Canadian Centre for Occupational Health and Safety (CCOHS), an agency of the Government of Canada, was created in 1978 by an act of parliament. The act was based on the belief that all Canadians had "a fundamental right to a healthy and safe working environment." CCOHS is mandated to promote safe and healthy workplaces and help prevent work-related injuries and illnesses.
China
In China, the Ministry of Health is responsible for occupational disease prevention and the State Administration of Work Safety for workplace safety issues. The Work Safety Law (安全生产法) was issued on 1 November 2002. The Occupational Disease Control Act came into force on 1 May 2002. In 2018, the National Health Commission (NHC) was formally established to formulate national health policies. The NHC formulated the "National Occupational Disease Prevention and Control Plan (2021–2025)" in the context of the activities leading to the "Healthy China 2030" initiative.
European Union
The European Agency for Safety and Health at Work was founded in 1994. In the European Union, member states have enforcing authorities to ensure that the basic legal requirements relating to occupational health and safety are met. In many EU countries, there is strong cooperation between employer and worker organizations (e.g., unions) to ensure good OSH performance, as it is recognized this has benefits for both the worker (through maintenance of health) and the enterprise (through improved productivity and quality).
Member states have all transposed into their national legislation a series of directives that establish minimum standards on occupational health and safety. These directives (of which there are about 20 on a variety of topics) follow a similar structure requiring the employer to assess workplace risks and put in place preventive measures based on a hierarchy of hazard control. This hierarchy starts with elimination of the hazard and ends with personal protective equipment.
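The hierarchy referred to above runs from elimination of the hazard down to personal protective equipment. A minimal sketch of choosing the most preferred control that is actually feasible is shown below; the intermediate tiers listed (substitution, engineering controls, administrative controls) follow the commonly cited hierarchy-of-controls ordering and are assumptions here, since the directives' exact wording is not reproduced in this section.

    # Commonly cited hierarchy of controls, from most to least preferred (assumed ordering).
    HIERARCHY = [
        "elimination",
        "substitution",
        "engineering controls",
        "administrative controls",
        "personal protective equipment",
    ]

    def highest_feasible_control(feasible):
        """Return the most preferred control measure among those judged feasible."""
        for measure in HIERARCHY:
            if measure in feasible:
                return measure
        return None

    # Example: elimination is not practicable, so substitution is selected.
    print(highest_feasible_control({"substitution", "administrative controls"}))  # substitution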
Denmark
In Denmark, occupational safety and health is regulated by the Danish Act on Working Environment and Cooperation at the Workplace. The Danish Working Environment Authority (Arbejdstilsynet) carries out inspections of companies, draws up more detailed rules on health and safety at work and provides information on health and safety at work. The result of each inspection is made public on the web pages of the Danish Working Environment Authority so that the general public, current and prospective employees, customers and other stakeholders can inform themselves about whether a given organization has passed the inspection.
Netherlands
In the Netherlands, the laws for safety and health at work are laid down in the Working Conditions Act (Arbeidsomstandighedenwet). Apart from the laws directly addressing safety and health in working environments, the private sector has added health and safety rules in working conditions policies (Arbeidsomstandighedenbeleid), which are specified per industry. The Ministry of Social Affairs and Employment (SZW) monitors adherence to the rules through its inspection service. This inspection service investigates industrial accidents and can suspend work and impose fines when it deems the Working Conditions Act has been violated. Companies can get certified with a VCA certificate for safety, health and environment performance. All employees have to obtain a VCA certificate too, with which they can prove that they know how to work according to the current and applicable safety and environmental regulations.
Ireland
The main health and safety regulation in Ireland is the Safety, Health and Welfare at Work Act 2005, which replaced earlier legislation from 1989. The Health and Safety Authority, based in Dublin, is responsible for enforcing health and safety at work legislation.
Spain
In Spain, occupational safety and health is regulated by the Spanish Act on Prevention of Labor Risks. The Ministry of Labor is the authority responsible for issues relating to labor environment. The National Institute for Safety and Health at Work (Instituto Nacional de Seguridad y Salud en el Trabajo, INSST) is the government's scientific and technical organization specialized in occupational safety and health.
Sweden
In Sweden, occupational safety and health is regulated by the Work Environment Act. The Swedish Work Environment Authority (Arbetsmiljöverket) is the government agency responsible for issues relating to the working environment. The agency works to disseminate information and furnish advice on OSH, has a mandate to carry out inspections, and a right to issue stipulations and injunctions to any non-compliant employer.
India
In India, the Ministry of Labour and Employment formulates national policies on occupational safety and health in factories and docks with advice and assistance from its Directorate General Factory Advice Service and Labour Institutes (DGFASLI), and enforces its policies through inspectorates of factories and inspectorates of dock safety. The DGFASLI provides technical support in formulating rules, conducting occupational safety surveys and administering occupational safety training programs.
Indonesia
In Indonesia, the Ministry of Manpower (Kementerian Ketenagakerjaan, or Kemnaker) is responsible for ensuring the safety, health and welfare of workers. Important OHS acts include the Occupational Safety Act 1970 and the Occupational Health Act 1992. Sanctions, however, remain low (a maximum fine of 15 million rupiah and/or a maximum of one year in prison) and violations are still very frequent.
Japan
The Japanese Ministry of Health, Labor and Welfare (MHLW) is the governmental agency overseeing occupational safety and health in Japan. The MHLW is responsible for enforcing the Industrial Safety and Health Act of 1972 – the key piece of OSH legislation in Japan – setting regulations and guidelines, supervising labor inspectors who monitor workplaces for compliance with safety and health standards, investigating accidents, and issuing orders to improve safety conditions. The Labor Standards Bureau is an arm of MHLW tasked with supervising and guiding businesses, inspecting manufacturing facilities for safety and compliance, investigating accidents, collecting statistics, enforcing regulations and administering fines for safety violations, and paying accident compensation for injured workers.
The Japan Industrial Safety and Health Association (JISHA) is a non-profit organization established under the Industrial Safety and Health Act of 1972. It works closely with MHLW, the regulatory body, to promote workplace safety and health. The responsibilities of JISHA include providing education and training on occupational safety and health, conducting research and surveys on workplace safety and health issues, offering technical guidance and consultations to businesses, disseminating information and raising awareness about occupational safety and health, and collaborating with international organizations to share best practices and improve global workplace safety standards.
The National Institute of Occupational Safety and Health, Japan (JNIOSH) conducts research to support governmental policies in occupational safety and health. The organization categorizes its research into project studies, cooperative research, fundamental research, and government-requested research. Each category focuses on specific themes, from preventing accidents and ensuring workers' health to addressing changes in employment structure. The organization sets clear goals, develops road maps, and collaborates with the Ministry of Health, Labor and Welfare to discuss progress and policy contributions.
Malaysia
In Malaysia, the Department of Occupational Safety and Health (DOSH) under the Ministry of Human Resources is responsible for ensuring that the safety, health and welfare of workers in both the public and private sectors are upheld. DOSH is responsible for enforcing the Factories and Machinery Act 1967 and the Occupational Safety and Health Act 1994. Malaysia has a statutory mechanism for worker involvement through elected health and safety representatives and health and safety committees. This followed a similar approach originally adopted in Scandinavia.
Saudi Arabia
In Saudi Arabia, the Ministry of Human Resources and Social Development administrates workers' rights and the labor market as a whole, consistent with human rights rules upheld by the Human Rights Commission of the kingdom.
Singapore
In Singapore, the Ministry of Manpower (MOM) is the government agency in charge of OHS policies and enforcement. The key piece of legislation regulating aspects of OHS is the Workplace Safety and Health Act. The MOM promotes and manages campaigns against unsafe work practices, such as when working at height, operating cranes and in traffic management. Examples include Operation Cormorant and the Falls Prevention Campaign.
South Africa
In South Africa the Department of Employment and Labour is responsible for occupational health and safety inspection and enforcement in the commercial and industrial sectors, with the exclusion of mining, where the Department of Mineral Resources is responsible. The main statutory legislation on health and safety in the jurisdiction of the Department of Employment and Labour is the OHS Act or OHSA (Act No. 85 of 1993: Occupational Health and Safety Act, as amended by the Occupational Health and Safety Amendment Act, No. 181 of 1993). Regulations implementing the OHS Act include:
General Safety Regulations, 1986
Environmental Regulations for Workplaces, 1987
Driven Machinery Regulations, 1988
General Machinery Regulations, 1988
Noise Induced Hearing Loss Regulations, 2003
Pressure Equipment Regulations, 2004
General Administrative Regulations, 2003
Diving Regulations, 2009
Construction Regulations, 2014
Syria
In Syria, health and safety is the responsibility of the Ministry of Social Affairs and Labor.
Taiwan
In Taiwan, the Occupational Safety and Health Administration of the Ministry of Labor is in charge of occupational safety and health. The matter is governed under the Occupational Safety and Health Act.
United Arab Emirates
In the United Arab Emirates, national OSH legislation is based on the Federal Law on Labor (1980). Order No. 32 of 1982 on Protection from Hazards and Ministerial Decision No. 37/2 of 1982 are also of importance. The competent authority for safety and health at work at the federal level is the Ministry of Human Resources and Emiratisation (MoHRE).
United Kingdom
Health and safety legislation in the UK is drawn up and enforced by the Health and Safety Executive (HSE) and local authorities under the Health and Safety at Work etc. Act 1974 (HASAWA or HSWA). Section 2 of HASAWA introduced a general duty on employers to ensure, so far as is reasonably practicable, the health, safety and welfare at work of all their employees. The intention was to provide a legal framework supported by codes of practice which, while not legally binding in themselves, establish a strong presumption as to what is reasonably practicable (deviations from them can be justified by appropriate risk assessment). The previous reliance on detailed prescriptive rule-setting was seen as having failed to respond rapidly enough to technological change, leaving new technologies potentially unregulated or inappropriately regulated. HSE has continued to make some regulations imposing absolute duties (where something must be done with no "reasonable practicability" test), but in the UK the regulatory trend is away from prescriptive rules and toward goal setting and risk assessment. Recent major changes to the laws governing asbestos and fire safety management embrace the concept of risk assessment. The other key aspect of the UK legislation is a statutory mechanism for worker involvement through elected health and safety representatives and health and safety committees. This followed a similar approach in Scandinavia, and that approach has since been adopted in countries such as Australia, Canada, New Zealand and Malaysia.
The Health and Safety Executive service dealing with occupational medicine has been the Employment Medical Advisory Service. In 2014 a new occupational health organization, the Health and Work Service, was created to provide advice and assistance to employers in order to help employees on long-term sick leave return to work. The service, funded by the government, offers medical assessments and treatment plans, on a voluntary basis, to people on long-term absence from their employer; in return, the government no longer foots the bill for statutory sick pay provided by the employer to the individual.
United States
In the United States, President Richard Nixon signed the Occupational Safety and Health Act into law on 29 December 1970. The act created the three agencies which administer OSH: the Occupational Safety and Health Administration (OSHA), the National Institute for Occupational Safety and Health (NIOSH), and the Occupational Safety and Health Review Commission (OSHRC). The act authorized OSHA to regulate private employers in the 50 states, the District of Columbia, and territories. It includes a general duty clause (29 U.S.C. §654, 5(a)) requiring an employer to comply with the Act and regulations derived from it, and to provide employees with "employment and a place of employment which are free from recognized hazards that are causing or are likely to cause [them] death or serious physical harm."
OSHA was established in 1971 under the Department of Labor. It has headquarters in Washington, DC, and ten regional offices, further broken down into districts, each organized into three sections: compliance, training, and assistance. Its stated mission is "to ensure safe and healthful working conditions for workers by setting and enforcing standards and by providing training, outreach, education and assistance." The original plan was for OSHA to oversee 50 state plans with OSHA funding 50% of each plan, but this did not work out that way: there are 26 approved state plans (with four covering only public employees) and OSHA manages the plan in the states not participating.
OSHA develops safety standards in the Code of Federal Regulations and enforces those safety standards through compliance inspections conducted by Compliance Officers; enforcement resources are focused on high-hazard industries. Worksites may apply to enter OSHA's Voluntary Protection Program (VPP). A successful application leads to an on-site inspection; if this is passed, the site gains VPP status, and OSHA no longer inspects it annually or (normally) visits it, barring a fatal accident or an employee complaint, until VPP revalidation (after three to five years). VPP sites generally have injury and illness rates less than half the average for their industry.
OSHA has a number of specialists in local offices to provide information and training to employers and employees at little or no cost. Similarly OSHA produces a range of publications and funds consultation services available for small businesses.
OSHA has strategic partnership and alliance programs to develop guidelines, assist in compliance, share resources, and educate workers in OHS. OSHA manages Susan B. Harwood grants to non-profit organizations to train workers and employers to recognize, avoid, and prevent safety and health hazards in the workplace. Grants focus on small business, hard-to-reach workers and high-hazard industries.
The National Institute for Occupational Safety and Health (NIOSH), also created under the Occupational Safety and Health Act, is the federal agency responsible for conducting research and making recommendations for the prevention of work-related injury and illness. NIOSH is part of the Centers for Disease Control and Prevention (CDC) within the Department of Health and Human Services.
Professional roles and responsibilities
Those in the field of occupational safety and health come from a wide range of disciplines and professions including medicine, occupational medicine, epidemiology, physiotherapy and rehabilitation, psychology, human factors and ergonomics, and many others. Professionals advise on a broad range of occupational safety and health matters. These include how to avoid particular pre-existing conditions causing a problem in the occupation, correct posture, frequency of rest breaks, preventive actions that can be undertaken, and so forth. The quality of occupational safety is characterized by (1) the indicators reflecting the level of industrial injuries, (2) the average number of days of incapacity for work per employer, (3) employees' satisfaction with their work conditions and (4) employees' motivation to work safely.
The main tasks undertaken by the OSH practitioner include:
Inspecting, testing and evaluating workplace environments, programs, equipment, and practices to ensure that they follow government safety regulations.
Designing and implementing workplace programs and procedures that control or prevent chemical, physical, or other risks to workers.
Educating employers and workers about maintaining workplace safety.
Demonstrating use of safety equipment and ensuring proper use by workers.
Investigating incidents to determine the cause and possible prevention.
Preparing written reports of their findings.
OSH specialists examine worksites for environmental or physical factors that could harm employee health, safety, comfort or performance. They then find ways to mitigate those risk factors. For example, they may notice potentially hazardous conditions inside a chemical plant and suggest changes to lighting, equipment, materials, or ventilation. OSH technicians assist specialists by collecting data on work environments and implementing the worksite improvements that specialists plan. Technicians also may check to make sure that workers are using required protective gear, such as masks and hardhats. OSH specialists and technicians may develop and conduct employee training programs. These programs cover a range of topics, such as how to use safety equipment correctly and how to respond in an emergency. In the event of a workplace safety incident, specialists and technicians investigate its cause. They then analyze data from the incident, such as the number of people impacted, and look for trends in occurrence. This evaluation helps them to recommend improvements to prevent future incidents.
Given the high demand in society for health and safety provisions at work based on reliable information, OSH professionals should find their roots in evidence-based practice. A newer term is "evidence-informed decision making". Evidence-based practice can be defined as the use of evidence from the literature and other evidence-based sources for advice and decisions that favor the health, safety, well-being, and work ability of workers. Evidence-based information must therefore be integrated with professional expertise and the workers' values. Contextual factors related to legislation, culture, and financial and technical possibilities must also be considered, and ethical considerations should be heeded.
The roles and responsibilities of OSH professionals vary regionally but may include evaluating working environments, developing, endorsing and encouraging measures that might prevent injuries and illnesses, providing OSH information to employers, employees, and the public, providing medical examinations, and assessing the success of worker health programs.
The Netherlands
In the Netherlands, the required tasks for health and safety staff are only summarily defined and include:
Providing voluntary medical examinations.
Providing a consulting room on the work environment to the workers.
Providing health assessments (if needed for the job concerned).
Dutch law influences the job of the safety professional mainly through the requirement on employers to use the services of a certified working-conditions service for advice. A certified service must employ sufficient numbers of four types of certified experts to cover the risks in the organizations which use the service:
A safety professional
An occupational hygienist
An occupational physician
A work and organization specialist.
In 2004, 14% of health and safety practitioners in the Netherlands had an MSc, 63% had a BSc, and 23% had training as an OSH technician.
Norway
In Norway, the main required tasks of an occupational health and safety practitioner include:
Systematic evaluations of the working environment.
Endorsing preventive measures which eliminate causes of illnesses in the workplace.
Providing information on the subject of employees' health.
Providing information on occupational hygiene, ergonomics, and environmental and safety risks in the workplace.
In 2004, 37% of health and safety practitioners in Norway had an MSc, 44% had a BSc, and 19% had training as an OSH technician.
Education and training
Formal education
There are multiple levels of training applicable to the field of occupational safety and health. Programs range from individual non-credit certificates and awareness courses focusing on specific areas of concern, to full doctoral programs. The University of Southern California was one of the first schools in the US to offer a PhD program focusing on the field. Multiple master's degree programs also exist, such as those of Indiana State University, which offers MSc and MA programs. Other masters-level qualifications include the MSc and Master of Research (MRes) degrees offered by the University of Hull in collaboration with the National Examination Board in Occupational Safety and Health (NEBOSH). Graduate programs are designed to train educators, as well as high-level practitioners.
Many OSH generalists focus on undergraduate studies; programs within schools, such as the University of North Carolina's online BSc in environmental health and safety, fill a large majority of hygienist needs. However, smaller companies often do not have full-time safety specialists on staff and instead assign the responsibility to a current employee. Individuals who find themselves in such positions, or who wish to enhance their marketability for job searches and promotion, may seek out a credit certificate program. For example, the University of Connecticut's online OSH certificate familiarizes students with overarching concepts through a 15-credit (5-course) program. Programs such as these are often adequate tools for building a strong educational platform for new safety managers with a minimal outlay of time and money. Further, most hygienists seek certification by organizations that train in specific areas of concentration, focusing on isolated workplace hazards. The American Society of Safety Professionals (ASSP), Board for Global EHS Credentialing (BGC), and American Industrial Hygiene Association (AIHA) offer individual certificates on many subjects, from forklift operation to waste disposal, and are the chief facilitators of continuing education in the OSH sector.
In the US, the training of safety professionals is supported by NIOSH through their NIOSH Education and Research Centers.
In the UK, both NEBOSH and the Institution of Occupational Safety and Health (IOSH) develop health and safety qualifications and courses which cater to a mixture of industries and levels of study. Although both organizations are based in the UK, their qualifications are recognized and studied internationally, as they are delivered through their own global networks of approved providers. The Health and Safety Executive has also developed health and safety qualifications in collaboration with NEBOSH.
In Australia, training in OSH is available at the vocational education and training level, and at university undergraduate and postgraduate level. Such university courses may be accredited by an accreditation board of the Safety Institute of Australia. The institute has produced a Body of Knowledge which it considers is required by a generalist safety and health professional and offers a professional qualification. The Australian Institute of Health and Safety has instituted the national Eric Wigglesworth OHS Education Medal to recognize achievement in OSH doctorate education.
Field training
One form of training delivered in the workplace is known as toolbox talk. According to the UK's Health and Safety Executive, a toolbox talk is a short presentation to the workforce on a single aspect of health and safety. Such talks are often used, especially in the construction industry, by site supervisors, frontline managers and owners of small construction firms to prepare and deliver advice on matters of health, safety and the environment and to obtain feedback from the workforce.
Use of virtual reality
Virtual reality is a novel tool to deliver safety training in many fields. Some applications have been developed and tested, especially for fire and construction safety training. Preliminary findings suggest that virtual reality is more effective than traditional training for knowledge retention.
Contemporary developments
On an international scale, the World Health Organization (WHO) and the International Labour Organization (ILO) have begun focusing on labor environments in developing nations with projects such as Healthy Cities. Many of these developing countries are stuck in a situation in which their relative lack of resources to invest in OSH leads to increased costs due to work-related illnesses and accidents. The ILO estimates that work-related illness and accidents cost up to 10% of GDP in Latin America, compared with just 2.6% to 3.8% in the EU. Asbestos, a notorious hazard, continues to be used in some developing countries, so asbestos-related disease is expected to remain a significant problem well into the future.
Artificial intelligence
There are several broad aspects of artificial intelligence (AI) that may give rise to specific hazards.
Many hazards of AI are psychosocial in nature due to its potential to cause changes in work organization. For example, AI is expected to lead to changes in the skills required of workers, requiring retraining of existing workers, flexibility, and openness to change. Increased monitoring may lead to micromanagement or a perception of surveillance, and thus to workplace stress. There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours. Additionally, algorithms trained on past decisions may exhibit algorithmic bias, mimicking undesirable human biases such as past discriminatory hiring and firing practices. Some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead.
Physical hazards in the form of human–robot collisions may arise from robots using AI, especially collaborative robots (cobots). Cobots are intended to operate in close proximity to humans, which makes it impossible to implement the common hazard control of isolating the robot using fences or other barriers, which is widely used for traditional industrial robots. Automated guided vehicles are a type of cobot in common use, often as forklifts or pallet jacks in warehouses or factories.
Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase. AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions, as well as information privacy measures. Communication and transparency with workers about data usage is a control for psychosocial hazards arising from security and privacy issues. Workplace health surveillance, the collection and analysis of health data on workers, is challenging for AI because labor data are often reported in aggregate, do not provide breakdowns between different types of work, and are focused on economic data such as wages and employment rates rather than the skill content of jobs.
Coronavirus
The National Institute for Occupational Safety and Health (NIOSH) National Occupational Research Agenda Manufacturing Council established an externally led COVID-19 workgroup to provide exposure control information specific to working in manufacturing environments. The workgroup identified the dissemination of information most relevant to manufacturing workplaces, including providing content on Wikipedia, as a priority. This includes evidence-based practices for infection control plans and communication tools.
Nanotechnology
Nanotechnology is an example of a new, relatively unstudied technology. A Swiss survey of 138 companies using or producing nanoparticulate matter in 2006 resulted in forty completed questionnaires. Sixty-five per cent of respondent companies stated they did not have a formal risk assessment process for dealing with nanoparticulate matter. Nanotechnology already presents new issues for OSH professionals that will only become more difficult as nanostructures become more complex. The size of the particles renders most containment and personal protective equipment ineffective. The toxicology values for macro-sized industrial substances are rendered inaccurate due to the unique nature of nanoparticulate matter. As nanoparticulate matter decreases in size, its relative surface area increases dramatically, substantially increasing any catalytic effect or chemical reactivity relative to the known value for the macro substance. This presents a new set of challenges for the near future: contemporary measures to safeguard the health and welfare of employees must be rethought, because most conventional controls have not been designed to manage nanoparticulate substances.
Occupational health inequalities
Occupational health inequalities refer to differences in occupational injuries and illnesses that are closely linked with demographic, social, cultural, economic, and/or political factors. Although many advances have been made to rectify gaps in occupational health within the past half century, many still persist due to the complex overlapping of occupational health and social factors. There are three main areas of research on occupational health inequities:
Identifying which social factors, either individually or in combination, contribute to the inequitable distribution of work-related benefits and risks.
Examining how the related structural disadvantages materialize in the lives of workers to put them at greater risk for occupational injury or illness.
Translating these findings into intervention research to build an evidence base of effective ways for reducing occupational health inequities.
Transnational and immigrant worker populations
Immigrant worker populations often are at greater risk for workplace injuries and fatalities. For example, within the United States, immigrant Mexican workers have one of the highest rates of fatal workplace injuries of any group in the working population. Statistics like these are explained through a combination of social, structural, and physical aspects of the workplace. These workers struggle to access safety information and resources in their native languages because of a lack of social and political inclusion. In addition to linguistically tailored interventions, it is also critical for the interventions to be culturally appropriate.
Those residing in a country to work without a visa or other formal authorization may also lack access to the legal resources and recourse designed to protect most workers. Health and safety organizations that rely on whistleblowers instead of their own independent inspections may be especially at risk of having an incomplete picture of worker health.
See also
Regulations
Related fields
Notes
References
Sources
Further reading
External links
International agencies
(EU) European Agency for Safety & Health at Work (EU-OSHA)
(UN) International Labour Organization (ILO)
National bodies
(Canada) Canadian Centre for Occupational Health and Safety
(Japan) Japan Industrial Safety and Health Association
(Japan) Ministry of Health, Labor and Welfare
(Japan) Japan National Institute of Occupational Safety and Health
(UK) Health and Safety Executive
(US) National Institute for Occupational Safety and Health (NIOSH)
(US) Occupational Safety and Health Administration (OSHA)
Legislation
(Canada) EnviroOSH Legislation plus Standards
Publications
American Journal of Industrial Medicine
Education
National Examination Board in Occupational Safety and Health (NEBOSH)
Medical model of disability
The medical model of disability, or medical model, is based in a biomedical perception of disability. This model links a disability diagnosis to an individual's physical body. The model supposes that a disability may reduce the individual's quality of life and aims to correct or diminish the disability with medical intervention. It is often contrasted with the social model of disability.
The medical model focuses on curing or managing illness or disability. By extension, the medical model supposes that a compassionate or just society invests resources in health care and related services in an attempt to cure or manage disabilities medically. The aim is to expand or improve functioning and to allow disabled people to lead a more "normal" life. The medical profession's responsibility and potential in this area is seen as central.
History
Before the introduction of the biomedical model, patients relaying their narratives to their doctors was paramount. Through these narratives, and by developing an intimate relationship with the patients, doctors would develop treatment plans at a time when diagnostic and treatment options were limited. This is particularly well illustrated by aristocratic doctors treating the elite during the 17th and 18th centuries.
In 1980, the World Health Organization (WHO) introduced a framework for working with disability, publishing the "International Classification of Impairments, Disabilities and Handicaps". The framework proposed to approach disability by using the terms Impairment, Handicap and Disability.
Impairment = a loss or abnormality of bodily structure or function, of psychological, physiological or anatomical origin
Disability = any limitation or function loss deriving from impairment that prevents the performance of an activity in the time lapse considered normal for a human being
Handicap = the disadvantaged condition deriving from impairment or disability limiting a person performing a role considered normal in respect of age, sex and social and cultural factors
Components and usage
While personal narrative is present in interpersonal interactions, and particularly dominant in Western Culture, personal narrative during interactions with medical personnel is reduced to relaying information about specific symptoms of the disability to medical professionals. The medical professionals then interpret the information provided about the disability by the patient to determine a diagnosis, which likely will be linked to biological causes. Medical professionals now define what is "normal" and what is "abnormal" in terms of biology and disability.
In some countries, the medical model of disability has influenced legislation and policy pertaining to persons with disabilities on a national level.
The International Classification of Functioning, Disability and Health (ICF), published in 2001, defines disability as an umbrella term for impairments, activity limitations and participation restrictions. Disability is the interaction between individuals with a health condition (such as cerebral palsy, Down syndrome and depression) and personal and environmental factors (such as negative attitudes, inaccessible transportation and public buildings, and limited social supports).
The altered language and words used show a marked change in emphasis from talking in terms of disease or impairment to talking in terms of levels of health and functioning. It takes into account the social aspects of disability and does not see disability only as a 'medical' or 'biological' dysfunction. That change is consistent with widespread acceptance of the social model of disability.
Criticism
The medical model focuses on individual intervention and treatment as the proper approach to disability. Emphasis is placed on the biological expression of disability rather than on the systems and structures that can inhibit the lives of people with disabilities. Under the medical model, disabled bodies are defined as something to be corrected, changed, or cured. The terminology used can perpetuate negative labels such as deviant, pathological, and defective, implying that disability is best understood in medical terms. The history and future of disability are severely constricted by a focus solely on medical implications, and social constructions contributing to the experience of disability can be overlooked. Alternatively, the social model presents disability less as an objective fact of the body and mind, and positions it in terms of social relations and barriers that an individual may face in social settings.
The medical model of disability can influence the factors within the creation of medical or disability aids, such as creating aids reminiscent of hospital settings and institutions, which can be traumatic to some who have spent an extended period of time there, or which solely reflect the function of hospital aids but not necessarily the function of an aid outside of these contexts.
Among advocates of disability rights, who tend to subscribe to the social model instead, the medical model of disability is often cited as the basis of an unintended social degradation of disabled people (otherwise known as ableism). Resources are seen as excessively misdirected towards an almost-exclusively medical focus when those same resources could potentially be used towards things like universal design and societal inclusionary practices. This includes the monetary and societal costs and benefits of various interventions, be they medical, surgical, social or occupational, from prosthetics, drug-based and other "cures", and medical tests such as genetic screening or preimplantation genetic diagnosis. According to disability rights advocates, the medical model of disability is used to justify large investment in these procedures, technologies and research, when adaptation of the disabled person's environment could potentially be more beneficial to the society at large, as well as financially cheaper and physically more attainable.
Also, some disability rights groups see the medical model of disability as a civil rights issue and criticize charitable organizations or medical initiatives that use it in their portrayal of disabled people, because it promotes a pitiable, essentially negative, largely disempowered image of people with disabilities rather than casting disability as a political, social and environmental problem (see also the political slogan "Piss On Pity").
See also
Cure
Medical model of autism
Medicalization
Models of deafness
Neurodiversity
References
External links
The Open University: Making your teaching inclusive: The Medical Model
Structure fire
A structure fire is a fire involving the structural components of various types of residential, commercial or industrial buildings, including agricultural buildings such as barns. Residential buildings range from single-family detached homes and townhouses to apartments and tower blocks; commercial buildings range from offices to shopping malls. This is in contrast to "room and contents" fires, chimney fires, vehicle fires, wildfires or other outdoor fires.
Structure fires typically draw a similar response from the fire department, including engines, ladder trucks, rescue squads, chief officers, and an EMS unit, each of which will have specific initial assignments. The actual response and assignments will vary between fire departments.
It is not unusual for some fire departments to have a predetermined mobilization plan for when a fire incident is reported in certain structures in their area. This plan may include mobilizing the nearest aerial firefighting vehicle to a tower block, or a foam-carrying vehicle to structures known to contain certain hazardous chemicals.
Types (United States)
In the United States, according to NFPA, structures are divided into five construction types based on the severity of the fire hazard.
Causes of house fires
In a study conducted by American Survey CO for the period 2005–2010, the causes of house fires across America were as follows:
Appliances and electrical (stoves, microwaves, toasters, radiators, various heating systems, small appliances) - approximately 47%
Gas leaks - around 5-7%
Open flames (candles, fireplaces) - approximately 32%
Children playing with matches - Around 10%
Spreading of fires from house to house - approximately 3%
See also
Fire extinguisher
Firefighting
Fire prevention
External links
National Fire Protection Association (US)
NFPA Research
Haung, Kai. 2009. Population and Building Factors That Impact Residential Fire Rates in Large U.S. Cities. Applied Research Project. Texas State University. http://ecommons.txstate.edu/arp/287/
Anaerobic exercise
Anaerobic exercise is a type of exercise that breaks down glucose in the body without using oxygen; anaerobic means "without oxygen". This type of exercise leads to a buildup of lactic acid.
In practical terms, this means that anaerobic exercise is more intense, but shorter in duration than aerobic exercise.
The biochemistry of anaerobic exercise involves a process called glycolysis, in which glucose is converted to adenosine triphosphate (ATP), the primary source of energy for cellular reactions.
Anaerobic exercise may be used to help build endurance, muscle strength, and power.
Metabolism
Anaerobic metabolism is a natural part of metabolic energy expenditure. Fast twitch muscles (as compared to slow twitch muscles) operate using anaerobic metabolic systems, such that any use of fast twitch muscle fibers leads to increased anaerobic energy expenditure. Intense exercise lasting upwards of four minutes (e.g. a mile race) may still have considerable anaerobic energy expenditure. An example is high-intensity interval training, an exercise strategy that is performed under anaerobic conditions at intensities that reach an excess of 90% of the maximum heart rate. Anaerobic energy expenditure is difficult to accurately quantify. Some methods estimate the anaerobic component of an exercise by determining the maximum accumulated oxygen deficit or measuring the lactic acid formation in muscle mass.
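As a rough illustration of the oxygen-deficit approach mentioned above, the following Python sketch fits a linear VO2-versus-power relationship from submaximal trials, extrapolates the oxygen demand of a supramaximal bout, and integrates the gap between predicted demand and measured uptake. The function and variable names and all sample numbers are hypothetical placeholders, so this is only a minimal sketch of the general idea, not a validated testing protocol.

# Minimal sketch of an accumulated-oxygen-deficit style estimate (illustrative only).
def fit_linear(xs, ys):
    # Ordinary least-squares fit of y = a + b*x.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Submaximal calibration trials: power (W) and steady-state VO2 (L/min).
powers = [100, 150, 200, 250]
vo2 = [1.5, 2.0, 2.5, 3.0]
a, b = fit_linear(powers, vo2)

# Supramaximal bout at a constant 400 W, with VO2 sampled every 10 s (L/min).
measured_vo2 = [1.0, 2.0, 2.8, 3.3, 3.5, 3.6]   # hypothetical samples
dt_min = 10 / 60.0                               # sampling interval in minutes
predicted_demand = a + b * 400                   # L/min the bout would require
deficit = sum((predicted_demand - v) * dt_min for v in measured_vo2)
print(f"Estimated accumulated O2 deficit: {deficit:.2f} L")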
In contrast, aerobic exercise includes lower intensity activities performed for longer periods of time. Activities such as walking, jogging, rowing, and cycling require oxygen to generate the energy needed for prolonged exercise (i.e., aerobic energy expenditure). For sports that require repeated short bursts of exercise, the aerobic system acts to replenish and store energy during recovery periods to fuel the next energy burst. Therefore, training strategies for many sports demand that both aerobic and anaerobic systems be developed. The benefits of adding anaerobic exercise include improving cardiovascular endurance as well as building and maintaining muscle strength and losing weight.
The anaerobic energy systems are:
The alactic anaerobic system, which consists of high energy phosphates, adenosine triphosphate, and creatine phosphate; and
The lactic anaerobic system, which features anaerobic glycolysis.
High energy phosphates are stored in limited quantities within muscle cells. Anaerobic glycolysis exclusively uses glucose (and glycogen) as a fuel in the absence of oxygen, or more specifically, when ATP is needed at rates that exceed those provided by aerobic metabolism. The consequence of such rapid glucose breakdown is the formation of lactic acid (or more appropriately, its conjugate base lactate at biological pH levels). Physical activities that last up to about thirty seconds rely primarily on the former ATP-CP phosphagen system. Beyond this time, both aerobic and anaerobic glycolysis-based metabolic systems are used.
The by-product of anaerobic glycolysis, lactate, has traditionally been thought to be detrimental to muscle function. However, this appears likely only when lactate levels are very high. Elevated lactate levels are only one of many changes that occur within and around muscle cells during intense exercise that can lead to fatigue. Fatigue, which is muscle failure, is a complex subject that depends on more than just changes to lactate concentration. Energy availability, oxygen delivery, perception of pain, and other psychological factors all contribute to muscular fatigue. Elevated muscle and blood lactate concentrations are a natural consequence of any physical exertion. The effectiveness of anaerobic activity can be improved through training.
Anaerobic exercise also increases an individual's basal metabolic rate (BMR).
Examples
Anaerobic exercises are high-intensity workouts completed over shorter durations, while aerobic exercises include variable-intensity workouts completed over longer durations. Some examples of anaerobic exercises include sprints, high-intensity interval training (HIIT), and strength training.
See also
Aerobic exercise
Bioenergetic systems
Margaria-Kalamen power test
Strength training
Weight training
Cori cycle
Citric acid cycle
References
Cellular respiration
Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products.
Cellular respiration is a vital process that occurs in the cells of all living organisms. Respiration can be either aerobic, requiring oxygen, or anaerobic; some organisms can switch between aerobic and anaerobic respiration.
The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions.
Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes.
Aerobic respiration
Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate breakdown after glycolysis, and it requires that pyruvate be transported into the mitochondria in order to be fully oxidized by the citric acid cycle. The products of this process are carbon dioxide and water, and the energy transferred is used to form ATP (adenosine triphosphate) from ADP and a phosphate group by substrate-level phosphorylation, and to reduce the electron carriers NAD+ and FAD to NADH and FADH2.
The simplified overall reaction is C6H12O6 + 6 O2 → 6 CO2 + 6 H2O, with a standard free-energy change (ΔG) of approximately −2880 kJ per mole of glucose. The negative ΔG indicates that the reaction is exergonic (energy-releasing) and can occur spontaneously.
The potential of NADH and FADH2 is converted to more ATP through an electron transport chain, with oxygen as the terminal electron acceptor (combining with protons to form water). Most of the ATP produced by aerobic cellular respiration is made by oxidative phosphorylation. The energy released is used to create a chemiosmotic potential by pumping protons across a membrane. This potential is then used to drive ATP synthase and produce ATP from ADP and a phosphate group. Biology textbooks often state that 38 ATP molecules can be made per oxidized glucose molecule during cellular respiration (2 from glycolysis, 2 from the Krebs cycle, and about 34 from the electron transport system). However, this maximum yield is never quite reached because of losses due to leaky membranes as well as the cost of moving pyruvate and ADP into the mitochondrial matrix, and current estimates range around 29 to 30 ATP per glucose.
Aerobic metabolism is up to 15 times more efficient than anaerobic metabolism (which yields 2 molecules of ATP per 1 molecule of glucose). However, some anaerobic organisms, such as methanogens, are able to continue with anaerobic respiration, yielding more ATP by using inorganic molecules other than oxygen as final electron acceptors in the electron transport chain. They share the initial pathway of glycolysis, but aerobic metabolism continues with the Krebs cycle and oxidative phosphorylation. The post-glycolytic reactions take place in the mitochondria in eukaryotic cells, and in the cytoplasm in prokaryotic cells.
Although plants are net consumers of carbon dioxide and producers of oxygen via photosynthesis, plant respiration accounts for about half of the CO2 generated annually by terrestrial ecosystems.
Glycolysis
Glycolysis is a metabolic pathway that takes place in the cytosol of cells in all living organisms. Glycolysis can be literally translated as "sugar splitting", and occurs regardless of oxygen's presence or absence. In aerobic conditions, the process converts one molecule of glucose into two molecules of pyruvate (pyruvic acid), generating energy in the form of two net molecules of ATP. Four molecules of ATP per glucose are actually produced, but two are consumed as part of the preparatory phase. The initial phosphorylation of glucose is required to increase the reactivity (decrease its stability) in order for the molecule to be cleaved into two pyruvate molecules by the enzyme aldolase. During the pay-off phase of glycolysis, four phosphate groups are transferred to four ADP by substrate-level phosphorylation to make four ATP, and two NADH are produced when the pyruvate is oxidized. The overall reaction can be expressed this way:
Glucose + 2 NAD+ + 2 Pi + 2 ADP → 2 pyruvate + 2 NADH + 2 ATP + 2 H+ + 2 H2O + energy
Starting with glucose, 1 ATP is used to donate a phosphate to glucose to produce glucose 6-phosphate. Glycogen can be converted into glucose 6-phosphate as well with the help of glycogen phosphorylase. During energy metabolism, glucose 6-phosphate becomes fructose 6-phosphate. An additional ATP is used to phosphorylate fructose 6-phosphate into fructose 1,6-bisphosphate by the help of phosphofructokinase. Fructose 1,6-bisphosphate then splits into two phosphorylated molecules with three-carbon chains, which are later degraded into pyruvate.
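To make the ATP bookkeeping of the preceding paragraphs concrete, the short Python tally below (a simplified accounting, not a kinetic model; the variable names are our own) tracks the two ATP invested in the preparatory phase and the four ATP plus two NADH returned in the pay-off phase, giving the net yield of 2 ATP and 2 NADH per glucose.

# Simplified per-glucose bookkeeping for glycolysis (not a kinetic model).
atp_invested = 2      # hexokinase and phosphofructokinase steps consume 1 ATP each
atp_produced = 4      # two phosphorylated three-carbon intermediates yield 2 ATP each
nadh_produced = 2     # one NADH per triose oxidized in the pay-off phase

net_atp = atp_produced - atp_invested
print(f"Net per glucose: {net_atp} ATP, {nadh_produced} NADH, 2 pyruvate")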
Oxidative decarboxylation of pyruvate
Pyruvate is oxidized to acetyl-CoA and CO2 by the pyruvate dehydrogenase complex (PDC). The PDC contains multiple copies of three enzymes and is located in the mitochondria of eukaryotic cells and in the cytosol of prokaryotes. In the conversion of pyruvate to acetyl-CoA, one molecule of NADH and one molecule of CO2 is formed.
Citric acid cycle
The citric acid cycle is also called the Krebs cycle or the tricarboxylic acid cycle. When oxygen is present, acetyl-CoA is produced from the pyruvate molecules created from glycolysis. Once acetyl-CoA is formed, aerobic or anaerobic respiration can occur. When oxygen is present, the mitochondria will undergo aerobic respiration which leads to the Krebs cycle. However, if oxygen is not present, fermentation of the pyruvate molecule will occur. In the presence of oxygen, when acetyl-CoA is produced, the molecule then enters the citric acid cycle (Krebs cycle) inside the mitochondrial matrix, and is oxidized to CO2 while at the same time reducing NAD to NADH. NADH can be used by the electron transport chain to create further ATP as part of oxidative phosphorylation. To fully oxidize the equivalent of one glucose molecule, two acetyl-CoA must be metabolized by the Krebs cycle. Two low-energy waste products, H2O and CO2, are created during this cycle.
The citric acid cycle is an 8-step process involving 18 different enzymes and co-enzymes. During the cycle, acetyl-CoA (2 carbons) + oxaloacetate (4 carbons) yields citrate (6 carbons), which is rearranged to a more reactive form called isocitrate (6 carbons). Isocitrate is modified to become α-ketoglutarate (5 carbons), succinyl-CoA, succinate, fumarate, malate and, finally, oxaloacetate.
The net gain from one cycle is 3 NADH and 1 FADH2 as hydrogen (proton plus electron) carrying compounds and 1 high-energy GTP, which may subsequently be used to produce ATP. Thus, the total yield from 1 glucose molecule (2 pyruvate molecules) is 6 NADH, 2 FADH2, and 2 ATP.
Oxidative phosphorylation
In eukaryotes, oxidative phosphorylation occurs in the mitochondrial cristae. It comprises the electron transport chain that establishes a proton gradient (chemiosmotic potential) across the boundary of the inner membrane by oxidizing the NADH produced from the Krebs cycle. ATP is synthesized by the ATP synthase enzyme when the chemiosmotic gradient is used to drive the phosphorylation of ADP. The electrons are finally transferred to exogenous oxygen and, with the addition of two protons, water is formed.
Efficiency of ATP production
The table below describes the reactions involved when one glucose molecule is fully oxidized into carbon dioxide. It is assumed that all the reduced coenzymes are oxidized by the electron transport chain and used for oxidative phosphorylation.
Although there is a theoretical yield of 38 ATP molecules per glucose during cellular respiration, such conditions are generally not realized because of losses such as the cost of moving pyruvate (from glycolysis), phosphate, and ADP (substrates for ATP synthesis) into the mitochondria. All are actively transported using carriers that utilize the stored energy in the proton electrochemical gradient.
Pyruvate is taken up by a specific, low Km transporter to bring it into the mitochondrial matrix for oxidation by the pyruvate dehydrogenase complex.
The phosphate carrier (PiC) mediates the electroneutral exchange (antiport) of phosphate (H2PO4−; Pi) for OH− or symport of phosphate and protons (H+) across the inner membrane, and the driving force for moving phosphate ions into the mitochondria is the proton motive force.
The ATP-ADP translocase (also called adenine nucleotide translocase, ANT) is an antiporter and exchanges ADP and ATP across the inner membrane. The driving force is due to the ATP (−4) having a more negative charge than the ADP (−3), and thus it dissipates some of the electrical component of the proton electrochemical gradient.
The outcome of these transport processes using the proton electrochemical gradient is that more than 3 H+ are needed to make 1 ATP. Obviously, this reduces the theoretical efficiency of the whole process and the likely maximum is closer to 28–30 ATP molecules. In practice the efficiency may be even lower because the inner membrane of the mitochondria is slightly leaky to protons. Other factors may also dissipate the proton gradient, creating apparently leaky mitochondria. An uncoupling protein known as thermogenin is expressed in some cell types and is a channel that can transport protons. When this protein is active in the inner membrane it short-circuits the coupling between the electron transport chain and ATP synthesis. The potential energy from the proton gradient is not used to make ATP but generates heat. This is particularly important in brown fat thermogenesis of newborn and hibernating mammals.
According to some newer sources, the ATP yield during aerobic respiration is not 36–38, but only about 30–32 ATP molecules per molecule of glucose, because:
ATP : NADH+H+ and ATP : FADH2 ratios during the oxidative phosphorylation appear to be not 3 and 2, but 2.5 and 1.5 respectively. Unlike in the substrate-level phosphorylation, the stoichiometry here is difficult to establish.
ATP synthase produces 1 ATP / 3 H+. However the exchange of matrix ATP for cytosolic ADP and Pi (antiport with OH− or symport with H+) mediated by ATP–ADP translocase and phosphate carrier consumes 1 H+ / 1 ATP as a result of regeneration of the transmembrane potential changed during this transfer, so the net ratio is 1 ATP : 4 H+.
The mitochondrial electron transport chain proton pump transfers across the inner membrane 10 H+ / 1 NADH+H+ (4 + 2 + 4) or 6 H+ / 1 FADH2 (2 + 4).
So the final stoichiometry is
1 NADH+H+ : 10 H+ : 10/4 ATP = 1 NADH+H+ : 2.5 ATP
1 FADH2 : 6 H+ : 6/4 ATP = 1 FADH2 : 1.5 ATP
ATP : NADH+H+ coming from glycolysis ratio during the oxidative phosphorylation is
1.5, as for FADH2, if hydrogen atoms (2H++2e−) are transferred from cytosolic NADH+H+ to mitochondrial FAD by the glycerol phosphate shuttle located in the inner mitochondrial membrane.
2.5 in case of malate-aspartate shuttle transferring hydrogen atoms from cytosolic NADH+H+ to mitochondrial NAD+
So finally we have, per molecule of glucose
Substrate-level phosphorylation: 2 ATP from glycolysis + 2 ATP (directly GTP) from Krebs cycle
Oxidative phosphorylation
2 NADH+H+ from glycolysis: 2 × 1.5 ATP (if glycerol phosphate shuttle transfers hydrogen atoms) or 2 × 2.5 ATP (malate-aspartate shuttle)
2 NADH+H+ from the oxidative decarboxylation of pyruvate and 6 from Krebs cycle: 8 × 2.5 ATP
2 FADH2 from the Krebs cycle: 2 × 1.5 ATP
Altogether this gives 4 + 3 (or 5) + 20 + 3 = 30 (or 32) ATP per molecule of glucose
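The arithmetic above can be verified with a short Python tally of the contributions listed in this section, using the 2.5 and 1.5 ATP ratios quoted earlier. The function name and shuttle labels are our own, and the figures are the textbook-style estimates discussed here rather than measured values.

# Tally of ATP per glucose using the ratios quoted in the text:
# 2.5 ATP per mitochondrial NADH+H+, 1.5 ATP per FADH2.
def atp_per_glucose(shuttle="glycerol_phosphate"):
    substrate_level = 2 + 2      # glycolysis + Krebs cycle (directly GTP)
    cytosolic_nadh = 2           # from glycolysis
    mito_nadh = 2 + 6            # pyruvate decarboxylation + Krebs cycle
    fadh2 = 2                    # Krebs cycle
    # The glycerol phosphate shuttle hands electrons to FAD (1.5 ATP each);
    # the malate-aspartate shuttle hands them to NAD+ (2.5 ATP each).
    per_cytosolic = 1.5 if shuttle == "glycerol_phosphate" else 2.5
    return (substrate_level + cytosolic_nadh * per_cytosolic
            + mito_nadh * 2.5 + fadh2 * 1.5)

print(atp_per_glucose("glycerol_phosphate"))  # 30.0
print(atp_per_glucose("malate_aspartate"))    # 32.0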
These figures may still require further tweaking as new structural details become available. The above value of 3 H+ / ATP for the synthase assumes that the synthase translocates 9 protons, and produces 3 ATP, per rotation. The number of protons depends on the number of c subunits in the Fo c-ring, and it is now known that this is 10 in yeast Fo and 8 for vertebrates. Including one H+ for the transport reactions, this means that synthesis of one ATP requires about 4.33 protons in yeast and about 3.67 in vertebrates. This would imply that in human mitochondria the 10 protons from oxidizing NADH would produce 2.72 ATP (instead of 2.5) and the 6 protons from oxidizing succinate or ubiquinol would produce 1.64 ATP (instead of 1.5). This is consistent with experimental results within the margin of error described in a recent review.
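The c-ring dependence described above reduces to a one-line calculation: with c subunits per ring, one full rotation translocates c protons and yields 3 ATP, plus roughly one extra proton per ATP for phosphate and ADP/ATP transport. The Python sketch below uses our own naming and reproduces the approximate yields per NADH (10 H+ pumped) and per FADH2 or succinate (6 H+ pumped); small differences from the 2.72 figure quoted above are due to rounding.

# ATP yield as a function of the number of c subunits in the ATP synthase Fo ring.
def protons_per_atp(c_subunits):
    # c protons per rotation / 3 ATP per rotation, plus ~1 H+ for Pi and ADP/ATP transport
    return c_subunits / 3 + 1

for organism, c in [("yeast", 10), ("vertebrates", 8)]:
    h = protons_per_atp(c)
    print(f"{organism}: {h:.2f} H+/ATP, {10 / h:.2f} ATP/NADH, {6 / h:.2f} ATP/FADH2")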
The total ATP yield in ethanol or lactic acid fermentation is only 2 molecules coming from glycolysis, because pyruvate is not transferred to the mitochondrion and finally oxidized to the carbon dioxide (CO2), but reduced to ethanol or lactic acid in the cytoplasm.
Fermentation
Without oxygen, pyruvate (pyruvic acid) is not metabolized by cellular respiration but undergoes a process of fermentation. The pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. This serves the purpose of oxidizing the electron carriers so that they can perform glycolysis again and removing the excess pyruvate. Fermentation oxidizes NADH to NAD+ so it can be re-used in glycolysis. In the absence of oxygen, fermentation prevents the buildup of NADH in the cytoplasm and provides NAD+ for glycolysis. This waste product varies depending on the organism. In skeletal muscles, the waste product is lactic acid. This type of fermentation is called lactic acid fermentation. In strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms joined by NADH. During anaerobic glycolysis, NAD+ regenerates when pairs of hydrogen combine with pyruvate to form lactate. Lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. Lactate can also be used as an indirect precursor for liver glycogen. During recovery, when oxygen becomes available, NAD+ attaches to hydrogen from lactate to form ATP. In yeast, the waste products are ethanol and carbon dioxide. This type of fermentation is known as alcoholic or ethanol fermentation. The ATP generated in this process is made by substrate-level phosphorylation, which does not require oxygen.
Fermentation is less efficient at using the energy from glucose: only 2 ATP are produced per glucose, compared to the 38 ATP per glucose nominally produced by aerobic respiration. Glycolytic ATP, however, is produced more quickly. For prokaryotes to continue a rapid growth rate when they are shifted from an aerobic environment to an anaerobic environment, they must increase the rate of the glycolytic reactions. For multicellular organisms, during short bursts of strenuous activity, muscle cells use fermentation to supplement the ATP production from the slower aerobic respiration, so fermentation may be used by a cell even before the oxygen levels are depleted, as is the case in sports that do not require athletes to pace themselves, such as sprinting.
Anaerobic respiration
Anaerobic respiration is used by microorganisms, either bacteria or archaea, in which neither oxygen (aerobic respiration) nor pyruvate derivatives (fermentation) is the final electron acceptor. Rather, an inorganic acceptor such as sulfate, nitrate, or sulfur (S) is used. Such organisms could be found in unusual places such as underwater caves or near hydrothermal vents at the bottom of the ocean, as well as in anoxic soils or sediment in wetland ecosystems.
In July 2019, a scientific study of Kidd Mine in Canada discovered sulfur-breathing organisms which live below the surface. These organisms are also remarkable because they consume minerals such as pyrite as their food source.
See also
Maintenance respiration: maintenance as a functional component of cellular respiration
Microphysiometry
Pasteur point
Respirometry: research tool to explore cellular respiration
Tetrazolium chloride: cellular respiration indicator
Complex I: NADH:ubiquinone oxidoreductase
References
External links
A detailed description of respiration vs. fermentation
Kimball's online resource for cellular respiration
Cellular Respiration and Fermentation at Clermont College
Foodborne illness
Foodborne illness (also known as foodborne disease and food poisoning) is any illness resulting from the contamination of food by pathogenic bacteria, viruses, or parasites, as well as prions (the agents of mad cow disease), and toxins such as aflatoxins in peanuts, poisonous mushrooms, and various species of beans that have not been boiled for at least 10 minutes.
Symptoms vary depending on the cause. They often include vomiting, fever, and aches, and may include diarrhea. Bouts of vomiting can be repeated with an extended delay in between. This is because even if infected food was eliminated from the stomach in the first bout, microbes, like bacteria (if applicable), can pass through the stomach into the intestine and begin to multiply. Some types of microbes stay in the intestine.
For contaminants requiring an incubation period, symptoms may not manifest for hours to days, depending on the cause and on the quantity of consumption. Longer incubation periods tend to cause those affected to not associate the symptoms with the item consumed, so they may misattribute the symptoms to gastroenteritis, for example.
Causes
Foodborne illness usually arises from improper handling, preparation, or food storage. Good hygiene practices before, during, and after food preparation can reduce the chances of contracting an illness. There is a consensus in the public health community that regular hand-washing is one of the most effective defenses against the spread of foodborne illness. The action of monitoring food to ensure that it will not cause foodborne illness is known as food safety. Foodborne disease can also be caused by a large variety of toxins that affect the environment.
Furthermore, foodborne illness can be caused by a number of chemicals, such as pesticides, medicines, and natural toxic substances such as vomitoxin, poisonous mushrooms or reef fish.
Bacteria
Bacteria are a common cause of foodborne illness. In 2000, the United Kingdom reported the individual bacteria involved as the following: Campylobacter jejuni 77.3%, Salmonella 20.9%, Escherichia coli O157:H7 1.4%, and all others less than 0.56%.
In the past, bacterial infections were thought to be more prevalent because few places had the capability to test for norovirus and no active surveillance was being done for this particular agent. Toxins from bacterial infections are delayed because the bacteria need time to multiply. As a result, symptoms associated with intoxication are usually not seen until 12–72 hours or more after eating contaminated food. However, in some cases, such as Staphylococcal food poisoning, the onset of illness can be as soon as 30 minutes after ingesting contaminated food.
A 2022 study concluded that washing uncooked chicken could increase the risk of pathogen transfer, and that specific washing conditions can decrease the risk of transfer.
Most common bacterial foodborne pathogens are:
Campylobacter jejuni which can lead to secondary Guillain–Barré syndrome and periodontitis
Clostridium perfringens, the "cafeteria germ"
Salmonella spp. – its S. typhimurium infection is caused by consumption of eggs or poultry that are not adequately cooked or by other interactive human-animal pathogens
Escherichia coli O157:H7 enterohemorrhagic (EHEC) which can cause hemolytic-uremic syndrome
Other common bacterial foodborne pathogens are:
Bacillus cereus
Escherichia coli with other virulence properties, such as enteroinvasive (EIEC), enteropathogenic (EPEC), enterotoxigenic (ETEC), and enteroaggregative (EAEC or EAgEC) strains
Listeria monocytogenes
Shigella spp.
Staphylococcus aureus
Streptococcus
Vibrio cholerae, including O1 and non-O1
Vibrio parahaemolyticus
Vibrio vulnificus
Yersinia enterocolitica and Yersinia pseudotuberculosis
Less common bacterial agents:
Brucella spp.
Corynebacterium ulcerans
Coxiella burnetii or Q fever
Plesiomonas shigelloides
Enterotoxins
In addition to disease caused by direct bacterial infection, some foodborne illnesses are caused by enterotoxins (exotoxins targeting the intestines). Enterotoxins can produce illness even when the microbes that produced them have been killed. Symptom onset varies with the toxin but may be rapid, as in the case of the enterotoxins of Staphylococcus aureus, in which symptoms appear in one to six hours. This causes intense vomiting, with or without diarrhea (resulting in staphylococcal enteritis). Staphylococcal enterotoxins (most commonly staphylococcal enterotoxin A, but also including staphylococcal enterotoxin B) are the most commonly reported enterotoxins, although cases of poisoning are likely underestimated. This form of poisoning occurs mainly in cooked and processed foods, because of competition with other biota in raw foods, and humans are the main source of contamination, as a substantial percentage of humans are persistent carriers of S. aureus. The CDC has estimated about 240,000 cases per year in the United States.
Clostridium botulinum
Clostridium perfringens
Bacillus cereus
The rare but potentially deadly disease botulism occurs when the anaerobic bacterium Clostridium botulinum grows in improperly canned low-acid foods and produces botulin, a powerful paralytic toxin.
Pseudoalteromonas tetraodonis, certain species of Pseudomonas and Vibrio, and some other bacteria, produce the lethal tetrodotoxin, which is present in the tissues of some living animal species rather than being a product of decomposition.
Emerging foodborne pathogens
Aeromonas hydrophila, Aeromonas caviae, Aeromonas sobria
Scandinavian outbreaks of Yersinia enterocolitica have recently become an annual occurrence, connected to the non-canonical contamination of pre-washed salad.
Preventing bacterial food poisoning
Governments have the primary mandate of ensuring safe food for all; however, all actors in the food chain are responsible for ensuring that only safe food reaches the consumer, thus preventing foodborne illnesses. This is achieved through the implementation of strict hygiene rules and a public veterinary and phytosanitary service that monitors animal products throughout the food chain, from farming to delivery in shops and restaurants. This regulation includes:
traceability: the origin of the ingredients (farm of origin, identification of the crop or animal) and where and when it has been processed must be known in the final product; in this way, the origin of the disease can be traced and resolved (and possibly penalized), and the final products can be removed from sale if a problem is detected;
enforcement of hygiene procedures such as HACCP and the "cold chain";
powers of control and law enforcement for veterinarians.
In August 2006, the United States Food and Drug Administration approved phage therapy, which involves spraying meat with viruses that infect bacteria, thus preventing infection. This has raised concerns because, without mandatory labeling, consumers would not know that meat and poultry products have been treated with the spray.
At home, prevention mainly consists of good food safety practices. Many forms of bacterial poisoning can be prevented by cooking food sufficiently, and either eating it quickly or refrigerating it effectively. Many toxins, however, are not destroyed by heat treatment.
Techniques that help prevent foodborne illness in the kitchen are hand washing, rinsing produce, preventing cross-contamination, proper storage, and maintaining cooking temperatures. In general, freezing or refrigerating prevents virtually all bacteria from growing, and heating food sufficiently kills parasites, viruses, and most bacteria. Bacteria grow most rapidly within a range of warm temperatures called the "danger zone". Storing food below or above the "danger zone" can effectively limit the production of toxins. For storing leftovers, the food must be put in shallow containers for quick cooling and must be refrigerated within two hours. When food is reheated, it must reach a sufficiently high internal temperature, or be hot or steaming, to kill bacteria.
Mycotoxins and alimentary mycotoxicoses
The term alimentary mycotoxicosis refers to the effect of poisoning by mycotoxins through food consumption. The term mycotoxin is usually reserved for the toxic chemical compounds naturally produced by fungi that readily colonize crops under given temperature and moisture conditions. Mycotoxins can have important effects on human and animal health. For example, an outbreak which occurred in the UK in 1960 caused the death of 100,000 turkeys which had consumed aflatoxin-contaminated peanut meal. In the USSR during World War II, 5,000 people died of alimentary toxic aleukia (ALA). In Kenya, mycotoxins led to the death of 125 people in 2004, after consumption of contaminated grains. In animals, mycotoxicosis targets organ systems such as the liver and digestive system. Other effects can include reduced productivity and suppression of the immune system, thus predisposing the animals to other secondary infections.
Common foodborne mycotoxins include:
Aflatoxins – originating from Aspergillus parasiticus and Aspergillus flavus. They are frequently found in tree nuts, peanuts, maize, sorghum and other oilseeds, including corn and cottonseeds. The most prominent forms are aflatoxins B1, B2, G1, and G2, among which aflatoxin B1 predominantly targets the liver, where it can result in necrosis, cirrhosis, and carcinoma. Other forms of aflatoxins exist as metabolites, such as aflatoxin M1. In the US, the acceptable level of total aflatoxins in foods is less than 20 μg/kg, except for aflatoxin M1 in milk, which should be less than 0.5 μg/kg; the official document can be found on the FDA's website. The European Union has more stringent standards, set at 10 μg/kg in cereals and cereal products. These references are also adopted in other countries.
Altertoxins – those of alternariol (AOH), alternariol methyl ether (AME), altenuene (ALT), altertoxin-1 (ATX-1), tenuazonic acid (TeA), and radicinin (RAD), originating from Alternaria spp. Some of the toxins can be present in sorghum, ragi, wheat and tomatoes. Some research has shown that the toxins can easily cross-contaminate grain commodities, suggesting that careful manufacturing and storage of grain commodities is critical.
Citrinin
Citreoviridin
Cyclopiazonic acid
Cytochalasins
Ergot alkaloids / ergopeptine alkaloids – ergotamine
Fumonisins – corn crops can be easily contaminated by the fungus Fusarium moniliforme, whose fumonisin B1 can cause leukoencephalomalacia (LEM) in horses, pulmonary edema syndrome (PES) in pigs, liver cancer in rats and esophageal cancer in humans. For human and animal health, both the FDA and the EC have regulated the content levels of toxins in food and animal feed.
Fusaric acid
Fusarochromanone
Kojic acid
Lolitrem alkaloids
Moniliformin
3-Nitropropionic acid
Nivalenol
Ochratoxins – in Australia, the Limit of Reporting (LOR) level for ochratoxin A (OTA) analyses in the 20th Australian Total Diet Survey was 1 μg/kg, whereas the EC restricts the content of OTA to 5 μg/kg in cereal commodities, 3 μg/kg in processed products and 10 μg/kg in dried vine fruits.
Oosporeine
Patulin – currently, this toxin is regulated in fruit products. The EC and the FDA have limited it to under 50 μg/kg for fruit juice and fruit nectar, while the EC specifies limits of 25 μg/kg for solid fruit products and 10 μg/kg for baby foods.
Phomopsins
Sporidesmin A
Sterigmatocystin
Tremorgenic mycotoxins – Five of them have been reported to be associated with molds found in fermented meats. These are fumitremorgen B, paxilline, penitrem A, verrucosidin, and verruculogen.
Trichothecenes – sourced from Cephalosporium, Fusarium, Myrothecium, Stachybotrys, and Trichoderma. The toxins are usually found in molded maize, wheat, corn, peanuts and rice, or in animal feed of hay and straw. Four trichothecenes, T-2 toxin, HT-2 toxin, diacetoxyscirpenol (DAS), and deoxynivalenol (DON), have been most commonly encountered by humans and animals. Oral intake of, or dermal exposure to, the toxins can result in alimentary toxic aleukia, neutropenia, aplastic anemia, thrombocytopenia and/or skin irritation. In 1993, the FDA issued a document setting advisory limits on the content of DON in food and animal feed. In 2003, a US patent was published describing a promising approach for producing trichothecene-resistant crops.
Zearalenone
Zearalenols
Viruses
Viral infections make up perhaps one third of cases of food poisoning in developed countries. In the US, more than 50% of cases are viral, and noroviruses are the most common foodborne pathogen, causing 57% of outbreaks in 2004. Foodborne viral infections usually have an intermediate (1–3 day) incubation period, causing illnesses which are self-limited in otherwise healthy individuals; they are similar to the bacterial forms described above.
Enterovirus
Hepatitis A is distinguished from other viral causes by its prolonged (2–6 week) incubation period and its ability to spread beyond the stomach and intestines into the liver. It often results in jaundice, or yellowing of the skin, but rarely leads to chronic liver dysfunction. The virus has been found to cause infection due to the consumption of fresh-cut produce which has fecal contamination.
Hepatitis E
Norovirus
Rotavirus
Parasites
Most foodborne parasites are zoonoses.
Platyhelminthes:
Diphyllobothrium sp.
Nanophyetus sp.
Taenia saginata
Taenia solium
Fasciola hepatica
See also: Tapeworm and Flatworm
Nematode:
Anisakis sp.
Ascaris lumbricoides
Eustrongylides sp.
Toxocara
Trichinella spiralis
Trichuris trichiura
Protozoa:
Acanthamoeba and other free-living amoebae
Cryptosporidiosis
Cyclospora cayetanensis
Entamoeba histolytica
Giardia lamblia
Sarcocystis hominis
Sarcocystis suihominis
Toxoplasma
Natural toxins
Several foods can naturally contain toxins, many of which are not produced by bacteria. Plants in particular may be toxic; animals which are naturally poisonous to eat are rare. In evolutionary terms, animals can escape being eaten by fleeing; plants can use only passive defenses such as poisons and distasteful substances, for example capsaicin in chili peppers and pungent sulfur compounds in garlic and onions. Most animal poisons are not synthesised by the animal, but acquired by eating poisonous plants to which the animal is immune, or by bacterial action.
Alkaloids
Ciguatera poisoning
Grayanotoxin (honey intoxication)
Hormones from the thyroid glands of slaughtered animals (especially triiodothyronine in cases of hamburger thyrotoxicosis or alimentary thyrotoxicosis)
Mushroom toxins
Phytohaemagglutinin (red kidney bean poisoning; destroyed by boiling)
Pyrrolizidine alkaloids
Shellfish toxin, including paralytic shellfish poisoning, diarrhetic shellfish poisoning, neurotoxic shellfish poisoning, amnesic shellfish poisoning and ciguatera fish poisoning
Scombrotoxin
Solanine (green potato poisoning)
Tetrodotoxin (fugu fish poisoning)
Some plants contain substances which are toxic in large doses, but have therapeutic properties in appropriate dosages.
Foxglove contains cardiac glycosides.
Poisonous hemlock (conium) has medicinal uses.
Other pathogenic agents
Prions, resulting in Creutzfeldt–Jakob disease (CJD) and its variant (vCJD)
"Ptomaine poisoning" misconception
Ptomaine poisoning was a myth that persisted in the public consciousness, in newspaper headlines, and in legal cases as an official diagnosis, decades after it had been scientifically disproven in the 1910s.
In the 19th century, the Italian chemist Francesco Selmi, of Bologna, introduced the generic name ptomaine (from Greek ptōma, "fall, fallen body, corpse") for alkaloids found in decaying animal and vegetable matter, especially (as reflected in their names) putrescine and cadaverine. The 1892 Merck's Bulletin stated, "We name such products of bacterial origin ptomaines; and the special alkaloid produced by the comma bacillus is variously named Cadaverine, Putrescine, etc.", while The Lancet stated, "The chemical ferments produced in the system, the... ptomaines which may exercise so disastrous an influence." It is now known that the "disastrous... influence" is due to the direct action of bacteria and only slightly due to the alkaloids. Thus, the use of the phrase "ptomaine poisoning" is now largely obsolete.
During a single week in 1932, hundreds of people were sickened in separate incidents by tainted potato salad, at a Communist political convention in Massillon, Ohio, and aboard a cruise ship in Washington, D.C., drawing national attention to the dangers of so-called "ptomaine poisoning" in the pages of the American news weekly, Time. In 1944, another newspaper article reported that over 150 people in Chicago were hospitalized with ptomaine poisoning, apparently from rice pudding served by a restaurant chain.
Mechanism
Incubation period
The delay between the consumption of contaminated food and the appearance of the first symptoms of illness is called the incubation period. This ranges from hours to days (and rarely months or even years, such as in the case of listeriosis or bovine spongiform encephalopathy), depending on the agent, and on how much was consumed. If symptoms occur within one to six hours after eating the food, it suggests that it is caused by a bacterial toxin or a chemical rather than live bacteria.
The long incubation period of many foodborne illnesses tends to cause those affected to attribute their symptoms to gastroenteritis.
During the incubation period, microbes pass through the stomach into the intestine, attach to the cells lining the intestinal walls, and begin to multiply there. Some types of microbes stay in the intestine, some produce a toxin that is absorbed into the bloodstream, and some can directly invade the deeper body tissues. The symptoms produced depend on the type of microbe.
Infectious dose
The infectious dose is the amount of agent that must be consumed to give rise to symptoms of foodborne illness, and varies according to the agent and the consumer's age and overall health. Pathogens vary in minimum infectious dose; for example, Shigella sonnei has a low estimated minimum dose of < 500 colony-forming units (CFU) while Staphylococcus aureus has a relatively high estimate.
In the case of Salmonella a relatively large inoculum of 1 million to 1 billion organisms is necessary to produce symptoms in healthy human volunteers, as Salmonellae are very sensitive to acid. An unusually high stomach pH level (low acidity) greatly reduces the number of bacteria required to cause symptoms by a factor of between 10 and 100.
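As a minimal illustration of the arithmetic implied above (a sketch only, using the dose range and reduction factors quoted in the text), the example below shows how a 10- to 100-fold reduction shifts the number of organisms that may suffice to cause symptoms:

```python
# Illustrative sketch only: the dose range and reduction factors come from the
# text above; the calculation simply applies the stated 10x-100x reduction.

typical_dose = (1_000_000, 1_000_000_000)   # organisms needed in healthy volunteers
reduction_factors = (10, 100)               # effect of unusually high stomach pH

for factor in reduction_factors:
    low, high = (d // factor for d in typical_dose)
    print(f"{factor}x reduction: roughly {low:,} to {high:,} organisms may suffice")
```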
Gut microbiota unaccustomed to endemic organisms
Foodborne illness often occurs as travelers' diarrhea in persons whose gut microbiota is unaccustomed to organisms endemic to the visited region. This effect of microbiologic naïveté is compounded by any food safety lapses in the food's preparation.
Epidemiology
Asymptomatic subclinical infection may help spread these diseases, particularly Staphylococcus aureus, Campylobacter, Salmonella, Shigella, Enterobacter, Vibrio cholerae, and Yersinia. For example, as of 1984 it was estimated that in the United States, 200,000 people were asymptomatic carriers of Salmonella.
Infants
Globally, infants are a group that is especially vulnerable to foodborne disease. The World Health Organization has issued recommendations for the preparation, use and storage of prepared formulas. Breastfeeding remains the best preventive measure for protection from foodborne infections in infants.
United States
A CDC report for the period 2017–2019 found that 41% of outbreaks at restaurants were caused by a sick employee. Contributory factors identified included lack of written policy compliance with FDA recommendations for identifying red-flag symptoms, glove use, and hand washing; lack of paid sick leave at the majority of establishments; and social pressure to come to work even while sick. The remaining outbreaks had a variety of causes, including inadequate cooking, improper temperature, and cross-contamination.
In the United States, using FoodNet data from 2000 to 2007, the CDC estimated there were 47.8 million foodborne illnesses per year (16,000 cases per 100,000 inhabitants), with 9.4 million of these caused by 31 known pathogens.
127,839 were hospitalized (43 per 100,000 inhabitants per year).
3,037 people died (1.0 per 100,000 inhabitants per year).
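The per-100,000 figures above follow from dividing case counts by the population. A minimal sketch, assuming a US population of roughly 300 million (an assumption, not a figure given in the text), reproduces the quoted rates:

```python
# Hypothetical illustration: the population figure is an assumption (~300 million),
# not taken from the source; the case counts are those quoted above.

population = 300_000_000
counts = {
    "illnesses": 47_800_000,
    "hospitalizations": 127_839,
    "deaths": 3_037,
}

for label, count in counts.items():
    rate = count / population * 100_000   # cases per 100,000 inhabitants per year
    print(f"{label}: {rate:,.1f} per 100,000 per year")
```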
United Kingdom
According to a 2012 report from the Food Standards Agency, there were around a million cases of foodborne illness per year (1,580 cases per 100,000 inhabitants).
20,000 people were hospitalised (32 per 100,000 inhabitants);
500 died (0.80 per 100,000 inhabitants).
France
This data pertains to reported medical cases of 23 specific pathogens in the 1990s, as opposed to total population estimates of all foodborne illness for the United States.
In France, for 735,590 to 769,615 cases of infection identified as being with the 23 specific pathogens, 238,836 to 269,085 were estimated to have been contracted from food:
between 12,995 and 22,030 people were hospitalized (10,188 to 17,771 estimated to have contracted their infections from food);
between 306 and 797 people died (228 to 691 estimated to have contracted their infections from food).
Australia
A study by the Australian National University published in 2022 for Food Standards Australia New Zealand estimated there are 4.67 million cases of food poisoning in Australia each year that result in 47,900 hospitalisations, 38 deaths and a cost to the economy of $2.1 billion.
A previous study using different methodology and published in November 2014, found in 2010 that there were an estimated 4.1 million cases of foodborne gastroenteritis acquired in Australia on average each year, along with 5,140 cases of non-gastrointestinal illness.
The main causes were norovirus, pathogenic Escherichia coli, Campylobacter spp. and non-typhoidal Salmonella spp., although the causes of approximately 80% of illnesses were unknown. Approximately 25% (90% CrI: 13%–42%) of the 15.9 million episodes of gastroenteritis that occur in Australia were estimated to be transmitted by contaminated food. This equates to an average of approximately one episode of foodborne gastroenteritis every five years per person. Data on the number of hospitalisations and deaths represent the occurrence of serious foodborne illness. Including gastroenteritis, non-gastroenteritis and sequelae, there were an estimated annual 31,920 (90% CrI: 29,500–35,500) hospitalisations due to foodborne illness and 86 (90% CrI: 70–105) deaths due to foodborne illness circa 2010. This study concludes that these rates are similar to recent estimates in the US and Canada.
A main aim of this study was to compare if foodborne illness incidence had increased over time. In this study, similar methods of assessment were applied to data from circa 2000, which showed that the rate of foodborne gastroenteritis had not changed significantly over time. Two key estimates were the total number of gastroenteritis episodes each year, and the proportion considered foodborne. In circa 2010, it was estimated that 25% of all episodes of gastroenteritis were foodborne. By applying this proportion of episodes due to food to the incidence of gastroenteritis circa 2000, there were an estimated 4.3 million (90% CrI: 2.2–7.3 million) episodes of foodborne gastroenteritis circa 2000, although credible intervals overlap with 2010. Taking into account changes in population size, applying these equivalent methods suggests a 17% decrease in the rate of foodborne gastroenteritis between 2000 and 2010, with considerable overlap of the 90% credible intervals.
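The quoted 17% decrease can be reproduced by converting episode counts into per-person rates before comparing the two years. The sketch below assumes Australian populations of roughly 19.2 million in 2000 and 22.0 million in 2010; these population figures are assumptions, not values reported in the study:

```python
# Sketch only: episode counts are from the study as quoted above; the
# population figures are assumptions used to convert counts into rates.

episodes = {"2000": 4_300_000, "2010": 4_100_000}      # foodborne gastroenteritis episodes
population = {"2000": 19_200_000, "2010": 22_000_000}  # assumed populations

rate = {year: episodes[year] / population[year] for year in episodes}
change = (rate["2000"] - rate["2010"]) / rate["2000"]

print(f"rate 2000: {rate['2000']:.3f} episodes/person/year")
print(f"rate 2010: {rate['2010']:.3f} episodes/person/year")
print(f"relative decrease: {change:.0%}")   # about 17%, consistent with the text
```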
This study replaces a previous estimate of 5.4 million cases of foodborne illness in Australia every year, causing:
18,000 hospitalizations
120 deaths (0.5 deaths per 100,000 inhabitants)
2.1 million lost days off work
1.2 million doctor consultations
300,000 prescriptions for antibiotics.
Most foodborne disease outbreaks in Australia have been linked to raw or minimally cooked eggs or poultry. The Australian Food Safety Information Council estimates that one third of cases of food poisoning occur in the home.
Outbreaks
The vast majority of reported cases of foodborne illness occur as individual or sporadic cases. The origin of most sporadic cases is undetermined. In the United States, where people eat outside the home frequently, 58% of cases originate from commercial food facilities (2004 FoodNet data). An outbreak is defined as occurring when two or more people experience similar illness after consuming food from a common source.
Often, a combination of events contributes to an outbreak; for example, food might be left at room temperature for many hours, allowing bacteria to multiply, which is then compounded by inadequate cooking that fails to kill the dangerously elevated bacterial levels.
Outbreaks are usually identified when those affected know each other. Outbreaks can also be identified by public health staff when there are unexpected increases in laboratory results for certain strains of bacteria. Outbreak detection and investigation in the United States is primarily handled by local health jurisdictions and is inconsistent from district to district. It is estimated that 1–2% of outbreaks are detected.
Society and culture
United Kingdom
In Aberdeen, in 1964, a large-scale (>400 cases) outbreak of typhoid occurred, caused by contaminated corned beef which had been imported from Argentina. The corned beef was placed in cans and because the cooling plant had failed, cold river water from the Plate estuary was used to cool the cans. One of the cans had a defect and the meat inside was contaminated. That meat was then sliced using a meat slicer in a shop in Aberdeen, and a lack of machinery-cleaning led to the spreading of the contamination to other meats cut in the slicer. Those meats were eaten by people in Aberdeen who then became ill.
Serious outbreaks of foodborne illness since the 1970s prompted key changes in UK food safety law. The outbreaks included the deaths of 19 patients in the Stanley Royd Hospital outbreak and the bovine spongiform encephalopathy (BSE, mad cow disease) outbreak identified in the 1980s. The deaths of 21 people in the 1996 Wishaw outbreak of E. coli O157 were a precursor to the establishment of the Food Standards Agency which, according to Tony Blair in the 1998 white paper A Force for Change Cm 3830, "would be powerful, open and dedicated to the interests of consumers".
In May 2015, for the second year running, England's Food Standards Agency devoted its annual Food Safety Week to "The Chicken Challenge". The focus was on the handling of raw chicken in the home and in catering facilities in a drive to reduce the high levels of food poisoning from the campylobacter bacterium. Anne Hardy argues that widespread public education in food hygiene can be useful, particularly through media (TV cookery programmes) and advertisement. She points to the examples set by Scandinavian societies.
United States
In 2001, the Center for Science in the Public Interest petitioned the United States Department of Agriculture to require meat packers to remove spinal cords before processing cattle carcasses for human consumption, a measure designed to lessen the risk of infection by variant Creutzfeldt–Jakob disease. The petition was supported by the American Public Health Association, the Consumer Federation of America, the Government Accountability Project, the National Consumers League, and Safe Tables Our Priority.
None of the US Department of Health and Human Services targets regarding incidence of foodborne infections were reached in 2007.
A report issued in June 2018 by NBC's Minneapolis station using research by both the CDC and the Minnesota Department of Health concluded that foodborne illness is on the rise in the U.S.
India
In India, Entamoeba is the most common cause of foodborne illness, followed by Campylobacter bacteria, Salmonella bacteria, E. coli bacteria, and norovirus. According to statistics, food poisoning was the second most common cause of infectious disease outbreaks in India in 2017. The number of outbreaks increased from 50 in 2008 to 242 in 2017.
Organizations
The World Health Organization Department of Food Safety and Zoonoses (FOS) provides scientific advice for organizations and the public on issues concerning the safety of food. Its mission is to lower the burden of foodborne disease, thereby strengthening the health security and sustainable development of Member States. Foodborne and waterborne diarrhoeal diseases kill an estimated 2.2 million people annually, most of whom are children. WHO works closely with the Food and Agriculture Organization of the United Nations (FAO) to address food safety issues along the entire food production chain—from production to consumption—using new methods of risk analysis. These methods provide efficient, science-based tools to improve food safety, thereby benefiting both public health and economic development.
International Food Safety Authorities Network (INFOSAN)
The International Food Safety Authorities Network (INFOSAN) is a joint program of the WHO and FAO. INFOSAN has been connecting national authorities from around the globe since 2004, with the goal of preventing the international spread of contaminated food and foodborne disease and strengthening food safety systems globally. This is done by:
Promoting the rapid exchange of information during food safety events;
Sharing information on important food safety issues of global interest;
Promoting partnership and collaboration between countries; and
Helping countries strengthen their capacity to manage food safety risks.
Membership in INFOSAN is voluntary, but is restricted to representatives from national and regional government authorities and requires an official letter of designation. INFOSAN seeks to reflect the multidisciplinary nature of food safety and promote intersectoral collaboration by requesting the designation of Focal Points in each of the respective national authorities with a stake in food safety, and a single Emergency Contact Point in the national authority with the responsibility for coordinating national food safety emergencies. Countries choosing to be members of INFOSAN are committed to sharing information between their respective food safety authorities and other INFOSAN members. The operational definition of a food safety authority includes those authorities involved in: food policy; risk assessment; food control and management; food inspection services; foodborne disease surveillance and response; laboratory services for monitoring and surveillance of foods and foodborne diseases; and food safety information, education and communication across the farm-to-table continuum.
Prioritisation of foodborne pathogens
The Food and Agriculture Organization of the United Nations and the World Health Organization have published a global ranking of foodborne parasites using a multicriteria ranking tool, concluding that Taenia solium was the most relevant, followed by Echinococcus granulosus, Echinococcus multilocularis, and Toxoplasma gondii. The same method was used regionally to rank the most important foodborne parasites in Europe, placing Echinococcus multilocularis of highest relevance, followed by Toxoplasma gondii and Trichinella spiralis.
Regulatory steps
Food may be contaminated during all stages of food production and retailing. In order to prevent viral contamination, regulatory authorities in Europe have enacted several measures:
European Commission Regulation (EC) No 2073/2005 of November 15, 2005
European Committee for Standardization (CEN): Standard method for the detection of norovirus and hepatitis A virus in food products (shellfish, fruits and vegetables, surfaces and bottled water)
CODEX Committee on Food Hygiene (CCFH): Guideline for the application of general principles of food hygiene for the control of viruses in food
See also
American Public Health Association v. Butz
Food allergy
Food microbiology
Food quality
Food safety
Food spoilage
Food testing strips
Gastroenteritis
List of foodborne illness outbreaks by country
List of food contamination incidents
Mycotoxicology
STOP Foodborne Illness
United States Centers for Disease Control and Prevention
Zoonotic pathogens
References
Further reading
Periodicals
International Journal of Food Microbiology, Elsevier
Foodborne Pathogens and Disease, Mary Ann Liebert, Inc.
Mycopathologia, Springer
External links
Foodborne diseases, emerging, WHO, Fact sheet N°124, revised January 2002
Foodborne illness information pages, NSW Food Authority
Food safety and foodborne illness, WHO, Fact sheet N°237, revised January 2002
UK Health protection Agency
US PulseNet
Food poisoning from NHS Direct Online
Food Safety Network hosted at the University of Guelph, Canada.
Food Standards Agency website
Food safety
Health disasters
Scleroderma
Scleroderma is a group of autoimmune diseases that may result in changes to the skin, blood vessels, muscles, and internal organs. The disease can be either localized to the skin or involve other organs, as well. Symptoms may include areas of thickened skin, stiffness, feeling tired, and poor blood flow to the fingers or toes with cold exposure. One form of the condition, known as CREST syndrome, classically results in calcium deposits, Raynaud's syndrome, esophageal problems, thickening of the skin of the fingers and toes, and areas of small, dilated blood vessels.
The cause is unknown, but it may be due to an abnormal immune response. Risk factors include family history, certain genetic factors, and exposure to silica. The underlying mechanism involves the abnormal growth of connective tissue, which is believed to be the result of the immune system attacking healthy tissues. Diagnosis is based on symptoms, supported by a skin biopsy or blood tests.
While no cure is known, treatment may improve symptoms. Medications used include corticosteroids, methotrexate, and non-steroidal anti-inflammatory drugs (NSAIDs). Outcome depends on the extent of disease. Those with localized disease generally have a normal life expectancy. In those with systemic disease, life expectancy can be affected, and this varies based on subtype. Death is often due to lung, gastrointestinal, or heart complications.
About three per 100,000 people per year develop the systemic form. The condition most often begins in middle age. Women are more often affected than men. Scleroderma symptoms were first described in 1753 by Carlo Curzio and then well documented in 1842. The term is from the Greek skleros meaning "hard" and derma meaning "skin".
Signs and symptoms
Potential signs and symptoms include:
Cardiovascular: Raynaud's phenomenon (the presenting symptom in 30% of affected persons, occurring in 95% of affected individuals at some time during their illness); healed pitting ulcers on the fingertips; skin and mucosal telangiectasis; palpitations, irregular heart rate and fainting due to conduction abnormalities; hypertension; and congestive heart failure
Digestive: gastroesophageal reflux disease, bloating, indigestion, loss of appetite, diarrhoea alternating with constipation, sicca syndrome and its complications, loosening of teeth, and hoarseness (due to acid reflux).
Pulmonary: progressive worsening of shortness of breath, chest pain (due to pulmonary artery hypertension) and dry, persistent cough due to interstitial lung disease
Musculoskeletal: joint, muscle aches, loss of joint range of motion, carpal tunnel syndrome, and muscle weakness
Genitourinary: erectile dysfunction, dyspareunia, kidney problems, or kidney failure
Other: facial pain due to trigeminal neuralgia, hand paresthesias, headache, stroke, fatigue, calcinosis, and weight loss
Cause
Scleroderma is caused by genetic and environmental factors. Mutations in HLA genes seem to play a crucial role in the pathogenesis of some cases; likewise silica, aromatic and chlorinated solvents, ketones, trichloroethylene, welding fumes, and white spirits exposure seems to contribute to the condition in a small proportion of affected persons.
Pathophysiology
Scleroderma is characterised by increased synthesis of collagen (leading to the sclerosis), damage to small blood vessels, activation of T lymphocytes, and production of altered connective tissue. Its proposed pathogenesis is the following:
It begins with an inciting event at the level of the vasculature, probably the endothelium. The inciting event is yet to be elucidated, but may be a viral agent, oxidative stress, or an autoimmune process. Endothelial cell damage and apoptosis ensue, leading to the vascular leakiness that manifests in early clinical stages as tissue oedema. At this stage, it is predominantly a Th1- and Th17-mediated disease.
After this, the vasculature is further compromised by impaired angiogenesis and impaired vasculogenesis (fewer endothelial progenitor cells), likely related to the presence of anti-endothelial cell antibodies (AECA). Despite this impaired angiogenesis, elevated levels of pro-angiogenic growth factors such as PDGF and VEGF are often seen in persons with the condition. The balance of vasodilation and vasoconstriction becomes skewed, and the net result is vasoconstriction. The damaged endothelium then serves as a point of origin for blood-clot formation and further contributes to ischaemia-reperfusion injury and the generation of reactive oxygen species. These later stages are characterised by Th2 polarity.
The damaged endothelium upregulates adhesion molecules and chemokines to attract leucocytes, which enables the development of innate and adaptive immune responses, including loss of tolerance to various oxidised antigens, among them topoisomerase I. B cells mature into plasma cells, which furthers the autoimmune component of the condition. T cells differentiate into subsets, including Th2 cells, which play a vital role in tissue fibrosis. Anti–topoisomerase 1 antibodies, in turn, stimulate type I interferon production.
Fibroblasts are recruited and activated by multiple cytokines and growth factors to generate myofibroblasts. Dysregulated transforming growth factor β (TGF-β) signalling in fibroblasts and myofibroblasts has been observed in multiple studies of scleroderma-affected individuals. Activation of fibroblasts and myofibroblasts leads to excessive deposition of collagen and other related proteins, leading to fibrosis. B cells are implicated in this stage: IL-6 and TGF-β produced by B cells decrease collagen degradation and increase extracellular matrix production. Endothelin signalling is implicated in the pathophysiology of fibrosis.
Vitamin D is implicated in the pathophysiology of the disease. An inverse correlation between plasma levels of vitamin D and scleroderma severity has been noted, and vitamin D is known to play a crucial role in regulating (usually suppressing) the actions of the immune system.
Diagnosis
Typical scleroderma is classically defined as symmetrical skin thickening, with about 70% of cases also presenting with Raynaud's phenomenon, nail-fold capillary changes, and antinuclear antibodies. Affected individuals may experience systemic organ involvement. No single test for scleroderma works all of the time, hence diagnosis is often a matter of exclusion. Atypical scleroderma may show any variation of these changes without skin changes or with finger swelling only.
Laboratory testing can show antitopoisomerase antibodies, like anti-scl70 (associated with the diffuse systemic form), or anticentromere antibodies (associated with the limited systemic form and the CREST syndrome). Other autoantibodies can be seen, such as anti-U3 or anti-RNA polymerase.
Antidouble-stranded DNA autoantibodies are likely to be present in serum.
Differential
Diseases that are often in the differential include:
Eosinophilia is a condition in which too many eosinophils (a type of immune cell that attacks parasites and is involved in certain allergic reactions) are present in the blood.
Eosinophilia-myalgia syndrome is a form of eosinophilia caused by L-tryptophan supplements.
Eosinophilic fasciitis affects the connective tissue surrounding skeletal muscles, bones, blood vessels, and nerves in the arms and legs.
Graft-versus-host disease is an immune-mediated condition that occurs as a result of bone-marrow transplants, in which the immune cells from the transplanted bone marrow attack the host's body.
Mycosis fungoides is a type of cutaneous T cell lymphoma, a rare cancer that causes rashes all over the body.
Nephrogenic systemic fibrosis is a condition usually caused by kidney failure that results in fibrosis (thickening) of the tissues.
Primary biliary cirrhosis is an autoimmune disease of the liver.
Primary pulmonary hypertension
Complex regional pain syndrome
Classification
Scleroderma is characterised by the appearance of circumscribed or diffuse, hard, smooth, ivory-colored areas that are immobile and which give the appearance of hidebound skin, a disease occurring in both localised and systemic forms:
Localised scleroderma
Localised morphea
Morphea-lichen sclerosus et atrophicus overlap
Generalised morphea
Atrophoderma of Pasini and Pierini
Pansclerotic morphea
Morphea profunda
Linear scleroderma
Systemic scleroderma
CREST syndrome
Progressive systemic sclerosis
Treatment
No cure for scleroderma is known, although relief of symptoms is often achieved; these include treatment of:
Raynaud's phenomenon with vasodilators such as calcium channel blockers, alpha blockers, serotonin receptor antagonists, angiotensin II receptor inhibitors, statins, local nitrates or iloprost
Digital ulcers with phosphodiesterase 5 inhibitors (e.g., sildenafil) or iloprost
Prevention of new digital ulcers with bosentan
Malnutrition secondary to intestinal flora overgrowth, with tetracycline antibiotics
Interstitial lung disease with cyclophosphamide, azathioprine with or without corticosteroids
Pulmonary arterial hypertension with endothelin receptor antagonists, phosphodiesterase 5 inhibitors, and prostanoids
Gastrooesophageal reflux disease with antacids or prokinetics
Kidney crises with angiotensin converting enzyme inhibitors and angiotensin II receptor antagonists
Systemic disease-modifying treatment with immunosuppressants is often used. Immunosuppressants used in its treatment include azathioprine, methotrexate, cyclophosphamide, mycophenolate, intravenous immunoglobulin, rituximab, sirolimus, alefacept, and the tyrosine kinase inhibitors, imatinib, nilotinib, and dasatinib.
Experimental therapies under investigation include endothelin receptor antagonists, tyrosine kinase inhibitors, beta-glycan peptides, halofuginone, basiliximab, alemtuzumab, abatacept, and haematopoietic stem cell transplantation.
Prognosis
The five-year survival rate for systemic scleroderma is about 85%, whereas the 10-year survival rate is just under 70%. This varies according to the subtype; while localized scleroderma rarely results in death, the systemic form can, and the diffuse systemic form carries a worse prognosis than the limited form. The major scleroderma-related causes of death are pulmonary hypertension, pulmonary fibrosis, and scleroderma renal crisis. People with scleroderma are also at a heightened risk for developing osteoporosis and for contracting cancer (especially liver, lung, haematologic, and bladder cancers). Scleroderma is also associated with an increased risk of cardiovascular disease.
According to a study of an Australian cohort, between 1985 and 2015, the average life expectancy of a person with scleroderma increased from 66 years to 74 years (the average Australian life expectancy increased from 76 to 82 years in the same period).
Epidemiology
Scleroderma most commonly first presents between the ages of 20 and 50 years, although any age group can be affected. Women are four to nine times more likely to develop scleroderma than men.
This disease is found worldwide. In the United States, prevalence is estimated at 240 per million and the annual incidence of scleroderma is 19 per million people. Likewise in the United States, it is slightly more common in African Americans than in their white counterparts. Choctaw Native Americans are more likely than Americans of European descent to develop the type of scleroderma that affects internal organs. In Germany, the prevalence is between 10 and 150 per million people, and the annual incidence is between three and 28 per million people. In South Australia, the annual incidence is 23 per million people, and the prevalence 233 per million people.
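As a simple worked example of what such per-million figures mean in absolute terms, the sketch below applies the quoted US prevalence and incidence to a hypothetical city of one million people; the city size is an illustrative assumption, not a figure from the text:

```python
# Illustrative only: the rates are those quoted above for the United States;
# the city size is a hypothetical assumption.

prevalence_per_million = 240   # existing cases per million people
incidence_per_million = 19     # new cases per million people per year
city_population = 1_000_000

existing_cases = prevalence_per_million * city_population / 1_000_000
new_cases_per_year = incidence_per_million * city_population / 1_000_000

print(f"expected existing cases: {existing_cases:.0f}")
print(f"expected new cases per year: {new_cases_per_year:.0f}")
```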
Pregnancy
Scleroderma in pregnancy is a complex situation; it increases the risk to both mother and child. Overall, scleroderma is associated with reduced fetal weight for gestational age. The treatment for scleroderma often includes known teratogens such as cyclophosphamide, methotrexate, mycophenolate, etc., so careful avoidance of such drugs during pregnancy is advised. In these cases hydroxychloroquine and low-dose corticosteroids might be used for disease control.
See also
Congenital fascial dystrophy
Chi Chi DeVayne (developed scleroderma in the years leading up to her death)
References
External links
Handout on Health: Scleroderma – US National Institute of Arthritis and Musculoskeletal and Skin Diseases
Autoimmune diseases
Mucinoses
Rare diseases
Systemic connective tissue disorders
Pyrophobia
Pyrophobia is the fear of fire, which can be considered irrational when it goes beyond what is considered a normal, adaptive response. This phobia is ancient and primordial, perhaps dating back to humanity's discovery of fire. The term refers to humans' comprehensible reaction to fire itself; the fear of fire in other animals is not considered pyrophobia, as they are thought not to understand fire beyond its general danger.
Signs and symptoms
When witnessing fire or smoke (even if the fire poses no threat, such as a candle), suspecting a fire is nearby, or (in some cases) visualizing fires, pyrophobes exhibit typical psychological and physiological symptoms of fear and panic: acute stress, fast heartbeat, shortness of breath, tightness in chest, sweating, nausea, shaking or trembling, dry mouth, needing to go to the bathroom, dizziness and/or fainting. A pyrophobe may also attempt to avoid or flee from fires, and avoid situations where harmless fire may be present (such as a barbecue or a campfire). The severity of pyrophobia can range from inconvenient to disturbing a person's daily functioning.
Causes
The most common cause of pyrophobia is that fire poses a potential threat to life and safety (a response also seen in animals). However, people who are intensely pyrophobic cannot even get close to, or tolerate, a small controlled fire such as a fireplace, bonfire or lit candle. In many cases, a bad childhood experience with fire may have triggered the condition.
Treatment
Exposure therapy is the most common way to treat pyrophobia. This method involves showing patients fires in order of increasing size, from a lit cigarette up to a stove or grill flame.
Another method of treatment is talk therapy, in which a patient tells a therapist about the cause of the fear. This can calm the patient and make them less afraid of controlled fire.
People can also relieve pyrophobia by interacting with other pyrophobes and sharing the experiences that caused their fear. Alternatively, pyrophobia can be treated using hypnosis.
Medication can also be used to treat pyrophobia, although because of its side effects this method is not highly recommended.
See also
List of phobias
Pyromania, its exact opposite
References
Environmental phobias
Fire
Otitis media
Otitis media is a group of inflammatory diseases of the middle ear. One of the two main types is acute otitis media (AOM), an infection of rapid onset that usually presents with ear pain. In young children this may result in pulling at the ear, increased crying, and poor sleep. Decreased eating and a fever may also be present. The other main type is otitis media with effusion (OME), typically not associated with symptoms, although occasionally a feeling of fullness is described; it is defined as the presence of non-infectious fluid in the middle ear which may persist for weeks or months, often after an episode of acute otitis media. Chronic suppurative otitis media (CSOM) is middle ear inflammation that results in a perforated tympanic membrane with discharge from the ear for more than six weeks. It may be a complication of acute otitis media. Pain is rarely present. All three types of otitis media may be associated with hearing loss. If children with hearing loss due to OME do not learn sign language, it may affect their ability to learn.
The cause of AOM is related to childhood anatomy and immune function. Either bacteria or viruses may be involved. Risk factors include exposure to smoke, use of pacifiers, and attending daycare. It occurs more commonly among indigenous Australians and those who have cleft lip and palate or Down syndrome. OME frequently occurs following AOM and may be related to viral upper respiratory infections, irritants such as smoke, or allergies. Looking at the eardrum is important for making the correct diagnosis. Signs of AOM include bulging or a lack of movement of the tympanic membrane from a puff of air. New discharge not related to otitis externa also indicates the diagnosis.
A number of measures decrease the risk of otitis media including pneumococcal and influenza vaccination, breastfeeding, and avoiding tobacco smoke. The use of pain medications for AOM is important. This may include paracetamol (acetaminophen), ibuprofen, benzocaine ear drops, or opioids. In AOM, antibiotics may speed recovery but may result in side effects. Antibiotics are often recommended in those with severe disease or under two years old. In those with less severe disease they may only be recommended in those who do not improve after two or three days. The initial antibiotic of choice is typically amoxicillin. In those with frequent infections tympanostomy tubes may decrease recurrence. In children with otitis media with effusion antibiotics may increase resolution of symptoms, but may cause diarrhoea, vomiting and skin rash.
Worldwide AOM affects about 11% of people a year (about 325 to 710 million cases). Half the cases involve children less than five years of age and it is more common among males. Of those affected about 4.8% or 31 million develop chronic suppurative otitis media. The total number of people with CSOM is estimated at 65–330 million people. Before the age of ten OME affects about 80% of children at some point. Otitis media resulted in 3,200 deaths in 2015 – down from 4,900 deaths in 1990.
Signs and symptoms
The primary symptom of acute otitis media is ear pain; other possible symptoms include fever, reduced hearing during periods of illness, tenderness on touch of the skin above the ear, purulent discharge from the ears, irritability, ear blocking sensation and diarrhea (in infants). Since an episode of otitis media is usually precipitated by an upper respiratory tract infection (URTI), there are often accompanying symptoms like a cough and nasal discharge. One might also experience a feeling of fullness in the ear.
Discharge from the ear can be caused by acute otitis media with perforation of the eardrum, chronic suppurative otitis media, tympanostomy tube otorrhea, or acute otitis externa. Trauma, such as a basilar skull fracture, can also lead to cerebrospinal fluid otorrhea (discharge of CSF from the ear) due to cerebral spinal drainage from the brain and its covering (meninges).
Causes
The common cause of all forms of otitis media is dysfunction of the Eustachian tube. This is usually due to inflammation of the mucous membranes in the nasopharynx, which can be caused by a viral upper respiratory tract infection (URTI), strep throat, or possibly by allergies.
By reflux or aspiration of unwanted secretions from the nasopharynx into the normally sterile middle-ear space, the fluid may then become infected – usually with bacteria. The virus that caused the initial upper respiratory infection can itself be identified as the pathogen causing the infection.
Diagnosis
As its typical symptoms overlap with other conditions, such as acute external otitis, symptoms alone are not sufficient to predict whether acute otitis media is present; clinical assessment has to be complemented by visualization of the tympanic membrane. Examiners may use a pneumatic otoscope with a rubber bulb attached to assess the mobility of the tympanic membrane. Other methods to diagnose otitis media include tympanometry, reflectometry, or a hearing test.
In more severe cases, such as those with associated hearing loss or high fever, audiometry, tympanogram, temporal bone CT and MRI can be used to assess for associated complications, such as mastoid effusion, subperiosteal abscess formation, bony destruction, venous thrombosis or meningitis.
Acute otitis media is diagnosed in children with moderate to severe bulging of the tympanic membrane or new onset of otorrhea (drainage) that is not due to external otitis. The diagnosis may also be made in children who have mild bulging of the ear drum and recent onset of ear pain (less than 48 hours) or intense erythema (redness) of the ear drum. To confirm the diagnosis, middle-ear effusion and inflammation of the eardrum (called myringitis or tympanitis) have to be identified; signs of these are fullness, bulging, cloudiness and redness of the eardrum. It is important to attempt to differentiate between acute otitis media and otitis media with effusion (OME), as antibiotics are not recommended for OME. It has been suggested that bulging of the tympanic membrane is the best sign to differentiate AOM from OME, with a bulging of the membrane suggesting AOM rather than OME.
Viral otitis may result in blisters on the external side of the tympanic membrane, which is called bullous myringitis (myringa being Latin for "eardrum"). However, sometimes even examination of the eardrum may not be able to confirm the diagnosis, especially if the canal is small. If wax in the ear canal obscures a clear view of the eardrum it should be removed using a blunt cerumen curette or a wire loop. Also, an upset young child's crying can cause the eardrum to look inflamed due to distension of the small blood vessels on it, mimicking the redness associated with otitis media.
Acute otitis media
The most common bacteria isolated from the middle ear in AOM are Streptococcus pneumoniae, Haemophilus influenzae, Moraxella catarrhalis, and Staphylococcus aureus.
Otitis media with effusion
Otitis media with effusion (OME), also known as serous otitis media (SOM) or secretory otitis media (SOM), and colloquially referred to as 'glue ear', is fluid accumulation that can occur in the middle ear and mastoid air cells due to negative pressure produced by dysfunction of the Eustachian tube. This can be associated with a viral upper respiratory infection (URI) or a bacterial infection such as otitis media. An effusion can cause conductive hearing loss if it interferes with the transmission of sound-wave vibrations through the middle-ear bones to the vestibulocochlear nerve complex.
Early-onset OME is associated with feeding of infants while lying down, early entry into group child care, parental smoking, lack or too short a period of breastfeeding, and greater amounts of time spent in group child care, particularly those with a large number of children. These risk factors increase the incidence and duration of OME during the first two years of life.
Chronic suppurative otitis media
Chronic suppurative otitis media (CSOM) is a long-term middle ear inflammation causing persistent ear discharge due to a perforated eardrum. It often follows an unresolved upper respiratory infection leading to acute otitis media. Prolonged inflammation leads to middle ear swelling, ulceration, perforation, and attempts at repair with granulation tissue and polyps. This can worsen discharge and inflammation, potentially developing into CSOM, often associated with cholesteatoma. Symptoms may include ear discharge or pus seen only on examination. Hearing loss is common. Risk factors include poor eustachian tube function, recurrent ear infections, crowded living, daycare attendance, and certain craniofacial malformations.
Worldwide approximately 11% of the human population is affected by AOM every year, or 709 million cases. About 4.4% of the population develop CSOM.
According to the World Health Organization, CSOM is a primary cause of hearing loss in children. Adults with recurrent episodes of CSOM have a higher risk of developing permanent conductive and sensorineural hearing loss.
In Britain, 0.9% of children and 0.5% of adults have CSOM, with no difference between the sexes. The prevalence of CSOM varies dramatically across the world: high-income countries have a relatively low prevalence, while in low-income countries it may be up to three times as great. Each year, 21,000 people worldwide die due to complications of CSOM.
Adhesive otitis media
Adhesive otitis media occurs when a thin retracted ear drum becomes sucked into the middle-ear space and stuck (i.e., adherent) to the ossicles and other bones of the middle ear.
Prevention
AOM is far less common in breastfed infants than in formula-fed infants, and the greatest protection is associated with exclusive breastfeeding (no formula use) for the first six months of life. A longer duration of breastfeeding is correlated with a longer protective effect.
Pneumococcal conjugate vaccines (PCV) in early infancy decrease the risk of acute otitis media in healthy infants. PCV is recommended for all children, and, if implemented broadly, PCV would have a significant public health benefit. Influenza vaccination in children appears to reduce rates of AOM by 4% and the use of antibiotics by 11% over 6 months. However, the vaccine resulted in increased adverse-effects such as fever and runny nose. The small reduction in AOM may not justify the side effects and inconvenience of influenza vaccination every year for this purpose alone. PCV does not appear to decrease the risk of otitis media when given to high-risk infants or for older children who have previously experienced otitis media.
Risk factors such as season, allergy predisposition and presence of older siblings are known to be determinants of recurrent otitis media and persistent middle-ear effusions (MEE). History of recurrence, environmental exposure to tobacco smoke, use of daycare, and lack of breastfeeding have all been associated with increased risk of development, recurrence, and persistent MEE. Pacifier use has been associated with more frequent episodes of AOM.
Long-term antibiotics, while they decrease rates of infection during treatment, have an unknown effect on long-term outcomes such as hearing loss. This method of prevention has been associated with emergence of undesirable antibiotic-resistant otitic bacteria.
There is moderate evidence that the sugar substitute xylitol may reduce infection rates in healthy children who go to daycare.
Evidence does not support zinc supplementation as an effort to reduce otitis rates except maybe in those with severe malnutrition such as marasmus.
Probiotics do not show evidence of preventing acute otitis media in children.
Management
Oral and topical pain killers are the mainstay for the treatment of pain caused by otitis media. Oral agents include ibuprofen, paracetamol (acetaminophen), and opiates. A 2023 review found evidence for the effectiveness of single or combinations of oral pain relief in acute otitis media is lacking. Topical agents shown to be effective include antipyrine and benzocaine ear drops. Decongestants and antihistamines, either nasal or oral, are not recommended due to the lack of benefit and concerns regarding side effects. Half of cases of ear pain in children resolve without treatment in three days and 90% resolve in seven or eight days. The use of steroids is not supported by the evidence for acute otitis media.
Antibiotics
Use of antibiotics for acute otitis media has benefits and harms. As over 82% of acute episodes settle without treatment, about 20 children must be treated to prevent one case of ear pain, 33 children to prevent one perforation, and 11 children to prevent one opposite-side ear infection. For every 14 children treated with antibiotics, one child has an episode of vomiting, diarrhea or a rash. Analgesics may relieve pain, if present. For people requiring surgery to treat otitis media with effusion, preventative antibiotics may not help reduce the risk of post-surgical complications.
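The treatment-benefit figures above are numbers needed to treat (NNT), which follow from the absolute risk reduction between untreated and treated groups (NNT = 1 / absolute risk reduction). The Python sketch below illustrates the arithmetic with hypothetical event rates chosen only to reproduce the rounded figures quoted above; they are not data from the underlying trials.

```python
# Illustrative NNT/NNH arithmetic (hypothetical rates, not trial data).

def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    absolute_risk_reduction = control_event_rate - treated_event_rate
    return 1.0 / absolute_risk_reduction

# Hypothetical example: if ear pain persisted in 15% of untreated children but
# only 10% of treated children, the absolute risk reduction would be 5 percentage
# points, giving an NNT of about 20, matching the rounded figure quoted above.
print(round(number_needed_to_treat(0.15, 0.10)))   # ~20 children treated to prevent one case of pain

# The number needed to harm works the same way with harm rates: if roughly 7%
# more treated children experience vomiting, diarrhea, or a rash, 1 / 0.07 is
# about 14, matching the quoted one-in-fourteen figure.
print(round(1 / 0.07))                             # ~14
```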
For bilateral acute otitis media in infants younger than 24 months, there is evidence that the benefits of antibiotics outweigh the harms. A 2015 Cochrane review concluded that watchful waiting is the preferred approach for children over six months with non-severe acute otitis media.
Most children older than six months of age who have acute otitis media do not benefit from treatment with antibiotics. If antibiotics are used, a narrow-spectrum antibiotic like amoxicillin is generally recommended, as broad-spectrum antibiotics may be associated with more adverse events. If there is resistance, or if amoxicillin has been used in the last 30 days, then amoxicillin-clavulanate or another penicillin derivative plus a beta-lactamase inhibitor is recommended. Taking amoxicillin once a day may be as effective as taking it twice or three times a day. While courses of antibiotics shorter than seven days have fewer side effects, courses longer than seven days appear to be more effective. If there is no improvement after 2–3 days of treatment, a change in therapy may be considered. Azithromycin appears to have fewer side effects than either high-dose amoxicillin or amoxicillin/clavulanate.
Tympanostomy tube
Tympanostomy tubes (also called "grommets") are recommended for children with three or more episodes of acute otitis media in six months, or four or more in a year with at least one episode in the preceding six months. There is tentative evidence that children with recurrent acute otitis media (AOM) who receive tubes have a modest improvement in the number of further AOM episodes (around one fewer episode at six months, and less of an improvement at 12 months after the tubes are inserted). Evidence does not support an effect on long-term hearing or language development. A common complication of having a tympanostomy tube is otorrhea, which is a discharge from the ear. The risk of persistent tympanic membrane perforation after children have grommets inserted may be low. It is still uncertain whether grommets are more effective than a course of antibiotics.
Oral antibiotics should not be used to treat uncomplicated acute tympanostomy tube otorrhea. They do not adequately cover the bacteria that cause this condition and have side effects, including an increased risk of opportunistic infection. In contrast, topical antibiotic eardrops are useful.
Otitis media with effusion
The decision to treat is usually made after a combination of physical exam and laboratory diagnosis, with additional testing including audiometry, tympanogram, temporal bone CT, and MRI. Decongestants, glucocorticoids, and topical antibiotics are generally not effective as treatment for non-infectious, or serous, causes of middle ear effusion. Moreover, the use of antihistamines and decongestants is not recommended in children with OME. In less severe cases, or those without significant hearing impairment, the effusion can resolve spontaneously or with more conservative measures such as autoinflation. In more severe cases, tympanostomy tubes can be inserted, possibly with adjuvant adenoidectomy, which shows a significant benefit for the resolution of middle ear effusion in children with OME.
Chronic suppurative otitis media
Topical antibiotics are of uncertain benefit as of 2020. Some evidence suggests that topical antibiotics may be useful either alone or with antibiotics by mouth. Antiseptics are of unclear effect. Topical antibiotics (quinolones) are probably better at resolving ear discharge than antiseptics.
Alternative medicine
Complementary and alternative medicine is not recommended for otitis media with effusion because there is no evidence of benefit. Homeopathic treatments have not been proven to be effective for acute otitis media in a study with children. An osteopathic manipulation technique called the Galbreath technique was evaluated in one randomized controlled clinical trial; one reviewer concluded that it was promising, but a 2010 evidence report found the evidence inconclusive.
Outcomes
Complications of acute otitis media consist of perforation of the ear drum, infection of the mastoid space behind the ear (mastoiditis), and, more rarely, intracranial complications such as bacterial meningitis, brain abscess, or dural sinus thrombosis. It is estimated that each year 21,000 people die due to complications of otitis media.
Membrane rupture
In severe or untreated cases, the tympanic membrane may perforate, allowing the pus in the middle-ear space to drain into the ear canal. If there is enough pus, this drainage may be obvious. Even though perforation of the tympanic membrane suggests a highly painful and traumatic process, it is almost always associated with dramatic relief of pressure and pain. In a simple case of acute otitis media in an otherwise healthy person, the body's defenses are likely to resolve the infection and the ear drum nearly always heals.
An option for severe acute otitis media in which analgesics are not controlling ear pain is to perform a tympanocentesis, i.e., needle aspiration through the tympanic membrane to relieve the ear pain and to identify the causative organism(s).
Hearing loss
Children with recurrent episodes of acute otitis media and those with otitis media with effusion or chronic suppurative otitis media have higher risks of developing conductive and sensorineural hearing loss. Globally approximately 141 million people have mild hearing loss due to otitis media (2.1% of the population). This is more common in males (2.3%) than females (1.8%).
This hearing loss is mainly due to fluid in the middle ear or rupture of the tympanic membrane. Prolonged duration of otitis media is associated with ossicular complications and, together with persistent tympanic membrane perforation, contributes to the severity of the disease and hearing loss. When a cholesteatoma or granulation tissue is present in the middle ear, the degree of hearing loss and ossicular destruction is even greater.
Periods of conductive hearing loss from otitis media may have a detrimental effect on speech development in children. Some studies have linked otitis media to learning problems, attention disorders, and problems with social adaptation. Furthermore, it has been demonstrated that individuals with otitis media have more depression- and anxiety-related disorders compared to individuals with normal hearing. Even after the infections resolve and hearing thresholds return to normal, childhood otitis media may have caused minor and irreversible damage to the middle ear and cochlea. More research is needed on the importance of screening all children under four years old for otitis media with effusion.
Epidemiology
Acute otitis media is very common in childhood. It is the most common condition for which medical care is provided in children under five years of age in the US. Acute otitis media affects 11% of people each year (709 million cases), with half occurring in those below five years of age. Chronic suppurative otitis media affects about 5% of these cases, or 31 million people, with 22.6% of cases occurring annually in children under the age of five years. Otitis media resulted in 2,400 deaths in 2013, down from 4,900 deaths in 1990.
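The case counts in this paragraph are simple prevalence-times-population products. As a rough check, the Python sketch below reproduces them; the world population figure is an assumption for illustration, since the article does not state the denominator used by the underlying studies.

```python
# Converting prevalence figures to case counts (illustrative; assumed population).

world_population = 6.4e9      # assumption: approximate global population for the study period
aom_prevalence = 0.11         # 11% affected by acute otitis media each year

aom_cases = aom_prevalence * world_population
print(f"AOM cases: {aom_cases / 1e6:.0f} million")           # ~704 million, close to the cited 709 million

csom_cases = 31e6             # cited count for chronic suppurative otitis media
print(f"CSOM share of AOM cases: {csom_cases / 709e6:.1%}")  # ~4.4%, consistent with the rounded 'about 5%'
```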
Australian Aboriginal people experience a high level of conductive hearing loss, largely due to the very high incidence of middle ear disease among the young in Aboriginal communities. Aboriginal children experience middle ear disease for two and a half years on average during childhood, compared with three months for non-Indigenous children. If untreated, it can leave a permanent legacy of hearing loss. The higher incidence of deafness in turn contributes to poor social, educational, and emotional outcomes for the children concerned. As they grow into adults, such children are also more likely to experience employment difficulties and to become caught up in the criminal justice system. Research in 2012 revealed that nine out of ten Aboriginal prison inmates in the Northern Territory had significant hearing loss.
Andrew Butcher speculates that the lack of fricatives and the unusual segmental inventories of Australian languages may be due to the very high prevalence of otitis media and resulting hearing loss in their populations. People with hearing loss often have trouble distinguishing different vowels and hearing fricatives and voicing contrasts. Australian Aboriginal languages thus seem to show similarities to the speech of people with hearing loss, and avoid those sounds and distinctions which are difficult for people with early-childhood hearing loss to perceive. At the same time, Australian languages make full use of those distinctions, namely place-of-articulation distinctions, which people with otitis media-related hearing loss can perceive more easily. This hypothesis has been challenged on historical, comparative, statistical, and medical grounds.
Etymology
The term otitis media is composed of otitis, Ancient Greek for "inflammation of the ear", and media, Latin for "middle".
References
External links
Otitis
Diseases of middle ear and mastoid
Pediatrics
Audiology
Otorhinolaryngology
Otology | 0.764422 | 0.997431 | 0.762459 |
Signs and symptoms of cancer | Cancer symptoms are changes in the body caused by the presence of cancer. They are usually caused by the effect of a cancer on the part of the body where it is growing, although the disease can cause more general symptoms such as weight loss or tiredness. There are more than 100 different types of cancer with a wide range of signs and symptoms which can manifest in different ways.
Signs and Symptoms
Cancer is a group of diseases involving abnormal cell growth with the potential to invade or spread to other parts of the body. Cancer can be difficult to diagnose because its signs and symptoms are often nonspecific, meaning they may be general phenomena that do not point directly to a specific disease process.
In medicine, a sign is an objective piece of data that can be measured or observed, such as a high body temperature (fever), a rash, or a bruise. A symptom, by contrast, is a subjective experience that may signify a disease, illness, or injury, such as pain, dizziness, or fatigue. Signs and symptoms are not mutually exclusive; for example, a subjective feeling of fever can be confirmed as a sign by a thermometer that registers a high reading.
Because many symptoms of cancer are gradual in onset and general in nature, cancer screening (also called cancer surveillance) is a key public health priority. This may include laboratory work, physical examinations, tissue samples, or diagnostic imaging tests that a community of experts recommends be conducted at set intervals for particular populations. Screenings can identify cancers before symptoms develop, or early in the disease course. Certain cancers can be prevented with vaccines against the viruses that cause them (e.g., HPV vaccines as prevention against cervical cancer).
Additionally, patient education about worrisome symptoms that require further evaluation is paramount to reduce morbidity and mortality from cancer. Symptoms that cause excess worry, symptoms that persist or are unexplained, and/or the appearance of several symptoms together particularly warrant evaluation by a health professional.
Cancer Signs and Symptoms
Mechanisms
Cancer may produce symptoms in one or more of the following ways:
Mass effect: An abnormal growth of tissue, or tumor, may compress nearby structures, causing pain, inflammation, or disruption of function. Not all cancers produce solid tumors. Even benign tumors (those that do not metastasize, or spread to other tissues) may have serious consequences if they appear in dangerous places, particularly the heart or brain. Small bowel obstruction caused by the growth of a tumor in the digestive system is another example of a 'space-occupying' consequence of cancer.
Loss of Function: Tumor cells may deplete normal cells of oxygen and nutrients, thus disrupting the function of a vital organ. Many tumors stimulate new blood vessel formation which serves to supply the tumor rather than the normal, healthy tissue. The abnormal function of cancer cells and reduced function of normal cells in a given organ may lead to organ failure.
Increased Lactate Production: Under the Warburg effect, cancer cells in the presence of oxygen and glucose take a different path of energy production, favoring glycolysis and lactate production and diverting metabolic intermediates toward biomass production to support tumor growth. This unique metabolism of cancer cells opens doors for possible cancer treatments, including targeting lactate dehydrogenase and TCA-intermediate production.
Paraneoplastic Syndromes: Some cancers produce "ectopic" hormones, particularly when tumors arise from neuroendocrine cells, causing a variety of endocrine imbalances. Examples include the production of parathyroid hormones by parathyroid tumors or serotonin by carcinoid tumors. In these cases, the cell types that produce these active small molecules proliferate malignantly and lose their responsiveness to negative feedback. Because hormones operate on tissues far from the site of production, paraneoplastic signs and symptoms may appear far from the tumor of origin.
Venous Thromboembolism: Patients with certain types of cancers are at increased risk of blood clots due to excess production of clotting factors. These clots may disrupt circulation locally or dislodge and travel to the heart, lungs, or brain, and may be fatal. Symptoms of blood clots may include pain, swelling, warmth and in late stages, numbness, particularly in the arms and legs. Some cancer treatments may further increase this risk.
Effusions: Cancers may stimulate fluid shifts in the body and lead to extracellular collections of fluid. Breast and lung cancer, for example, often cause pleural effusions, or a buildup of fluid in the lining of the lungs. Abdominal cancers, including ovarian and uterine cancers, may cause fluid buildup in the abdominal cavity.
Suspicious Symptoms
Symptoms of cancer may be nonspecific changes to the individual's sense of physical well-being (constitutional symptoms), or may localize to a particular organ system or anatomic region.
The following symptoms may be manifestations of an underlying cancer. Alternatively, they may point to non-cancerous disease processes, benign tumors, or even be within the physiological range of normal. They may appear at the primary site of cancer or be symptoms of cancer metastasis, or spread. Further workup by a trained healthcare professional is required to diagnose cancer.
Constitutional Symptoms
Unexplained weight loss: Weight loss that is unintended and not explained by diet, exercise, or other illness may be a warning sign of many types of cancer.
Unexplained pain: Pain that persists, has no clear cause, and does not respond to treatment may be a warning sign of many types of cancer.
Unexplained tiredness or fatigue: Unusual and persistent tiredness may point to underlying illness, including blood cell cancers such as leukemia or lymphoma.
Unexplained night sweats or fever: These may be signs of an immune system cancer. Fever in children rarely points to malignancy, but may merit evaluation.
Local Symptoms
Cancer Signs: Medical Workup
A health professional may pursue a formal diagnostic workup to evaluate symptoms of cancer. The tests ordered will depend upon the type of cancer suspected. These may include the following:
Basic Metabolic Panel
Barium enema
Biopsy
Bone scan
Bone marrow aspiration and biopsy
Breast MRI
Colonoscopy, Sigmoidoscopy, and/or Endoscopy
Complete Blood Count and/or Peripheral Blood Smear
Computed Tomography (CT) Scan
Digital Rectal Exam
Electrocardiogram (EKG) and Echocardiogram
Fecal Occult Blood Tests
Magnetic Resonance Imaging (MRI)
Mammogram
MUGA Scan
Pap Test
Positron Emission Tomography (PET) Scan
Tumor Marker Tests
Ultrasound
Treatment-Related and Secondary Symptoms
Cancer treatments may include surgery, chemotherapy, radiation therapy, hormonal therapy, targeted therapy (including immunotherapy such as monoclonal antibody therapy) and synthetic lethality, most commonly applied as a series of separate treatments (e.g., chemotherapy before surgery). Some of these cancer therapies may produce treatment-related, or secondary, symptoms, including:
Pain
Cancer pain may be caused by the tumor itself compressing nearby structures, impinging on nerves, or causing an inflammatory response. It may also be caused by therapies such as radiation or chemotherapy. With competent management, cancer pain can be eliminated or well controlled in 80% to 90% of cases, but nearly 50% of cancer patients in the developed world receive less than optimal care. Worldwide, nearly 80% of people with cancer receive little or no pain medication. Cancer pain in children and in people with intellectual disabilities is also reported as being under-treated.
Infection
Deep Vein Thrombosis
Pulmonary Embolism
Tumor Lysis Syndrome
Muscle Aches
Symptoms that require immediate treatment include:
Fever that is 100.4 °F (38 °C) or higher
Shaking chills
Chest pain or shortness of breath
Confusion
Severe headache with a stiff neck
Bloody urine
References
External links
Symptoms and signs
Oncology | 0.779242 | 0.978434 | 0.762437 |
Cytokine release syndrome | In immunology, cytokine release syndrome (CRS) is a form of systemic inflammatory response syndrome (SIRS) that can be triggered by a variety of factors such as infections and certain drugs. It refers to cytokine storm syndromes (CSS) and occurs when large numbers of white blood cells are activated and release inflammatory cytokines, which in turn activate yet more white blood cells. CRS is also an adverse effect of some monoclonal antibody medications, as well as adoptive T-cell therapies. When occurring as a result of a medication, it is also known as an infusion reaction.
The term cytokine storm is often used interchangeably with CRS but, despite the fact that they have a similar clinical phenotype, their characteristics are different. When CRS occurs as a result of a therapy, symptoms may be delayed until days or weeks after treatment. Immediate-onset CRS is a cytokine storm, although severe cases of CRS have also been called cytokine storms.
Signs and symptoms
Symptoms include fever that tends to fluctuate, fatigue, loss of appetite, muscle and joint pain, nausea, vomiting, diarrhea, rashes, fast breathing, rapid heartbeat, low blood pressure, seizures, headache, confusion, delirium, hallucinations, tremor, and loss of coordination.
Lab tests and clinical monitoring show low blood oxygen, widened pulse pressure, increased cardiac output (early), potentially diminished cardiac output (late), high levels of nitrogen compounds in the blood, elevated D-dimer, elevated transaminases, factor I deficiency and excessive bleeding, higher-than-normal level of bilirubin.
Cause
CRS occurs when large numbers of white blood cells, including B cells, T cells, natural killer cells, macrophages, dendritic cells, and monocytes are activated and release inflammatory cytokines, which activate more white blood cells in a positive feedback loop of pathogenic inflammation. Immune cells are activated by stressed or infected cells through receptor-ligand interactions.
This can occur when the immune system is fighting pathogens, as cytokines produced by immune cells recruit more effector immune cells such as T-cells and inflammatory monocytes (which differentiate into macrophages) to the site of inflammation or infection. In addition, pro-inflammatory cytokines binding their cognate receptor on immune cells results in activation and stimulation of further cytokine production.
Adoptive cell transfer of autologous T-cells modified with chimeric antigen receptors (CAR-T cell therapy) also causes CRS. Serum samples of patients with CAR-T-associated CRS have elevated levels of IL-6, IFN-γ, IL-8 (CXCL8), IL-10, GM-CSF, MIP-1α/β, MCP-1 (CCL2), CXCL9, and CXCL10 (IP-10). The most predictive biomarkers of CRS at 36 hours after CAR-T infusion are a fever ≥38.9 °C (102 °F) and elevated levels of MCP-1 in serum. Many of the cytokines elevated in CRS are produced not by the CAR-T cells themselves but by myeloid cells that are pathogenically licensed through T-cell-mediated activating mechanisms. For example, in vitro co-culture experiments have demonstrated that IL-6, MCP-1, and MIP-1 are produced not by CAR-T cells but by inflammatory myeloid-lineage cells. In vivo models have demonstrated that NSG mice (NOD/SCID/γ-chain-deficient), which have defects of both the lymphocyte and myeloid lineage compartments, do not develop CRS after CAR-T cell infusion.
In addition to adoptive T-cell therapies, severe CRS or cytokine reactions can occur in a number of infectious and non-infectious diseases including graft-versus-host disease (GVHD), coronavirus disease 2019 (COVID-19), acute respiratory distress syndrome (ARDS), sepsis, Ebola, avian influenza, smallpox, and systemic inflammatory response syndrome (SIRS).
Although severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is sufficiently cleared by the early acute phase anti-viral response in most individuals, some progress to a hyperinflammatory condition, often with life-threatening pulmonary involvement. This systemic hyperinflammation results in inflammatory lymphocytic and monocytic infiltration of the lung and the heart, causing ARDS and cardiac failure. Patients with fulminant COVID-19 and ARDS have classical serum biomarkers of CRS including elevated CRP, LDH, IL-6, and ferritin.
Hemophagocytic lymphohistiocytosis and Epstein-Barr virus-related hemophagocytic lymphohistiocytosis are caused by extreme elevations in cytokines and can be regarded as one form of severe cytokine release syndrome.
Medications
Cytokine release syndrome may also be induced by certain medications, such as the CD20 antibody rituximab and the CD19-directed CAR-T cell therapy tisagenlecleucel. The experimental drug TGN1412, also known as Theralizumab, caused extremely serious symptoms when given to six participants in a Phase I trial. A controlled and limited CRS is triggered by active fever therapy with mixed bacterial vaccines (MBV) according to Coley; it is used for oncological and certain chronic diseases. CRS has also arisen with biotherapeutics such as COVID-19 vaccines (Frontiers of Immunology 2022 13: 967226) and monoclonal antibodies intended to suppress or activate the immune system through receptors on white blood cells. Muromonab-CD3, an anti-CD3 monoclonal antibody intended to suppress the immune system to prevent rejection of organ transplants; alemtuzumab, which is anti-CD52 and used to treat blood cancers as well as multiple sclerosis and in organ transplants; and rituximab, which is anti-CD20 and used to treat blood cancers and autoimmune disorders, all cause CRS.
Diagnosis
CRS needs to be distinguished from the symptoms of the disease itself and, in the case of drugs, from other adverse effects; for example, tumor lysis syndrome requires different interventions. As of 2015, differential diagnosis depended on the judgement of the physician, as there were no objective tests.
Classification
CRS is a form of systemic inflammatory response syndrome and is an adverse effect of some drugs.
The Common Terminology Criteria for Adverse Events classifications for CRS as of version 4.03 issued in 2010 were:
Prevention
Severe CRS caused by some drugs can be prevented by using lower doses, infusing slowly, and administering anti-histamines or corticosteroids before and during administration of the drug.
In vitro assays have been developed to understand the risk that pre-clinical drug candidates might cause CRS and guide dosing for Phase I trials, and regulatory agencies expect to see results of such tests in investigational new drug applications.
A modified Chandler loop model can be used as a preclinical tool to assess infusion reactions.
Management
Treatment for less severe CRS is supportive, addressing the symptoms like fever, muscle pain, or fatigue. Moderate CRS requires oxygen therapy and giving fluids and antihypotensive agents to raise blood pressure. For moderate to severe CRS, the use of immunosuppressive agents like corticosteroids may be necessary, but judgment must be used to avoid negating the effect of drugs intended to activate the immune system.
Tocilizumab, an anti-IL-6 monoclonal antibody, was FDA approved for steroid-refractory CRS based on retrospective case study data.
Lenzilumab, an anti-GM-CSF monoclonal antibody, is also clinically proven to be effective at managing cytokine release by reducing activation of myeloid cells and decreasing the production of IL-1, IL-6, MCP-1, MIP-1, and IP-10. Additionally, as a soluble cytokine blockade, it will not increase serum levels of GM-CSF (a phenomenon seen with tocilizumab and IL-6).
Although frequently used to treat severe CRS in people with ARDS, corticosteroids and NSAIDs have been evaluated in clinical trials and have shown no effect on lung mechanics, gas exchange, or beneficial outcome in early established ARDS.
Epidemiology
Severe CRS is rare. Minor and moderate CRS are common side effects of immune-modulating antibody therapies and CAR-T therapies.
Research
Key therapeutic targets for abrogating hyperinflammation in CRS are IL-1, IL-6, and GM-CSF. An in vivo model found that GM-CSF knockout CAR-T cells do not induce CRS in mice. However, IL-1 knockout and IL-6 knockout hosts (whose myeloid cells are deficient in IL-1 and IL-6, respectively) were still susceptible to CRS after the administration of wild-type CAR-T cells. It is thought this may be because IL-1 and IL-6 are myeloid-derived cytokines that act too far downstream in the inflammatory cascade for their blockade alone to prevent CRS. Moreover, while tocilizumab (an anti-IL-6R monoclonal antibody) may have an anti-inflammatory and antipyretic effect, it has been shown to increase serum levels of IL-6 by saturating the receptor, thus driving the cytokine across the blood-brain barrier (BBB) and worsening neurotoxicity. Monoclonal antibody blockade of GM-CSF with lenzilumab has been demonstrated to protect mice from CAR-T-associated CRS and neurotoxicity while maintaining anti-leukemic efficacy.
See also
Macrophage activation syndrome
sHLH
References
Immune system disorders
Syndromes affecting immunity
Immunology | 0.76791 | 0.992866 | 0.762432 |
Preventive healthcare | Preventive healthcare, or prophylaxis, is the application of healthcare measures to prevent diseases. Disease and disability are affected by environmental factors, genetic predisposition, disease agents, and lifestyle choices, and are dynamic processes that begin before individuals realize they are affected. Disease prevention relies on anticipatory actions that can be categorized as primal, primary, secondary, and tertiary prevention.
Each year, millions of people die of preventable causes. A 2004 study showed that about half of all deaths in the United States in 2000 were due to preventable behaviors and exposures. Leading causes included cardiovascular disease, chronic respiratory disease, unintentional injuries, diabetes, and certain infectious diseases. The same study estimated that 400,000 people die each year in the United States due to poor diet and a sedentary lifestyle. According to estimates made by the World Health Organization (WHO), about 55 million people died worldwide in 2011, and two-thirds of these died from non-communicable diseases, including cancer, diabetes, and chronic cardiovascular and lung diseases. This is an increase from the year 2000, during which 60% of deaths were attributed to these diseases.
Preventive healthcare is especially important given the worldwide rise in the prevalence of chronic diseases and deaths from these diseases. There are many methods for prevention of disease; one example is the prevention of teenage smoking through education. It is recommended that adults and children visit their doctor for regular check-ups, even if they feel healthy, to perform disease screening, identify risk factors for disease, discuss tips for a healthy and balanced lifestyle, stay up to date with immunizations and boosters, and maintain a good relationship with a healthcare provider. In pediatrics, common examples of primary prevention include encouraging parents to turn down the temperature of their home water heater in order to avoid scalding burns, encouraging children to wear bicycle helmets, and suggesting that people use the air quality index (AQI) to check the level of outdoor air pollution before engaging in sporting activities. Common disease screenings include checking for hypertension (high blood pressure), hyperglycemia (high blood sugar, a risk factor for diabetes mellitus), hypercholesterolemia (high blood cholesterol), depression, HIV and other common sexually transmitted infections such as chlamydia, syphilis, and gonorrhea, as well as mammography (to screen for breast cancer), colorectal cancer screening, a Pap test (to check for cervical cancer), and screening for osteoporosis. Genetic testing can also be performed to screen for mutations that cause genetic disorders or predisposition to certain diseases such as breast or ovarian cancer. However, these measures are not affordable for every individual, and the cost-effectiveness of preventive healthcare remains a topic of debate.
Overview
Preventive healthcare strategies are described as taking place at the primal, primary, secondary, and tertiary prevention levels.
Preventive medicine was advocated in the early twentieth century by Sara Josephine Baker. In the 1940s, Hugh R. Leavell and E. Gurney Clark, working at the Harvard and Columbia University Schools of Public Health respectively, coined the term primary prevention; they later expanded the levels to include secondary and tertiary prevention. Goldston (1987) notes that these levels might be better described as "prevention, treatment, and rehabilitation", although the terms primary, secondary, and tertiary prevention are still in use today. The concept of primal prevention was created much more recently, in relation to new developments in molecular biology over the last fifty years, more particularly in epigenetics, which point to the paramount importance of environmental conditions, both physical and affective, on the organism during its fetal and newborn life, the so-called primal period of life.
Primal and primordial preventions
Primal prevention is health promotion par excellence. New knowledge in molecular biology, in particular epigenetics, points to how much the affective as well as the physical environment during fetal and newborn life may determine adult health. This way of promoting health consists mainly of providing future parents with pertinent, unbiased information on primal health and supporting them during their child's primal period of life (i.e., "from conception to first anniversary" according to the definition of the Primal Health Research Centre, London). This includes adequate parental leave, ideally for both parents, with kin caregiving and financial help where needed.
Primordial prevention refers to all measures designed to prevent the development of risk factors in the first place, early in life, and even preconception, as Ruth A. Etzel has described it "all population-level actions and measures that inhibit the emergence and establishment of adverse environmental, economic, and social conditions". This could be reducing air pollution or prohibiting endocrine-disrupting chemicals in food-handling equipment and food contact materials.
Primary prevention
Primary prevention consists of traditional health promotion and "specific protection". Health promotion activities include prevention strategies such as health education and lifestyle medicine, and are current, non-clinical life choices such as eating nutritious meals and exercising often, that prevent lifestyle-related medical conditions, improve the quality of life, and create a sense of overall well-being. Preventing disease and creating overall well-being prolongs life expectancy. Health-promotional activities do not target a specific disease or condition but rather promote health and well-being on a very general level. On the other hand, specific protection targets a type or group of diseases and complements the goals of health promotion.
Food
Food is the most basic tool in preventive health care. Poor nutrition is linked to various chronic illnesses. Because of this, having a healthy diet and proper nutrition can be used to prevent illnesses.
Access
The 2011 National Health Interview Survey performed by the Centers for Disease Control was the first national survey to include questions about ability to pay for food. Difficulty with paying for food, medicine, or both is a problem facing 1 out of 3 Americans. If better food options were available through food banks, soup kitchens, and other resources for low-income people, obesity and the chronic conditions that come along with it would be better controlled. A food desert is an area with restricted access to healthy foods due to a lack of supermarkets within a reasonable distance. These are often low-income neighborhoods with the majority of residents lacking transportation. There have been several grassroots movements since 1995 to encourage urban gardening, using vacant lots to grow food cultivated by local residents. Mobile fresh markets are another resource for residents in a "food desert", which are specially outfitted buses bringing affordable fresh fruits and vegetables to low-income neighborhoods.
Food education and guidance
It has been proposed that healthy longevity diets are included in standard healthcare as switching from a "typical Western diet" could often extend life by a decade.
Protective measures
Specific protective measures, such as water purification, sewage treatment, and the development of personal hygiene routines (for example, regular hand-washing and safe sex to prevent sexually transmitted infections), became mainstream upon the discovery of infectious disease agents and have decreased the rates of communicable diseases that spread in unsanitary conditions.
Scientific advancements in genetics have contributed to the knowledge of hereditary diseases and have facilitated progress in specific protective measures in individuals who are carriers of a disease gene or have an increased predisposition to a specific disease. Genetic testing has allowed physicians to make quicker and more accurate diagnoses and has allowed for tailored treatments or personalized medicine.
Food safety has a significant impact on human health and food quality monitoring has increased.
Water, including drinking water, is also monitored in many cases to protect health. There is also some monitoring of air pollution. In many cases, environmental standards, such as maximum pollution levels, regulation of chemicals, occupational hygiene requirements, and consumer protection regulations, establish some protection in combination with this monitoring.
Preventive measures like vaccines and medical screenings are also important. Using PPE properly and getting the recommended vaccines and screenings can help decrease the spread of respiratory diseases, protecting the healthcare workers as well as their patients.
Secondary prevention
Secondary prevention deals with latent diseases and attempts to prevent an asymptomatic disease from progressing to symptomatic disease. Certain diseases can be classified as primary or secondary. This depends on definitions of what constitutes a disease, though, in general, primary prevention addresses the root cause of a disease or injury whereas secondary prevention aims to detect and treat a disease early on. Secondary prevention consists of "early diagnosis and prompt treatment" to contain the disease and prevent its spread to other individuals, and "disability limitation" to prevent potential future complications and disabilities from the disease. Early diagnosis and prompt treatment for a syphilis patient would include a course of antibiotics to destroy the pathogen and screening and treatment of any infants born to syphilitic mothers. Disability limitation for syphilitic patients includes continued check-ups on the heart, cerebrospinal fluid, and central nervous system of patients to curb any damaging effects such as blindness or paralysis.
Tertiary prevention
Finally, tertiary prevention attempts to reduce the damage caused by symptomatic disease by focusing on mental, physical, and social rehabilitation. Unlike secondary prevention, which aims to prevent disability, the objective of tertiary prevention is to maximize the remaining capabilities and functions of an already disabled patient. Goals of tertiary prevention include: preventing pain and damage, halting progression and complications from disease, and restoring the health and functions of the individuals affected by disease. For syphilitic patients, rehabilitation includes measures to prevent complete disability from the disease, such as implementing work-place adjustments for the blind and paralyzed or providing counseling to restore normal daily functions to the greatest extent possible.
The use of equipment with adequate ventilation and airflow is suggested for these patients in order to halt the progression and complications of disease. A study conducted in nursing homes concluded that using evaporative humidifiers to maintain indoor humidity within the range of 40–60% can reduce respiratory risk. Certain pathogens survive better at particular humidity levels, so maintaining this humidity range can reduce the survival of airborne disease particles.
Leading causes of preventable death
United States
The leading preventable cause of death in the United States is tobacco; however, poor diet and lack of exercise may soon surpass tobacco as a leading cause of death. These behaviors are modifiable and public health and prevention efforts could make a difference to reduce these deaths.
Worldwide
The leading causes of preventable death worldwide share similar trends to the United States. There are a few differences between the two, such as malnutrition, pollution, and unsafe sanitation, that reflect health disparities between the developing and developed world.
However, several of the leading causes of death, or underlying contributors to earlier death, may not be included as "preventable" causes of death. A study concluded that pollution was "responsible for approximately 9 million deaths per year" in 2019. Another study concluded that the global mean loss of life expectancy (LLE, a measure similar to years of potential life lost) from air pollution in 2015 was 2.9 years, substantially more than, for example, the 0.3 years from all forms of direct violence, albeit a significant fraction of the LLE is considered unavoidable (such as pollution from some natural wildfires).
A landmark study conducted by the World Health Organization and the International Labour Organization found that exposure to long working hours is the occupational risk factor with the largest attributable burden of disease, i.e. an estimated 745,000 fatalities from ischemic heart disease and stroke events in 2016. With this study, prevention of exposure to long working hours has emerged as a priority for prevention healthcare in workplace settings.
Child mortality
In 2010, 7.6 million children died before reaching the age of 5. While this is a decrease from 9.6 million in 2000, it was still far from the fourth Millennium Development Goal to decrease child mortality by two-thirds by 2015. Of these deaths, about 64% were due to infection including diarrhea, pneumonia, and malaria. About 40% of these deaths occurred in neonates (children ages 1–28 days) due to pre-term birth complications. The highest number of child deaths occurred in Africa and Southeast Asia. As of 2015 in Africa, almost no progress has been made in reducing neonatal death since 1990. In 2010, India, Nigeria, Democratic Republic of the Congo, Pakistan, and China contributed to almost 50% of global child deaths. Targeting efforts in these countries is essential to reducing the global child death rate.
Child mortality is caused by factors including poverty, environmental hazards, and lack of maternal education. In 2003, the World Health Organization created a list of interventions in the following table that were judged economically and operationally "feasible," based on the healthcare resources and infrastructure in 42 nations that contribute to 90% of all infant and child deaths. The table indicates how many infant and child deaths could have been prevented in 2000, assuming universal healthcare coverage.
Preventive methods
Obesity
Obesity is a major risk factor for a wide variety of conditions, including cardiovascular diseases, hypertension, certain cancers, and type 2 diabetes. In order to prevent obesity, it is recommended that individuals adhere to a consistent exercise regimen as well as a nutritious and balanced diet. A healthy individual should aim to acquire 10% of their energy from proteins, 15–20% from fat, and over 50% from complex carbohydrates, while avoiding alcohol as well as foods high in fat, salt, and sugar. Sedentary adults should aim for at least half an hour of moderate-level daily physical activity and eventually increase to include at least 20 minutes of intense exercise, three times a week. Preventive health care offers many benefits to those who choose to take an active role in their own health. The medical system is largely geared toward treating acute symptoms of disease after they have brought the patient into the emergency room. An ongoing epidemic within American culture is the prevalence of obesity. Healthy eating and regular exercise play a significant role in reducing an individual's risk for type 2 diabetes. A 2008 study concluded that about 23.6 million people in the United States had diabetes, including 5.7 million who had not been diagnosed. 90 to 95 percent of people with diabetes have type 2 diabetes. Diabetes is the main cause of kidney failure, limb amputation, and new-onset blindness in American adults.
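The macronutrient targets above are expressed as shares of total energy intake. As a rough illustration only, the Python sketch below converts them into approximate daily gram amounts, assuming a 2,000 kcal reference intake (an assumption, not a figure from this article) and the standard conversions of about 4 kcal per gram for protein and carbohydrate and 9 kcal per gram for fat.

```python
# Convert energy-share targets into approximate daily gram targets.
# Assumptions: 2,000 kcal/day reference intake; 4 kcal/g for protein and
# carbohydrate, 9 kcal/g for fat (standard Atwater factors).

daily_kcal = 2000

targets = {
    "protein (10% of energy)":                (0.10, 4),
    "fat (15-20% of energy, midpoint 17.5%)": (0.175, 9),
    "complex carbohydrate (>50% of energy)":  (0.50, 4),
}

for name, (share, kcal_per_gram) in targets.items():
    grams = daily_kcal * share / kcal_per_gram
    print(f"{name}: about {grams:.0f} g per day")
# Output: protein ~50 g, fat ~39 g, carbohydrate ~250 g (as a minimum)
```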
Sexually transmitted infections
Sexually transmitted infections (STIs), such as syphilis and HIV, are common but preventable with safe-sex practices. STIs can be asymptomatic, or cause a range of symptoms. Preventive measures for STIs are called prophylactics. The term especially applies to the use of condoms, which are highly effective at preventing disease, but also to other devices meant to prevent STIs, such as dental dams and latex gloves. Other means for preventing STIs include education on how to use condoms or other such barrier devices, testing partners before having unprotected sex, receiving regular STI screenings, to both receive treatment and prevent spreading STIs to partners, and, specifically for HIV, regularly taking prophylactic antiretroviral drugs, such as Truvada. Post-exposure prophylaxis, started within 72 hours (optimally less than 1 hour) after exposure to high-risk fluids, can also protect against HIV transmission.
Malaria prevention using genetic modification
Genetically modified mosquitoes are being used in developing countries to control malaria. This approach has been subject to objections and controversy.
Thrombosis
Thrombosis is a serious circulatory disease affecting thousands, usually older persons undergoing surgical procedures, women taking oral contraceptives and travelers. The consequences of thrombosis can be heart attacks and strokes. Prevention can include exercise, anti-embolism stockings, pneumatic devices, and pharmacological treatments.
Cancer
In recent years, cancer has become a global problem. Low- and middle-income countries bear a majority of the cancer burden, largely due to exposure to carcinogens resulting from industrialization and globalization. However, primary prevention of cancer and knowledge of cancer risk factors can prevent over one third of all cancer cases. Primary prevention of cancer can also prevent other diseases, both communicable and non-communicable, that share common risk factors with cancer.
Lung cancer
Lung cancer is the leading cause of cancer-related deaths in the United States and Europe and is a major cause of death in other countries. Tobacco is an environmental carcinogen and the major underlying cause of lung cancer. Between 25% and 40% of all cancer deaths and about 90% of lung cancer cases are associated with tobacco use. Other carcinogens include asbestos and radioactive materials. Both smoking and second-hand exposure from other smokers can lead to lung cancer and eventually death.
Prevention of tobacco use is paramount to prevention of lung cancer. Individual, community, and statewide interventions can prevent or cease tobacco use. 90% of adults in the U.S. who have ever smoked did so prior to the age of 20. In-school prevention/educational programs, as well as counseling resources, can help prevent and cease adolescent smoking. Other cessation techniques include group support programs, nicotine replacement therapy (NRT), hypnosis, and self-motivated behavioral change. Studies have shown long term success rates (>1 year) of 20% for hypnosis and 10%-20% for group therapy.
Cancer screening programs serve as effective sources of secondary prevention. The Mayo Clinic, Johns Hopkins, and Memorial Sloan-Kettering hospitals conducted annual x-ray screenings and sputum cytology tests and found that lung cancer was detected at higher rates, earlier stages, and had more favorable treatment outcomes, which supports widespread investment in such programs.
Legislation can also affect smoking prevention and cessation. In 1992, Massachusetts (United States) voters passed a bill adding an extra 25 cent tax to each pack of cigarettes, despite intense lobbying and $7.3 million spent by the tobacco industry to oppose this bill. Tax revenue goes toward tobacco education and control programs and has led to a decline of tobacco use in the state.
Lung cancer and tobacco smoking are increasing worldwide, especially in China. China is responsible for about one-third of the global consumption and production of tobacco products. Tobacco control policies have been ineffective as China is home to 350 million regular smokers and 750 million passive smokers and the annual death toll is over 1 million. Recommended actions to reduce tobacco use include decreasing tobacco supply, increasing tobacco taxes, widespread educational campaigns, decreasing advertising from the tobacco industry, and increasing tobacco cessation support resources. In Wuhan, China, a 1998 school-based program implemented an anti-tobacco curriculum for adolescents and reduced the number of regular smokers, though it did not significantly decrease the number of adolescents who initiated smoking. This program was therefore effective in secondary but not primary prevention and shows that school-based programs have the potential to reduce tobacco use.
Skin cancer
Skin cancer is the most common cancer in the United States. The most lethal form of skin cancer, melanoma, leads to over 50,000 annual deaths in the United States. Childhood prevention is particularly important because a significant portion of ultraviolet radiation exposure from the sun occurs during childhood and adolescence and can subsequently lead to skin cancer in adulthood. Furthermore, childhood prevention can lead to the development of healthy habits that continue to prevent cancer for a lifetime.
The Centers for Disease Control and Prevention (CDC) recommends several primary prevention methods including: limiting sun exposure between 10 AM and 4 PM, when the sun is strongest, wearing tighter-weave natural cotton clothing, wide-brim hats, and sunglasses as protective covers, using sunscreens that protect against both UV-A and UV-B rays, and avoiding tanning salons. Sunscreen should be reapplied after sweating, exposure to water (through swimming for example) or after several hours of sun exposure. Since skin cancer is very preventable, the CDC recommends school-level prevention programs including preventive curricula, family involvement, participation and support from the school's health services, and partnership with community, state, and national agencies and organizations to keep children away from excessive UV radiation exposure.
Most skin cancer and sun protection data come from Australia and the United States. An international study reported that Australians tended to demonstrate greater knowledge of sun protection and skin cancer, compared to other countries. Among children, adolescents, and adults, sunscreen was the most commonly used form of skin protection. However, many adolescents purposely used sunscreen with a low sun protection factor (SPF) in order to get a tan. Various Australian studies have shown that many adults failed to use sunscreen correctly; many applied sunscreen well after their initial sun exposure and/or failed to reapply when necessary. A 2002 case-control study in Brazil showed that only 3% of case participants and 11% of control participants used sunscreen with SPF >15.
Cervical cancer
Cervical cancer ranks among the top three most common cancers among women in Latin America, sub-Saharan Africa, and parts of Asia. Cervical cytology screening aims to detect abnormal lesions in the cervix so that women can undergo treatment prior to the development of cancer. Given that high quality screening and follow-up care has been shown to reduce cervical cancer rates by up to 80%, most developed countries now encourage sexually active women to undergo a Pap test every 3–5 years. Finland and Iceland have developed effective organized programs with routine monitoring and have managed to significantly reduce cervical cancer mortality while using fewer resources than unorganized, opportunistic programs such as those in the United States or Canada.
In developing nations in Latin America, such as Chile, Colombia, Costa Rica, and Cuba, both public and privately organized programs have offered women routine cytological screening since the 1970s. However, these efforts have not resulted in a significant change in cervical cancer incidence or mortality in these nations. This is likely due to low quality, inefficient testing. However, Puerto Rico, which has offered early screening since the 1960s, has witnessed almost a 50% decline in cervical cancer incidence and almost a four-fold decrease in mortality between 1950 and 1990. Brazil, Peru, India, and several high-risk nations in sub-Saharan Africa which lack organized screening programs, have a high incidence of cervical cancer.
Colorectal cancer
Colorectal cancer is globally the second most common cancer in women and the third most common in men, and the fourth most common cause of cancer death after lung, stomach, and liver cancer, having caused 715,000 deaths in 2010.
It is also highly preventable; about 80 percent of colorectal cancers begin as benign growths, commonly called polyps, which can be easily detected and removed during a colonoscopy. Other methods of screening for polyps and cancers include fecal occult blood testing. Lifestyle changes that may reduce the risk of colorectal cancer include increasing consumption of whole grains, fruits and vegetables, and reducing consumption of red meat.
Dementia
Health disparities and barriers to accessing care
Access to healthcare and preventive health services is unequal, as is the quality of care received. A study conducted by the Agency for Healthcare Research and Quality (AHRQ) revealed health disparities in the United States. In the United States, elderly adults (>65 years old) received worse care and had less access to care than their younger counterparts. The same trends are seen when comparing all racial minorities (black, Hispanic, Asian) to white patients, and low-income people to high-income people. Common barriers to accessing and utilizing healthcare resources included lack of income and education, language barriers, and lack of health insurance. Minorities were less likely than whites to possess health insurance, as were individuals who completed less education. These disparities made it more difficult for the disadvantaged groups to have regular access to a primary care provider, receive immunizations, or receive other types of medical care. Additionally, uninsured people tend to not seek care until their diseases progress to chronic and serious states and they are also more likely to forgo necessary tests, treatments, and filling prescription medications.
These sorts of disparities and barriers exist worldwide as well. Often, there are decades of gaps in life expectancy between developing and developed countries. For example, Japan has an average life expectancy that is 36 years greater than that in Malawi. Low-income countries also tend to have fewer physicians than high-income countries. In Nigeria and Myanmar, there are fewer than 4 physicians per 100,000 people while Norway and Switzerland have a ratio that is ten-fold higher. Common barriers worldwide include lack of availability of health services and healthcare providers in the region, great physical distance between the home and health service facilities, high transportation costs, high treatment costs, and social norms and stigma toward accessing certain health services.
Economics of lifestyle-based prevention
With lifestyle factors such as diet and exercise rising to the top of preventable death statistics, the economics of a healthy lifestyle are a growing concern. There is little question that positive lifestyle choices provide an investment in health throughout life. To gauge success, traditional measures such as the quality-adjusted life year (QALY) show great value. However, that method does not account for the cost of chronic conditions or future lost earnings because of poor health.
Developing future economic models that would guide both private and public investments, as well as drive future policy to evaluate the efficacy of positive lifestyle choices on health, is a major topic for economists globally. Americans spend over $3 trillion a year on health care but have a higher rate of infant mortality, shorter life expectancies, and a higher rate of diabetes than other high-income nations because of negative lifestyle choices. Despite these large costs, very little is spent on prevention for lifestyle-caused conditions in comparison. In 2016, the Journal of the American Medical Association estimated that $101 billion was spent in 2013 on the preventable disease of diabetes, and another $88 billion was spent on heart disease. In an effort to encourage healthy lifestyle choices, as of 2010 workplace wellness programs were on the rise, but the economics and effectiveness data were continuing to evolve and develop.
Health insurance coverage impacts lifestyle choices; even intermittent loss of coverage has had negative effects on healthy choices in the U.S. The repeal of the Affordable Care Act (ACA) could significantly impact coverage for many Americans, as well as "The Prevention and Public Health Fund", the first and only mandatory U.S. funding stream dedicated to improving public health, including counseling on lifestyle prevention issues such as weight management, alcohol use, and treatment for depression.
Because in the U.S. chronic illnesses predominate as a cause of death and pathways for treating chronic illnesses are complex and multifaceted, prevention is a best-practice approach to chronic disease when possible. In many cases, prevention requires mapping complex pathways to determine the ideal point for intervention. Cost-effectiveness of prevention is achievable, but it is affected by the length of time it takes to see the effects and outcomes of an intervention. This makes prevention efforts difficult to fund, particularly in strained financial contexts. Prevention potentially creates other costs as well, due to extending the lifespan and thereby increasing opportunities for illness. In order to assess the cost-effectiveness of prevention, the cost of the preventive measure, savings from avoiding morbidity, and the cost from extending the lifespan need to be considered. Life extension costs become smaller when accounting for savings from postponing the last year of life, which makes up a large fraction of lifetime medical expenditures and becomes cheaper with age. Prevention leads to savings only if the cost of the preventive measure is less than the savings from avoiding morbidity net of the cost of extending the life span. In order to establish reliable economics of prevention for illnesses that are complicated in origin, it is necessary to know how best to assess prevention efforts, i.e. to develop useful measures and an appropriate scope.
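The savings condition stated above amounts to a simple comparison of three quantities. The minimal Python sketch below illustrates the arithmetic; the dollar figures are hypothetical, chosen only for demonstration and not drawn from any study cited here.

```python
def prevention_net_savings(cost_of_prevention: float,
                           savings_from_avoided_morbidity: float,
                           cost_of_extended_lifespan: float) -> float:
    """Net savings of a preventive measure.

    Prevention saves money only if its cost is less than the savings from
    avoided morbidity net of the cost of the added life years.
    """
    return (savings_from_avoided_morbidity
            - cost_of_extended_lifespan
            - cost_of_prevention)

# Hypothetical per-person figures, chosen only to illustrate the comparison.
net = prevention_net_savings(cost_of_prevention=1_200.0,
                             savings_from_avoided_morbidity=4_000.0,
                             cost_of_extended_lifespan=1_500.0)
print(f"Net savings per person: ${net:,.2f}")  # positive result => cost saving
```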
Effectiveness
There is no general consensus as to whether or not preventive healthcare measures are cost-effective, but they increase the quality of life dramatically. There are varying views on what constitutes a "good investment." Some argue that preventive health measures should save more money than they cost, when factoring in treatment costs in the absence of such measures. Others have argued in favor of "good value" or conferring significant health benefits even if the measures do not save money. Furthermore, preventive health services are often described as one entity though they comprise a myriad of different services, each of which can individually lead to net costs, savings, or neither. Greater differentiation of these services is necessary to fully understand both the financial and health effects.
A 2010 study reported that in the United States, vaccinating children, cessation of smoking, daily prophylactic use of aspirin, and screening of breast and colorectal cancers had the most potential to prevent premature death. Preventive health measures that resulted in savings included vaccinating children and adults, smoking cessation, daily use of aspirin, and screening for issues with alcoholism, obesity, and vision failure. These authors estimated that if usage of these services in the United States increased to 90% of the population, there would be net savings of $3.7 billion, which amounted to only about 0.2% of the total 2006 United States healthcare expenditure. Despite the potential for decreasing healthcare spending, utilization of healthcare resources in the United States remains low, especially among Latinos and African-Americans. Overall, preventive services are difficult to implement because healthcare providers have limited time with patients and must integrate a variety of preventive health measures from different sources.
While these specific services bring about small net savings, not every preventive health measure saves more than it costs. A 1970s study showed that preventing heart attacks by treating hypertension early on with drugs actually did not save money in the long run. The money saved by avoiding treatment for heart attack and stroke amounted to only about a quarter of the cost of the drugs. Similarly, it was found that the cost of drugs or dietary changes to decrease high blood cholesterol exceeded the cost of subsequent heart disease treatment. Due to these findings, some argue that rather than focusing healthcare reform efforts exclusively on preventive care, the interventions that bring about the highest level of health should be prioritized.
In 2008, Cohen et al. outlined a few arguments made by skeptics of preventive healthcare. Many argue that preventive measures cost less than future treatment only when the proportion of the population that would become ill in the absence of prevention is fairly large. The Diabetes Prevention Program Research Group conducted a 2012 study evaluating the costs and benefits in quality-adjusted life-years or QALYs of lifestyle changes versus taking the drug metformin. They found that neither method brought about financial savings, but both were cost-effective nonetheless because they brought about an increase in QALYs. In addition to scrutinizing costs, preventive healthcare skeptics also examine the efficiency of interventions. They argue that while many treatments of existing diseases involve use of advanced equipment and technology, in some cases, this is a more efficient use of resources than attempts to prevent the disease. Cohen suggested that the preventive measures most worth exploring and investing in are those that could benefit a large portion of the population to bring about cumulative and widespread health benefits at a reasonable cost.
Cost-effectiveness of childhood obesity interventions
There are at least four nationally implemented childhood obesity interventions in the United States: the Sugar-Sweetened Beverage excise tax (SSB), the TV AD program, active physical education (Active PE) policies, and early care and education (ECE) policies. They each have similar goals of reducing childhood obesity. The effects of these interventions on BMI have been studied, and the cost-effectiveness analysis (CEA) has led to a better understanding of projected cost reductions and improved health outcomes. The Childhood Obesity Intervention Cost-Effectiveness Study (CHOICES) was conducted to evaluate and compare the CEA of these four interventions.
Gortmaker, S.L. et al. (2015) states: "The four initial interventions were selected by the investigators to represent a broad range of nationally scalable strategies to reduce childhood obesity using a mix of both policy and programmatic strategies... 1. an excise tax of $0.01 per ounce of sweetened beverages, applied nationally and administered at the state level (SSB), 2. elimination of the tax deductibility of advertising costs of TV advertisements for "nutritionally poor" foods and beverages seen by children and adolescents (TV AD), 3. state policy requiring all public elementary schools in which physical education (PE) is currently provided to devote ≥50% of PE class time to moderate and vigorous physical activity (Active PE), and 4. state policy to make early child educational settings healthier by increasing physical activity, improving nutrition, and reducing screen time (ECE)." The CHOICES found that SSB, TV AD, and ECE led to net cost savings. Both SSB and TV AD increased quality adjusted life years and produced yearly tax revenue of 12.5 billion U.S. dollars and 80 million U.S. dollars, respectively.
Some challenges with evaluating the effectiveness of child obesity interventions include:
The economic consequences of childhood obesity are both short and long term. In the short term, obesity impairs cognitive achievement and academic performance. Some believe this is secondary to negative effects on mood or energy, but others suggest there may be physiological factors involved. Furthermore, obese children have increased health care expenses (e.g. medications, acute care visits). In the long term, obese children tend to become obese adults with associated increased risk for a chronic condition such as diabetes or hypertension. Any effect on their cognitive development may also affect their contributions to society and socioeconomic status.
The CHOICES study noted that the effects of these interventions may differ among communities throughout the nation. It also suggested that only a limited set of outcomes is studied, and that these interventions may have additional effects that are not fully appreciated.
Modeling the outcomes of such interventions in children over the long term is challenging because advances in medicine and medical technology are unpredictable. Projections from cost-effectiveness analysis may need to be reassessed more frequently.
Economics of U.S. preventive care
As of 2009, the cost-effectiveness of preventive care is a highly debated topic. While some economists argue that preventive care is valuable and potentially cost saving, others believe it is an inefficient waste of resources. Preventive care is composed of a variety of clinical services and programs including annual doctor's check-ups, annual immunizations, and wellness programs; recent models show that these simple interventions can have significant economic impacts.
Clinical preventive services and programs
Research on preventive care addresses the question of whether it is cost saving or cost effective and whether there is an economic evidence base for health promotion and disease prevention. The need for and interest in preventive care is driven by the imperative to reduce health care costs while improving quality of care and the patient experience. Preventive care can lead to improved health outcomes and has cost-savings potential. Services such as health assessments/screenings, prenatal care, and telehealth and telemedicine can reduce morbidity or mortality with low cost or cost savings. Specifically, health assessments/screenings have cost-savings potential, with varied cost-effectiveness based on screening and assessment type. Inadequate prenatal care can lead to an increased risk of prematurity, stillbirth, and infant death. Time is also a scarce resource, and preventive care can help mitigate time costs. Telehealth and telemedicine are options that have gained consumer interest, acceptance, and confidence and can improve quality of care and patient satisfaction.
Economics for investment
There are benefits and trade-offs when considering investment in preventive care versus other types of clinical services. Preventive care can be a good investment as supported by the evidence base and can drive population health management objectives. The concepts of cost saving and cost-effectiveness are different and both are relevant to preventive care. Preventive care that may not save money may still provide health benefits; thus, there is a need to compare interventions relative to impact on health and cost.
Preventive care transcends demographics and is applicable to people of every age. The Health Capital Theory underpins the importance of preventive care across the lifecycle and provides a framework for understanding the variances in health and health care that are experienced. It treats health as a stock that provides direct utility. Health depreciates with age and the aging process can be countered through health investments. The theory further holds that individuals demand good health, that the demand for health investment is a derived demand (i.e. investment in health is due to the underlying demand for good health), and that the efficiency of the health investment process increases with knowledge (i.e. it is assumed that the more educated are more efficient consumers and producers of health).
The prevalence elasticity of demand for prevention can also provide insights into the economics. Demand for preventive care can alter the prevalence rate of a given disease and reduce or even reverse further growth in prevalence. Reduction in prevalence subsequently leads to reduction in costs. There are a number of organizations and policy actions that are relevant when discussing the economics of preventive care services. The evidence base, viewpoints, and policy briefs from the Robert Wood Johnson Foundation, the Organisation for Economic Co-operation and Development (OECD), and efforts by the U.S. Preventive Services Task Force (USPSTF) all provide examples that improve the health and well-being of populations (e.g. preventive health assessments/screenings, prenatal care, and telehealth/telemedicine). The Affordable Care Act (ACA) has a major influence on the provision of preventive care services, although it is currently under heavy scrutiny and review by the new administration. According to the Centers for Disease Control and Prevention (CDC), the ACA makes preventive care affordable and accessible through mandatory coverage of preventive services without a deductible, copayment, coinsurance, or other cost sharing.
The U.S. Preventive Services Task Force (USPSTF), a panel of national experts in prevention and evidence-based medicine, works to improve the health of Americans by making evidence-based recommendations about clinical preventive services. The task force does not consider the cost of a preventive service when determining a recommendation. Each year, the organization delivers a report to Congress that identifies critical evidence gaps in research and recommends priority areas for further review.
The National Network of Perinatal Quality Collaboratives (NNPQC), sponsored by the CDC, supports state-based perinatal quality collaboratives (PQCs) in measuring and improving upon health care and health outcomes for mothers and babies. These PQCs have contributed to improvements such as reduction in deliveries before 39 weeks, reductions in healthcare associated bloodstream infections, and improvements in the utilization of antenatal corticosteroids.
Telehealth and telemedicine have realized significant growth and development recently. The Center for Connected Health Policy (the National Telehealth Policy Resource Center) has produced multiple reports and policy briefs on telehealth and telemedicine and how they contribute to preventive services. Policy actions and provision of preventive services do not guarantee utilization. Reimbursement has remained a significant barrier to adoption due to variances in payer- and state-level reimbursement policies and guidelines through government and commercial payers. Americans use preventive services at about half the recommended rate, and cost-sharing, such as deductibles, co-insurance, or copayments, also reduces the likelihood that preventive services will be used. Despite the ACA's enhancement of Medicare benefits and preventive services, there was no effect on preventive service utilization, suggesting that other fundamental barriers exist.
Affordable Care Act and preventive healthcare
The Patient Protection and Affordable Care Act, also known simply as the Affordable Care Act or Obamacare, was passed and became law in the United States on March 23, 2010. The law was intended to address many issues in the U.S. healthcare system, including expansion of coverage, insurance market reforms, improved quality, and greater efficiency and cost control. Under the insurance market reforms, the act required that insurance companies no longer exclude people with pre-existing conditions, allow children to be covered on their parents' plan until the age of 26, and expand the appeals process for reimbursement denials. The Affordable Care Act also banned the coverage limits imposed by health insurers, and insurance companies were required to include coverage for preventive health care services. The U.S. Preventive Services Task Force has graded preventive health services as either A or B, and insurance companies must provide full coverage for services with these ratings. The task force has also provided many recommendations to clinicians and insurers to promote better preventive care, with the aim of improving quality of care and lowering the cost burden.
Health insurance
Health insurance companies are willing to pay for preventive care for patients who are not acutely sick, in the hope that it will prevent them from developing a chronic disease later in life. Today, health insurance plans offered through the Marketplace mandated by the Affordable Care Act are required to provide certain preventive care services free of charge to patients. Section 2713 of the Affordable Care Act specifies that all private Marketplace and all employer-sponsored private plans (except those grandfathered in) are required to cover preventive care services that are ranked A or B by the U.S. Preventive Services Task Force free of charge to patients. The insurer UnitedHealthcare, for example, publishes patient guidelines at the beginning of each year explaining its preventive care coverage.
Evaluating incremental benefits
Evaluating the incremental benefits of preventive care requires a longer period of time than evaluating the treatment of acutely ill patients. Inputs into the model, such as the discount rate and time horizon, can have significant effects on the results. One point of controversy is the Congressional Budget Office's use of a 10-year time frame to assess the cost-effectiveness of diabetes preventive services.
Preventive care services mainly focus on chronic disease. The Congressional Budget Office has provided guidance that further research is needed in the area of the economic impacts of obesity in the U.S. before the CBO can estimate budgetary consequences. A bipartisan report published in May 2015 recognizes the potential of preventive care to improve patients' health at individual and population levels while decreasing healthcare expenditure.
Economic case
Mortality from modifiable risk factors
Chronic diseases such as heart disease, stroke, diabetes, obesity and cancer have become the most common and costly health problems in the United States. In 2014, it was projected that by 2023 the number of chronic disease cases would increase by 42%, resulting in $4.2 trillion in treatment and lost economic output. They are also among the top ten leading causes of mortality. Chronic diseases are driven by risk factors that are largely preventable. Sub-analysis performed on all deaths in the United States in 2000 revealed that almost half were attributed to preventable behaviors including tobacco use, poor diet, physical inactivity and alcohol consumption. More recent analysis reveals that heart disease and cancer alone accounted for nearly 46% of all deaths. Modifiable risk factors are also responsible for a large morbidity burden, resulting in poor quality of life in the present and loss of future earning years. It is further estimated that by 2023, focused efforts on the prevention and treatment of chronic disease may result in 40 million fewer chronic disease cases, potentially reducing treatment costs by $220 billion.
Childhood vaccinations
Childhood immunizations are largely responsible for the increase in life expectancy in the 20th century. From an economic standpoint, childhood vaccines demonstrate a very high return on investment. According to Healthy People 2020, for every birth cohort that receives the routine childhood vaccination schedule, direct health care costs are reduced by $9.9 billion and society saves $33.4 billion in indirect costs. The economic benefits of childhood vaccination extend beyond individual patients to insurance plans and vaccine manufacturers, all while improving the health of the population.
Health capital theory
The burden of preventable illness extends beyond the healthcare sector, incurring costs related to lost productivity among workers in the workforce. Indirect costs related to poor health behaviors and associated chronic disease cost U.S. employers billions of dollars each year.
According to the American Diabetes Association (ADA), medical costs for employees with diabetes are twice as high as for workers without diabetes, and the indirect costs of diabetes include work-related absenteeism ($5 billion), reduced productivity at work ($20.8 billion), inability to work due to illness-related disability ($21.6 billion), and premature mortality ($18.5 billion). Reported estimates of the cost burden from increasing levels of overweight and obesity in the workforce vary, with best estimates suggesting 450 million additional missed work days, resulting in $153 billion each year in lost productivity, according to the CDC Healthy Workforce.
The health capital model explains how individual investments in health can increase earnings by "increasing the number of healthy days available to work and to earn income." In this context, health can be treated both as a consumption good, wherein individuals desire health because it improves quality of life in the present, and as an investment good because of its potential to increase attendance and workplace productivity over time. Preventive health behaviors such as healthful diet, regular exercise, access to and use of well-care, avoiding tobacco, and limiting alcohol can be viewed as health inputs that result in both a healthier workforce and substantial cost savings.
Quality-adjusted life years
Health benefits of preventive care measures can be described in terms of quality-adjusted life-years (QALYs) saved. A QALY takes into account length and quality of life, and is used to evaluate the cost-effectiveness of medical and preventive interventions. Classically, one year of perfect health is defined as 1 QALY and a year with any degree of less than perfect health is assigned a value between 0 and 1 QALY. As an economic weighting system, the QALY can be used to inform personal decisions, to evaluate preventive interventions and to set priorities for future preventive efforts.
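As a worked illustration of this weighting, the minimal Python sketch below computes the QALYs gained by a hypothetical intervention and a cost-per-QALY ratio. The utility weights, durations, and cost are invented for the example and are not taken from any study cited in this article.

```python
def qalys(years: float, utility_weight: float) -> float:
    """QALYs for a period: years lived multiplied by a health weight.

    A year in perfect health counts as 1.0 QALY; a year in less than
    perfect health counts as its utility weight (between 0 and 1).
    """
    if not 0.0 <= utility_weight <= 1.0:
        raise ValueError("utility weight must be between 0 and 1")
    return years * utility_weight

# Hypothetical comparison of an intervention against no intervention.
qalys_with_intervention = qalys(10, 0.85)     # 8.5 QALYs over ten years
qalys_without_intervention = qalys(10, 0.70)  # 7.0 QALYs over ten years
qalys_gained = qalys_with_intervention - qalys_without_intervention

incremental_cost = 45_000.0  # hypothetical extra cost of the intervention
cost_per_qaly = incremental_cost / qalys_gained
print(f"QALYs gained: {qalys_gained:.2f}, cost per QALY: ${cost_per_qaly:,.0f}")
```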
Cost-saving and cost-effective benefits of preventive care measures are well established. The Robert Wood Johnson Foundation evaluated the prevention cost-effectiveness literature, and found that many preventive measures meet the benchmark of <$100,000 per QALY and are considered to be favorably cost-effective. These include screenings for HIV and chlamydia, cancers of the colon, breast and cervix, vision screening, and screening for abdominal aortic aneurysms in men >60 in certain populations. Alcohol and tobacco screening were found to be cost-saving in some reviews and cost-effective in others. According to the RWJF analysis, two preventive interventions were found to save costs in all reviews: childhood immunizations and counseling adults on the use of aspirin.
Minority populations
Health disparities are increasing in the United States for chronic diseases such as obesity, diabetes, cancer, and cardiovascular disease. Populations at heightened risk for health inequities are the growing proportion of racial and ethnic minorities, including African Americans, American Indians, Hispanics/Latinos, Asian Americans, Alaska Natives and Pacific Islanders.
According to the Racial and Ethnic Approaches to Community Health (REACH), a national CDC program, non-Hispanic blacks currently have the highest rates of obesity (48%), and risk of newly diagnosed diabetes is 77% higher among non-Hispanic blacks, 66% higher among Hispanics/Latinos and 18% higher among Asian Americans compared to non-Hispanic whites. Current U.S. population projections predict that more than half of Americans will belong to a minority group by 2044. Without targeted preventive interventions, medical costs from chronic disease inequities will become unsustainable. Broadening health policies designed to improve delivery of preventive services for minority populations may help reduce substantial medical costs caused by inequities in health care, resulting in a return on investment.
Policies
Chronic disease is a population-level issue; effective prevention requires population health efforts and national and state public policy rather than individual-level efforts alone. The United States currently employs many public health policy efforts aligned with the preventive health efforts discussed above. The Centers for Disease Control and Prevention supports initiatives such as Health in All Policies and HI-5 (Health Impact in 5 Years), and collaborative efforts that aim to consider prevention across sectors and address social determinants of health as a method of primary prevention for chronic disease.
Obesity
Policies that address the obesity epidemic should be proactive and far-reaching, including a variety of stakeholders both in healthcare and in other sectors. Recommendations from the Institute of Medicine in 2012 suggest that "concerted action be taken across and within five environments (physical activity (PA), food and beverage, marketing and messaging, healthcare and worksites, and schools) and all sectors of society (including government, business and industry, schools, child care, urban planning, recreation, transportation, media, public health, agriculture, communities, and home) in order for obesity prevention efforts to truly be successful."
There are dozens of current policies acting at one or more of the federal, state, local and school levels. Most states require 150 minutes of physical education per week at school, a policy of the National Association for Sport and Physical Education. Some cities, including Philadelphia, employ a tax on sweetened beverages. This is part of an amendment to Title 19 of the Philadelphia Code, "Finance, Taxes and Collections", Chapter 19-4100, Sugar-Sweetened Beverage Tax, approved in 2016, which establishes an excise tax of $0.015 per fluid ounce on distributors of beverages sweetened with caloric or non-caloric sweeteners. Distributors are required to file a return with the revenue department, which collects the tax, among other responsibilities. The policy also provides for tax credits: businesses can apply for credits with the revenue department on a first-come, first-served basis, until the total amount of credits for a particular year reaches one million dollars.
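Under the Philadelphia tax described above, the amount a distributor owes is simple per-ounce arithmetic. The Python sketch below illustrates the calculation; the shipment volume and the simplified credit handling are assumptions for illustration only and do not reproduce the actual filing rules.

```python
PHILADELPHIA_RATE_PER_FL_OZ = 0.015  # $0.015 per fluid ounce, per the 2016 amendment

def beverage_tax_due(fluid_ounces_distributed: float,
                     approved_credit: float = 0.0) -> float:
    """Excise tax owed by a distributor, less any approved credit.

    The credit handling here is a simplification for illustration only.
    """
    gross_tax = fluid_ounces_distributed * PHILADELPHIA_RATE_PER_FL_OZ
    return max(gross_tax - approved_credit, 0.0)

# Hypothetical example: 200,000 twelve-ounce servings in a filing period.
ounces = 200_000 * 12
print(f"Tax due: ${beverage_tax_due(ounces):,.2f}")  # 2,400,000 oz * $0.015 = $36,000
```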
Recently, advertisements for food and beverages directed at children have received much attention. The Children's Food and Beverage Advertising Initiative (CFBAI) is a self-regulatory program of the food industry. Each participating company makes a public pledge that details its commitment to advertise only foods that meet certain nutritional criteria to children under 12 years old. This is a self-regulated program with policies written by the Council of Better Business Bureaus. The Robert Wood Johnson Foundation funded research to test the efficacy of the CFBAI. The results showed progress in terms of decreased advertising of food products that target children and adolescents.
Childhood immunization policies
Despite nationwide controversies over childhood vaccination and immunization, there are policies and programs at the federal, state, local and school levels outlining vaccination requirements. All states require children to be vaccinated against certain communicable diseases as a condition for school attendance. However, only 18 states allow exemptions for "philosophical or moral reasons." Diseases for which vaccinations form part of the standard ACIP vaccination schedule include diphtheria, tetanus, pertussis (whooping cough), poliomyelitis (polio), measles, mumps, rubella, Haemophilus influenzae type b, hepatitis B, influenza, and pneumococcal infections. The CDC website maintains these schedules.
The CDC website describes a federally funded program, Vaccines for Children (VFC), which provides vaccines at no cost to children who might not otherwise be vaccinated because of inability to pay. Additionally, the Advisory Committee on Immunization Practices (ACIP) is an expert vaccination advisory board that informs vaccination policy and guides on-going recommendations to the CDC, incorporating the most up-to-date cost-effectiveness and risk-benefit evidence in its recommendations.
See also
Urban green space#Impact on health
Chemoprevention
Consumer protection
Effects of climate change on human health
Health security
Genetic modifications preventing diseases
Epigenetics
Mental illness prevention
Pandemic prevention
Public health
Pre-exposure prophylaxis
Preparedness
Preventive and social medicine
Primary Health Care
Pollution prevention (disambiguation)
Sick building syndrome
Treatment as prevention
Journals and organizations
American Board of Preventive Medicine
American Journal of Preventive Medicine
American Osteopathic Board of Preventive Medicine
Preventive Medicine (journal)
References
External links
United States Preventive Services Task Force (USPSTF)
Canadian Task Force on Preventive Health Care (CTFPHC)
European Centre for Disease Prevention and Control (ECDC)
Hyphema
Hyphema is the medical condition of bleeding in the anterior chamber of the eye between the iris and the cornea. People usually first notice a loss or decrease in vision. The eye may also appear to have a reddish tinge, or it may appear as a small pool of blood at the bottom of the iris or in the cornea. A traumatic hyphema is caused by a blow to the eye. A hyphema can also occur spontaneously.
Presentation
A decrease in vision or a loss of vision is often the first sign of a hyphema. People with microhyphema may have slightly blurred or normal vision. A person with a full hyphema may not be able to see at all (complete loss of vision). The person's vision may improve over time as the blood moves by gravity lower in the anterior chamber of the eye, between the iris and the cornea. In many people, vision will improve; however, some people may have other injuries related to trauma to the eye or complications related to the hyphema. A microhyphema, in which red blood cells are suspended in the anterior chamber of the eye, is less severe. A layered hyphema, in which fresh blood settles lower in the anterior chamber, is moderately severe. A full hyphema (total hyphema), in which blood fills the chamber completely, is the most severe.
Complications
While the vast majority of hyphemas resolve on their own without issue, sometimes complications occur. Traumatic hyphema may lead to increased intraocular pressure (IOP), peripheral anterior synechiae, atrophy of the optic nerve, staining of the cornea with blood, re-bleeding, and impaired accommodation.
Secondary hemorrhage, or rebleeding of the hyphema, is thought to worsen outcomes in terms of visual function and lead to complications such as glaucoma, corneal staining, optic atrophy, or vision loss. Rebleeding occurs in 4–35% of hyphema cases and is a risk factor for glaucoma. Young children with traumatic hyphema are at an increased risk of developing amblyopia, an irreversible condition.
Causes
Hyphemas are frequently caused by injury, and may partially or completely block vision. The most common causes of hyphema are intraocular surgery, blunt trauma, and lacerating trauma. Hyphemas may also occur spontaneously, without any inciting trauma. Spontaneous hyphemas are usually caused by the abnormal growth of blood vessels (neovascularization), tumors of the eye (retinoblastoma or iris melanoma), uveitis, or vascular anomalies (juvenile xanthogranuloma). Additional causes of spontaneous hyphema include rubeosis iridis, myotonic dystrophy, leukemia, hemophilia, and von Willebrand disease. Conditions or medications that cause thinning of the blood, such as aspirin, warfarin, or alcohol consumption, may also cause hyphema. In hyphema caused by blunt trauma to the eye, the source of bleeding is the major arterial circle of the iris (circulus arteriosus iridis major).
Treatment
The main goals of treatment are to decrease the risk of re-bleeding within the eye, corneal blood staining, and atrophy of the optic nerve. Small hyphemas can usually be treated on an outpatient basis. There is little evidence that most of the commonly used treatments for hyphema (antifibrinolytic agents [oral and systemic aminocaproic acid, tranexamic acid, and aminomethylbenzoic acid], corticosteroids [systemic and topical], cycloplegics, miotics, aspirin, conjugated estrogens, traditional Chinese medicine, monocular versus bilateral patching, elevation of the head, and bed rest) are effective at improving visual acuity after two weeks. Surgery may be necessary for non-resolving hyphemas, or hyphemas that are associated with high pressure that does not respond to medication. Surgery can be effective for cleaning out the anterior chamber and preventing corneal blood staining.
If pain management is necessary, acetaminophen can be used. Aspirin and ibuprofen should be avoided, because they interfere with platelets' ability to form a clot and consequently increase the risk of additional bleeding. Sedation is not usually necessary for patients with hyphema.
Aminocaproic or tranexamic acids are often prescribed for hyphema on the basis that they reduce the risk of rebleeding by inhibiting the conversion of plasminogen to plasmin, and thereby keeping clots stable. However, the evidence for their effectiveness is limited and aminocaproic acid may actually cause hyphemas to take longer to clear.
Prognosis
Hyphemas require urgent assessment by an optometrist or ophthalmologist as they may result in permanent visual impairment.
A long-standing hyphema may result in hemosiderosis and heterochromia. Blood accumulation may also cause an elevation of the intraocular pressure. On average, the increased pressure in the eye remains for six days before dropping. Most uncomplicated hyphemas resolve within 5–6 days.
Epidemiology
As of 2012, the rate of hyphemas in the United States was about 20 cases per 100,000 people annually. The majority of people with a traumatic hyphema are children and young adults. Sixty percent of traumatic hyphemas are sports-related, and there are more cases in males than in females.
See also
References
External links
Hyphema - Handbook of Ocular Disease Management
Resuscitation
Resuscitation is the process of correcting physiological disorders (such as lack of breathing or heartbeat) in an acutely ill patient. It is an important part of intensive care medicine, anesthesiology, trauma surgery and emergency medicine. Well-known examples are cardiopulmonary resuscitation and mouth-to-mouth resuscitation.
Variables
See also
References
Human nose
The human nose is the first organ of the respiratory system. It is also the principal organ in the olfactory system. The shape of the nose is determined by the nasal bones and the nasal cartilages, including the nasal septum, which separates the nostrils and divides the nasal cavity into two.
The nose has an important function in breathing. The nasal mucosa lining the nasal cavity and the paranasal sinuses carries out the necessary conditioning of inhaled air by warming and moistening it. Nasal conchae, shell-like bones in the walls of the cavities, play a major part in this process. Filtering of the air by nasal hair in the nostrils prevents large particles from entering the lungs. Sneezing is a reflex to expel unwanted particles from the nose that irritate the mucosal lining. Sneezing can transmit infections, because aerosols are created in which the droplets can harbour pathogens.
Another major function of the nose is olfaction, the sense of smell. The area of olfactory epithelium, in the upper nasal cavity, contains specialised olfactory cells responsible for this function.
The nose is also involved in the function of speech. Nasal vowels and nasal consonants are produced in the process of nasalisation. The hollow cavities of the paranasal sinuses act as sound chambers that modify and amplify speech and other vocal sounds.
Several plastic surgery procedures, known as rhinoplasties, can be performed on the nose to correct various structural defects or to change its shape. Defects may be congenital, or result from nasal disorders or from trauma. Procedures that correct defects are a type of reconstructive surgery, while elective procedures to change the shape of the nose are a type of cosmetic surgery.
Structure
Several bones and cartilages make up the bony-cartilaginous framework of the nose, and the internal structure. The nose is also made up of types of soft tissue such as skin, epithelia, mucous membrane, muscles, nerves, and blood vessels. In the skin there are sebaceous glands, and in the mucous membrane there are nasal glands. The bones and cartilages provide strong protection for the internal structures of the nose. There are several muscles that are involved in movements of the nose. The arrangement of the cartilages allows flexibility through muscle control to enable airflow to be modified.
Bones
The bony structure of the nose is provided by the maxilla, frontal bone, and a number of smaller bones.
The topmost bony part of the nose is formed by the nasal part of the frontal bone, which lies between the brow ridges, and ends in a serrated nasal notch. A left and a right nasal bone join with the nasal part of the frontal bone at either side; and these at the side with the small lacrimal bones and the frontal process of each maxilla. The internal roof of the nasal cavity is composed of the horizontal, perforated cribriform plate of the ethmoid bone through which pass sensory fibres of the olfactory nerve. Below and behind the cribriform plate, sloping down at an angle, is the face of the sphenoid bone.
The wall separating the two cavities of the nose, the nasal septum, is made up of bone inside and cartilage closer to the tip of the nose. The bony part is formed by the perpendicular plate of the ethmoid bone at the top, and the vomer bone below. The floor of the nose is made up of the incisive bone and the horizontal plates of the palatine bones, and this makes up the hard palate of the roof of the mouth. The two horizontal plates join at the midline and form the posterior nasal spine that gives attachment to the musculus uvulae in the uvula.
The two maxilla bones join at the base of the nose at the lower nasal midline between the nostrils, and at the top of the philtrum to form the anterior nasal spine. This thin projection of bone holds the cartilaginous center of the nose. It is also an important cephalometric landmark.
Cartilages
The nasal cartilages are the septal, lateral, major alar, and minor alar cartilages. The major and minor cartilages are also known as the greater and lesser alar cartilages. There is a narrow strip of cartilage called the vomeronasal cartilage that lies between the vomer and the septal cartilage.
The septal nasal cartilage extends from the nasal bones in the midline to the bony part of the septum posteriorly. It then passes along the floor of the nasal cavity. The septum is quadrangular; the upper half is attached to the two lateral nasal cartilages, which are fused to the dorsal septum in the midline. The septum is laterally attached, with loose ligaments, to the bony margin of the anterior nasal aperture, while the inferior ends of the lateral cartilages are free (unattached). The three or four minor alar cartilages are adjacent to the lateral cartilages, held in the connective tissue membrane that connects the lateral cartilages to the frontal process of the maxilla.
The nasal bones in the upper part of the nose are joined by the midline internasal suture. They join with the septal cartilage at a midline junction known as the rhinion, where the nasal bones meet the septal cartilage. From the rhinion to the apex, or tip, the framework is of cartilage.
The major alar cartilages are thin, U-shaped plates of cartilage on each side of the nose that form the lateral and medial walls of the vestibule, known as the medial and lateral crura. The medial crura are attached to the septal cartilage, forming fleshy parts at the front of the nostrils on each side of the septum, called the medial crural footplates. The medial crura meet at the midline below the end of the septum to form the columella and lobule. The lobule contains the tip of the nose and its base contains the nostrils. At the peaks of the folds of the medial crura, they form the alar domes, the tip-defining points of the nose, separated by a notch. They then fold outwards, above and to the side of the nostrils, forming the lateral crura. The major alar cartilages are freely moveable and can respond to muscles to either open or constrict the nostrils.
There is a reinforcing structure known as the nasal scroll that resists internal collapse from airflow pressure generated by normal breathing. This structure is formed by the junction between the lateral and major cartilages. Their edges interlock by one scrolling upwards and one scrolling inwards.
Muscles
The muscles of the nose are a subgroup of the facial muscles. They are involved in respiration and facial expression. The muscles of the nose include the procerus, nasalis, depressor septi nasi, levator labii superioris alaeque nasi, and the orbicularis oris of the mouth. As are all of the facial muscles, the muscles of the nose are innervated by the facial nerve and its branches. Although each muscle is independent, the muscles of the nose form a continuous layer with connections between all the components of the muscles and ligaments, in the nasal part of a superficial muscular aponeurotic system (SMAS). The SMAS is continuous from the nasofrontal process to the nasal tip. It divides at level of the nasal valve into superficial and deep layers, each layer having medial and lateral components.
The procerus muscle produces wrinkling over the bridge of the nose, and is active in concentration and frowning. It is a prime target for Botox procedures in the forehead to remove the lines between the eyes.
The nasalis muscle consists of two main parts: a transverse part called the compressor naris, and an alar part termed the dilator naris. The compressor naris muscle compresses the nostrils and may completely close them. The alar part, the dilator naris mainly consists of the dilator naris posterior, and a much smaller dilator naris anterior, and this muscle flares the nostrils. The dilator naris helps to form the upper ridge of the philtrum. The anterior, and the posterior dilator naris, (the alar part of the nasalis muscle), give support to the nasal valves.
The depressor septi nasi may sometimes be absent or rudimentary. The depressor septi pulls the columella, the septum, and the tip of the nose downwards. At the start of inspiration, this muscle tenses the nasal septum and with the dilator naris widens the nostrils.
The levator labii superioris alaeque nasi divides into a medial and a lateral slip. The medial slip blends into the perichondrium of the major alar cartilage and its overlying skin. The lateral slip blends at the side of the upper lip with the levator labii superioris, and with the orbicularis oris. The lateral slip raises the upper lip and deepens and increases the curve above the nasolabial furrow. The medial slip pulls the lateral crus upwards and modifies the curve of the furrow around the alae, and dilates the nostrils.
Soft tissue
The skin of the nose varies in thickness along its length. From the glabella to the bridge (the nasofrontal angle), the skin is thick, fairly flexible, and mobile. It tapers to the bridge where it is thinnest and least flexible as it is closest to the underlying bone. From the bridge until the tip of the nose the skin is thin. The tip is covered in skin that is as thick as the top section, and has many large sebaceous glands.
The skin varies in thickness but is separated from the underlying bone and cartilage by four layers – a superficial fatty layer; a fibromuscular layer continued from the SMAS; a deep fatty layer; and the periosteum.
Other areas of soft tissue are found where there is no support from cartilage; these include an area around the sides of the septum – the paraseptal area – an area around the lateral cartilages, an area at the top of the nostril, and an area in the alae.
External nose
The nasal root is the top of the nose that attaches the nose to the forehead. The nasal root is above the bridge and below the glabella, forming an indentation known as the nasion at the frontonasal suture where the frontal bone meets the nasal bones. The nasal dorsum also known as the nasal ridge is the border between the root and the tip of the nose, which in profile can be variously shaped. The ala of the nose (ala nasi, "wing of the nose"; plural alae) is the lower lateral surface of the external nose, shaped by the alar cartilage and covered in dense connective tissue. The alae flare out to form a rounded eminence around the nostril. Sexual dimorphism is evident in the larger nose of the male. This is due to the increased testosterone that thickens the brow ridge and the bridge of the nose making it wider.
Differences in the symmetry of the nose have been noted in studies. Asymmetry is predominantly seen in wider left-sided nasal and other facial features.
Nasal cavity
The nasal cavity is the large internal space of the nose, and is in two parts – the nasal vestibule and the nasal cavity proper. The nasal vestibule is the frontmost part of the nasal cavity, enclosed by cartilages. The vestibule is lined with skin, hair follicles, and a large number of sebaceous glands. A mucous ridge known as the limen nasi separates the vestibule from the rest of the nasal cavity and marks the change from the skin of the vestibule to the respiratory epithelium of the rest of the nasal cavity. This area is also known as a mucocutaneous junction and has a dense microvasculature.
The nasal cavity is divided into two cavities by the nasal septum, and each is accessed by an external nostril. The division into two cavities enables the functioning of the nasal cycle that slows down the conditioning process of the inhaled air.
At the back of the nasal cavity there are two openings, called choanae (also posterior nostrils), that give entrance to the nasopharynx, and rest of the respiratory tract.
On the outer wall of each cavity are three shell-like bones called conchae, arranged as superior, middle and inferior nasal conchae. Below each concha is a corresponding superior, middle, and inferior nasal meatus, or passage. Sometimes when the superior concha is narrow, a fourth supreme nasal concha is present, situated above and sharing the space with the superior concha. The term concha refers to the actual bone; when covered by soft tissue and mucosa, and functioning, a concha is termed a turbinate. Excess moisture, in the form of tears collected in the lacrimal sac, travels down the nasolacrimal duct to drain into the inferior meatus in the nasal cavity.
Most of the nasal cavity and paranasal sinuses is lined with respiratory epithelium as nasal mucosa. In the roof of each cavity is an area of specialised olfactory epithelium. This region is about 5 square cm, covering the superior concha, the cribriform plate, and the nasal septum.
The nasal cavity has a nasal valve area that includes an external nasal valve, and an internal nasal valve. The external nasal valve is bounded medially by the columella, laterally by the lower lateral nasal cartilage, and posteriorly by the nasal sill. The internal nasal valve is bounded laterally by the caudal border of the upper lateral cartilage, medially by the dorsal nasal septum, and inferiorly by the anterior border of the inferior turbinate. The internal nasal valve is the narrowest region of the nasal cavity and is the primary site of nasal resistance. The valves regulate the airflow and resistance. Air breathed in is forced to pass through the narrow internal nasal valve, and then expands as it moves into the nasal cavity. The sudden change in the speed and pressure of the airflow creates turbulence that allows optimum contact with the respiratory epithelium for the necessary warming, moisturising, and filtering. The turbulence also allows movement of the air to pass over the olfactory epithelium and transfer odour information. The angle of the valve between the septum and the sidewall needs to be sufficient for unobstructed airflow, and this is normally between 10 and 15 degrees.
The borders of each nasal cavity are a roof, floor, medial wall (the septum), and lateral wall. The middle part of the roof of the nasal cavity is composed of the horizontal, perforated cribriform plate of the ethmoid bone, through which pass sensory fibres of the olfactory nerve into the cranial cavity.
Paranasal sinuses
The mucosa that lines the nasal cavity extends into its chambers, the paranasal sinuses. The nasal cavity and the paranasal sinuses are referred to as the sinonasal tract or sinonasal region, and its anatomy is recognised as being unique and complex. Four paired paranasal sinuses – the frontal sinus, the sphenoid sinus, the ethmoid sinus and the maxillary sinus – drain into regions of the nasal cavity.
The sinuses are air-filled extensions of the nasal cavity into the cranial bones. The frontal sinuses are located in the frontal bone; the sphenoidal sinuses in the sphenoid bone; the maxillary sinuses in the maxilla; and the ethmoidal sinuses in the ethmoid bone.
A narrow opening called a sinus ostium from each of the paranasal sinuses allows drainage into the nasal cavity. The maxillary sinus is the largest of the sinuses and drains into the middle meatus. Most of the ostia open into the middle meatus and the anterior ethmoid, that together are termed the ostiomeatal complex. Adults have a high concentration of cilia in the ostia. The cilia in the sinuses beat towards the openings into the nasal cavity. The increased numbers of cilia and the narrowness of the sinus openings allow for an increased time for moisturising, and warming.
Nose shape
The shape of the nose varies widely due to differences in the nasal bone shapes and formation of the bridge of the nose. Anthropometric studies have importantly contributed to craniofacial surgery, and the nasal index is a recognised anthropometric index used in nasal surgery.
Paul Topinard developed the nasal index as a method of classifying ethnic groups. The index is based on the ratio of the breadth of the nose to its height. The nasal dimensions are also used to classify nasal morphology into five types: Hyperleptorrhine is a very long, narrow nose with a nasal index of 40–55. Leptorrhine describes a long, narrow nose with an index of 55–70. Mesorrhine is a medium nose with an index of 70–85. Platyrrhine is a short, broad nose with an index of 85–99.9. The fifth type is the hyperplatyrrhine, having an index of more than 100. Variations in nose size between ethnicities may be attributed to differing evolutionary adaptations to local temperatures and humidity. Other factors such as sexual selection may also account for ethnic differences in nose shape.
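The classification above is a threshold scheme on the nasal index (breadth expressed as a percentage of height). The Python sketch below illustrates it; the handling of boundary values such as 55, 70, 85 and 100 is an assumption, since the ranges in the text meet at their endpoints.

```python
def nasal_index(breadth_mm: float, height_mm: float) -> float:
    """Nasal index: nose breadth as a percentage of nose height."""
    return 100.0 * breadth_mm / height_mm

def classify_nasal_index(index: float) -> str:
    """Map a nasal index to one of Topinard's five morphology types.

    Boundary values are assigned to the lower class here, and indices
    below 40 are folded into the hyperleptorrhine class; the source
    ranges (40-55, 55-70, 70-85, 85-99.9, >100) meet at their endpoints.
    """
    if index <= 55:
        return "hyperleptorrhine"
    elif index <= 70:
        return "leptorrhine"
    elif index <= 85:
        return "mesorrhine"
    elif index < 100:
        return "platyrrhine"
    else:
        return "hyperplatyrrhine"

# Example: a nose 34 mm wide and 50 mm high has an index of 68 (leptorrhine).
idx = nasal_index(34, 50)
print(f"index = {idx:.1f}, type = {classify_nasal_index(idx)}")
```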
Some deformities of the nose are named, such as the pug nose and the saddle nose. The pug nose is characterised by excess tissue from the apex that is out of proportion to the rest of the nose. A low and underdeveloped nasal bridge may also be evident. A saddle nose deformity involving the collapse of the bridge of the nose is mostly associated with trauma to the nose but can be caused by other conditions including leprosy.
Werner syndrome, a condition associated with premature aging, causes a "bird-like" appearance due to pinching of the nose.
Down syndrome commonly presents a small nose with a flattened nasal bridge. This can be due to the absence of one or both nasal bones, shortened nasal bones, or nasal bones that have not fused in the midline.
Blood supply and drainage
Supply
The blood supply to the nose is provided by branches of the ophthalmic, maxillary, and facial arteries – branches of the carotid arteries. Branches of these arteries anastomose to form plexuses in and under the nasal mucosa. In the septal region Kiesselbach's plexus is a common site of nosebleeds.
Branches of the ophthalmic artery – the anterior and posterior ethmoidal arteries – supply the roof, upper bony septum, and ethmoidal and frontal sinuses. The anterior ethmoidal artery also helps to supply the lower septal cartilage. Another branch is the dorsal nasal artery, a terminal branch that supplies the skin of the alae and dorsum.
Branches of the maxillary artery include the greater palatine artery; the sphenopalatine artery and its branches – the posterior lateral nasal arteries and posterior septal nasal branches; the pharyngeal branch; and the infraorbital artery and its branches – the superior anterior and posterior alveolar arteries.
The sphenopalatine artery and the ethmoid arteries supply the outer walls of the nasal cavity. There is additional supply from a branch of the facial artery – the superior labial artery. The sphenopalatine artery is the artery primarily responsible for supplying the nasal mucosa.
The skin of the alae is supplied by the septal and lateral nasal branches of the facial artery. The skin of the outer parts of the alae and the dorsum of the nose are supplied by the dorsal nasal artery a branch of the ophthalmic artery, and the infraorbital branch of the maxillary arteries.
Drainage
Veins of the nose include the angular vein that drains the side of the nose, receiving lateral nasal veins from the alae. The angular vein joins with the superior labial vein. Some small veins from the dorsum of the nose drain to the nasal arch of the frontal vein at the root of the nose.
In the posterior region of the cavity, specifically in the posterior part of the inferior meatus is a venous plexus known as Woodruff's plexus. This plexus is made up of large thin-walled veins with little soft tissue such as muscle or fiber. The mucosa of the plexus is thin with very few structures.
Lymphatic drainage
From different areas of the nose superficial lymphatic vessels run with the veins, and deep lymphatic vessels travel with the arteries.
Lymph drains from the anterior half of the nasal cavity, including both the medial and lateral walls, to join that of the external nasal skin to drain into the submandibular lymph nodes. The rest of the nasal cavity and paranasal sinuses all drain to the upper deep cervical lymph nodes, either directly or through the retropharyngeal lymph nodes. The back of the nasal floor probably drains to the parotid lymph nodes.
Nerve supply
The nerve supply to the nose and paranasal sinuses comes from two branches of the trigeminal nerve (CN V): the ophthalmic nerve (CN V1), the maxillary nerve (CN V2), and branches from these.
In the nasal cavity, the nasal mucosa is divided in terms of nerve supply into a back lower part (posteroinferior), and a frontal upper part (anterosuperior). The posterior part is supplied by a branch of the maxillary nerve – the nasopalatine nerve, which reaches the septum. Lateral nasal branches of the greater palatine nerve supply the lateral wall.
The frontal upper part is supplied from a branch of the ophthalmic nerve – the nasociliary nerve, and its branches – the anterior and posterior ethmoidal nerves.
Most of the external nose – the dorsum and the apex – is supplied by the infratrochlear nerve (a branch of the nasociliary nerve). The external branch of the anterior ethmoidal nerve also supplies areas of skin between the root and the alae.
The alae of the nose are supplied by nasal branches of CN V2, the infraorbital nerve, and internal nasal branches of infraorbital nerve that supply the septum and the vestibule.
The maxillary sinus is supplied by superior alveolar nerves from the maxillary and infraorbital nerves. The frontal sinus is supplied by branches of the supraorbital nerve. The ethmoid sinuses are supplied by anterior and posterior ethmoid branches of the nasociliary nerve. The sphenoid sinus is supplied by the posterior ethmoidal nerves.
Movement
The muscles of the nose are supplied by branches of the facial nerve. The nasalis muscle is supplied by the buccal branches. It may also be supplied by one of the zygomatic branches. The procerus is supplied by temporal branches of the facial nerve and lower zygomatic branches; a supply from the buccal branch has also been described. The depressor septi is innervated by the buccal branch, and sometimes by the zygomatic branch, of the facial nerve. The levator labii superioris alaeque nasi is innervated by zygomatic and superior buccal branches of the facial nerve.
Smell
The sense of smell is transmitted by the olfactory nerves. Olfactory nerves are bundles of very small unmyelinated axons that are derived from olfactory receptor neurons in the olfactory mucosa. The axons are in varying stages of maturity, reflecting the constant turnover of neurons in the olfactory epithelium. A plexiform network is formed in the lamina propria, by the bundles of axons that are surrounded by olfactory ensheathing cells. In as many as twenty branches, the bundled axons cross the cribriform plate and enter the overlying olfactory bulb ending as glomeruli. Each branch is enclosed by an outer dura mater that becomes continuous with the nasal periosteum.
Autonomic supply
The nasal mucosa in the nasal cavity is also supplied by the autonomic nervous system. Postganglionic nerve fibers from the deep petrosal nerve join with preganglionic nerve fibers from the greater petrosal nerve to form the nerve of the pterygoid canal. Sympathetic postganglionic fibers are distributed to the blood vessels of the nose. Postganglionic parasympathetic fibres derived from the pterygopalatine ganglion provide the secretomotor supply to the nasal mucous glands, and are distributed via branches of the maxillary nerves.
Development
Development of the nose
In the early development of the embryo, neural crest cells migrate to form the mesenchymal tissue as ectomesenchyme of the pharyngeal arches. By the end of the fourth week, the first pair of pharyngeal arches form five facial prominences or processes - an unpaired frontonasal process, paired mandibular processes and paired maxillary processes. The nose is largely formed by the fusion of these five facial prominences. The frontonasal process gives rise to the bridge of the nose. The medial nasal processes provide the crest and the tip of the nose, and the lateral nasal processes form the alae or sides of the nose. The frontonasal process is a proliferation of mesenchyme in front of the brain vesicles, and makes up the upper border of the stomodeum.
During the fifth week, the maxillary processes increase in size and at the same time the ectoderm of the frontonasal process becomes thickened at its sides and also increases in size, forming the nasal placodes. The nasal placodes are also known as the olfactory placodes. This development is induced by the ventral part of the forebrain. In the sixth week, the ectoderm in each nasal placode invaginates to form an indented oval-shaped pit, which forms a surrounding raised ridge of tissue. Each nasal pit forms a division between the ridges, into a lateral nasal process on the outer edge, and a medial nasal process on the inner edge.
In the sixth week, the nasal pits deepen as they penetrate into the underlying mesenchyme. At this time, the medial nasal processes migrate towards each other and fuse forming the primordium of the bridge of the nose and the septum. The migration is helped by the increased growth of the maxillary prominences medially, which compresses the medial nasal processes towards the midline. Their merging takes place at the surface, and also at a deeper level. The merge forms the intermaxillary segment, and this is continuous with the rostral part of the nasal septum. The tips of the maxillary processes also grow and fuse with the intermaxillary process. The intermaxillary process gives rise to the philtrum of the upper lip.
At the end of the sixth week, the nasal pits have deepened further and they fuse to make a large ectodermal nasal sac. This sac will be above and to the back of the intermaxillary process. Leading into the seventh week, the nasal sac floor and posterior wall grow to form a thickened plate-like ectodermal structure called the nasal fin. The nasal fin separates the sac from the oral cavity. Within the fin, vacuoles develop that fuse with the nasal sac. This enlarges the nasal sac and at the same time thins the fin to a membrane - the oronasal membrane that separates the nasal pits from the oral cavity. During the seventh week the oronasal membrane ruptures and disintegrates to form an opening - the single primitive choana. The intermaxillary segment extends posteriorly to form the primary palate, which makes up the floor of the nasal cavity. During the eighth and ninth weeks, a pair of thin extensions form from the medial walls of the maxillary process. These extensions are called the palatine shelves that form the secondary palate. The secondary palate will endochondrally ossify to form the hard palate - the end-stage floor of the nasal cavity. During this time, ectoderm and mesoderm of the frontonasal process produce the midline septum. The septum grows down from the roof of the nasal cavity and fuses with the developing palates along the midline. The septum divides the nasal cavity into two nasal passages opening into the pharynx through the definitive choanae.
At ten weeks, the cells differentiate into muscle, cartilage, and bone. Problems at this stage of development can cause birth defects such as choanal atresia (absent or closed passage), facial clefts and nasal dysplasia (faulty or incomplete development) or, extremely rarely, polyrrhinia, the formation of a duplicate nose.
Normal development is critical because the newborn infant breathes through the nose for the first six weeks, and any nasal blockage will need emergency treatment to clear.
Development of the paranasal sinuses
The four pairs of paranasal sinuses (the maxillary, ethmoid, sphenoid, and frontal) develop from the nasal cavity as invaginations extending into their named bones. Two pairs of sinuses form during prenatal development and two pairs form after birth. The maxillary sinuses are the first to appear during the fetal third month. They slowly expand within the maxillary bones and continue to expand throughout childhood. The maxillary sinuses form as invaginations from the nasal sac. The ethmoid sinuses appear in the fetal fifth month as invaginations of the middle meatus. The ethmoid sinuses do not grow into the ethmoid bone and do not completely develop until puberty.
The sphenoid sinuses are extensions of the ethmoid sinuses into the sphenoid bones. They begin to develop around two years of age, and continue to enlarge during childhood.
The frontal sinuses only develop in the fifth or sixth year of childhood, and continue expanding throughout adolescence. Each frontal sinus is made up of two independent spaces that develop from two different sources: one from the expansion of the ethmoid sinuses into the frontal bone, and the other from an invagination. They never coalesce, so they drain independently.
Function
Respiration
The nose is the first organ of the upper respiratory tract in the respiratory system. Its main respiratory function is the supply and conditioning of inhaled air by warming, moisturising, and filtering out particulates. Nasal hair in the nostrils traps large particles preventing their entry into the lungs.
The three nasal conchae positioned in each cavity provide four grooves as air passages, along which the air is circulated and moved to the nasopharynx. The internal structures and cavities, including the conchae and paranasal sinuses, form an integrated system for the conditioning of the air breathed in through the nose. This functioning also includes the major role of the nasal mucosa, and the resulting conditioning of the air before it reaches the lungs is important in maintaining the internal environment and proper functioning of the lungs. The turbulence created by the conchae and meatuses optimises the warming, moistening, and filtering functions of the mucosa. A major protective role is thereby provided by these structures of the upper respiratory tract, in the passage of air to the more delicate structures of the lower respiratory tract.
Sneezing is an important protective reflex action initiated by irritation of the nasal mucosa to expel unwanted particles through the mouth and nose. Photic sneezing is a reflex brought on by different stimuli such as bright lights. The nose is also able to provide sense information as to the temperature of the air being breathed.
Variations in shape of the nose have been hypothesised to possibly be adaptive to regional differences in temperature and humidity, though they may also have been driven by other factors such as sexual selection.
Sense of smell
The nose also plays the major part in the olfactory system. It contains an area of specialised cells, olfactory receptor neurons, responsible for the sense of smell (olfaction). Olfactory mucosa in the upper nasal cavity contains a type of nasal gland called olfactory glands or Bowman's glands, which help in olfaction. The nasal conchae also help in olfaction by directing air-flow to the olfactory region.
Speech
Speech is produced with pressure from the lungs. This can be modified using airflow through the nose in a process called nasalisation. This involves the lowering of the soft palate to produce nasal vowels and consonants by allowing air to escape from both the nose and the mouth. Nasal airflow is also used to produce a variety of click consonants called nasal clicks. The large, hollow cavities of the paranasal sinuses act as resonating chambers that modify and amplify speech and other vocal vibrations passing through them.
Clinical significance
One of the most common medical conditions involving the nose is a nosebleed (epistaxis). Most nosebleeds occur in Kiesselbach's plexus, a vascular plexus in the lower front part of the septum involving the convergence of four arteries. A smaller proportion of nosebleeds that tend to be nontraumatic occur in Woodruff's plexus. Woodruff's plexus is a venous plexus of large thin-walled veins lying in the posterior part of the inferior meatus.
Another common condition is nasal congestion, usually a symptom of infection, particularly sinusitis, or other inflammation of the nasal lining called rhinitis, including allergic rhinitis and nonallergic rhinitis. Chronic nasal obstruction resulting in breathing through the mouth can greatly impair or prevent the nostrils from flaring. One of the causes of snoring is nasal obstruction, and anti-snoring devices such as a nasal strip help to flare the nostrils and keep the airway open. Nasal flaring is usually seen in children when breathing is difficult. Most conditions of nasal congestion also cause a loss of the sense of smell (anosmia). This may also occur in other conditions, for example following trauma, in Kallmann syndrome or Parkinson's disease. A blocked sinus ostium, an opening from a paranasal sinus, will cause fluid to accumulate in the sinus.
In children, the nose is a common site of foreign bodies. The nose is one of the exposed areas that is susceptible to frostbite.
Because of the special nature of the blood supply to the human nose and surrounding area, it is possible for retrograde infections from the nasal area to spread to the brain. For this reason, the area from the corners of the mouth to the bridge of the nose, including the nose and maxilla, is known as the danger triangle of the face.
Infections or other conditions that may result in destruction of, or damage to a part of the nose include rhinophyma, skin cancers particularly basal-cell carcinoma, paranasal sinus and nasal cavity cancer, granulomatosis with polyangiitis, syphilis, leprosy, recreational use of cocaine, chromium and other toxins. The nose may be stimulated to grow in acromegaly, a condition caused by an excess of growth hormone.
A common anatomic variant is an air-filled cavity within a concha known as a concha bullosa. In rare cases a polyp can form inside a bullosa. Usually a concha bullosa is small and without symptoms but when large can cause obstruction to sinus drainage.
Some drugs can be nasally administered, including drug delivery to the brain, and these include nasal sprays and topical treatments. The septal cartilage can be destroyed through the repeated inhalation of recreational drugs such as cocaine. This, in turn, can lead to more widespread collapse of the nasal skeleton.
Sneezing can transmit infections carried in the expelled droplets. This route is called either airborne transmission or aerosol transmission.
Surgical procedures
Badly positioned alar cartilages lack proper support, and can affect the function of the external nasal valve. This can cause breathing problems particularly during deep inhalation. The surgical procedure to correct breathing problems due to disorders in the nasal structures is called a rhinoplasty, and this is also the procedure used for a cosmetic surgery when it is commonly called a "nose job". For surgical procedures of rhinoplasty, the nose is mapped out into a number of subunits and segments. This uses nine aesthetic nasal subunits and six aesthetic nasal segments. A septoplasty is the specific surgery to correct a nasal septum deviation.
A broken nose can result from trauma. Minor fractures may heal on their own. Surgery known as reduction may be carried out on more severe breaks that cause dislocation.
Several nasal procedures of the nose and paranasal sinuses can be carried out using minimally-invasive nasal endoscopy. These procedures aim to restore sinus ventilation, mucociliary clearance, and maintain the health of the sinus mucosa.
Some non-nasal surgeries can also be carried out through the use of an endoscope that is entered through the nose. These endoscopic endonasal surgeries are used to remove tumours from the front of the base of the skull.
Swollen conchae can cause obstruction and nasal congestion, and may be treated surgically by a turbinectomy.
Society and culture
Some people choose to have cosmetic surgery (called a rhinoplasty) to change the appearance of their nose. Nose piercings, such as in the nostril, septum, or bridge, are also common. In certain Asian countries such as China, Japan, South Korea, Malaysia, Thailand and Bangladesh, rhinoplasties are commonly carried out to create a more developed nose bridge or a "high nose". Similarly, "DIY nose lifts" in the form of re-usable cosmetic items have become popular and are sold in many Asian countries such as China, Japan, South Korea, Taiwan, Sri Lanka and Thailand. A high-bridged nose has been a common beauty ideal in many Asian cultures dating back to the beauty ideals of ancient China and India.
In New Zealand, nose pressing ("hongi") is a traditional greeting originating among the Māori people. However it is now generally confined to certain traditional celebrations.
The Hanazuka monument enshrines the mutilated noses of at least 38,000 Koreans killed during the Japanese invasions of Korea from 1592 to 1598.
Nose picking is a common, mildly taboo habit. Medical risks include the spread of infections, nosebleeds and, rarely, perforation of the nasal septum. When it becomes compulsive it is termed rhinotillexomania. The wiping of the nose with the hand, commonly referred to as the "allergic salute", is also mildly taboo and can result in the spreading of infections as well. Habitual as well as fast or rough nose wiping may also result in a crease (known as a transverse nasal crease or groove) running across the nose, and can lead to permanent physical deformity observable in childhood and adulthood.
Nose fetishism (or nasophilia) is the sexual partialism for the nose.
Neanderthals
Clive Finlayson of the Gibraltar Museum said the large Neanderthal noses were an adaptation to the cold, while Todd C. Rae of the American Museum of Natural History noted that primate and arctic animal studies have shown sinus size reduction in areas of extreme cold rather than enlargement, in accordance with Allen's rule. Rae therefore concludes that the large and wide Neanderthal nose evolved for the hotter climate of the Middle East and Africa and remained unchanged when Neanderthals entered Europe.
Miquel Hernández of the Department of Animal Biology at the University of Barcelona said the "high and narrow nose of Eskimos and Neanderthals" is an "adaptation to a cold and dry environment", since it contributes to warming and moisturizing the air and the "recovery of heat and moisture from expired air".
See also
Dried nasal mucus
Empty nose syndrome, a nose crippled by excessive resection of the inferior and/or middle turbinates of the nose
Nasothek
Neti (Hatha Yoga), an Ayurvedic technique of nasal cleansing
Obligate nasal breathing
Sròn, the Scottish Gaelic word for nose and the name of some hills in the Scottish Highlands
References
Further reading
External links
Facial features
Human anatomy
Human head and neck
Olfactory system
Sensory organs
Rhinology | 0.764551 | 0.997035 | 0.762284 |
Insufflation (medicine) | Insufflation is the act of blowing something (such as a gas, powder, or vapor) into a body cavity. Insufflation has many medical uses, most notably as a route of administration for various drugs.
Medical uses
Surgery
Gases are often insufflated into a body cavity to inflate the cavity for more working room, e.g. during laparoscopic surgery. The most common gas used in this manner is carbon dioxide, because it is non-flammable, colorless, and dissolves readily in blood.
Diagnostics
Gases can be insufflated into parts of the body to enhance radiological imaging or to gain access to areas for visual inspection (e.g. during colonoscopy).
Respiratory assistance
Oxygen can be insufflated into the nose by nasal cannulae to assist in respiration.
Mechanical insufflation-exsufflation simulates a cough and assists airway mucus clearance. It is used with patients with neuromuscular disease and muscle weakness due to central nervous system injury.
Glossopharyngeal insufflation is a breathing technique that consists of gulping boluses of air into the lungs. It is also used by breath-hold divers to increase their lung volumes.
Positive airway pressure is a mode of mechanical or artificial ventilation based on insufflation.
Pump inhalers for asthmatics deliver aerosolized drugs into the lungs via the mouth. However, the insufflation by the pump is not adequate for delivery to the lungs, necessitating an active inhalation by the patient.
Anesthesia and critical care
Insufflated gases and vapors are used to ventilate and oxygenate patients (oxygen, air, helium), and to induce, assist in or maintain general anaesthesia (nitrous oxide, xenon, volatile anesthetic agents).
Nasal drug administration
Nasal insufflation is the most common method of nasal administration. Other methods are nasal inhalation and nasal instillation. Drugs administered in this way can have a local effect or a systemic effect. The time of onset for systemic drugs delivered via nasal administration is generally only marginally slower than if given intravenously.
Examples of drugs given
Steroids (local effect) and anti-asthma medication
Hormone replacement
Decongestants (local effect)
Nicotine replacement
Migraine medication
Vaccines
Nasal administration can also be used for treatment of children or patients who are otherwise alarmed or frightened by needles, or where intravenous (IV) access is unavailable.
History
In the 18th century, the tobacco smoke enema, an insufflation of tobacco smoke into the rectum, was a common method of reviving drowning victims.
References
Medical terminology
Routes of administration | 0.764693 | 0.996809 | 0.762253 |
Human science | Human science (or human sciences in the plural) studies the philosophical, biological, social, justice, and cultural aspects of human life. Human science aims to expand the understanding of the human world through a broad interdisciplinary approach. It encompasses a wide range of fields - including history, philosophy, sociology, psychology, justice studies, evolutionary biology, biochemistry, neurosciences, folkloristics, and anthropology. It is the study and interpretation of the experiences, activities, constructs, and artifacts associated with human beings. The study of human sciences attempts to expand and enlighten the human being's knowledge of its existence, its interrelationship with other species and systems, and the development of artifacts to perpetuate human expression and thought. It is the study of human phenomena. The study of the human experience is historical and current in nature. It requires the evaluation and interpretation of the historic human experience and the analysis of current human activity to gain an understanding of human phenomena and to project the outlines of human evolution. Human science is an objective, informed critique of human existence and how it relates to reality.
Underlying human science is the relationship between various humanistic modes of inquiry within fields such as history, sociology, folkloristics, anthropology, and economics, and advances in such things as genetics, evolutionary biology, and the social sciences, for the purpose of understanding our lives in a rapidly changing world. Its use of an empirical methodology that encompasses psychological experience contrasts with the purely positivistic approach typical of the natural sciences, which excludes all methods not based solely on sensory observations. Modern approaches in the human sciences integrate an understanding of human structure, function, and adaptation with a broader exploration of what it means to be human. The term is also used to distinguish not only the content of a field of study from that of the natural sciences, but also its methodology.
Meaning of 'science'
Ambiguity and confusion regarding the usage of the terms 'science', 'empirical science', and 'scientific method' have complicated the usage of the term 'human science' with respect to human activities. The term 'science' is derived from the Latin scientia, meaning 'knowledge'. 'Science' may be appropriately used to refer to any branch of knowledge or study dealing with a body of facts or truths systematically arranged to show the operation of general laws.
However, according to positivists, the only authentic knowledge is scientific knowledge, which comes from the positive affirmation of theories through strict scientific methods, the application of knowledge, or mathematics. As a result of the positivist influence, the term science is frequently employed as a synonym for empirical science. Empirical science is knowledge based on the scientific method, a systematic approach to verification of knowledge first developed for dealing with natural physical phenomena and emphasizing the importance of experience based on sensory observation. However, even with regard to the natural sciences, significant differences exist among scientists and philosophers of science with regard to what constitutes valid scientific method—for example, evolutionary biology, geology and astronomy, studying events that cannot be repeated, can use the method of historical narratives. More recently, usage of the term has been extended to the study of human social phenomena. Thus, natural and social sciences are commonly classified as science, whereas the study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts are referred to as the humanities. Ambiguity with respect to the meaning of the term science is aggravated by the widespread use of the term formal science with reference to any one of several sciences that is predominantly concerned with abstract form that cannot be validated by physical experience through the senses, such as logic, mathematics, and the theoretical branches of computer science, information theory, and statistics.
History
The phrase 'human science' in English was used during the 17th-century scientific revolution, for example by Theophilus Gale, to draw a distinction between supernatural knowledge (divine science) and study by humans (human science). John Locke also uses 'human science' to mean knowledge produced by people, but without the distinction. By the 20th century, this latter meaning was used at the same time as 'sciences that make human beings the topic of research'.
Early development
The term "moral science" was used by David Hume (1711–1776) in his Enquiry concerning the Principles of Morals to refer to the systematic study of human nature and relationships. Hume wished to establish a "science of human nature" based upon empirical phenomena, and excluding all that does not arise from observation. Rejecting teleological, theological and metaphysical explanations, Hume sought to develop an essentially descriptive methodology; phenomena were to be precisely characterized. He emphasized the necessity of carefully explicating the cognitive content of ideas and vocabulary, relating these to their empirical roots and real-world significance.
A variety of early thinkers in the humanistic sciences took up Hume's direction. Adam Smith, for example, conceived of economics as a moral science in the Humean sense.
Later development
Partly in reaction to the establishment of positivist philosophy and the latter's Comtean intrusions into traditionally humanistic areas such as sociology, non-positivistic researchers in the humanistic sciences began to carefully but emphatically distinguish the methodological approach appropriate to these areas of study, for which the unique and distinguishing characteristics of phenomena are in the forefront (e.g., for the biographer), from that appropriate to the natural sciences, for which the ability to link phenomena into generalized groups is foremost. In this sense, Johann Gustav Droysen contrasted the humanistic science's need to comprehend the phenomena under consideration with natural science's need to explain phenomena, while Windelband coined the terms idiographic for a descriptive study of the individual nature of phenomena, and nomothetic for sciences that aim to define the generalizing laws.
Wilhelm Dilthey brought nineteenth-century attempts to formulate a methodology appropriate to the humanistic sciences together with Hume's term "moral science", which he translated as Geisteswissenschaft - a term with no exact English equivalent. Dilthey attempted to articulate the entire range of the moral sciences in a comprehensive and systematic way. Meanwhile, his conception of “Geisteswissenschaften” encompasses also the abovementioned study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts. He characterized the scientific nature of a study as depending upon:
The conviction that perception gives access to reality
The self-evident nature of logical reasoning
The principle of sufficient reason
But the specific nature of the Geisteswissenschaften is based on the "inner" experience (Erleben), the "comprehension" (Verstehen) of the meaning of expressions and "understanding" in terms of the relations of the part and the whole – in contrast to the Naturwissenschaften, the "explanation" of phenomena by hypothetical laws in the "natural sciences".
Edmund Husserl, a student of Franz Brentano, articulated his phenomenological philosophy in a way that could be thought of as a basis for Dilthey's attempt. Dilthey appreciated Husserl's Logische Untersuchungen (1900/1901, the first draft of Husserl's Phenomenology) as an "epoch-making" epistemological foundation for his conception of Geisteswissenschaften.
In recent years, 'human science' has been used to refer to "a philosophy and approach to science that seeks to understand human experience in deeply subjective, personal, historical, contextual, cross-cultural, political, and spiritual terms. Human science is the science of qualities rather than of quantities and closes the subject-object split in science. In particular, it addresses the ways in which self-reflection, art, music, poetry, drama, language and imagery reveal the human condition. By being interpretive, reflective, and appreciative, human science re-opens the conversation among science, art, and philosophy."
Objective vs. subjective experiences
Since Auguste Comte, the positivistic social sciences have sought to imitate the approach of the natural sciences by emphasizing the importance of objective external observations and searching for universal laws whose operation is predicated on external initial conditions that do not take into account differences in subjective human perception and attitude. Critics argue that subjective human experience and intention plays such a central role in determining human social behavior that an objective approach to the social sciences is too confining. Rejecting the positivist influence, they argue that the scientific method can rightly be applied to subjective, as well as objective, experience. The term subjective is used in this context to refer to inner psychological experience rather than outer sensory experience. It is not used in the sense of being prejudiced by personal motives or beliefs.
Human science in universities
Since 1878, the University of Cambridge has been home to the Moral Sciences Club, with strong ties to analytic philosophy.
The Human Science degree is relatively young. It has been a degree subject at Oxford since 1969. At University College London, it was proposed in 1973 by Professor J. Z. Young and implemented two years later. His aim was to train general science graduates who would be scientifically literate, numerate and easily able to communicate across a wide range of disciplines, replacing the traditional classical training for higher-level government and management careers. Central topics include the evolution of humans, their behavior, molecular and population genetics, population growth and aging, ethnic and cultural diversity, and human interaction with the environment, including conservation, disease, and nutrition. The study of both biological and social disciplines, integrated within a framework of human diversity and sustainability, should enable the human scientist to develop professional competencies suited to address such multidimensional human problems.
In the United Kingdom, Human Science is offered at the degree level at several institutions which include:
University of Oxford
University College London (as Human Sciences and as Human Sciences and Evolution)
King's College London (as Anatomy, Developmental & Human Biology)
University of Exeter
Durham University (as Health and Human Sciences)
Cardiff University (as Human and Social Sciences)
In other countries:
Osaka University
Waseda University
Tokiwa University
Senshu University
Aoyama Gakuin University (As College of Community Studies)
Kobe University
Kanagawa University
Bunkyo University
Sophia University
Ghent University (in the narrow sense, as Moral sciences, "an integrated empirical and philosophical study of values, norms and world views")
See also
History of the Human Sciences (journal)
Social science
Humanism
Humanities
References
Bibliography
Flew, A. (1986). David Hume: Philosopher of Moral Science, Basil Blackwell, Oxford
Hume, David, An Enquiry Concerning the Principles of Morals
External links
Institute for Comparative Research in Human and Social Sciences (ICR) -Japan
Human Science Lab -London
Human Science(s) across Global Academies
Marxism philosophy | 0.768495 | 0.991869 | 0.762247 |
Serotonin syndrome (SS) is a group of symptoms that may occur with the use of certain serotonergic medications or drugs. The symptoms can range from mild to severe, and are potentially fatal. Symptoms in mild cases include high blood pressure and a fast heart rate, usually without a fever. Symptoms in moderate cases include high body temperature, agitation, increased reflexes, tremor, sweating, dilated pupils, and diarrhea. In severe cases, body temperature can increase to greater than 41.1 °C (106 °F). Complications may include seizures and extensive muscle breakdown.
Serotonin syndrome is typically caused by the use of two or more serotonergic medications or drugs. This may include selective serotonin reuptake inhibitor (SSRI), serotonin norepinephrine reuptake inhibitor (SNRI), monoamine oxidase inhibitor (MAOI), tricyclic antidepressants (TCAs), amphetamines, pethidine (meperidine), tramadol, dextromethorphan, buspirone, , , St. John's wort, triptans, MDMA, metoclopramide, or cocaine. It occurs in about 15% of SSRI overdoses. It is a predictable consequence of excess serotonin on the central nervous system. Onset of symptoms is typically within a day of the extra serotonin.
Diagnosis is based on a person's symptoms and history of medication use. Other conditions that can produce similar symptoms such as neuroleptic malignant syndrome, malignant hyperthermia, anticholinergic toxicity, heat stroke, and meningitis should be ruled out. No laboratory tests can confirm the diagnosis.
Initial treatment consists of discontinuing medications which may be contributing. In those who are agitated, benzodiazepines may be used. If this is not sufficient, a serotonin antagonist such as cyproheptadine may be used. In those with a high body temperature, active cooling measures may be needed. The number of cases of SS that occur each year is unclear. With appropriate medical intervention the risk of death is low, likely less than 1%. The high-profile case of Libby Zion, who is generally accepted to have died from SS, resulted in changes to graduate medical school education in New York State.
Signs and symptoms
Symptom onset is usually relatively rapid, and SS encompasses a wide range of clinical findings. Mild symptoms may consist of increased heart rate, shivering, sweating, dilated pupils, myoclonus (intermittent jerking or twitching), as well as hyperreflexia (overresponsive reflexes). Many of these symptoms may be side effects of the drug or drug interaction causing excessive levels of serotonin rather than an effect of elevated serotonin itself.
Tremor is a common side effect of MDMA's action on dopamine, whereas hyperreflexia is symptomatic of exposure to serotonin agonists. Moderate intoxication includes additional abnormalities such as hyperactive bowel sounds, high blood pressure and hyperthermia, with a temperature as high as 40 °C (104 °F). The overactive reflexes and clonus in moderate cases may be greater in the lower limbs than in the upper limbs. Mental changes include hypervigilance or insomnia and agitation. Severe symptoms include severe increases in heart rate and blood pressure. Temperature may rise to above 41.1 °C (106 °F) in life-threatening cases. Other abnormalities include metabolic acidosis, rhabdomyolysis, seizures, kidney failure, and disseminated intravascular coagulation; these effects usually arising as a consequence of hyperthermia.
The symptoms are often present as a clinical triad of abnormalities:
Cognitive effects: headache, agitation, hypomania, mental confusion, hallucinations, coma
Autonomic effects: shivering, sweating, hyperthermia, vasoconstriction, tachycardia, nausea, diarrhea
Somatic effects: myoclonus (muscle twitching), hyperreflexia (manifested by clonus), tremor
Causes
Numerous medications and street drugs can cause SS when taken alone at high doses or in combination with other serotonergic agents.
Many cases of serotonin toxicity occur in people who have ingested drug combinations that synergistically increase synaptic serotonin. It may also occur due to an overdose of a single serotonergic agent. The combination of monoamine oxidase inhibitors (MAOIs) with serotonin precursors such as L-tryptophan or 5-hydroxytryptophan (5-HTP) poses a particularly acute risk of life-threatening serotonin syndrome. The combination of MAOIs with tryptamine agonists (as in ayahuasca) can present similar dangers to their combination with precursors, but this phenomenon has been described in general terms as the cheese effect. Many MAOIs irreversibly inhibit monoamine oxidase. It can take at least four weeks for this enzyme to be replaced by the body in the instance of irreversible inhibitors. With respect to tricyclic antidepressants, only clomipramine and imipramine have a risk of causing SS.
Many medications may have been incorrectly thought to cause SS. For example, some case reports have implicated atypical antipsychotics in SS, but it appears based on their pharmacology that they are unlikely to cause the syndrome. It has also been suggested that mirtazapine has no significant serotonergic effects and is therefore not a dual action drug. Bupropion has also been suggested to cause SS, although as there is no evidence that it has any significant serotonergic activity, it is thought unlikely to produce the syndrome. In 2006 the US Food and Drug Administration (FDA) issued an alert suggesting that the combined use of either SSRIs or SNRIs with triptan medications or sibutramine could potentially lead to severe cases of SS. This has been disputed by other researchers, as none of the cases reported by the FDA met the Hunter criteria for SS. The condition has however occurred in surprising clinical situations, and because of phenotypic variations among individuals, it has been associated with unexpected drugs, including mirtazapine.
The relative risk and severity of serotonergic side effects and serotonin toxicity, with individual drugs and combinations, is complex. SS has been reported in patients of all ages, including the elderly, children, and even newborn infants due to in utero exposure. The serotonergic toxicity of SSRIs increases with dose, but even in overdose, it is insufficient to cause fatalities from SS in healthy adults. Elevations of central nervous system (CNS) serotonin will typically only reach potentially fatal levels when drugs with different mechanisms of action are mixed together. Various drugs, other than SSRIs, also have clinically significant potency as serotonin reuptake inhibitors, (such as tramadol, amphetamine, and MDMA) and are associated with severe cases of the syndrome.
Although the most significant health risk associated with opioid overdoses is respiratory depression, it is still possible for an individual to develop SS from certain opioids without the loss of consciousness. However, most cases of opioid-related SS involve the concurrent use of a serotonergic drug such as an antidepressant. Nonetheless, it is not uncommon for individuals taking opioids to also be taking antidepressants due to the comorbidity of pain and depression.
Cases where opioids alone are the cause of SS are typically seen with tramadol, because of its dual mechanism as a serotonin-norepinephrine reuptake inhibitor. SS caused by tramadol can be particularly problematic if an individual taking the drug is unaware of the risks associated with it and attempts to self-medicate symptoms such as headache, agitation, and tremors with more opioids, further exacerbating the condition.
Pathophysiology
Serotonin is a neurotransmitter involved in multiple complex biological processes including aggression, pain, sleep, appetite, anxiety, depression, migraine, and vomiting. In humans the effects of excess serotonin were first noted in 1960 in patients receiving an MAOI and tryptophan. The syndrome is caused by increased serotonin in the CNS. It was originally suspected that agonism of 5-HT1A receptors in central grey nuclei and the medulla oblongata was responsible for the development of the syndrome. Further study has determined that overstimulation of primarily the 5-HT2A receptors appears to contribute substantially to the condition. The 5-HT1A receptor may still contribute through a pharmacodynamic interaction in which increased synaptic concentrations of a serotonin agonist saturate all receptor subtypes. Additionally, noradrenergic CNS hyperactivity may play a role as CNS norepinephrine concentrations are increased in SS and levels appear to correlate with the clinical outcome. Other neurotransmitters may also play a role; NMDA receptor antagonists and γ-aminobutyric acid have been suggested as affecting the development of the syndrome. Serotonin toxicity is more pronounced following supra-therapeutic doses and overdoses, and they merge in a continuum with the toxic effects of overdose.
Spectrum concept
A postulated "spectrum concept" of serotonin toxicity emphasises the role that progressively increasing serotonin levels play in mediating the clinical picture as side effects merge into toxicity. The dose-response relationship is the effect of progressive elevation of serotonin, either by raising the dose of one drug, or combining it with another serotonergic drug which may produce large elevations in serotonin levels. Some experts prefer the terms serotonin toxicity or serotonin toxidrome, to more accurately reflect that it is a form of poisoning.
Diagnosis
There is no specific test for SS. Diagnosis is by symptom observation and investigation of the person's history. Several criteria have been proposed. The first evaluated criteria were introduced in 1991 by Harvey Sternbach. Researchers later developed the Hunter Toxicity Criteria Decision Rules, which have better sensitivity and specificity, 84% and 97%, respectively, when compared with the gold standard of diagnosis by a medical toxicologist. As of 2007, Sternbach's criteria were still the most commonly used.
The most important symptoms for diagnosing SS are tremor, extreme aggressiveness, akathisia, or clonus (spontaneous, inducible and ocular). Physical examination of the patient should include assessment of deep tendon reflexes and muscle rigidity, the dryness of the mucosa of the mouth, the size and reactivity of the pupils, the intensity of bowel sounds, skin color, and the presence or absence of sweating. The patient's history also plays an important role in diagnosis; investigations should include inquiries about the use of prescription and over-the-counter drugs, illicit substances, and dietary supplements, as all these agents have been implicated in the development of SS. To fulfill the Hunter Criteria, a patient must have taken a serotonergic agent and meet one of the following conditions (a minimal decision-rule sketch follows the list):
Spontaneous clonus, or
Inducible clonus plus agitation or diaphoresis, or
Ocular clonus plus agitation or diaphoresis, or
Tremor plus hyperreflexia, or
Hypertonia plus temperature > 38 °C (100.4 °F) plus ocular clonus or inducible clonus
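As a rough illustration of how these decision rules combine, the following Python sketch encodes the five branches as a single boolean check. The findings structure, field names, and function are assumptions introduced here purely for illustration; the 38 °C threshold follows the criterion above, and this is not clinical software.

```python
# Minimal sketch of the Hunter decision rules listed above; illustrative only.
from dataclasses import dataclass

@dataclass
class Findings:
    serotonergic_agent: bool = False   # precondition: a serotonergic agent was taken
    spontaneous_clonus: bool = False
    inducible_clonus: bool = False
    ocular_clonus: bool = False
    agitation: bool = False
    diaphoresis: bool = False
    tremor: bool = False
    hyperreflexia: bool = False
    hypertonia: bool = False
    temperature_c: float = 37.0

def meets_hunter_criteria(f: Findings) -> bool:
    """Return True if the findings satisfy any branch of the Hunter criteria."""
    if not f.serotonergic_agent:
        return False
    agitated_or_sweating = f.agitation or f.diaphoresis
    return (
        f.spontaneous_clonus
        or (f.inducible_clonus and agitated_or_sweating)
        or (f.ocular_clonus and agitated_or_sweating)
        or (f.tremor and f.hyperreflexia)
        or (f.hypertonia and f.temperature_c > 38.0
            and (f.ocular_clonus or f.inducible_clonus))
    )

# Example: tremor plus hyperreflexia in a patient on a serotonergic drug.
print(meets_hunter_criteria(Findings(serotonergic_agent=True,
                                     tremor=True, hyperreflexia=True)))  # True
```

Each branch of the return expression maps onto one bullet above, with the serotonergic-agent requirement acting as a precondition for all of them.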
Differential diagnosis
Serotonin toxicity has a characteristic picture which is generally hard to confuse with other medical conditions, but in some situations it may go unrecognized because it may be mistaken for a viral illness, anxiety disorders, neurological disorder, anticholinergic poisoning, sympathomimetic toxicity, or worsening psychiatric condition. The condition most often confused with serotonin syndrome is neuroleptic malignant syndrome (NMS). Neuroleptic malignant syndrome and SS share some clinical features, which can make differentiating them difficult. In both conditions, autonomic dysfunction and altered mental status develop. However, they are actually very different conditions with different underlying dysfunction (serotonin excess vs dopamine blockade). Both the time course and the clinical features of NMS differ significantly from those of serotonin toxicity. Serotonin toxicity has a rapid onset after the administration of a serotonergic drug and responds to serotonin blockade with drugs such as chlorpromazine and cyproheptadine. Dopamine receptor blockade (NMS) has a slow onset, typically evolves over several days after administration of a neuroleptic drug, and responds to dopamine agonists such as bromocriptine.
Differential diagnosis may become difficult in patients recently exposed to both serotonergic and neuroleptic drugs. Bradykinesia and extrapyramidal "lead pipe" rigidity are classically present in NMS, whereas SS causes hyperkinesia and clonus; these distinct symptoms can aid in differentiation.
Management
Management is based primarily on stopping the usage of the precipitating drugs, the administration of serotonin antagonists such as cyproheptadine (with a regimen of 12 mg for the initial dose followed by 2 mg every 2 hours until a clinical response is seen, while some claim that a higher initial dose up to 32 mg has more benefit), and supportive care including the control of agitation, the control of autonomic instability, and the control of hyperthermia. Additionally, those who ingest large doses of serotonergic agents may benefit from gastrointestinal decontamination with activated charcoal if it can be administered within an hour of overdose. The intensity of therapy depends on the severity of symptoms. If the symptoms are mild, treatment may only consist of discontinuation of the offending medication or medications, offering supportive measures, giving benzodiazepines for myoclonus, and waiting for the symptoms to resolve. Moderate cases should have all thermal and cardiorespiratory abnormalities corrected and can benefit from serotonin antagonists. The serotonin antagonist cyproheptadine is the recommended initial therapy, although there have been no controlled trials demonstrating its efficacy for SS. Despite the absence of controlled trials, there are a number of case reports detailing apparent improvement after people have been administered cyproheptadine. Animal experiments also suggest a benefit from serotonin antagonists. Cyproheptadine is only available as tablets and therefore can only be administered orally or via a nasogastric tube; it is unlikely to be effective in people administered activated charcoal and has limited use in severe cases. Cyproheptadine can be stopped when the person is no longer experiencing symptoms and the half-life of the serotonergic medication has passed.
Additional pharmacological treatment for severe cases includes administering atypical antipsychotic drugs with serotonin antagonist activity such as olanzapine or asenapine. Critically ill people should receive the above therapies as well as sedation or neuromuscular paralysis. People who have autonomic instability such as low blood pressure require treatment with direct-acting sympathomimetics such as epinephrine, norepinephrine, or phenylephrine. Conversely, hypertension or tachycardia can be treated with short-acting antihypertensive drugs such as nitroprusside or esmolol; longer acting drugs such as propranolol should be avoided as they may lead to hypotension and shock. The cause of serotonin toxicity or accumulation is an important factor in determining the course of treatment. Serotonin is catabolized by monoamine oxidase A in the presence of oxygen, so if care is taken to prevent an unsafe spike in body temperature or metabolic acidosis, oxygenation will assist in dispatching the excess serotonin. The same principle applies to alcohol intoxication. In cases of SS caused by MAOIs, oxygenation will not help to dispatch serotonin. In such instances, hydration is the main concern until the enzyme is regenerated.
Agitation
Specific treatment for some symptoms may be required. One of the most important treatments is the control of agitation, due to the extreme possibility of injury to the person themselves or caregivers; benzodiazepines should be administered at the first sign of this. Physical restraints are not recommended for agitation or delirium as they may contribute to mortality by enforcing isometric muscle contractions that are associated with severe lactic acidosis and hyperthermia. If physical restraints are necessary for severe agitation they must be rapidly replaced with pharmacological sedation. The agitation can cause a large amount of muscle breakdown (rhabdomyolysis), which can cause severe damage to the kidneys.
Hyperthermia
Treatment for hyperthermia includes reducing muscle overactivity via sedation with a benzodiazepine. More severe cases may require muscular paralysis with vecuronium, intubation, and artificial ventilation. Suxamethonium is not recommended for muscular paralysis as it may increase the risk of cardiac dysrhythmia from hyperkalemia associated with rhabdomyolysis. Antipyretic agents are not recommended as the increase in body temperature is due to muscular activity, not a hypothalamic temperature set point abnormality.
Prognosis
Upon the discontinuation of serotonergic drugs, most cases of SS resolve within 24 hours, although in some cases delirium may persist for a number of days. Symptoms typically persist for a longer time frame in patients taking drugs which have a long elimination half-life, active metabolites, or a protracted duration of action.
Persisting chronic symptoms have been reported in some cases, and antidepressant discontinuation may contribute to ongoing features. Following appropriate medical management, SS is generally associated with a favorable prognosis.
Epidemiology
Epidemiological studies of SS are difficult as many physicians are unaware of the diagnosis or they may miss the syndrome due to its variable manifestations. In 1998 a survey conducted in England found that 85% of the general practitioners that had prescribed the antidepressant nefazodone were unaware of SS. The incidence may be increasing as a larger number of pro-serotonergic drugs (drugs which increase serotonin levels) are now being used in clinical practice. One postmarketing surveillance study identified an incidence of 0.4 cases per 1000 patient-months for patients who were taking nefazodone. Additionally, around 14–16% of persons who overdose on SSRIs are thought to develop SS.
Notable cases
The most widely recognized example of SS was the death of Libby Zion in 1984. Zion was a freshman at Bennington College at her death on March 5, 1984, at age 18. She died within 8 hours of her emergency admission to the New York Hospital Cornell Medical Center. She had an ongoing history of depression, and came to the Manhattan hospital on the evening of March 4, 1984, with a fever, agitation and "strange jerking motions" of her body. She also seemed disoriented at times. The emergency room physicians were unable to diagnose her condition definitively but admitted her for hydration and observation. Her death was caused by a combination of pethidine and phenelzine. A medical intern prescribed the pethidine. The case influenced graduate medical education and residency work hours. Limits were set on working hours for medical postgraduates, commonly referred to as interns or residents, in hospital training programs, and they also now require closer senior physician supervision.
See also
Carcinoid syndrome
References
External links
Image demonstrating findings in moderately severe serotonin syndrome
Neuropharmacology
Clinical pharmacology
Rare syndromes
Adverse effects of psychoactive drugs
Flushing (physiology) | Flushing is a marked reddening of the face, and often other areas of the skin, resulting from various physiological conditions. Flushing is generally distinguished from blushing, since blushing is psychosomatic, milder, generally restricted to the face, cheeks or ears, and generally assumed to reflect emotional stress, such as embarrassment, anger, or romantic stimulation. Flushing is also a cardinal symptom of carcinoid syndrome—the syndrome that results from hormones (often serotonin or histamine) being secreted into systemic circulation.
Causes
abrupt cessation of physical exertion (resulting in heart output in excess of current muscular need for blood flow)
abdominal cutaneous nerve entrapment syndrome (ACNES), usually in patients who have had abdominal surgery
alcohol flush reaction
antiestrogens such as tamoxifen
atropine poisoning
body contact with warm or hot water (hot tub, bath, shower)
butorphanol reaction with some narcotic analgesics (since butorphanol is also an antagonist)
caffeine consumption
carbon monoxide poisoning
carcinoid tumor
chronic obstructive pulmonary disease (COPD), especially emphysema (also known as "pink puffer")
cluster headache attack or headache
compression of the nerve by the sixth thoracic vertebra
coughing, particularly severe coughing fits
Cushing's syndrome
dehydration
dysautonomia
emotions: anger, embarrassment (for this reason it is also called erythema pudoris, from the Latinized Greek word for "redness" and the Latin "of embarrassment")
fever
fibromyalgia
histamines
homocystinuria (flushing across the cheeks)
Horner's syndrome
hot flush
hyperglycaemia
hyperstimulation of the parasympathetic nervous system, especially the vagus nerve
hyperthyroidism
inflammation (for example, caused by allergic reaction or infection)
iron poisoning
Jarisch-Herxheimer reaction (caused by antibiotics)
keratosis pilaris rubra faceii
Kratom
mastocytosis
medullary thyroid cancer
mixing an antibiotic with alcohol
neuroendocrine tumors
niacin (vitamin B3)
pheochromocytoma
polycythemia vera
powerful vasodilators, such as dihydropyridine calcium channel blockers
severe pain
sexual arousal, especially orgasm (see following section)
sexual intercourse (see below)
sneezing (red nose)
some recreational drugs, such as alcohol, heroin, cocaine and amphetamines
spicy foods
sunburn (erythema)
tachycardia
vinpocetine
Sex flush
Commonly referred to as the sex flush, vasocongestion (increased blood flow) of the skin can occur during all four phases of the human sexual response cycle. Studies show that the sex flush occurs in approximately 50–75% of females and 25% of males, yet not consistently. The sex flush tends to occur more often under warmer conditions and may not appear at all under lower temperatures.
During the female sex flush, pinkish spots develop under the breasts, then spread to the breasts, torso, face, hands, soles of the feet, and possibly over the entire body. Vasocongestion is also responsible for the darkening of the clitoris and the walls of the vagina during sexual arousal. During the male sex flush, the coloration of the skin develops less consistently than in the female, but typically starts with the epigastrium (upper abdomen), spreads across the chest, then continues to the neck, face, forehead, back, and sometimes, shoulders and forearms.
The sex flush typically disappears soon after reaching orgasm, but in other cases it may take up to two hours or more to fade, and sometimes intense sweating occurs simultaneously.
See also
Cholinergic urticaria
Erythema
Pallor
Rash
References
Sexual arousal
Symptoms and signs: Skin and subcutaneous tissue | 0.765987 | 0.995049 | 0.762195 |
Medical laboratory | A medical laboratory or clinical laboratory is a laboratory where tests are carried out on clinical specimens to obtain information about the health of a patient to aid in diagnosis, treatment, and prevention of disease. Clinical medical laboratories are an example of applied science, as opposed to research laboratories that focus on basic science, such as found in some academic institutions.
Medical laboratories vary in size and complexity and so offer a variety of testing services. More comprehensive services can be found in acute-care hospitals and medical centers, where 70% of clinical decisions are based on laboratory testing. Doctors' offices and clinics, as well as skilled nursing and long-term care facilities, may have laboratories that provide more basic testing services. Commercial medical laboratories operate as independent businesses and provide testing that is otherwise not provided in other settings due to low test volume or complexity.
Departments
In hospitals and other patient-care settings, laboratory medicine is provided by the Department of Pathology and Medical Laboratory, and generally divided into two sections, each of which will be subdivided into multiple specialty areas. The two sections are:
Anatomic pathology: areas included here are histopathology, cytopathology, electron microscopy, and gross pathology.
Medical Laboratory, which typically includes the following areas:
Clinical microbiology: This encompasses several different sciences, including bacteriology, virology, parasitology, immunology, and mycology.
Clinical chemistry: This area typically includes automated analysis of blood specimens, including tests related to enzymology, toxicology and endocrinology.
Hematology: This area includes automated and manual analysis of blood cells. It also often includes coagulation.
Blood bank involves the testing of blood specimens in order to provide blood transfusion and related services.
Molecular diagnostics: DNA testing may be done here, along with a subspecialty known as cytogenetics.
Reproductive biology testing is available in some laboratories, including semen analysis, sperm banking, and assisted reproductive technology.
Layouts of clinical laboratories in health institutions vary greatly from one facility to another. For instance, some health facilities have a single laboratory for the microbiology section, while others have a separate lab for each specialty area.
The following is an example of a typical breakdown of the responsibilities of each area:
Microbiology includes culturing of the bacteria in clinical specimens, such as feces, urine, blood, sputum, cerebrospinal fluid, and synovial fluid, as well as possible infected tissue. The work here is mainly concerned with cultures, to look for suspected pathogens which, if found, are further identified based on biochemical tests. Also, sensitivity testing is carried out to determine whether the pathogen is sensitive or resistant to a suggested medicine. Results are reported with the identified organism(s) and the type and amount of drug(s) that should be prescribed for the patient.
Parasitology is where specimens are examined for parasites. For example, fecal samples may be examined for evidence of intestinal parasites such as tapeworms or hookworms.
Virology is concerned with identification of viruses in specimens such as blood, urine, and cerebrospinal fluid.
Hematology analyzes whole blood specimens to perform full blood counts, and includes the examination of blood films. Other specialized tests include cell counts on various bodily fluids.
Coagulation testing determines various blood clotting times, coagulation factors, and platelet function.
Clinical biochemistry commonly performs dozens of different tests on serum or plasma. These tests, mostly automated, include quantitative testing for a wide array of substances, such as lipids, blood sugar, enzymes, and hormones.
Toxicology is mainly focused on testing for pharmaceutical and recreational drugs. Urine and blood samples are the common specimens.
Immunology/Serology uses the process of antigen-antibody interaction as a diagnostic tool. Compatibility of transplanted organs may also be determined with these methods.
Immunohematology, or blood bank, determines blood groups and performs compatibility testing on donor blood and recipients. It also prepares blood components, derivatives, and products for transfusion. This area determines a patient's blood type and Rh status, checks for antibodies to common antigens found on red blood cells, and cross-matches units that are negative for the antigen.
Urinalysis tests urine for many analytes, including by microscopic examination. If more precise quantification of urine chemicals is required, the specimen is processed in the clinical biochemistry lab.
Histopathology processes solid tissue removed from the body (biopsies) for evaluation at the microscopic level.
Cytopathology examines smears of cells from all over the body (such as from the cervix) for evidence of inflammation, cancer, and other conditions.
Molecular diagnostics includes specialized tests involving DNA and RNA analysis.
Cytogenetics involves using blood and other cells to produce a DNA karyotype. This can be helpful in cases of prenatal diagnosis (e.g. Down's syndrome) as well as in some cancers which can be identified by the presence of abnormal chromosomes.
Surgical pathology examines organs, limbs, tumors, fetuses, and other tissues biopsied in surgery such as breast mastectomies.
Medical laboratory staff
The staff of clinical laboratories may include:
Pathologist
Clinical biochemist
Laboratory assistant (LA)
Laboratory manager
Biomedical scientist (BMS) in the UK, Medical laboratory scientist (MT, MLS or CLS) in the US or Medical laboratory technologist in Canada
Medical laboratory technician/clinical laboratory technician (MLT or CLT in US)
Medical laboratory assistant (MLA)
Phlebotomist (PBT)
Histology technician
Labor shortages
The United States has a documented shortage of working laboratory professionals. For example, vacancy rates for medical laboratory scientists ranged from 5% to 9% across departments. The decline is primarily due to retirements and to at-capacity educational programs that cannot expand, which limits the number of new graduates. Professional organizations and some state educational systems are responding by developing ways to promote the lab professions in an effort to combat this shortage. Vacancy rates for medical laboratory scientists were surveyed again in 2018, when the range across departments had widened to between 4% and as high as 13%, with the highest rates seen in phlebotomy and immunology. Microbiology also struggled with vacancies, averaging a 10–11% vacancy rate across the United States in the 2018 survey. Recruitment campaigns, funding for college programs, and better salaries for laboratory workers are among the approaches being used to decrease the vacancy rate. The National Center for Workforce Analysis has estimated that by 2025 there will be a 24% increase in demand for lab professionals. Highlighted by the COVID-19 pandemic, work is being done to address this shortage, including bringing pathology and laboratory medicine into the conversation surrounding access to healthcare. COVID-19 brought the laboratory to the attention of the government and the media, giving an opportunity for staffing shortages and resource challenges to be heard and dealt with.
Types of laboratory
In most developed countries, there are two main types of lab processing the majority of medical specimens. Hospital laboratories are attached to a hospital and perform tests on its patients. Private (or community) laboratories receive samples from general practitioners, insurance companies, clinical research sites and other health clinics for analysis. For extremely specialised tests, samples may go to a research laboratory. Some specimens are sent between different labs for uncommon tests; for example, it may be more cost effective for a particular laboratory to specialize in a less common test, receiving specimens (and payment) from other labs, while sending out specimens for tests it does not perform.
In many countries there are specialized types of medical laboratories according to the types of investigations carried out. Organisations that provide blood products for transfusion to hospitals, such as the Red Cross, will provide access to their reference laboratory for their customers. Some laboratories specialize in molecular diagnostic and cytogenetic testing, in order to provide information regarding diagnosis and treatment of genetic or cancer-related disorders.
Specimen processing and work flow
In a hospital setting, sample processing will usually start with a set of samples arriving with a test request, either on a form or electronically via the laboratory information system (LIS). Inpatient specimens will already be labeled with patient and testing information provided by the LIS. Entry of test requests into the LIS involves typing (or scanning, where barcodes are used) the laboratory number and entering the patient identification, as well as any tests requested. This allows laboratory analyzers, computers, and staff to recognize what tests are pending, and also gives a location (such as a hospital department, doctor or other customer) for results reporting.
Once the specimens are assigned a laboratory number by the LIS, a sticker is typically printed that can be placed on the tubes or specimen containers. This label has a barcode that can be scanned by automated analyzers and test requests uploaded to the analyzer from the LIS.
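As a purely illustrative sketch of the accessioning step just described, the following Python snippet models assigning a laboratory number to a test request and composing the text a barcode label might encode. All class, field and function names here are hypothetical and are not taken from any real LIS product.

```python
# Illustrative sketch only; names are hypothetical, not from any real LIS.
from dataclasses import dataclass
from itertools import count

_lab_numbers = count(1000)  # simple incrementing accession numbers


@dataclass
class TestRequest:
    patient_id: str
    ordering_location: str   # where results will be reported (ward, office, ...)
    tests: list[str]         # requested test codes
    lab_number: int | None = None


def accession(request: TestRequest) -> str:
    """Assign a laboratory number and return the text a barcode label could encode."""
    request.lab_number = next(_lab_numbers)
    # The printed sticker would carry this value as a barcode, letting automated
    # analyzers look up the pending tests uploaded from the LIS.
    return f"LAB-{request.lab_number}|{request.patient_id}|{','.join(request.tests)}"


req = TestRequest(patient_id="P123", ordering_location="Ward 5", tests=["CBC", "BMP"])
print(accession(req))  # e.g. LAB-1000|P123|CBC,BMP
```

In a real system the label content, numbering scheme and test codes are set by the LIS vendor and local policy; the sketch only shows how a single identifier ties the specimen, the patient and the pending tests together.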
Specimens are prepared for analysis in various ways. For example, chemistry samples are usually centrifuged and the serum or plasma is separated and tested. If the specimen needs to go on more than one analyzer, it can be divided into separate tubes.
Many specimens end up in one or more sophisticated automated analysers that process a fraction of the sample to return one or more test results. Some laboratories use robotic sample handlers (laboratory automation) to optimize the workflow and reduce the risk of contamination from sample handling by the staff.
The work flow in a hospital laboratory is usually heaviest from 2:00 am to 10:00 am. Nurses and doctors generally have their patients tested at least once a day with common tests such as complete blood counts and chemistry profiles. These orders are typically drawn during a morning run by phlebotomists for results to be available in the patient's charts for the attending physicians to consult during their morning rounds. Another busy time for the lab is after 3:00 pm when private practice physician offices are closing. Couriers will pick up specimens that have been drawn throughout the day and deliver them to the lab. Also, couriers will stop at outpatient drawing centers and pick up specimens. These specimens will be processed in the evening and overnight to ensure results will be available the following day.
Laboratory informatics
The large amount of information processed in laboratories is managed by a system of software programs, computers, and terminology standards that exchange data about patients, test requests, and test results known as a Laboratory information system or LIS. The LIS is often interfaced with the hospital information system, EHR and/or laboratory instruments. Formats for terminologies for test processing and reporting are being standardized with systems such as Logical Observation Identifiers Names and Codes (LOINC) and Nomenclature for Properties and Units terminology (NPU terminology).
These systems enable hospitals and labs to order the correct test requests for each patient, keep track of individual patient and specimen histories, and help guarantee a better quality of results. Results are made available to care providers electronically or by printed hard copies for patient charts.
Result analysis, validation and interpretation
According to various regulations, such as the international ISO 15189 norm, all pathological laboratory results must be verified by a competent professional. In some countries, staff composed of clinical scientists do the majority of this work inside the laboratory, with certain abnormal results referred to the relevant pathologist. In many countries, doctoral-level clinical laboratory scientists are responsible for limited interpretation of testing results in their discipline. Interpretation of results can be assisted by software in order to validate normal or unmodified results.
In other testing areas, only professional medical staff (pathologists or clinical laboratory scientists) are involved with interpretation and consulting. Medical staff are sometimes also needed to explain pathology results to physicians. For a simple result given by phone, or to explain a technical problem, a medical technologist or medical lab scientist can often provide additional information.
Medical laboratory departments in some countries are exclusively directed by a specialized doctor of laboratory science. In others, a consultant, medical or non-medical, may head the department. In Europe and some other countries, clinical scientists with a master's-level education may be qualified to head the department. Others may have a PhD and can have an exit qualification equivalent to that of medical staff (e.g., FRCPath in the UK).
In France, only medical staff (a Pharm.D. or M.D. specialized in anatomical pathology or clinical laboratory science) can discuss laboratory results.
Medical laboratory accreditation
Credibility of medical laboratories is paramount to the health and safety of the patients relying on the testing services provided by these labs. Credentialing agencies vary by country. The international standard in use today for the accreditation of medical laboratories is ISO 15189 - Medical laboratories - Requirements for quality and competence.
In the United States, billions of dollars are spent on unaccredited lab tests, such as laboratory developed tests which do not require accreditation or FDA approval; about a billion USD a year is spent on US autoimmune LDTs alone. Accreditation is performed by the Joint Commission, the College of American Pathologists, the AAB (American Association of Bioanalysts), and other state and federal agencies. Legislative guidelines are provided under CLIA 88 (Clinical Laboratory Improvement Amendments), which regulates medical laboratory testing and personnel.
In Australia, the accrediting body is NATA; all laboratories must be NATA accredited to receive payment from Medicare.
In France the accrediting body is the Comité français d'accréditation (COFRAC). In 2010, modification of legislation established ISO 15189 accreditation as an obligation for all clinical laboratories.
In the United Arab Emirates, the Dubai Accreditation Department (DAC) is the accreditation body that is internationally recognised by the International Laboratory Accreditation Cooperation (ILAC) for many facilities and groups, including Medical Laboratories, Testing and Calibration Laboratories, and Inspection Bodies.
In Hong Kong, the accrediting body is Hong Kong Accreditation Service (HKAS). On 16 February 2004, HKAS launched its medical testing accreditation programme.
In Canada, laboratory accreditation is not mandatory, but is becoming more and more popular. Accreditation Canada (AC) is the national reference. Provincial oversight bodies, such as the LSPQ in Quebec and the IQMH in Ontario, mandate laboratory participation in external quality assessment (EQA) programs.
Industry
The laboratory industry is a part of the broader healthcare and health technology industry. Companies exist at various levels, including clinical laboratory services, suppliers of instrumentation equipment and consumable materials, and suppliers and developers of diagnostic tests themselves (often by biotechnology companies).
Clinical laboratory services include large multinational corporations such as LabCorp, Quest Diagnostics, and Sonic Healthcare, but a significant portion of revenue, estimated at 60% in the United States, is generated by hospital labs. As of 2018, total global revenue for these companies was projected to reach $146 billion by 2024. Another estimate places the market size at $205 billion, reaching $333 billion by 2023. The American Association for Clinical Chemistry (AACC) represents professionals in the field.
Clinical laboratories are supplied by other multinational companies which focus on materials and equipment, which can be used for both scientific research and medical testing. The largest of these is Thermo Fisher Scientific. In 2016, global life sciences instrumentation sales were around $47 billion, not including consumables, software, and services. In general, laboratory equipment includes lab centrifuges, transfection solutions, water purification systems, extraction techniques, gas generators, concentrators and evaporators, fume hoods, incubators, biological safety cabinets, bioreactors and fermenters, microwave-assisted chemistry, lab washers, and shakers and stirrers.
United States
In the United States, estimated total revenue as of 2016 was $75 billion, about 2% of total healthcare spending. In 2016, an estimated 60% of revenue was generated by hospital labs, with 25% generated by two independent companies (LabCorp and Quest). Hospital labs may also perform testing for outside clients, known as outreach; however, health insurers may pay the hospitals more than they would pay a laboratory company for the same test, and as of 2016 these markups were being questioned by insurers. Rural hospitals, in particular, can bill for lab outreach under Medicare's 70/30 shell rule.
Laboratory developed tests are designed and developed inside a specific laboratory and do not require FDA approval; due to technological innovations, they have become more common and are estimated at a total value of $11 billion in 2016.
Due to the rise of high-deductible health plans, laboratories have sometimes struggled to collect when billing patients; consequently, some laboratories have shifted to become more "consumer-focused".
See also
ARUP Laboratories
Healthcare scientist
Point-of-care testing
References
Further reading
Morris, S., Otto, N. C., Golemboski, K. (2013). Improving patient safety and healthcare quality in the 21st century—Competencies required of future medical laboratory science practitioners. Clinical Laboratory Science, 26, 200–204.
Archibong, F., Atangwho, A., Ayuk, E. A., Okoye, E. I., Atroger, M., Okokon, B. I. (2019). Medical law: exploring doctor's knowledge on the laws regulating clinical and medical laboratories in Nigeria. Nigerian Journal of Medicine, 28(4), 386–392.
Plebani, M., Laposata, M., Lippi, G. (2019). Driving the route of laboratory medicine: a manifesto for the future. Internal and Emergency Medicine, 14, 337–340.
Goulding, M. H., Graham, L., Chorney, D., Rajendram, R. (2020). The use of interprofessional stimulation to improve collaboration and problem solving among undergraduate BHSc medical laboratory science and BScN nursing students. Canadian Journal of Medical Laboratory Science, 82(2), 25–33.
Clinical pathology
Laboratory types
Medical diagnosis
Systemic
Systemic describes something that is fundamental to a predominant social, economic, or political practice. It may refer to:
In medicine
In medicine, systemic means affecting the whole body, or at least multiple organ systems. It is in contrast with topical or local.
Systemic administration, a route of administration of medication so that the entire body is affected
Systemic circulation, carries oxygenated blood from the heart to the body and then returns deoxygenated blood back to the heart
Systemic disease, an illness that affects multiple organs, systems or tissues, or the entire body
Systemic effect, an adverse effect of an exposure that affects the body as a whole, rather than one part
Systemic inflammatory response syndrome, an inflammatory state affecting the whole body, frequently in response to infection
Systemic lupus erythematosus, a chronic autoimmune connective tissue disease that can affect any part of the body
Systemic scleroderma, also known as systemic sclerosis, a systemic connective tissue disease
Systemic venous system, refers to veins that drain into the right atrium without passing through two vascular beds
Systemic exertion intolerance disease, a new name for chronic fatigue syndrome proposed by the Institute of Medicine in 2015
In biology
Systemic acquired resistance, a "whole-plant" resistance response that occurs following an earlier localized exposure to a pathogen
Systemic pesticide, a pesticide that enters and moves freely within the organism under treatment
Other uses
Systemic (amateur extrasolar planet search project), a research project to locate extrasolar planets using distributed computing
Systemic (album), a 2023 album by the band Divide and Dissolve
Systemic bias, the inherent tendency of a process to favor particular outcomes
Systemic functional grammar, a model of grammar that considers language as a system
Systemic functional linguistics, an approach to linguistics that considers language as a system
Systemic psychology or systems psychology, a branch of applied psychology based on systems theory and thinking
Systemic risk, the risk of collapse of an entire financial system or market, as opposed to risk associated with any one entity
Systemic shock, a shock to any system strong enough to drive it out of equilibrium, can refer to a change in many fields
Systemic therapy, a school of psychology dealing with the interactions of groups and their interactional patterns and dynamics
See also
Systematic (disambiguation)
Systematics (disambiguation)
Systemics
Hypertensive heart disease
Hypertensive heart disease includes a number of complications of high blood pressure that affect the heart. While there are several definitions of hypertensive heart disease in the medical literature, the term is most widely used in the context of the International Classification of Diseases (ICD) coding categories. The definition includes heart failure and other cardiac complications of hypertension when a causal relationship between the heart disease and hypertension is stated or implied on the death certificate. In 2013 hypertensive heart disease resulted in 1.07 million deaths as compared with 630,000 deaths in 1990.
According to ICD-10, hypertensive heart disease (I11), and its subcategories: hypertensive heart disease with heart failure (I11.0) and hypertensive heart disease without heart failure (I11.9) are distinguished from chronic rheumatic heart diseases (I05-I09), other forms of heart disease (I30-I52) and ischemic heart diseases (I20-I25). However, since high blood pressure is a risk factor for atherosclerosis and ischemic heart disease, death rates from hypertensive heart disease provide an incomplete measure of the burden of disease due to high blood pressure.
Signs and symptoms
The symptoms and signs of hypertensive heart disease will depend on whether or not it is accompanied by heart failure. In the absence of heart failure, hypertension, with or without enlargement of the heart (left ventricular hypertrophy), is usually symptomless.
Symptoms, signs and consequences of congestive heart failure can include:
Fatigue
Irregular pulse or palpitations
Swelling of feet and ankles
Weight gain
Nausea
Shortness of breath
Difficulty sleeping flat in bed (orthopnea)
Bloating and abdominal pain
Greater need to urinate at night
An enlarged heart (cardiomegaly)
Left ventricular hypertrophy and left ventricular remodeling
Diminished coronary flow reserve and silent myocardial ischemia
Coronary heart disease and accelerated atherosclerosis
Heart failure with normal left ventricular ejection fraction (HFNEF), often termed diastolic heart failure
Atrial fibrillation, other cardiac arrhythmias, or sudden cardiac death
Heart failure can develop insidiously over time or patients can present acutely with acute heart failure or acute decompensated heart failure and pulmonary edema due to sudden failure of pump function of the heart. Sudden failure can be precipitated by a variety of causes, including myocardial ischemia, marked increases in blood pressure, or cardiac arrhythmias.
Diagnosis
Differential diagnosis
Other conditions can share features with hypertensive heart disease and need to be considered in the differential diagnosis. For example:
Coronary artery disease or ischemic heart diseases due to atherosclerosis
Hypertrophic cardiomyopathy
Left ventricular hypertrophy in athletes
Congestive heart failure or heart failure with normal ejection fraction due to other causes
Atrial fibrillation or other disorders of cardiac rhythm due to other causes
Sleep apnea
Prevention
Because high blood pressure is usually symptomless, people can have the condition without knowing it. Diagnosing high blood pressure early can help prevent heart disease, stroke, eye problems, and chronic kidney disease.
The risk of cardiovascular disease and death can be reduced by lifestyle modifications, including dietary advice, promotion of weight loss and regular aerobic exercise, moderation of alcohol intake and cessation of smoking. Drug treatment may also be needed to control the hypertension and reduce the risk of cardiovascular disease, manage the heart failure, or control cardiac arrhythmias. Patients with hypertensive heart disease should avoid taking over-the-counter nonsteroidal anti-inflammatory drugs (NSAIDs), cough suppressants, and decongestants containing sympathomimetics, unless otherwise advised by their physician, as these can exacerbate hypertension and heart failure.
Blood pressure goals
According to JNC 7, BP goals should be as follows (a minimal classification sketch follows the list):
Less than 140/90 mm Hg in patients with uncomplicated hypertension
Less than 130/85 mm Hg in patients with diabetes and those with renal disease with less than 1 g/24-hour proteinuria
Less than 125/75 mm Hg in patients with renal disease and more than 1 g/24-hour proteinuria
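As an illustration of the thresholds listed above, the following Python sketch maps a patient's status to the corresponding JNC 7 goal. The function and parameter names are hypothetical, the logic simply restates the three bullet points, and this is not clinical decision-support software.

```python
# Minimal sketch of the JNC 7 goals listed above; illustrative only.
def jnc7_bp_goal(has_diabetes: bool, has_renal_disease: bool,
                 proteinuria_g_per_24h: float = 0.0) -> str:
    """Return the blood pressure goal (mm Hg) corresponding to the list above."""
    if has_renal_disease and proteinuria_g_per_24h > 1.0:
        return "less than 125/75 mm Hg"
    if has_diabetes or has_renal_disease:
        return "less than 130/85 mm Hg"
    return "less than 140/90 mm Hg"  # uncomplicated hypertension


print(jnc7_bp_goal(has_diabetes=True, has_renal_disease=False))  # less than 130/85 mm Hg
```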
Treatment
The medical care of patients with hypertensive heart disease falls into two categories:
Treatment of hypertension
Prevention (and, if present, treatment) of heart failure or other cardiovascular disease
Epidemiology
Hypertension or high blood pressure affects at least 26.4% of the world's population. Hypertensive heart disease is only one of several diseases attributable to high blood pressure. Other diseases caused by high blood pressure include ischemic heart disease, cancer, stroke, peripheral arterial disease, aneurysms and kidney disease. Hypertension increases the risk of heart failure by two or three-fold and probably accounts for about 25% of all cases of heart failure. In addition, hypertension precedes heart failure in 90% of cases, and the majority of heart failure in the elderly may be attributable to hypertension. Hypertensive heart disease was estimated to be responsible for 1.0 million deaths worldwide in 2004 (or approximately 1.7% of all deaths globally), and was ranked 13th in the leading global causes of death for all ages. A world map shows the estimated disability-adjusted life years per 100,000 inhabitants lost due to hypertensive heart disease in 2004.
Sex differences
There are more women than men with hypertension, and, although men develop hypertension earlier in life, hypertension in women is less well controlled. The consequences of high blood pressure in women are a major public health problem and hypertension is a more important contributory factor in heart attacks in women than men. Until recently women have been under-represented in clinical trials in hypertension and heart failure. Nevertheless, there is some evidence that the effectiveness of antihypertensive drugs differs between men and women and that treatment for heart failure may be less effective in women.
Ethnic differences
Studies in the US indicate that a disproportionate number of African Americans have hypertension compared with non-Hispanic whites and Mexican Americans, and that they have a greater burden of hypertensive heart disease. Heart failure is more common in people of African American ethnicity, mortality from heart failure is also consistently higher than in white patients, and it develops at an earlier age. Recent data suggests that rates of hypertension are increasing more rapidly in African Americans than other ethnic groups. The excess of high blood pressure and its consequences in African Americans is likely to contribute to their shorter life expectancy compared with white Americans.
References
Hypertension
Heterotrophic nutrition
Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. Unlike green plants, they cannot make their own food. Heterotrophic organisms have to take in all the organic substances they need to survive.
All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi are saprotrophic, meaning they secrete enzymes extracellularly onto their food, breaking it down into smaller, soluble molecules that can diffuse back into the fungus.
Description
All eukaryotes except for green plants and algae are unable to manufacture their own food: They obtain food from other organisms. This mode of nutrition is also known as heterotrophic nutrition.
All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds that can be absorbed (digestion). The soluble products of digestion are then broken down to release energy (respiration). All heterotrophs depend on autotrophs for their nutrition. There are only four types of heterotrophic nutrition.
Footnotes
References
Trophic ecology
Biological interactions
Body orifice
A body orifice is any opening in the body of an animal.
External
In a typical mammalian body such as the human body, the external body orifices are:
The nostrils, for breathing and the associated sense of smell
The mouth, for eating, drinking, breathing, and vocalizations such as speech
The ear canals, for the sense of hearing
The nasolacrimal ducts, to carry tears from the lacrimal sac into the nasal cavity
The anus, for defecation
The urinary meatus, for urination in males and females and ejaculation in males
In females, the vagina, for menstruation, copulation and birth
The nipple orifices
Other animals may have some other body orifices:
cloaca, in birds, reptiles, amphibians, and a few mammals
siphon in mollusks, arthropods, and some other animals
Internal
Internal orifices include the orifices of the outflow tracts of the heart, between the heart valves.
See also
Internal urethral orifice
Mucosa
Mucocutaneous boundary
Meatus
Body cavity
References
Anatomy
Evolutionary medicine
Evolutionary medicine or Darwinian medicine is the application of modern evolutionary theory to understanding health and disease. Modern biomedical research and practice have focused on the molecular and physiological mechanisms underlying health and disease, while evolutionary medicine focuses on the question of why evolution has shaped these mechanisms in ways that may leave us susceptible to disease. The evolutionary approach has driven important advances in the understanding of cancer, autoimmune disease, and anatomy. Medical schools have been slower to integrate evolutionary approaches because of limitations on what can be added to existing medical curricula. The International Society for Evolution, Medicine and Public Health coordinates efforts to develop the field. It owns the Oxford University Press journal Evolution, Medicine and Public Health and The Evolution and Medicine Review.
Core principles
Utilizing the Delphi method, 56 experts from a variety of disciplines, including anthropology, medicine, nursing, and biology, agreed upon 14 core principles intrinsic to the education and practice of evolutionary medicine. These 14 principles can be further grouped into five general categories: question framing, evolution I and II (with II involving a higher level of complexity), evolutionary trade-offs, reasons for vulnerability, and culture.
Human adaptations
Adaptation works within constraints, makes compromises and trade-offs, and occurs in the context of different forms of competition.
Constraints
Adaptations can only occur if they are evolvable. Some adaptations which would prevent ill health are therefore not possible.
DNA cannot be totally prevented from undergoing somatic replication corruption; this has meant that cancer, which is caused by somatic mutations, has not (so far) been eliminated by natural selection.
Humans cannot biosynthesize vitamin C, and so risk scurvy, vitamin C deficiency disease, if dietary intake of the vitamin is insufficient.
Retinal neurons and their axon output have evolved to be inside the layer of retinal pigment cells. This creates a constraint on the evolution of the visual system such that the optic nerve is forced to exit the retina through a point called the optic disc. This, in turn, creates a blind spot. More importantly, it makes vision vulnerable to increased pressure within the eye (glaucoma) since this cups and damages the optic nerve at this point, resulting in impaired vision.
Other constraints occur as the byproduct of adaptive innovations.
Trade-offs and conflicts
One constraint upon selection is that different adaptations can conflict, which requires a compromise between them to ensure an optimal cost-benefit tradeoff.
Running efficiency in women, and birth canal size
Encephalization, and gut size
Skin pigmentation protection from UV, and the skin synthesis of vitamin D
Speech and its use of a descended larynx, and increased risk of choking
Competition effects
Different forms of competition exist and these can shape the processes of genetic change.
mate choice and disease susceptibility
genomic conflict between mother and fetus that results in pre-eclampsia
Lifestyle
Humans evolved to live as simple hunter-gatherers in small tribal bands, while contemporary humans have a more complex life. This change may make present-day humans susceptible to lifestyle diseases.
Diet
In contrast to the diet of early hunter-gatherers, the modern Western diet often contains high quantities of fat, salt, and simple carbohydrates, such as refined sugars and flours.
Trans fat health risks
Dental caries
High GI foods
Modern diet based on "common wisdom" regarding diets in the paleolithic era
Among different countries, the incidence of colon cancer varies widely, and the extent of exposure to a Western pattern diet may be a factor in cancer incidence.
Life expectancy
Examples of aging-associated diseases are atherosclerosis and cardiovascular disease, cancer, arthritis, cataracts, osteoporosis, type 2 diabetes, hypertension and Alzheimer's disease. The incidence of all of these diseases increases rapidly with aging (increases exponentially with age, in the case of cancer).
Of the roughly 150,000 people who die each day across the globe, about two thirds—100,000 per day—die of age-related causes. In industrialized nations, the proportion is much higher, reaching 90%.
Exercise
Many contemporary humans engage in little physical exercise compared to the physically active lifestyles of ancestral hunter-gatherers. Prolonged periods of inactivity may have only occurred in early humans following illness or injury, so a modern sedentary lifestyle may continuously cue the body to trigger life preserving metabolic and stress-related responses such as inflammation, and some theorize that this causes chronic diseases.
Cleanliness
Contemporary humans in developed countries are mostly free of parasites, particularly intestinal ones. This is largely due to frequent washing of clothing and the body, and improved sanitation. Although such hygiene can be very important when it comes to maintaining good health, it can be problematic for the proper development of the immune system. The hygiene hypothesis is that humans evolved to be dependent on certain microorganisms that help establish the immune system, and modern hygiene practices can prevent necessary exposure to these microorganisms. "Microorganisms and macroorganisms such as helminths from mud, animals, and feces play a critical role in driving immunoregulation" (Rook, 2012). Essential microorganisms play a crucial role in building and training immune functions that fight off and repel some diseases, and protect against excessive inflammation, which has been implicated in several diseases. For instance, recent studies have found evidence supporting inflammation as a contributing factor in Alzheimer's Disease.
Specific explanations
This is a partial list: all links here go to a section describing or debating its evolutionary origin.
Life stage related
Adipose tissue in human infants
Arthritis and other chronic inflammatory diseases
Ageing
Alzheimer disease
Childhood
Menarche
Menopause
Menstruation
Morning sickness
Other
Atherosclerosis
Arthritis and other chronic inflammatory diseases
Cough
Cystic fibrosis
Dental occlusion
Diabetes Type II
Diarrhea
Essential hypertension
Fever
Gestational hypertension
Gout
Iron deficiency (paradoxical benefits)
Obesity
Phenylketonuria
Placebos
Osteoporosis
Red blood cell polymorphism disorders
Sickle cell anemia
Sickness behavior
Women's reproductive cancers
Evolutionary psychology
As noted in the examples below, adaptationist hypotheses regarding the etiology of psychological disorders are often based on analogies with evolutionary perspectives on medicine and physiological dysfunctions (see, in particular, Randy Nesse and George C. Williams' book Why We Get Sick).
Evolutionary psychiatrists and psychologists suggest that some mental disorders likely have multiple causes.
See several topic areas, and the associated references, below.
Agoraphobia
Anxiety
Depression
Drug abuse
Schizophrenia
Unhappiness
History
Charles Darwin did not discuss the implications of his work for medicine, though biologists quickly appreciated the germ theory of disease and its implications for understanding the evolution of pathogens, as well as an organism's need to defend against them.
Medicine, in turn, ignored evolution, and instead focused (as done in the hard sciences) upon proximate mechanical causes.
George C. Williams was the first to apply evolutionary theory to health in the context of senescence. Also in the 1950s, John Bowlby approached the problem of disturbed child development from an evolutionary perspective upon attachment.
An important theoretical development was Nikolaas Tinbergen's distinction made originally in ethology between evolutionary and proximate mechanisms.
Randolph M. Nesse has summarized its relevance to medicine.
The 1980 paper by Paul Ewald, "Evolutionary Biology and the Treatment of Signs and Symptoms of Infectious Disease", and the 1991 paper by Williams and Nesse, "The Dawn of Darwinian Medicine", were key developments. The latter paper "drew a favorable reception" and led to a book, Why We Get Sick (published as Evolution and Healing in the UK). In 2008, an online journal started: Evolution and Medicine Review.
In 2000, Paul Sherman hypothesised that morning sickness could be an adaptation that protects the developing fetus from foodborne illnesses, some of which can cause miscarriage or birth defects, such as listeriosis and toxoplasmosis.
See also
Evolutionary therapy
Evolutionary psychiatry
Evolutionary physiology
Evolutionary psychology
Evolutionary developmental psychopathology
Evolutionary approaches to depression
Illness
Paleolithic lifestyle
Universal Darwinism
References
Further reading
Books
Online articles
External links
Evolution and Medicine Network
Special Issue of Evolutionary Applications on Evolutionary Medicine
Evolutionary biology
Clinical medicine
Physiological psychology
Physiological psychology is a subdivision of behavioral neuroscience (biological psychology) that studies the neural mechanisms of perception and behavior through direct manipulation of the brains of nonhuman animal subjects in controlled experiments. This field of psychology takes an empirical and practical approach when studying the brain and human behavior. Most scientists in this field believe that the mind is a phenomenon that stems from the nervous system. By studying and gaining knowledge about the mechanisms of the nervous system, physiological psychologists can uncover many truths about human behavior. Unlike other subdivisions within biological psychology, the main focus of research in physiological psychology is the development of theories that describe brain-behavior relationships.
Physiological psychology studies many topics relating to the body's response to a behavior or activity in an organism. It concerns the brain cells, structures, components, and chemical interactions that are involved in producing actions. Psychologists in this field usually focus their attention on topics such as sleep, emotion, ingestion, senses, reproductive behavior, learning/memory, communication, psychopharmacology, and neurological disorders. These studies all center on how the nervous system interacts with other systems in the body to produce a specific behavior.
Nervous system
The nervous system can be described as a control system that interconnects the other body systems. It consists of the brain, spinal cord, and other nerve tissues throughout the body. The system's primary function is to react to internal and external stimuli in the human body. It uses electrical and chemical signals to send out responses to different parts of the body, and it is made up of nerve cells called neurons. Through the system, messages are transmitted to body tissues such as a muscle. There are two major subdivisions in the nervous system known as the central and peripheral nervous system. The central nervous system is composed of the brain and spinal cord. The brain is the control center of the body and contains millions of neural connections. This organ is responsible for sending and receiving messages from the body and its environment. Each part of the brain is specialized for different aspects of the human being. For example, the temporal lobe has a major role in vision and audition, whereas the frontal lobe is significant for motor function and problem solving. The spinal cord is attached to the brain and serves as the main connector of nerves and the brain. The nerve tissue that lies outside of the central nervous system is collectively known as the peripheral nervous system. This system can be further divided into the autonomic and somatic nervous system. The autonomic system can be referred to as the involuntary component that regulates bodily organs and mechanisms, such as digestion and respiration. The somatic system is responsible for relaying messages back and forth from the brain to various parts of the body, whether it is taking in sensory stimuli and sending it to the brain or sending messages from the brain in order for muscles to contract and relax.
The nervous system is a complex and intricate network of cells and fibers that serves as the communication hub within the human body. Consisting of the central nervous system (CNS), which includes the brain and spinal cord, and the peripheral nervous system (PNS), which extends throughout the rest of the body, this system is responsible for transmitting signals between different parts of the body and facilitating the coordination of various physiological functions. Neurons, the fundamental building blocks of the nervous system, transmit electrical and chemical signals, enabling the rapid exchange of information. The CNS, as the command center, processes sensory input, initiates responses, and stores memories. In contrast, the PNS connects the CNS to organs, muscles, and glands, allowing for voluntary and involuntary actions. The intricate interplay of the nervous system is essential for maintaining homeostasis, responding to stimuli, and orchestrating complex behaviors and cognitive processes. Understanding the structure and function of the nervous system is fundamental to comprehending various neurological disorders and advancing medical interventions to support overall human health and well-being.
Emotion
Emotion constitutes a major influence in determining human behavior. It is thought that emotions are predictable and are rooted in different areas of the brain, depending on the emotion evoked.
An emotional response can be divided into three major categories including behavioral, autonomic, and hormonal.
The behavioral component is explained by the muscular movements that accompany the emotion. For example, if a person is experiencing fear, a possible behavioral mechanism would be to run away from the fear factor.
The autonomic aspect of an emotion provides the ability to react to the emotion. An example is the fight-or-flight response, which the body initiates automatically in response to signals from the brain.
Lastly, hormones released facilitate the autonomic response. For example, the fight-or-flight response would be aided by the release of chemicals such as epinephrine and norepinephrine, both secreted by the adrenal gland, which further increase blood flow and resupply the muscles with oxygen and nutrients.
Emotions in decision making can cause irrational outcomes. Two types of emotion occur in the decision-making process: anticipated emotions and immediate emotions. With anticipated emotions, people experience prospective losses and gains differently depending on the situation. Immediate emotions are considered true emotions, integrating cognition with the somatic or bodily components of the autonomic nervous system to express the emotion externally.
Emotion activates several areas of the brain inside the limbic system and varies per emotion:
Fear: the amygdala is the main component for acquisition, storage, and expression of fear.
Lesions on the central amygdaloid can lead to disruptions in the behavioral and autonomic emotional responses of fear.
Anger/aggression: the hypothalamus and amygdala work together to send inhibitory/excitatory impulses to the periaqueductal gray, which then carries out mostly defensive behaviors.
Happiness: the ventral tegmental area works closely with the prefrontal cortex to produce emotions of happiness as they lie upon the same dopamine pathways.
Several hormones are secreted in response to emotions, ranging from general emotional tuning to specific hormones released by certain emotions alone:
Emotions are seen as a positive feedback cycle in the brain. Oxytocin acts to over-sensitize the limbic system to emotional responses, leading to even larger emotional responses; as even more oxytocin is secreted in response to these emotions, the response increases further. In addition to its general effects on the limbic system, oxytocin serves a more specific purpose in the body. It acts as an anxiety suppressant, mainly in stressful and social situations, providing a calming effect during these high-stress situations. Oxytocin is also seen as a strong hormone in maternal attachment and aggression found in new mothers. This hormone also plays a slight part in the female desire to pair and mate.
Another hormone secreted as a direct response to emotion is adrenocorticotropic hormone (ACTH), released in response to fearful stimuli. ACTH is secreted by the anterior pituitary in response to fear and plays a role in facilitating or inhibiting the behaviors and actions that follow. In most cases, a high ACTH secretion will lead to the inhibition of actions that would produce the same fearful response that just occurred.
Happiness is primarily controlled by the levels of dopamine and serotonin in the body. Both are monoamine neurotransmitters that act on different sites: serotonin acts on receptors in the gastrointestinal tract, while dopamine acts on receptors in the brain, with both performing similar functions. Dopamine is known as the primary hormone acting on the brain's reward system, though this has recently become a point of debate in the research community. Less is known about how serotonin carries out its function in reducing depression, only that it works. Selective serotonin reuptake inhibitors (SSRIs) are the type of drug given to patients with depression; they leave serotonin in the synapse longer so that it can continue to act on its receptors.
Sleep
Sleep is a behavior that is provoked by the body initiating the feeling of sleepiness in order for people to rest for usually several hours at a time. During sleep, there is a reduction of awareness, responsiveness, and movement. On average, an adult human sleeps between seven and eight hours per night. There is a minute percentage that sleeps less than five to six hours, which is also a symptom of sleep deprivation, and an even smaller percentage of people who sleep more than ten hours a day. Oversleeping has been shown to have a correlation with higher mortality. There are no benefits to oversleeping and it can result in sleep inertia, which is the feeling of drowsiness for a period of time after waking. There are two phases of sleep: rapid eye movement (REM) and Non-REM sleep (NREM).
REM sleep is the less restful stage, in which one dreams and experiences muscle movements or twitches. Also during this stage of sleep, a person's heart rate and breathing are typically irregular. The electrical activity in the brain during REM sleep produces signals of roughly the same intensity as those of wakefulness, and brain energy use during REM sleep, measured by oxygen and glucose metabolism, is equal to that of the waking state. EEGs are used to observe these patterns in the brain during the different stages of REM and Non-REM sleep.
Non-REM sleep, also sometimes referred to as slow-wave sleep, is associated with deep sleep. The body's blood pressure, heart rate, and breathing are generally significantly decreased compared to an alert state. Dreaming can occur in this state; however, a person is usually unable to remember these dreams because of the depth of sleep and the lack of memory consolidation during this stage. REM cycles typically occur in 90-minute intervals and increase in length as the night's sleep progresses. In a typical night's rest, a person will have about four to six cycles of REM and Non-REM sleep.
Sleep is important because it allows the body to restore the energy depleted during wakefulness and provides time for recovery, since cell division occurs fastest during the Non-REM cycle. Sleep is also important for maintaining the functioning of the immune system, as well as helping with the consolidation of previously learned and experienced information into memory. When a person is sleep deprived, recall of information is typically decreased. Dreams that occur during sleep have been shown to increase mental creativity and problem solving skills.
As the time since the last Non-REM cycle increases, the body's drive towards sleep also increases. Physical and environmental factors can have a great influence over the body's drive towards sleep. Mental stimulation, pain and discomfort, higher or lower than normal environmental temperatures, exercise, light exposure, noise, hunger, and overeating all result in increased wakefulness. In contrast, sexual activity and some foods such as carbohydrates and dairy products promote sleep.
Careers in the field
In the past, physiological psychologists received a good portion of their training in psychology departments of major universities. Currently, physiological psychologists are also being trained in behavioral neuroscience or biological psychology programs that are affiliated with psychology departments, or in interdisciplinary neuroscience programs. Most physiological psychologists receive PhDs in neuroscience or a related subject and either teach and carry out research at colleges or universities, are employed for research for government laboratories or other private organizations, or are hired by pharmaceutical companies to study the effects that various drugs have on an individual's behavior.
Psychology concentrations include health psychology, forensic psychology, clinical psychology, industrial and organizational psychology, and school psychology. Health psychology is a discipline concerned with the psychological, behavioral, and cultural factors that affect physical health and illness; a psychologist focusing on health psychology takes a biopsychosocial approach with patients. Forensic psychologists usually have a background in criminal justice and pursue a master's degree in forensic psychology. Clinical psychology can be pursued through a master's or PsyD program, which offers more research or academic experience and trains students in psychological assessment, consultation, and psychotherapy. Industrial and organizational psychology focuses on the corporate world, helping organizations improve workflow and their relationships with employees; this increases job satisfaction and the achievement of work goals through surveys and reinforcement with a reward system between employee and employer. School psychologists partner with schools to provide in-house counseling assistance.
Medical treatment
Pharmacology is a biomedical science that investigates the characteristics of chemicals that affect biological function and their relationships to other parts of the body. This growing knowledge of drugs becomes part of traditional medical practice, from which patients may benefit. Many people cannot afford mental healthcare, so they seek options from clinics or whatever assistance is offered through work or school. Medical insurance can help cover expenses, and government-assisted insurance can help as well. Programs such as the company BetterHelp provide mental health services at discounted rates, as well as financial aid to help reduce costs.
See also
Cognitive neuroscience
Psychophysics
Psychophysiology
References
Behavioral neuroscience
Human physiology
Thrombosis
Thrombosis is the formation of a blood clot inside a blood vessel, obstructing the flow of blood through the circulatory system. When a blood vessel (a vein or an artery) is injured, the body uses platelets (thrombocytes) and fibrin to form a blood clot to prevent blood loss. Even when a blood vessel is not injured, blood clots may form in the body under certain conditions. A clot, or a piece of the clot, that breaks free and begins to travel around the body is known as an embolus.
Thrombosis may occur in veins (venous thrombosis) or in arteries (arterial thrombosis). Venous thrombosis (sometimes called DVT, deep vein thrombosis) leads to a blood clot in the affected part of the body, while arterial thrombosis (and, rarely, severe venous thrombosis) affects the blood supply and leads to damage of the tissue supplied by that artery (ischemia and necrosis). A piece of either an arterial or a venous thrombus can break off as an embolus, which could then travel through the circulation and lodge somewhere else as an embolism. This type of embolism is known as a thromboembolism. Complications can arise when a venous thromboembolism (commonly called a VTE) lodges in the lung as a pulmonary embolism. An arterial embolus may travel further down the affected blood vessel, where it can lodge as an embolism.
Signs and symptoms
Thrombosis is generally defined by the type of blood vessel affected (arterial or venous thrombosis) and the precise location of the blood vessel or the organ supplied by it.
Venous thrombosis
Deep vein thrombosis
Deep vein thrombosis (DVT) is the formation of a blood clot within a deep vein. It most commonly affects leg veins, such as the femoral vein.
Three factors are important in the formation of a blood clot within a deep vein—these are:
the rate of blood flow,
the thickness of the blood and
qualities of the vessel wall.
Classical signs of DVT include swelling, pain and redness of the affected area.
Paget-Schroetter disease
Paget-Schroetter disease or upper extremity DVT (UEDVT) is the obstruction of an arm vein (such as the axillary vein or subclavian vein) by a thrombus. The condition usually comes to light after vigorous exercise and usually presents in younger, otherwise healthy people. Men are affected more than women.
Budd-Chiari syndrome
Budd-Chiari syndrome is the blockage of a hepatic vein or of the hepatic part of the inferior vena cava. This form of thrombosis presents with abdominal pain, ascites and enlarged liver. Treatment varies between therapy and surgical intervention by the use of shunts.
Portal vein thrombosis
Portal vein thrombosis affects the hepatic portal vein, which can lead to portal hypertension and reduction of the blood supply to the liver. It usually happens in the setting of another disease such as pancreatitis, cirrhosis, diverticulitis or cholangiocarcinoma.
Renal vein thrombosis
Renal vein thrombosis is the obstruction of the renal vein by a thrombus. This tends to lead to reduced drainage from the kidney.
Cerebral venous sinus thrombosis
Cerebral venous sinus thrombosis (CVST) is a rare form of stroke which results from the blockage of the dural venous sinuses by a thrombus. Symptoms may include headache, abnormal vision, any of the symptoms of stroke such as weakness of the face and limbs on one side of the body and seizures. The diagnosis is usually made with a CT or MRI scan. The majority of persons affected make a full recovery. The mortality rate is 4.3%.
Jugular vein thrombosis
Jugular vein thrombosis is a condition that may occur due to infection, intravenous drug use or malignancy. Jugular vein thrombosis can have a varying list of complications, including: systemic sepsis, pulmonary embolism, and papilledema. Though characterized by a sharp pain at the site of the vein, it can prove difficult to diagnose, because it can occur at random.
Cavernous sinus thrombosis
Cavernous sinus thrombosis is a specialised form of cerebral venous sinus thrombosis, in which there is thrombosis of the cavernous sinus of the basal skull dura, due to the retrograde spread of infection and endothelial damage from the danger triangle of the face. The facial veins in this area anastomose with the superior and inferior ophthalmic veins of the orbit, which drain directly posteriorly into the cavernous sinus through the superior orbital fissure. Staphylococcal or streptococcal infections of the face, for example nasal or upper lip pustules, may thus spread directly into the cavernous sinus, causing stroke-like symptoms of double vision and squint, as well as spread of infection causing meningitis.
Arterial thrombosis
Arterial thrombosis is the formation of a thrombus within an artery. In most cases, arterial thrombosis follows rupture of atheroma (a fat-rich deposit in the blood vessel wall), and is therefore referred to as atherothrombosis. Arterial embolism occurs when clots then migrate downstream and can affect any organ. Alternatively, arterial occlusion occurs as a consequence of embolism of blood clots originating from the heart ("cardiogenic" emboli). The most common cause is atrial fibrillation, which causes blood stasis within the atria with easy thrombus formation, but blood clots can develop inside the heart for other reasons too, such as infective endocarditis.
Stroke
A stroke is the rapid decline of brain function due to a disturbance in the supply of blood to the brain. This can be due to ischemia, thrombus, embolus (a lodged particle) or hemorrhage (a bleed).
In thrombotic stroke, a thrombus (blood clot) usually forms around atherosclerotic plaques. Since blockage of the artery is gradual, the onset of symptomatic thrombotic strokes is slower. Thrombotic stroke can be divided into two categories — large vessel disease or small vessel disease. The former affects vessels such as the internal carotids, vertebral and the circle of Willis. The latter can affect smaller vessels, such as the branches of the circle of Willis.
Myocardial infarction
Myocardial infarction (MI), or heart attack, is caused by ischemia (restriction in the blood supply), which is often due to the obstruction of a coronary artery by a thrombus. This restriction results in an insufficient supply of oxygen to the heart muscle, which then causes tissue death (infarction); the resulting lesion is the infarct. MI can quickly become fatal if emergency medical treatment is not received promptly. If it is diagnosed within 12 hours of the initial episode (attack), thrombolytic therapy is initiated.
Limb ischemia
An arterial thrombus or embolus can also form in the limbs, which can lead to acute limb ischemia.
Other sites
Hepatic artery thrombosis usually occurs as a devastating complication after liver transplantation.
Causes
Thrombosis prevention begins with assessing the risk of its development. Some people have a higher risk of developing thrombosis and its possible progression to thromboembolism. Some of these risk factors are related to inflammation.
"Virchow's triad" has been suggested to describe the three factors necessary for the formation of thrombosis:
hemodynamic changes (blood stasis or turbulence),
vessel wall (endothelial) injury/dysfunction, and
altered blood coagulation (hypercoagulability).
Some risk factors predispose for venous thrombosis while others increase the risk of arterial thrombosis. Newborn babies in the neonatal period are also at risk of a thromboembolism.
Mechanism
Pathogenesis
The main causes of thrombosis are given in Virchow's triad, which lists thrombophilia, endothelial cell injury, and disturbed blood flow. Generally speaking, the risk of thrombosis increases over the life course, depending on lifestyle factors such as smoking, diet and physical activity, and on the presence of other diseases such as cancer or autoimmune disease; platelet properties also change with age, which is an important additional consideration.
Hypercoagulability
Hypercoagulability, or thrombophilia, is caused by, for example, genetic deficiencies or autoimmune disorders. Recent studies indicate that white blood cells play a pivotal role in deep vein thrombosis, mediating numerous pro-thrombotic actions.
Endothelial cell injury
Any inflammatory process, such as trauma, surgery or infection, can cause damage to the endothelial lining of the vessel's wall. The main mechanism is exposure of tissue factor to the blood coagulation system. Inflammatory and other stimuli (such as hypercholesterolemia) can lead to changes in gene expression in the endothelium, producing a pro-thrombotic state. When this occurs, endothelial cells downregulate substances such as thrombomodulin, which is a key modulator of thrombin activity. The result is sustained activation of thrombin and reduced production of protein C and tissue factor pathway inhibitor, which furthers the pro-thrombotic state.
Endothelial injury is almost invariably involved in the formation of thrombi in arteries, as high rates of blood flow normally hinder clot formation. In addition, arterial and cardiac clots are normally rich in platelets, which are required for clot formation in areas under high stress due to blood flow.
Disturbed blood flow
Causes of disturbed blood flow include stagnation of blood flow past the point of injury, or venous stasis, which may occur in heart failure or after long periods of sedentary behaviour, such as sitting on a long airplane flight. Atrial fibrillation also causes stagnant blood in the left atrium (LA) or left atrial appendage (LAA), and can lead to a thromboembolism. Cancers or malignancies such as leukemia may increase the risk of thrombosis through activation of the coagulation system by cancer cells or secretion of procoagulant substances (paraneoplastic syndrome), through external compression of a blood vessel by a solid tumor, or (more rarely) through extension into the vasculature (for example, renal cell cancers extending into the renal veins). Treatments for cancer (radiation, chemotherapy) also often cause additional hypercoagulability. There are scores that correlate different aspects of patient data (comorbidities, vital signs, and others) with the risk of thrombosis, such as the POMPE-C score, which stratifies the risk of mortality due to pulmonary embolism in patients with cancer, who typically have higher rates of thrombosis. There are also several predictive scores for thromboembolic events, such as the Padua, Khorana, and ThroLy scores.
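As an illustration of how such scores combine patient data into a risk category, the sketch below implements a simplified, Khorana-style calculation in Python. The function name, point values, cancer-site groupings and thresholds are commonly cited figures restated here as assumptions; this is not a definitive clinical tool.

```python
def khorana_style_score(cancer_site, platelets_10e9_per_l, hemoglobin_g_dl,
                        on_esa, leukocytes_10e9_per_l, bmi_kg_m2):
    """Simplified Khorana-style VTE risk score (illustrative thresholds only)."""
    very_high_risk_sites = {"stomach", "pancreas"}            # assumed 2-point sites
    high_risk_sites = {"lung", "lymphoma", "gynecologic",
                       "bladder", "testicular"}               # assumed 1-point sites

    score = 0
    if cancer_site in very_high_risk_sites:
        score += 2
    elif cancer_site in high_risk_sites:
        score += 1
    if platelets_10e9_per_l >= 350:        # pre-chemotherapy platelet count
        score += 1
    if hemoglobin_g_dl < 10 or on_esa:     # low hemoglobin or erythropoiesis-stimulating agent
        score += 1
    if leukocytes_10e9_per_l > 11:         # pre-chemotherapy leukocyte count
        score += 1
    if bmi_kg_m2 >= 35:
        score += 1

    category = "low" if score == 0 else "intermediate" if score <= 2 else "high"
    return score, category


# Example (hypothetical patient): pancreatic cancer, platelets 400, Hb 9.5 g/dL,
# no ESA, leukocytes 12, BMI 30.
print(khorana_style_score("pancreas", 400, 9.5, False, 12, 30))  # (5, 'high')
```

The design simply sums weighted criteria and maps the total onto risk bands, which is how most of the clinical scores named above are structured, even though their individual criteria and weights differ.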
Pathophysiology
Natural history
Fibrinolysis is the physiological breakdown of blood clots by enzymes such as plasmin.
Organisation: following the thrombotic event, residual vascular thrombus will be re-organised histologically, with several possible outcomes. For an occlusive thrombus (defined as thrombosis within a small vessel that leads to complete occlusion), wound healing will reorganise the occlusive thrombus into collagenous scar tissue, which will either permanently obstruct the vessel or contract down with myofibroblastic activity to unblock the lumen. For a mural thrombus (defined as a thrombus in a large vessel that restricts the blood flow but does not occlude it completely), histological reorganisation of the thrombus does not occur via the classic wound healing mechanism. Instead, platelet-derived growth factor released by the degranulating platelets in the clot will attract a layer of smooth muscle cells to cover the clot, and this layer of mural smooth muscle will be vascularised by the blood inside the vessel lumen rather than by the vasa vasorum.
Ischemia/infarction: if an arterial thrombus cannot be lysed by the body and it does not embolise, and if the thrombus is large enough to impair or occlude blood flow in the involved artery, then local ischemia or infarction will result. A venous thrombus may or may not be ischemic, since veins distribute deoxygenated blood that is less vital for cellular metabolism. Nevertheless, non-ischemic venous thrombosis may still be problematic, due to the swelling caused by blockage to venous drainage. In deep vein thrombosis this manifests as pain, redness, and swelling; in retinal vein occlusion this may result in macular oedema and visual acuity impairment, which if severe enough can lead to blindness.
Embolization
A thrombus may become detached and enter circulation as an embolus, finally lodging in and completely obstructing a blood vessel, which unless treated very quickly will lead to tissue necrosis (an infarction) in the area past the occlusion. Venous thrombosis can lead to pulmonary embolism when the migrated embolus becomes lodged in the lung. In people with a "shunt" (a connection between the pulmonary and systemic circulation), either in the heart or in the lung, a venous clot can also end up in the arteries and cause arterial embolism.
Arterial embolism can obstruct blood flow in the vessel in which the embolus lodges, depriving the downstream tissue of oxygen and nutrients (ischemia). The tissue can become irreversibly damaged, a process known as necrosis. This can affect any organ; for instance, arterial embolism of the brain is one of the causes of stroke.
Prevention
The use of heparin following surgery is common if there are no issues with bleeding. Generally, a risk-benefit analysis is required, as all anticoagulants increase the risk of bleeding. In people admitted to hospital, thrombosis is a major cause of complications and occasionally death. In the UK, for instance, the Parliamentary Health Select Committee heard in 2005 that the annual rate of death due to thrombosis was 25,000, with at least 50% of these being hospital-acquired. Hence thromboprophylaxis (prevention of thrombosis) is increasingly emphasized. In patients admitted for surgery, graded compression stockings are widely used, and in severe illness, prolonged immobility, and all orthopedic surgery, professional guidelines recommend low molecular weight heparin (LMWH) administration, mechanical calf compression or (if all else is contraindicated and the patient has recently developed deep vein thrombosis) the insertion of a vena cava filter. In patients with medical rather than surgical illness, LMWH is also known to prevent thrombosis, and in the United Kingdom the Chief Medical Officer has issued guidance to the effect that preventative measures should be used in medical patients, in anticipation of formal guidelines.
Treatment
The treatment for thrombosis depends on whether it is in a vein or an artery, the impact on the person, and the risk of complications from treatment.
Anticoagulation
Warfarin and other vitamin K antagonists are anticoagulants that can be taken orally to reduce thromboembolic occurrence. Where a more effective response is required, heparin can be given (by injection) concomitantly. As a side effect of any anticoagulant, the risk of bleeding is increased, so the international normalized ratio (INR) of the blood is monitored. Self-monitoring and self-management are safe options for competent patients, though their practice varies; according to one 2012 study, about 20% of patients in Germany self-managed, while only 1% of U.S. patients did home self-testing. Other medications such as direct thrombin inhibitors and direct Xa inhibitors are increasingly being used instead of warfarin.
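As a brief illustration of the monitoring mentioned above, the INR is derived from the prothrombin time (PT) using the standard formula INR = (patient PT / mean normal PT) ^ ISI. The snippet below is a minimal sketch of that calculation; the numeric values are chosen purely as examples and do not come from this article.

```python
def inr(patient_pt_seconds, mean_normal_pt_seconds, isi):
    """International normalized ratio: (patient PT / mean normal PT) ** ISI."""
    return (patient_pt_seconds / mean_normal_pt_seconds) ** isi


# Example values only: a PT of 21 s against a laboratory mean of 12 s,
# with a thromboplastin ISI of 1.0, gives an INR of 1.75.
print(round(inr(21.0, 12.0, 1.0), 2))  # 1.75
```

For many indications, warfarin dosing is commonly adjusted to keep the INR within a target range such as 2.0 to 3.0, although the exact range depends on the indication.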
Thrombolysis
Thrombolysis is the pharmacological destruction of blood clots by administering thrombolytic drugs including recombinant tissue plasminogen activator, which enhances the normal destruction of blood clots by the body's enzymes. This carries an increased risk of bleeding so is generally only used for specific situations (such as severe stroke or a massive pulmonary embolism).
Surgery
Arterial thrombosis may require surgery if it causes acute limb ischemia.
Endovascular treatment
Mechanical clot retrieval and catheter-guided thrombolysis are used in certain situations.
Antiplatelet agents
Arterial thrombosis is platelet-rich, and inhibition of platelet aggregation with antiplatelet drugs such as aspirin may reduce the risk of recurrence or progression.
Targeting ischemia/reperfusion injury
With reperfusion comes ischemia/reperfusion (IR) injury (IRI), which paradoxically causes cell death in reperfused tissue and contributes significantly to post-reperfusion mortality and morbidity. For example, in a feline model of intestinal ischemia, four hours of ischemia resulted in less injury than three hours of ischemia followed by one hour of reperfusion. In ST-elevation myocardial infarction (STEMI), IRI contributes up to 50% of final infarct size despite timely primary percutaneous coronary intervention. This is a key reason for the continued high mortality and morbidity in these conditions, despite endovascular reperfusion treatments and continuous efforts to improve timeliness and access to these treatments. Hence, protective therapies are required to attenuate IRI alongside reperfusion in acute ischemic conditions to improve clinical outcomes. Therapeutic strategies that have potential to improve clinical outcomes in reperfused STEMI patients include remote ischemic conditioning (RIC), exenatide, and metoprolol. These have emerged amongst a multitude of cardioprotective interventions investigated with largely neutral clinical data. Of these, RIC has the most robust clinical evidence, especially in the context of STEMI, but also emerging for other indications such as acute ischemic stroke and aneurysmal subarachnoid hemorrhage.
Neonatal thrombosis
Treatment options for full-term and preterm babies who develop thromboembolism include expectant management (with careful observation), nitroglycerin ointment, pharmacological therapy (thrombolytics and/or anticoagulants), and surgery. The evidence supporting these treatment approaches is weak. For anticoagulant treatment, it is not clear if unfractionated and/or low molecular weight heparin treatment is effective at decreasing mortality and serious adverse events in this population. There is also insufficient evidence to understand the risk of adverse effects associated with these treatment approaches in term or preterm infants.
See also
Blood clotting tests
Disseminated intravascular coagulation
Hepatic artery thrombosis
Thrombotic microangiopathy