answer | text |
---|---|
No, this text is not related with defense topics | Bough Down is a collection of poetry and small mixed media collages, created by Karen Green. It was published in 2013 and won the Believer Poetry Award the same year. In her book, Green explores how contradictory emotions can coexist and processes these sentiments through her prose and art. Background Green is known for a variety of exhibitions, such as The Forgiveness Machine, an interactive display from 2009, and Tiny Stampede, a collection of collages from 2011. Alongside Bough Down, Green is also the author of Voices from La Frontera: Pioneer Women from the Big Bend Tell Their Stories (2002) and Frail Sister (2018). Green lives in Northern California. Green was married to American author David Foster Wallace from 2004 until his death by suicide in 2008. Although his name is never mentioned in Bough Down, Green's vulnerable words and dissection of grief stem from this loss. Book's content Organization Bough Down's order is a reflection of the confusion and imbrication of emotions that accompany the loss of a loved one. There are stretches of pages with a paragraph or two of verse, surrounded by large margins and empty space. These are followed by a randomly placed blank page or a postage-sized print of a collage on a single page. The only method of organization is chronological, as the memories she writes about progress by month and through the evolution of her emotions. Many of her poems are written about a specific person. These people include a variety of characters, such as doctors, the jazz lady, Green's relatives, pets, and even Green herself. These verses vary in content; some are memories, some are comparable to a diary of her thoughts, and others are a combination of the two. Although many of the themes are very serious, some lines are filled with humor, easing the tension. Collages The small collages printed throughout the book are a part of Green's collection, Tiny Stampede. This series was displayed in an exhibition in 2011 in Pasadena, California. She combines snippets of sentences, printed images, inked fingerprints, pieces of a postage stamp collecting book, pencil shavings, her own drawings and watercolors, and other mediums. The mix of media is a form of found poetry, which uses fragments of sentences and random words from other sources to create a new unified work. Green used this process as an escape, a way of coping with trauma and grounding herself. Each printed image in the book is relatively small as well, ranging between 0.5 inches and 3.5 inches in length and width. Green has stated that her choice in size is reflective of how minuscule and lost she felt during this time in her life. The mediums, specific scraps, and colors that she chose to use in each piece are representative of different parts of herself. The inked fingerprints point to her sense of identity as she becomes part of the collective of widows. The repetitive presence of faces and human-like forms is also illustrative of her exploration of selfhood. Green uses color and the names of colors in her work to represent what she calls "unimaginables", a list of her fears and faiths, which she chose to organize by color. The specific meaning behind each color is unknown, but they are used to express the emotions attached to this list. Some images included also depict particular memories and Green's raw self-reflection associated with them. Publication Bough Down was published by Siglio Press in 2013 and is on its third printing.
Awards and recognition In 2013, the year of its publication, Green won the Believer Poetry Award for Bough Down. Bough Down has been compared to Anne Carson's Nox (2010) and Joan Didion's The Year of Magical Thinking (2005) in the ways she explores her grief through prose and visual art. References Poetry Poetry collections |
No, this text is not related with defense topics | The Mott–Schottky equation relates the capacitance to the applied voltage across a semiconductor-electrolyte junction: \(\frac{1}{C^2} = \frac{2}{\varepsilon \varepsilon_0 A^2 e N_d}\left(V - V_{fb} - \frac{k_B T}{e}\right)\), where \(C\) is the differential capacitance, \(\varepsilon\) is the dielectric constant of the semiconductor, \(\varepsilon_0\) is the permittivity of free space, \(A\) is the area such that the depletion region volume is \(W A\), \(e\) is the elementary charge, \(N_d\) is the density of dopants, \(V\) is the applied potential, \(V_{fb}\) is the flat band potential, \(k_B\) is the Boltzmann constant, and \(T\) is the absolute temperature. This theory predicts that a Mott–Schottky plot, i.e. \(1/C^2\) against \(V\), will be linear. The doping density \(N_d\) can be derived from the slope of the plot (provided the area and dielectric constant are known). The flatband potential can be determined as well; absent the temperature term, the plot would cross the \(V\)-axis at the flatband potential. Derivation Under an applied potential \(V\), the width of the depletion region is \(W = \left(\frac{2 \varepsilon \varepsilon_0 (V - V_{fb})}{e N_d}\right)^{1/2}\). Using the abrupt approximation, all charge carriers except the ionized dopants have left the depletion region, so the charge density in the depletion region is \(e N_d\), and the total charge of the depletion region, compensated by opposite charge nearby in the electrolyte, is \(Q = e N_d A W = A\left(2 e \varepsilon \varepsilon_0 N_d (V - V_{fb})\right)^{1/2}\). Thus, the differential capacitance is \(C = \frac{\mathrm{d}Q}{\mathrm{d}V} = A\left(\frac{e \varepsilon \varepsilon_0 N_d}{2 (V - V_{fb})}\right)^{1/2}\), which is equivalent to the Mott-Schottky equation, save for the temperature term. In fact the temperature term arises from a more careful analysis, which takes statistical mechanics into account by abandoning the abrupt approximation and solving the Poisson–Boltzmann equation for the charge density in the depletion region. References Equations |
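The slope-and-intercept analysis described above can be sketched numerically. The following Python snippet is only an illustration of the relationships in the article; the dielectric constant, electrode area and synthetic capacitance data are assumed for the example and are not taken from any measurement. It fits \(1/C^2\) against the applied potential, then recovers the doping density from the slope and the flat band potential from the potential-axis crossing (corrected by the \(k_B T/e\) term).

```python
# Illustrative Mott-Schottky analysis with assumed, synthetic data.
import numpy as np

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # permittivity of free space, F/m
kB = 1.380649e-23        # Boltzmann constant, J/K

def mott_schottky_fit(V, C, eps_r, area, T=298.15):
    """Fit 1/C^2 vs V (volts, farads, m^2) and return (N_d in m^-3, V_fb in V)."""
    slope, intercept = np.polyfit(V, 1.0 / C**2, 1)   # linear Mott-Schottky plot
    N_d = 2.0 / (e * eps_r * eps0 * area**2 * slope)  # doping density from the slope
    V_fb = -intercept / slope - kB * T / e            # plot crosses zero at V_fb + kT/e
    return N_d, V_fb

# Hypothetical n-type electrode: eps_r ~ 100, area 1 cm^2, V_fb near -0.45 V.
V = np.linspace(-0.2, 0.6, 9)
C = 1.0 / np.sqrt(4e14 * (V + 0.45))                  # synthetic capacitance data
print(mott_schottky_fit(V, C, eps_r=100, area=1e-4))
```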
No, this text is not related with defense topics | A mesenchymal–epithelial transition (MET) is a reversible biological process that involves the transition from motile, multipolar or spindle-shaped mesenchymal cells to planar arrays of polarized cells called epithelia. MET is the reverse process of epithelial–mesenchymal transition (EMT) and it has been shown to occur in normal development, induced pluripotent stem cell reprogramming, cancer metastasis and wound healing. Introduction Unlike epithelial cells – which are stationary and characterized by an apico-basal polarity with binding by a basal lamina, tight junctions, gap junctions, adherent junctions and expression of cell-cell adhesion markers such as E-cadherin, mesenchymal cells do not make mature cell-cell contacts, can invade through the extracellular matrix, and express markers such as vimentin, fibronectin, N-cadherin, Twist, and Snail. MET plays also a critical role in metabolic switching and epigenetic modifications. In general epithelium-associated genes are upregulated and mesenchyme-associated genes are downregulated in the process of MET. In development During embryogenesis and early development, cells switch back and forth between different cellular phenotypes via MET and its reverse process, epithelial–mesenchymal transition (EMT). Developmental METs have been studied most extensively in embryogenesis during somitogenesis and nephrogenesis and carcinogenesis during metastasis, but it also occurs in cardiogenesis or foregut development. MET is an essential process in embryogenesis to gather mesenchymal-like cells into cohesive structures. Although the mechanism of MET during various organs morphogenesis is quite similar, each process has a unique signaling pathway to induce changes in gene expression profiles. Nephrogenesis One example of this, the most well described of the developmental METs, is kidney ontogenesis. The mammalian kidney is primarily formed by two early structures: the ureteric bud and the nephrogenic mesenchyme, which form the collecting duct and nephrons respectively (see kidney development for more details). During kidney ontogenesis, a reciprocal induction of the ureteric bud epithelium and nephrogenic mesenchyme occurs. As the ureteric bud grows out of the Wolffian duct, the nephrogenic mesenchyme induces the ureteric bud to branch. Concurrently, the ureteric bud induces the nephrogenic mesenchyme to condense around the bud and undergo MET to form the renal epithelium, which ultimately forms the nephron. Growth factors, integrins, cell adhesion molecules, and protooncogenes, such as c-ret, c-ros, and c-met, mediate the reciprocal induction in metanephrons and consequent MET. Somitogenesis Another example of developmental MET occurs during somitogenesis. Vertebrate somites, the precursors of axial bones and trunk skeletal muscles, are formed by the maturation of the presomitic mesoderm (PSM). The PSM, which is composed of mesenchymal cells, undergoes segmentation by delineating somite boundaries (see somitogenesis for more details). Each somite is encapsulated by an epithelium, formerly mesenchymal cells that had undergone MET. Two Rho family GTPases – Cdc42 and Rac1 – as well as the transcription factor Paraxis are required for chick somitic MET. Cardiogenesis Development of heart is involved in several rounds of EMT and MET. While development splanchnopleure undergo EMT and produce endothelial progenitors, these then form the endocardium through MET. 
Pericardium is formed by sinus venosus mesenchymal cells that undergo MET. Quite similar processes also occur during regeneration in the injured heart. Injured pericardium undergoes EMT and is transformed into adipocytes or myofibroblasts which induce arrhythmia and scars. MET then leads to the formation of vascular and epithelial progenitors that can differentiate into vasculogenic cells, which lead to regeneration of the injured heart. Hepatogenesis In cancer While relatively little is known about the role MET plays in cancer when compared to the extensive studies of EMT in tumor metastasis, MET is believed to participate in the establishment and stabilization of distant metastases by allowing cancerous cells to regain epithelial properties and integrate into distant organs. Between these two states, cells occur in an 'intermediate state', or so-called partial EMT. In recent years, researchers have begun to investigate MET as one of many potential therapeutic targets in the prevention of metastases. This approach to preventing metastasis is known as differentiation-based therapy or differentiation therapy and it can be used for the development of new anti-cancer therapeutic strategies. In iPS cell reprogramming A number of different cellular processes must take place in order for somatic cells to undergo reprogramming into induced pluripotent stem cells (iPS cells). iPS cell reprogramming, also known as somatic cell reprogramming, can be achieved by ectopic expression of Oct4, Klf4, Sox2, and c-Myc (OKSM). Upon induction, mouse fibroblasts must undergo MET to successfully begin the initiation phase of reprogramming. Epithelial-associated genes such as E-cadherin/Cdh1, Cldns −3, −4, −7, −11, Occludin (Ocln), Epithelial cell adhesion molecule (Epcam), and Crumbs homolog 3 (Crb3), were all upregulated before Nanog, a key transcription factor in maintaining pluripotency, was turned on. Additionally, mesenchymal-associated genes such as Snail, Slug, Zeb −1, −2, and N-cadherin were downregulated within the first 5 days post-OKSM induction. Addition of exogenous TGF-β1, which blocks MET, decreased iPS reprogramming efficiency significantly. These findings are all consistent with previous observations that embryonic stem cells resemble epithelial cells and express E-cadherin. Recent studies have suggested that ectopic expression of Klf4 in iPS cell reprogramming may be specifically responsible for inducing E-cadherin expression by binding to promoter regions and the first intron of CDH1 (the gene encoding for E-cadherin). See also Epithelial–mesenchymal transition References Developmental biology Oncology |
No, this text is not related with defense topics | A string bog or string mire is a bog consisting of slightly elevated ridges and islands, with woody plants, alternating with flat, wet sedge mat areas. String bogs occur on slightly sloping surfaces, with the ridges at right angles to the direction of water flow. They are an example of patterned vegetation. In Northern Europe they are known as "aapa" moors (from Finnish aapasuo) or strangmoors. A string bog has a pattern of narrow (2–3 m wide), low (<1 m high) ridges oriented at right angles to the direction of drainage, with wet depressions or pools occurring between the ridges. The water and peat are very low in nutrients because the water has been derived from other ombrotrophic wetlands, which receive all of their water and nutrients from precipitation, rather than from streams or springs. The peat thickness is >1 m. String bogs are features associated with periglacial climates, where the climate results in long periods of subzero temperatures. The active layer exists as frozen ground for long periods and melts in the spring thaw. Slow melting results in characteristic mass movement processes and features associated with specific periglacial environments. See also Blanket bog Flark Marsh References Canadian Soil Information Service - Local Surface Forms (checked 2014-10-18) String bog Ecology |
No, this text is not related with defense topics | The Master of Public Administration (M.P.Adm., M.P.A., or MPA) is a professional graduate degree in public administration, similar to the Master of Business Administration but with an emphasis on the issues of public services. Overview The MPA program is a professional degree and a graduate degree for the public sector and it prepares individuals to serve as managers, executives and policy analysts in the executive arm of local, state/provincial, and federal/national government, and increasingly in non-governmental organization (NGO) and nonprofit sectors; it places a focus on the systematic investigation of executive organization and management. Instruction includes the roles, development, and principles of public administration; public policy management and implementation. Through its history, the MPA degree has become more interdisciplinary by drawing from fields such as economics, sociology, law, anthropology, political science, and regional planning in order to equip MPA graduates with skills and knowledge covering a broad range of topics and disciplines relevant to the public sector. A core curriculum of a typical MPA program usually includes courses on microeconomics, public finance, research methods, statistics, policy analysis, managerial accounting, ethics, public management, geographic information systems (GIS), and program evaluation. MPA students may focus their studies on public sector fields such as urban planning, emergency management, transportation, health care (especially public health), economic development, community development, non-profit management, environmental policy, cultural policy, international affairs, and criminal justice. MPA graduates currently serve in some important positions within the public sector including the current Prime Minister of Singapore Lee Hsien Loong, the Nobel Peace Prize laureate and former President of Colombia Juan Manuel Santos, former UN Secretary General Ban Ki-Moon, three former presidents of Mexico (Felipe Calderón, Carlos Salinas de Gortari and Miguel de la Madrid), former Canadian Prime Minister Pierre Elliott Trudeau, former President of Bolivia Eduardo Rodríguez Veltzé, former President of Ecuador Jamil Mahuad Witt (MPA '89), former President of Costa Rica José María Figueres Olsen, former CIA Director David Petraeus, former president of Liberia Ellen Johnson Sirleaf, Foreign Minister of Serbia Vuk Jeremić, former New York City Police Commissioner Raymond Kelly, former Secretary of Health and Human Services Kathleen Sebelius, current Treasurer of Australia Josh Frydenberg. Other notable MPA graduates include U.S. Representative Dan Crenshaw, Bill O'Reilly and pilot Chesley Sullenberger. A Master of Public Administration can be acquired at various institutions. See List of schools offering MPA degrees. See also Master of Public Affairs Master of Public Policy Master of Nonprofit Organizations Public policy schools Master of Business Administration Doctor of Public Administration List of master's degrees References External links Network of Schools of Public Policy, Affairs, and Administration - Accrediting body for MPA and MPP programs in the U.S. Association for Public Policy Analysis and Management American Society for Public Administration - Professional society for public administration (PA) practitioners and educator] Public Administration, Master Public administration |
No, this text is not related with defense topics | In psychology, steering cognition is a model of a cognitive executive function which contributes to how attention is regulated and corresponding responses coordinated. History The term 'steering cognition' was coined by the researcher Simon P. Walker, who discovered consistent, replicable patterns of attention and corresponding response through repeated cognitive tests between 2000 and 2015, in studies with over 15,000 individuals. Working with his colleague Jo Walker, he was able to show that these patterns correlated with other cognitive attributes such as mental wellbeing, social competency and academic performance. Together, Walker and Walker conjecture that steering cognition is a central mechanism by which people self-regulate their cognitive, emotional and social states. Theoretical model Steering cognition describes how the brain biases attention toward specific stimuli whilst ignoring others, before coordinating responsive actions which cohere with our past patterns of self-representation. Steering cognition enables the use of limited cognitive resources to make sense of the world that someone expects to see. Empirical evidence Walker developed a specific steering cognition test used with more than 11,000 candidates between the ages of eight and 60 between 2002 and 2015. Using principal component analysis, Walker was able to identify 7 latent, largely independent 'heuristic substitution' factors, which he labelled S, L, X, P, M, O, T. He labelled this data model 'the Human Ecology model of Cognitive Affective Social state', or CAS for short. In the most recent and largest ever study, involving 8,000 secondary pupils in the UK, exploratory factor analysis confirmed a largely orthogonal factor structure in which eigenvalues revealed that the CAS model's 7 latent factors explained 50% of the overall variance. For the sake of parsimony, a 7-factor solution has been regarded as acceptable. Studies have shown that steering cognition is distinct from the mind's engine or 'algorithmic processing', which is responsible for how we process complex calculations. The state of steering cognition at any time is influenced by 'priming' effects - cues in the surrounding environment such as sights, sounds and messages of which we may not be conscious. Studies have shown that environmental biasing of our steering cognition can contribute to non-conscious in-group behaviours, e.g. an increased likelihood of groupthink or emotional contagion. Studies have shown that, during adolescence, individuals develop more fixed patterns of steering. By adulthood, these patterns become recognisable as mental traits, behaviours and social attributes. There are some authors, including Meredith Belbin, who claim that people with more flexible steering cognition have advantages in jobs which require greater social or cognitive dexterity because of improved social relating and leadership skills. Steering cognition has been shown to depend on our ability to mentally simulate or imagine ourselves performing tasks and functions. As such, steering cognition requires the capacity to self-represent, associating memories of our past and possible future selves. Steering cognition has been shown to implicate our affective (emotional), social and abstract cognitions. Effects on learning, social and emotional development The ability to regulate steering cognition has been shown to account for up to 15% of academic outcomes at secondary school not accounted for by IQ.
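The factor-analytic result described above can be illustrated with a brief numerical sketch. The Python snippet below is purely illustrative and uses synthetic placeholder data rather than the CAS study's data; it shows how the eigenvalues of an item correlation matrix are used to estimate how much of the overall variance a fixed number of retained factors (here 7, as in the CAS model) accounts for.

```python
# Illustrative sketch only: synthetic data, not the CAS study's test scores.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 500, 21
scores = rng.normal(size=(n_people, n_items))     # placeholder test-item scores

corr = np.corrcoef(scores, rowvar=False)          # item correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

k = 7                                             # number of retained latent factors
explained = eigenvalues[:k].sum() / eigenvalues.sum()
print(f"First {k} components explain {explained:.0%} of the total variance")
```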
Steering cognition can be improved through pupil feedback, coaching and more carefully structured and supportive environments. Poorly regulated steering cognition has been shown to correlate strongly with increased mental health and welfare risks during adolescence. A study in 2015 showed that pupils with certain fixed biases in their steering cognition were four times more likely to exhibit self-harm, be bullied or not cope with school pressures. A large 2014 study showed that boarding school education resulted in better pupil ability to regulate steering cognition across social situations than day school education. This so-called 'tribe effect' is conjectured to lead to continued social advantages beyond school, such as access to future in-group benefits in work and wider society. Practical applications The importance of steering cognition lies in its explanation of human behaviours which lead to either risks or advantages for individuals and groups. The ability to regulate one's steering cognition is unrelated to IQ or rational group behaviour, so measuring steering cognition offers an explanation of behaviours and events not currently detected by traditional metrics and models. The Sunday Times reported in October 2015 that a growing number of schools in the UK, including independent schools Monkton Combe School and Wellington College, were now using a technology, AS Tracking, developed by Mind.World to measure student steering cognition as an 'early warning system' for welfare and mental health risks. Thomas's London Day Schools are using a curriculum, Footprints, to train pupils as young as eight to improve their steering cognition as part of their social and emotional development. Wellington College has engaged in steering cognition research studies as part of the school's evidence-based education programme. Harrow School, one of Britain's leading independent schools, is piloting AS Tracking as part of a proactive strategy to provide the best possible pastoral care for pupils. The school is measuring steering cognition to gather concrete measurements that can be used as supporting evidence when planning or dealing with individuals and also tracking changes over time as boys move up through the school. Educational campaigners Sir Anthony Seldon and Sir Peter Lampl have suggested steering cognition has application for understanding and improving social mobility. Research fields related to steering cognition Executive function Steering cognition is a model of social and cognitive executive function. It explains a functional governor mechanism by which the mind coordinates attention and executes responsive action. Metacognition Steering cognition is a model of metacognition. It describes the capacity of the mind to exert conscious control over its reasoning and processing strategies in relation to external data and internal state. Self-regulation Steering cognition is an explanatory mechanism of some phenomena of affective, cognitive and social self-regulation. It describes effortful control processes which exhibit depletion after strain. Mental simulation circuitry Steering cognition has been repeatedly shown to implicate the mind's mental simulation circuitry. As such, it is associated with functional neural circuits involved in prospective and retrospective memory, self-representation, associative processing and imagination.
Dual process theory According to the steering cognition model, dual process System 1 functions as a serial cognitive steering processor for System 2, rather than the traditionally understood parallel system. In order to process epistemically varied environmental data, a steering cognition orientation system is required to align varied, incoming environmental data with existing neural algorithmic processes. The brain's associative simulation capacity, centered around the imagination, plays an integrator role to perform this function. Cognitive biases In the cognitive steering model, a conscious state emerges from effortful associative simulation, required to align novel data accurately with remote memory, via later algorithmic processes. By contrast, fast unconscious automaticity is constituted by unregulated simulatory biases, which induce errors in subsequent algorithmic processes. The phrase 'rubbish in, rubbish out' is used to explain errorful steering cognition processing: errors will always occur if the accuracy of initial retrieval and location of data is poorly self-regulated. Social priming Steering cognition provides an explanation of how the mind is non-consciously influenced by the environmental cues, or primes, around it. Steering cognition studies have produced data of attentional bias and blindness best explained by environmental priming. See also Cognitive bias Emotional self-regulation Simulation heuristic References and notes Further reading Daniel Kahneman (25 October 2011). Thinking, Fast and Slow. Attention Cognition |
No, this text is not related with defense topics | A Registered Cardiovascular Invasive Specialist or RCIS assists a cardiologist with cardiac catheterization procedures in the United States. These procedures can determine if a blockage exists in the blood vessels that supply the heart muscle and can help diagnose other problems. To become registered, candidates must pass the registry examination proctored by CCI (Cardiovascular Credentialing International). The exam consists of 170 multiple-choice questions. Some questions involve mathematical computation as well as pictures in which one must identify anatomy and equipment. To be registry eligible, candidates must have worked in a cardiac catheterization laboratory for one year or have graduated from a registry-eligible program. Santa Fe College in Gainesville, Florida, has one such accredited program. See also Cardiac catheterization References Cardiology |
No, this text is not related with defense topics | Marine conservation activism is the efforts of non-governmental organizations and individuals to bring about social and political change in the area of marine conservation. Marine conservation is properly conceived as a set of management strategies for the protection and preservation of ecosystems in oceans and seas. Activists raise public awareness and support for conservation, while pushing governments and corporations to practice sound ocean management, create conservation policy, and enforce existing laws and policy through effective regulation. There are many different kinds of organizations and agencies that work toward these common goals. They all are a part of the growing movement that is ocean conservation. These organizations fight for many causes including stopping pollution, overfishing, whaling and by-catching, and supporting marine protected areas. History United States Though the environmental movement began in the United States during the 1960s, the idea of marine conservation really did not take off in the country until the 1972 Marine Protection, Research, and Sanctuaries Act (MPRSA) passed, beginning the movement. The act allowed the regulation by the United States Environmental Protection Agency (EPA) over dumping in the seas. Though the act was later amended, it was one of several key events to bring marine issues towards the front of environmental issues in the United States. Notable people Jacques Cousteau: Explorer, Conservationist, Researcher & Author Sylvia Earle: Marine Biologist, Explorer, & Author Steve Irwin: Naturalist, Conservationist, Zoologist, Herpetologist, & Television Personality Ric O'Barry Ric O'Barry is an author of the books Behind the Dolphin Smile and To Free a Dolphin: A Dramatic Case for Keeping Dolphins in their Natural Environment, by the Trainer of "Flipper", both focusing on dolphin preservation. O'Barry was also the star of Oscar award winning documentary, The Cove, which aimed to raise public support for preventing dolphin drive hunting. On April 22, 1970, he founded the Dolphin Project, a non-profit marine environmentalist organization concentrating on dolphins' welfare. Paolo Bray Founder and Director of major sustainability certification programs: Dolphin-Safe, Friend of the Sea and Friend of the Earth. Environmentalist and promoter of conservation projects and campaigns. Since 1990, Director of International Programs for the DOLPHIN-SAFE project of the Earth Island Institute. The project saved millions of dolphins from tuna fishing nets. 95% of world tuna industry adhere to the project. In 2008 founded Friend of the Sea, the major international certification for sustainable seafood and the only one covering both fisheries and aquaculture. The only seafood certification recognized by the national accreditation bodies. Over 800 companies in 70 countries have products certified Friend of the Sea. Certifying also sustainable shipping, whale watching, aquaria, ornamental fish. Friend of the Earth supports conservation projects In 2016 founded Friend of the Earth an international certification of products from sustainable agriculture and farming. 50 companies from 4 continents have products certified Friend of the Earth (including rice, oil, wine, tomato, quinoa, cheese, eggs, etc). Friend of the Earth support also conservation projects. 
International issues Debris Marine debris is defined as "any persistent solid material that is manufactured or processed and directly or indirectly, intentionally or unintentionally, disposed of or abandoned into the marine environment or the Great Lakes". This debris can injure or even kill marine organisms; it can also interfere with navigation safety and could pose a threat to human health. Marine debris can range from soda cans to plastic bags and can even include abandoned vessels or neglected fishing gear. Ocean Conservancy is a non-profit environmental group that fights for the improvement and conservation of marine ecosystems and marine life. They work to find science-based solutions to protect the world's oceans from the global challenges that they face today. One of the many issues that they work closely to stop is the flow of trash that enters the ocean. The International Coastal Cleanup (ICC) is one of the methods Ocean Conservancy uses to prevent marine debris. The ICC is the largest volunteer effort to clean up the world's oceans and other waterways; over the past 25 years the ICC has cleaned up approximately 144,606,491 pounds of trash from beaches all over the world. Whaling International Whaling Commission Whaling is the hunting of free roaming whales; many whaling practices have led to drastic population loss in many whale populations around the world. In 1986, The International Whaling Commission (IWC) was founded to put a ban on commercial whaling. The commission recognizes three different types of whaling: aboriginal subsistence, commercial, and special permit (or scientific) whaling. Aboriginal subsistence whaling This form of whaling supports indigenous communities where whale products play an important role in cultural and nutritional life. The IWC sets catch limits for aboriginal subsistence whaling every six years. Commercial whaling This form of whaling is highly regulated by the IWC and is currently on a moratorium. There are a few countries that oppose the moratorium and continue to hunt for whales; these countries share catch data with the Commission but are not regulated by it. Since the moratorium was put in place in 1986, more than 50,000 whales have been hunted and killed; there are three nations that are still able to hunt whales because of loopholes in the ban. Norway is able to hunt because of an "objection" to the ban; Iceland is able to hunt because of a "reservation" and Japan is able to hunt because they claim it is for "research purposes". If combined these nations kill around 2,000 whales each year; these whales include humpback, minke, sperm, fin, Bryde's, and sei. The IWC ban does allow for some Aboriginal Subsistence Whaling (ASW) in certain countries. Special permit/scientific whaling This category of whaling is separated from IWC-regulated whaling by international law. Special permit research proposals are to be submitted by countries to the IWC for scientific scrutiny. The role of the IWC is advisory only. Greenpeace Greenpeace, an international environmental organization founded in 1971 in British Columbia, fights against whaling. Their campaigns are nonviolent and many times involve one or more of the five Greenpeace ships which first made the organization famous in the 1970s. In late December 2005, Japanese whaling fleets experienced heavy opposition from Greenpeace, who protested that the Japanese were continuing their commercial whaling under the guise of research, which was being done in the Southern Ocean Whale Sanctuary. 
They sent volunteer workers in inflatable boats to get in the line of fire in order to stop the whaling. Sea Shepherd Conservation Society Sea Shepherd Conservation Society is a non-profit, marine wildlife conservation organization that works internationally on numerous campaigns to protect the world's oceans. Their mission is to conserve and protect the world's ecosystems and species; they work to end the destruction of habitat and slaughter of the ocean's wildlife. Unlike many other non-profit environmental groups, Sea Shepherd uses direct-action tactics to expose and challenge illegal activities at sea; they strive to ensure that the ocean can survive for future generations. In doing so, they refer to the United Nations World Charter for Nature that calls on individuals to "safeguard and conserve nature in areas beyond national jurisdiction". Sea Shepherd was founded in 1977 by Captain Paul Watson in Vancouver, British Columbia, Canada; it was not until 1981 that it was formally incorporated in the United States. Throughout the years their campaigns have ranged from stopping the annual killing of baby harp seals in Eastern Canada to preventing Japanese whalers from killing endangered whale species. They claim only to work to uphold international conservation law and to protect the endangered ocean habitats and species; they do this without prejudice against race, nationality, color, or religious belief. Their crews are made up of volunteers from all over the world, some of which are from countries that Sea Shepherd has campaigns against; they describe themselves as "pro-ocean" instead of "anti-any nationality or culture". Shark finning Shark finning is a worldwide issue that involves cutting off the fins of sharks. This is done while the shark is still alive followed by the rest of the body being thrown back into the ocean, leaving it to die days after. Used in countries like China and Japan, shark fins are a key ingredient in the world-renowned meal, shark fin soup. The high demand for this particular type of soup has skyrocketed in the last few decades and sells for around $100 on average and is often catered at special occasions such as weddings and banquets. Due to the increased want for these shark fins, traders seek out the fins in order to make a profit. However, the fins are the only part of the shark that fishermen seek out to retrieve due to the low economical value of the actual shark meet. This recently exposed issue along with other overfishing issues has brought upon roughly 80 percent of the shark population decline. It has become prominent concern in marine conservation activism for millions of sharks are killed yearly at an often-unregulated expense. Project AWARE Current campaign known as Project AWARE is working globally to advocate solutions for long-term protection for these animals. Created initially as an environmental initiative project, this campaign was developed by the Professional Associations of Diving Instructors (PADI) in 1989. Used to educate divers about environmental problems this program eventually grew to become a registered non-profit organization in the US in 1992 and eventually became recognized in the UK and Australia in 1999 and 2002 respectively. In spite of the arising issues with marine challenges, Project AWARE has continued to grow towards meeting the needs of the marine ecosystem as they see fit. Marine debris and shark and ray conservation activism are the two most prevalent issues that are being further worked toward improving since 2011. 
Shark Savers Another campaign working to ensure the protection of these marine species is a group called Shark Savers that is sponsored by the group called WildAid. Through the use of community motivation, the project encourages the public to stop eating sharks and shark fin soup. By also working to improve global regulations and creating sanctuaries for sharks, the project aims to take action and get results. Similarly to Project Aware, the Shark Savers program was founded by a group of divers that wanted to help the marine system in 2007. Through the recent creation of shark sanctuaries, the program focuses on sustainability when thinking about the economical and environmental benefits. These created sanctuaries provide a protected area for the sharks and also promote change in nearby communities. Bite-Back Bite-Back is another organization that is active in the community and aims to stop the sale of shark fins for the making of shark fin soup in Great Britain. By exposing the UK and their acts toward profiting from shark products, they aim to put an end to their ways of over fishing and exploitation. The organizations main goal is to allow marine life a chance to thrive while they are busy doing the dirty work of lowering consumer stipulation. Shark Trust Part of a worldwide alliance called The Global Shark and Ray Initiative (GSRI), the Shark Trust is working in efforts to better the ocean for marine animals such as shark and rays. The Initiative created a plan for changing the status of the shark population that would span over 10 years starting on February 15, 2016. Teaming up with other large conservation organization such as Shark Advocates International and World Wide Fund for Nature (WWF), the team aims to ultimately give these vulnerable animals the safety and security that the ought to have in their natural environment. Overfishing Overfishing occurs when fish stocks are over-exploited to below acceptable levels; eventually the fish populations will no longer be able to sustain themselves. This can lead to resource depletion, reduced biological growth, and low biomass levels. In September 2016, a partnership of Google and Oceana and Skytruth introduced Global Fishing Watch, a website designed to assist citizens of the globe in monitoring fishing activities. Marine protected areas Although the idea of marine protected areas is an internationally known concept, there is no one term used internationally. Rather, each country has its own name for the areas. Marine Reserves, Specially Protected Areas, and Marine Park all relate to this concept, though they differ slightly. Some of the most famous marine protected areas are the Ligurian Sea Cetacean Sanctuary along the coasts of Spain, Monaco, and Italy, and Australia's Great Barrier Reef. The largest sanctuary in the world is the Northwestern Hawaiian Islands National Monument. The purpose of these sanctuaries is to provide protection for the living and non-living resources of the oceans and seas. They are created to save species, nursing resources and to help sustain the fish population. The activists at the Ocean Conservancy fight for this cause. They believe that the United States should put forth a consistent and firm commitment in using marine protected areas as a management strategy. Currently, the argument in the United States is whether or not they are necessary, when it should be how can they work the most efficiently. 
Activists at the Ocean Conservancy have been working on a campaign called the Save Our Ocean Legacy, a campaign lasting several years trying to establish Marine Protected Areas' off of the California coasts. Twenty-nine Marine Protected Areas were planned to be established when the legislation bill passed in 1999. The hope is that the plan will be finalized in 2007. Some fishers do not accept that marine protected areas benefit fish stocks and provide insurance against stock collapse. Marine protected areas can cause a short-term loss in fisheries production. However, the concept of spillover, where fish within a marine protected area move into fished areas, thus benefiting fisheries, has been misunderstood by some fishers. The term is a simplification of numerous ecological benefits that are derived from removing fishing from nursery, breeding grounds and essential fish habitats. References Environmentalism Marine conservation Fishing and the environment |
No, this text is not related with defense topics | "For Want of a Nail" is a proverb, having numerous variations over several centuries, reminding that seemingly unimportant acts or omissions can have grave and unforeseen consequences. Analysis The proverb has come down in many variations over the centuries. It describes a situation in where there is a failure to predict or correct a minor issue; the minor issue escalates and compounds itself into a major issue. The rhyme's implied small difference in initial conditions is the lack of a spare horseshoe nail, relative to a condition of its availability. At a more literal level, it expresses the importance of military logistics in warfare. Such chains of causality are perceived only in hindsight. No one ever lamented, upon seeing his unshod horse, that the kingdom would eventually fall because of it. Related sayings are "A stitch, in time, saves nine" and "An ounce of prevention is worth a pound of cure". A somewhat similar idea is referred to in the metaphor known as the camel's nose. Historical references The proverb is found in a number of forms, beginning as early as the 13th century: Middle High German (positively formulated): Diz ſagent uns die wîſen, ein nagel behalt ein îſen, ein îſen ein ros, ein ros ein man, ein man ein burc, der ſtrîten kan. ("The wise tell us that a nail keeps a shoe, a shoe a horse, a horse a man, a man a castle, that can fight.") (c. 1230 Freidank Bescheidenheit) "For sparinge of a litel cost, Fulofte time a man hath lost, The large cote for the hod." ("For sparing a little cost often a man has lost the large coat for the hood.") (c 1390 John Gower, Confessio Amantis v. 4785–4787) French: "Par ung seul clou perd on ung bon cheval." ("By just one nail one loses a good horse.") (c 1507 Jean Molinet, Faictz Dictz D., v768). "The French-men haue a military prouerbe; 'The losse of a nayle, the losse of an army'. The want of a nayle looseth the shooe, the losse of shooe troubles the horse, the horse indangereth the rider, the rider breaking his ranke molests the company, so farre as to hazard the whole Army". (1629 Thomas Adams (clergyman), "The Works of Thomas Adams: The Sum Of His Sermons, Meditations, And Other Divine And Moral Discourses", p. 714") "For want of a naile the shoe is lost, for want of a shoe the horse is lost, for want of a horse the rider is lost." (1640 George Herbert Outlandish Proverbs no. 499) Benjamin Franklin included a version of the rhyme in his Poor Richard's Almanack. (Benjamin Franklin, Poor Richards Almanack, June 1758, The Complete Poor Richard Almanacks, facsimile ed., vol. 2, pp. 375, 377) In British Columbia Saw-Mill Co. v. Nettleship (1868), L.R. 3 C.P. 499 (Eng. Q.B.), a variation on the story is given a legal flavor: "Cases of this kind have always been found to be very difficult to deal with, beginning with a case said to have been decided about two centuries and a half ago, where a man going to be married to an heiress, his horse having cast a shoe on the journey, employed a blacksmith to replace it, who did the work so unskilfully that the horse was lamed, and, the rider not arriving in time, the lady married another; and the blacksmith was held liable for the loss of the marriage. The question is a very serious one; and we should inevitably fall into a similar absurdity unless we applied the rules of common sense to restrict the extent of liability for the breach of contract of this sort." 
"Don't care" was the man who was to blame for the well-known catastrophe: "For want of a nail the shoe was lost, for want of a shoe the horse was lost, and for want of a horse the man was lost." (1880 Samuel Smiles, Duty) A short variation of the proverb (shown to the right) was published in 1912 in Fifty Famous People by James Baldwin. The story associated with the proverb describes the unhorsing of King Richard III during the Battle of Bosworth Field, which took place on 22 August 1485. However, historically Richard's horse was merely mired in the mud. In Baldwin's story, the proverb and its reference to losing a horse is directly linked to King Richard famously shouting "A Horse! A Horse! My Kingdom for a Horse!", as depicted in Act V, Scene 4 from the Shakespeare play Richard III. "You bring your long-tailed shovel, an' I'll bring me navvy [labourer; in this context referring to a navvy shovel (square mouth shovel)]. We mighten' want them, an', then agen, we might: for want of a nail the shoe was lost, for want of a shoe the horse was lost, an' for want of a horse the man was lost—aw, that's a darlin' proverb, a daarlin'".(1925 S. O'casey Juno and the Paycock i. 16) During World War II, this verse was framed and hung on the wall of the Anglo-American Supply Headquarters in London, England. Modern references Along with the long history of the proverb listed above, it has continued to be referenced in some form or another since the mid 20th century in modern culture. The examples below show how the proverb has had profound implications into a variety of issues and commentary in modern culture. Legal In his dissent in Massachusetts v. Environmental Protection Agency (549 US 497, 2007), Chief Justice John G. Roberts of the U.S. Supreme Court cites "all for the want of a horseshoe nail" as an example of a possible chain of causation. He claimed that, by contrast, the threshold jurisdictional issue of standing requires a likely chain of causation, which was not satisfied by the U.S. Environmental Protection Agency's regulation of new automobile emissions to prevent the loss of Massachusetts coastal land due to climate change. In his dissent in CSX Transportation, Inc. v. McBride, Roberts again invokes the proverb, explaining that, in tort law, the doctrine of proximate cause is meant to "limit[] liability at some point before the want of a nail leads to loss of the kingdom." Literary For Want of a Nail: If Burgoyne Had Won at Saratoga is an alternate history novel published in 1973 by the American business historian Robert Sobel. The novel depicts an alternative world where the American Revolution was unsuccessful. Cannibals And Missionaries, by Mary McCarthy, quotes on page 199: "No detail... was too small to be passed over.... 'For want of a nail,' as the proverb said." In the novel Rage, by Stephen King, using the pseudonym Richard Bachman, the main character Charlie Decker references the proverb: "But you can't go back. For want of a shoe the horse was lost, and all that." King's 1987 novel The Tommyknockers also references the proverb in its first line: "For want of a nail the kingdom was lost – that's how the catechism goes when you boil it down." JLA: The Nail is a three-issue comic book limited series published by DC Comics in 1998 about a world where the baby Kal-El was never found by Ma and Pa Kent because a nail punctured their truck tire on the day when they would have found his ship and so the child does not grow up to become Superman. 
The story uses the English ("Knight") variation of the rhyme as a theme. A Wind in the Door is a fantasy/science fiction novel by Madeleine L'Engle which was a sequel to A Wrinkle in Time. The proverb is used in the novel as an explanation by Meg Murry to help Mr. Jenkins understand how a microscopic creature can affect the fate of the universe and is the impetus for much of the action. "For Want of a Nail", a 2011 Hugo award-winning short story by Mary Robinette Kowal, explores the choices that an artificial intelligence and her wrangler must make to solve a seemingly-simple technical problem. The poem "Kiss", found in the collection Full Volume, by Robert Crawford (Scottish poet), is based on the proverb. The poem "Tale of a Nail", by the Polish poet Zbigniew Herbert, starts with the line "For lack of a nail the kingdom fell". The children's poem "The Nail and the Horseshoe (Гвоздь и Подкова)", by the Russian writer Samuil Marshak, retells the proverb in a slight variation where the enemy captured a city because a blacksmith shop did not have a nail in stock. The flow of the poem is very similar to that of its English equivalent. William Golding quotes the whole poem at the end of chapter 9 of his novel The Spire. There, the nail referred to is one of the Nails of the Holy Cross. That relic, when it is embedded at the base of the cross which had to be erected on top of the spire under construction next to Salisbury Cathedral, was thought to ensure stability to all of the daring building and to defeat the evil forces that rage against it, which are symbolized by the howling wind. The proverb is told to Katy Carr by her father in the novel What Katy Did, by Susan Coolidge. Katys is angry about getting into trouble after being late to school because she had not bothered to sew a string onto her bonnet. In his short story "Anxiety Is the Dizziness of Freedom" in Exhalation: Stories, Ted Chiang cites the proverb. The story is set in a world in which people can see alternate timelines through the use of prisms. It is given in the book The Fallacy Detective as a potential example of a slippery-slope logical fallacy. In writer Michael Flynn's 1990 Science Fiction novel, IN THE COUNTRY OF THE BLIND, the proverb is mentioned when discussing an unusual list of names & historical events found when remodeling an old building. In response the term "horseshoe nails" is coined to refer to historical events or actions with disproportionately large impacts. The Horeshoe Nail concept is a key part of the novel's plot and is mentioned several times. Musical Todd Rundgren's song "The Want of a Nail" from his album Nearly Human uses the rhyme as a metaphor for a man who has lived his entire life without love, and how, if you "multiply it a billion times" and "spread it all over the world," things fall apart. A cover of Rundgren's version is also used in the 2003 film Camp as the cast is introduced at the end of the film. Aesop Rock's song "No City" from his album None Shall Pass samples a voice reading the proverb, setting the tone for the idiosyncratic rap. Tom Waits's song "Misery Is the River of the World" from his album Blood Money includes the line "for want of a nail, a shoe was lost" as well as several other variations on the theme. The proverb was set to music on Bing Crosby's 1958 children's album, Jack B. Nimble. Israeli songwriter Naomi Shemer wrote a translated version of the song called "HaKol Biglal Masmer" (All Because of a Nail). 
Newsboys song "It's All Who You Know" from the album Take Me to Your Leader is based on variations of the theme Cinema and television The title of the season two episode of M*A*S*H, "For Want of a Boot", is adapted from the proverb. The episode's concept itself is also based on the proverb, with the character of Hawkeye going through a convoluted process involving several camp personnel, in order to get a new boot. In the movie The Fast and the Furious: Tokyo Drift, the proverb was used by Kamata (Sonny Chiba) to explain to his nephew the result of a small detail being overlooked. In the movie Father Goose, Frank Houghton (Trevor Howard) in his first scene of the movie, while talking to an Admiral on the telephone, uses part of the proverb by saying "For want of a nail, the war was..." in reference to finding an additional coastal plane spotter. In the episode of USA's Monk, "Mr. Monk at Your Service", Monk quotes the proverb after being challenged by an employee that suggest a fork being a centimeter off center wasn't a problem. Monk: "For the want of a nail, the kingdom was lost." In the 1982 movie The Verdict, Ed Concannon (James Mason) uses the proverb, "for want of a shoe the horse was lost" to his disciples to describe what the case has become after Frank Galvin turned down the settlement. The entire proverbial rhyme is recited by the character Abraham Farlan in the 1946 motion picture A Matter of Life and Death. Here it was used to describe the chain of circumstances which formed the life of the main character, Peter Carter. In season two, episode three of the television show Sliders, while trying to repair the timer device in a world crippled by 'anti-technology' Professor Arturo exclaims, "For want of a shoe the war was lost." In the 50th episode of Dead or Alive, Man On Horseback, Josh Randall, Steve McQueen's character, uses the proverb "For the want of a nail, they lost the shoe. For the want of a shoe, they lost the horse. For the want of a horse, they lost the rider" to justify the reason why he is taking with him four extra horseshoes. In the 1967 Mannix episode 'Turn Every Stone,' Joe Mannix alludes to the saying at the end when he says, "It's the old horseshoe-nail bit again. For want of $10,000, a million was lost." In the 1954 movie The Caine Mutiny, Captain Queeg (Bogart) refers to the proverb during the following conversation with Ensign Keith after he reprimanded him for failing to enforce the untucked shirt-tails rule. "I know a man's shirt's a petty detail, but big things are made up of details. Don't forget, 'For want of a nail, a horseshoe was lost and then the whole battle.' A captain's job is a lonely one. He's easily misunderstood. Forget that I bawled you out." In June 2021, on his show “Tenebrozo”, Mexican clown-politic analyst “Brozo”, used the proverb to describe the political climate in Mexico, regarding a fatal metro accident in Mexico City: “For want of a bolt, a concret tablet was lost, for the want of a tablet, a lock is gone, for the want of a lock, a convoy is gone, for the want of a convoy, a candidate for president is gone”. Video games In the 1996 computer game Star Trek: Borg "Q" quips the line "For want of a horseshoe nail" to the player during a dialog sequence. The 2016 video game Tom Clancy's The Division contains a reference to the proverb in one of antagonist Aaron Keener's audio logs. 
See also Alliteration Broken windows theory Butterfly effect Camel's nose Cascading failure Causality Chaos theory Domino effect Folklore Parallelism Proverb Remoteness in English law Rhyme Slippery slope Bibliography Benjamin Franklin, Poor Richards Almanack, June 1758, The Complete Poor Richards Almanacks, facsimile ed., vol. 2, pp. 375, 377 G. Herbert, Outlandish Proverbs, c. 1640, no. 499 Oxford Dictionary of Nursery Rhymes, ed. Iona and Peter Opie, Oxford 1951, pg 324 References External links Famous Quotes UK (Retrieved 14-Feb-2008) "For want of a nail" at Everything2.com (Retrieved 14-Feb-2008) The Lorenz Butterfly (Retrieved 14-Feb-2008) JSTOR:For Want of a Nail, E. J. Lowe, Analysis, Vol. 40, No. 1 (Jan., 1980), pp. 50–52 (Retrieved 14-Feb-2008) James S. Robbins on 9/11 Commission published 9 April 2004 by National Review Online "For want of a nail:Lady Condoleezza on the battle of the Saracens." (Retrieved 14-Feb-2008) Benjamin Franklin (1706–1790), U.S. statesman, writer. Poor Richard’s Almanac, preface (1758). (Retrieved 14-Feb-2008) Poems Oral tradition Cultural anthropology Chaos theory Causality |
No, this text is not related with defense topics | Retailtainment is retail marketing as entertainment. In his book, Enchanting a Disenchanted World: Revolutionizing the Means of Consumption (1999), author George Ritzer describes "retailtainment" as the "use of ambience, emotion, sound and activity to get customers interested in the merchandise and in a mood to buy." Sometimes called "inspirational retailing" or "entertailing," it has also been defined as "the modern trend of combining shopping and entertainment opportunities as an anchor for customers." In 2001, Codeluppi described it as a way for marketers to "offer the consumer physical and emotional sensations during the shopping experience." And, in an article entitled "Using sonic branding in the retail environment" in the 2003 issue of the Journal of Consumer Behaviour, Fulberg described it as a way for retailers to entertain the consumer with a dramatization of their values." According to Michael Morrison at the Australian Centre for Retail Studies: “There is a move towards the concept of 'retailtainment.' This phenomenon, which brings together retailing, entertainment, music and leisure ... Retailers need to look further than the traditional retail store elements such as colour, lighting and visual merchandising to influence buying decisions. The specific atmosphere the retailer creates can, in some cases, be more influential in the decision-making process than the product itself. As goods and services become more of a commodity, it is what a shopper experiences and what atmosphere retailers create that really matters. Brand building is a combination of physical, functional, operational and psychological elements. Consumers will be willing to pay more for a brand if there is a perceived or actual added value from their experience of using the product or service.” Shopper marketing expert Simon Temperley of Los Angeles agency The Marketing Arm, formerly U.S. Marketing & Promotions (Usmp), describes "retailtainment" as a "live brand experience" that frequently includes the use of "brand ambassadors" who "converse with the consumer." References Neologisms Marketing Entertainment |
No, this text is not related with defense topics | The administration of territory in dynastic China is the history of practices involved in governing the land from the Qin dynasty (221–206 BC) to the Qing dynasty (1636–1912). Administrative divisions in imperial China Qin dynasty (221–206 BC) After the state of Qin conquered China in 221 BC, the "First Emperor of Qin", Qin Shi Huang, divided the Qin dynasty into 36, and then ultimately, 40 commanderies, which were divided into counties, which were further divided into townships (xiang). The imperial capital was excluded from the normal administrative units and was administered by a Chamberlain (neishi). Administrative control of a commandery was divided between a Governor (shou), who handled general administration, and a Defender (wei), who supervised military garrisons. Counties were administered by a Magistrate (ling). Control of a township was divided between an Elder (sanlao), the moral authority, a Husbander (sefu), who handled fiscal affairs, and a Patroller (youjiao), who kept the local peace. Below townships were even smaller divisions of a thousand households, which constituted a neighborhood (ting), and a hundred households, which constituted a village (li). There was no formal system of recruitment for personnel during Qin times. All appointments down to the county level were based on recommendation and decided by the Grand Chancellor and emperor. Tenures were indefinite. Officials could obtain titles graded from 20 to 1 for meritorious service, but such titles were not hereditary, and did not confer a fief to the holder. Han dynasty (202 BC–220 AD) The founder of the Han dynasty, Emperor Gaozu of Han (r. 28 February 202 – 1 June 195 BC), separated the dynasty's territory between the western half directly controlled by the imperial capital, and the eastern half, ruled by Kings of the Han dynasty. In the areas controlled by the central government, regional hierarchy followed the Qin model of commandery and county. The eastern nobility ruled kingdoms (wangguo) or marquisates (houguo) that were largely autonomous until 154 BC when a series of imperial actions gradually brought them under central control. By the end of the millennium they differed from commanderies and counties only in name and were controlled by a Counselor-delegate (guoxiang) appointed by the central government. Until 106 BC, the central government supervised the commanderies through touring Censors, but in that year, Emperor Wu of Han formally divided the commanderies into 13 provinces. These provinces were governed by Regional Inspectors (cishi) or Regional Governors (zhoumu). Regional Inspectors and Governors were not allowed to serve in their native commandery. After 104 BC, the imperial capital was governed by the Three Guardians (sanfu): Metropolitan Governor (jingzhaoyin), Guardian of the Left (zuopingyi), and Guardian of the Right (youpingyi). After 89 BC, these three positions were subordinated by the Military Commandant (sili xiaowei) who reported directly to the emperor. Recruitment Han officialdom was ruled by an aristocracy down to the county level. Candidates for offices recommended by the provinces were examined by the Ministry of Rites and then presented to the emperor. Some candidates for clerical positions would be given a test to determine whether they could memorize nine thousand Chinese characters. The tests administered during the Han dynasty did not offer formal entry into government posts. 
Recruitment and appointment in the Han dynasty were primarily through recommendations by aristocrats and local officials. Recommended individuals were also primarily aristocrats. In theory, recommendations were based on a combination of reputation and ability but it's not certain how well this worked in practice. Oral examinations on policy issues were sometimes conducted personally by the emperor himself during Western Han times. Although executive officials were appointed by the central government, they were allowed to freely appoint their own sons and favored friends. An appointed official first served one year in probationary status and then obtained indefinite tenure with three year intervals, at which point they were assessed by their supervisors for promotion, demotion, or dismissal. During the reign of Emperor Wu (r. 9 March 141 BC – 29 March 87 BC), every commandery and kingdom was called on to nominate one or two men for appointment each year. Later the number of nominations was fixed to one per 200,000 people. From 165 BC onward, nominees were given written examinations to confirm their literacy and learning. In 124 BC, Emperor Wu established the Taixue with a faculty of five Erudites (boshi) and student body of 50, recommended by Commandery Governors, that grew to 3,000 by the end of the millennium. Students studied the classics at the Taixue for one year and then sat a written graduation exam, after which they were either appointed or returned home to seek positions on the commandery staff. Officials were paid on a monthly basis in both grain and coin corresponding to their rank. The number of graduates who went on to hold office were few. The examinations did not offer a formal route to commissioned office and the primary path to office remained through recommendations. Sui dynasty (581–618) Sui dynasty administrative divisions were initially the same as the Han, but in 586, Emperor Wen of Sui abolished commanderies and left provinces in direct control of counties. In 605, Emperor Yang of Sui revived the commandery. In the early years of the Sui dynasty, Area Commanders-in-chief (zongguan) ruled as semi-autonomous warlords, but they were gradually replaced with Branch Departments of State Affairs (xing taisheng). In 587, the Sui dynasty mandated every province to nominate three "cultivated talents" (xiucai) per year for appointment. In 599, all capital officials of rank five and above were required to make nominations for appointment in several categories. Imperial examinations Examination categories for "classicists" (mingjing ke) and "cultivated talents" (xiucai ke) were introduced. Classicists were tested on the Confucian canon, which was considered an easy task at the time, so those who passed were awarded posts in the lower rungs of officialdom. Cultivated talents were tested on matters of statecraft as well as the Confucian canon. In AD 607, Emperor Yang established a new category of examinations for the "presented scholar" (jinshi ke ). These three categories of examination were the origins of the imperial examination system that would last until 1905. Consequently, the year 607 is also considered by many to be the real beginning of the imperial examination system. The Sui dynasty was itself short lived however and the system was not developed further until much later. The imperial examinations did not significantly shift recruitment selection in practice during the Sui dynasty. Schools at the capital still produced students for appointment. 
Inheritance of official status was also still practiced. Men of the merchant and artisan classes were still barred from officialdom. However the reign of Emperor Wen (r. 4 March 581 – 13 August 604) did see much greater expansion of government authority over officials. Under Emperor Wen, all officials down to the county level had to be appointed by the Department of State Affairs in the capital and were subjected to annual merit rating evaluations. Regional Inspectors and County Magistrates had to be transferred every three years and their subordinates every four years. They were not allowed to bring their parents or adult children with them upon reassignment of territorial administration. The Sui did not establish any hereditary kingdoms or marquisates of the Han sort. To compensate, nobles were given substantial stipends and staff. Aristocratic officials were ranked based on their pedigree with distinctions such as "high expectations", "pure", and "impure" so that they could be awarded offices appropriately. Tang dynasty (618–907) The Tang dynasty was divided into circuits, which were divided into prefectures, which were further divided into counties. There were three Superior Prefectures known as Jingzhao, in the Chang'an area, Henan, in the Luoyang area, and Taiyuan, in modern Shanxi Province. Each Superior Prefecture was nominally administered by an Imperial Prince but usually another official was actually in charge. A normal prefecture was administered by a Prefect. Sometimes a prefecture could be designated an Area Command (dudu fu) under an Area Commander (dudu) and a few prefectures could be grouped together into a Superior Area Command (da dudu fu) under a Commander-in-chief (da dudu). Area Commanders were later replaced by Defense Commands (zhen) under Military Commissioners (jiedushi). The circuit was assigned a Surveillance Commissioner (ancha shi), who functioned as an overall coordinator rather than a governor, and visited prefectures and checked up on the performance of officials. After the An Lushan Rebellion (16 December 755 – 17 February 763), the role of the Surveillance Commissioner shifted to that of a more direct civil governor while many Military Commissioners became autonomous warlords in all but name. Sometimes borderlands were designated a Protectorate (duhu fu) under a Protector (duhu). Expansion of the imperial examinations During the Tang dynasty, candidates were either recommended by their schools or had to register for exams at their home prefecture. In 693, Wu Zetian expanded the examination system by allowing commoners and gentry previously disqualified by their non-elite backgrounds to take the tests. Six categories of regular civil-service examinations were organized by the Department of State Affairs and held by the Ministry of Rites: cultivated talents, classicists, presented scholars, legal experts, writing experts, and arithmetic experts. Emperor Xuanzong of Tang also added categories for Daoism and apprentices. The hardest of these examination categories, the presented scholar jinshi degree, became more prominent over time until it superseded all other examinations. By the late Tang the jinshi degree became a prerequisite for appointment into higher offices. Appointments by recommendation were also required to take examinations. However candidates who passed the exams were not automatically granted office. They still had to pass a quality evaluation by the Ministry of Rites, after which they were allowed to wear official robes. 
Successful candidates reported to the Ministry of Personnel for placement examinations. Unassigned officials and honorary title holders were expected to take placement examinations at regular intervals. Non-assigned status could last a very long time especially when waiting for a substantive appointment. After being assigned to office, a junior official was given an annual merit rating. There was no specified term limit, but most junior officials served for at least three years or more in one post. Senior officials served indefinitely at the pleasure of the emperor. The Tang emperors placed the palace exam graduates, the jinshi, in important government posts, where they came into conflict with hereditary elites. During the reign of Emperor Xuanzong of Tang (713-56), about a third of the Grand Chancellors appointed were jinshi, but by the time of Emperor Xianzong of Tang (806-21), three fifths of the Grand Chancellors appointed were jinshi. This change in the way government was organized dealt a real blow to the aristocrats, but they did not sit idly by and wait to become obsolete. Instead they themselves entered the examinations to gain the privileges associated with it. By the end of the dynasty, the aristocratic class had produced 116 jinshi, so that they remained a significant influence in the government. Hereditary privileges were also not completely done away with. The sons of high ministers and great generals had the right to hold minor offices without taking the examinations. In addition, the number of graduates were not only small, but also formed their own clique in the government based around the examiners and the men they passed. In effect the graduates became another interest group the emperor had to contend with. Liao dynasty (916–1125) The Khitan-led Liao dynasty was divided between a nomadic tribal Northern Administration and a sedentary Chinese Southern Establishment. They were each headed by a Prime Minister, the northern one appointed by the Xiao consort clan, and the southern one appointed by the ruling Yelü clan. The Southern Establishments were divided into five "circuits", each with a capital city. Each circuit except for the one dominated by the Supreme Capital (shangjing) was ruled by a Regent (liushou). Under the Regent were Governors (yin''') of prefectures who ruled below them Magistrates of counties. Under the Northern Administration, Khitans were organized around an ordo, the moving camp of a chief. Throughout the duration of the Liao dynasty, the number of ordos fluctuated between 10 and 44. The tribal vassals of the Liao were organized into territories known as routes (lu) headed by a tribal chief. Imperial examinations were only held for the Southern Establishments until the last decade of the dynasty when Khitans found it an acceptable avenue for advancing their careers. The examinations focused primarily on lyric-meter poetry and rhapsodies. Recruitment through examinations was irregular and all offices of note were hereditary in nature and held by Khitans. Song dynasty (960–1279) The Song dynasty kept the circuit, prefecture, county hierarchy. The Military Prefecture was called an "army" (jun) and a handful of prefectures containing mines and salterns were designated Industrial Prefectures (jian). The prefectures were nominally administered by a Prefect, but in practice the central government appointed another Manager of the Affairs to administer groups of prefectures. Actions by Prefects also had to be signed off by a prefectural supervisor. 
Like the Tang before them, the Song used circuits not as provincial governorships but regions for Commissioners to coordinate government activity. Four Commissions were assigned to every circuit, each tasked with a different administrative activity: military, fiscal, judicial, and supply. Scholar bureaucracy The imperial examinations became the primary method of recruitment for official posts. More than a hundred palace examinations were held during the dynasty, resulting in a greater number of jinshi degrees rewarded. The examinations were opened to adult Chinese males, with some restrictions, including even individuals from the occupied northern territories of the Liao and Jin dynasties. Many individuals of low social status were able to rise to political prominence through success in the imperial examination. The process of studying for the examination tended to be time-consuming and costly, requiring time to spare and tutors. Most of the candidates came from the numerically small but relatively wealthy land-owning scholar-official class. Successful candidates were appointed to office almost immediately and waiting periods between appointments were not long. Annual merit ratings were taken and officials could request evaluation for reassignment. Officials who wished to escape harsh assignments often requested reassignment as a state supervisor of a Taoist temple or monastery. Senior officials in the capital also sometimes nominated themselves for the position of Prefect in obscure prefectures. Hereditary prefectures When Emperor Taizu of Song expanded southwest he encountered four powerful families: the Yang of Bozhou, the Song of Manzhou, the Tian of Sizhou, and the Long of Nanning. Long Yanyao, patriarch of the Long family, submitted to Song rule in 967 with the guarantee that he could rule Nanning as his personal property, to be passed down through his family without Song interference. In return the Long family was required to present tribute to the Song court. The other families were also offered the same conditions, which they accepted. Although they were included among the official prefectures of the Song dynasty, in practice, these families and their estates constituted independent hereditary kingdoms within the Song realm. In 975, Emperor Taizong of Song ordered Song Jingyang and Long Hantang to attack the Mu'ege kingdom and drive them back across the Yachi River. Whatever territory they seized they were allowed to keep. After a year of fighting, they succeeded in the endeavor. Jin dynasty (1115–1234) The Jurchen-led Jin dynasty was divided into 19 routes, five of which were governed from capitals under the control of Regents. The 14 routes not controlled by capitals were under the administration of Area Commands (zongguanfu). Under the routes were prefectures. The Jurchens adopted a more Chinese administration than the Khitans. They instituted an examination system in 1123 and adopted the triennial examination cycle in 1129. Two separate examinations were held to accommodate their former Liao and Song subjects. In the north, examinations focused on lyric-meter poetry and rhapsodies while in the south, Confucian Classics were tested. During the reign of Emperor Xizong of Jin (r. 1135–1150), the contents of both examinations were unified and examinees were tested on both genres. Emperor Zhangzong of Jin (r. 1189–1208) abolished the prefectural examinations. Emperor Shizong of Jin (r. 
1161–1189) created the first examination conducted in the Jurchen language, with a focus on political writings and poetry. Graduates of the Jurchen examination were called "treatise graduates" (celun jinshi) to distinguish them from the regular Chinese jinshi. Posts were regularly filled by examination graduates and it was not uncommon for one in three candidates to pass. An average of 200 Metropolitan Graduate Degrees were handed out per year. Although Chinese subjects were able to obtain offices through the examinations, a regional quota assured that northerners (principally Jurchens) passed more consistently and were more quickly promoted upon obtaining office. Often Jurchen examinees had to demonstrate little more than literacy to pass. Chinese officials also faced discrimination, at times physical, while Jurchens retained all final decision making powers within the Jin government. Yuan dynasty (1271–1368) Under the Mongol-led Yuan dynasty, the largest administrative division was the province, also known as a Branch Secretariat (xing zhongshu sheng). A province was governed by two Managers of Governmental Affairs (pingchang zhengshi). Occasionally a Grand Chancellor (chengxiang) was put in charge of an entire province. It's questionable how much authority the central government had over the provinces as they were essentially the administrative bases of Mongol nobles. Between the provinces and the central government were two agencies: the Branch Bureau of Military Affairs (xing shumi yuan) and the Branch Censorate (xing yushi tai). The Military Branch handled military affairs and had jurisdiction over vaguely defined territories known as Regions (chu). There were three Branch Censorates that handled overseeing the provincial affairs of the Yuan dynasty. Below the provinces were circuits with agencies headed by Commissioners who coordinated matters between the provincial level authorities and lower-level routes, prefectures, and districts. The route was governed by an Overseer and a Commander. Below routes were prefectures headed by an Overseer and a Prefect. At the lowest level, below the prefectures, were counties headed by an Overseer and a Magistrate. The capital Khanbaliq was governed by the Dadu Route under the administration of two Police Commissions, while the summer capital Shangdu was under another Police Commission. All residents of the Yuan dynasty were grouped into four categories: Mongols, Semu-ren, Han-ren, and Manzi. Semu-ren were subjects of the Yuan dynasty west of China, Han-ren were the former subjects of the Jin dynasty, and the Manzi were all former subjects of the Song dynasty. All important government positions were held by Mongols and Semu-ren with some minor offices held by Han-ren, while Manzi were relegated to local offices in their own area. Mongol Overseers were assigned to every office down to the county level. Imperial examinations were ceased for a time with the defeat of the Song in 1279 by Kublai Khan. One of Kublai's main advisers, Liu Bingzhong, recommended restoring the examination system, however Kublai distrusted the examinations and did not heed his advice. Kublai believed that Confucian learning was not needed for government posts and was opposed to such a commitment to the Chinese language and to the Chinese scholars who were so adept at it, as well as its accompanying ideology. He wished to appoint his own people without relying on an apparatus inherited from a newly conquered and sometimes rebellious country.Wendy, Frey. 
History Alive!: The Medieval World and beyond. Palo Alto, CA: Teacher's Curriculum Institute, 2005. The examination system was revived in 1315 with significant changes during the reign of Ayurbarwada Buyantu Khan. The new examination system organized its examinees into regional categories in a way which favored Mongols and severely disadvantaged the Manzi. A quota system both for number of candidates and degrees awarded was instituted based on the classification of the four groups, those being the Mongols, (Semu-ren), Han-ren, and Manzi, with further restrictions by province favoring the northeast of the empire (Mongolia) and its vicinities. A quota of 300 persons was fixed for provincial examinations with 75 persons from each group. The metropolitan exam had a quota of 100 persons with 25 persons from each group. Candidates were enrolled on two lists with the Mongols and Semu-ren located on the left and the Han-ren and Manzi on the right. Examinations were written in Chinese and based on Confucian and Neo-Confucian texts but the Mongols and Semu-ren received easier questions to answer than the Chinese. Successful candidates were awarded one of three ranks. All graduates were eligible for official appointment. Under the revised system the yearly averages for examination degrees awarded was about 21. The way in which the four regional categories were divided tended to favor the Mongols, Semu-ren, and Han-ren, despite the Manzi being by far the largest portion of the population. The 1290 census figures record some 12,000,000 households (about 48% of the total Yuan population) for South China, versus 2,000,000 North Chinese households, and the populations of Mongols and Semu-ren were both less. While South China was technically allotted 75 candidates for each provincial exam, only 28 Han Chinese from South China were included among the 300 candidates, the rest of the South China slots (47) being occupied by resident Mongols or Semu-ren, although 47 "racial South Chinese" who were not residents of South China were approved as candidates. Recruitment by examination during the Yuan dynasty constituted a very minor part of the Yuan administration. Hereditary Mongol nobility formed the elite nucleus of the government. Initially the Mongols drew administrators from their subjects but in 1261, attempts were made by Kublai to increase Mongol personnel by ordering the establishment of Mongolian schools to draw officials from. The School for the Sons of the State was established in 1271 to give two or three years of training for the sons of Imperial Bodyguards so that they might become suitable for official recruitment. Officials serving in the capital were nominally supposed to receive merit ratings every 30 months, for demotion or promotion, but in practice government posts were inherited from father to son. Tusi Southwestern tribal chieftainships were organized under the tusi system. The tusi system was inspired by the Jimi system () implemented in regions of ethnic minorities groups during the Tang dynasty. It was established as a specific political term during the Yuan dynasty and was used as a political institution to administer newly acquired territories following their conquest of the Dali Kingdom in 1253. Members of the former Duan imperial clan were appointed as governors-general with nominal authority using the title "Dali chief steward" (, p Dàlǐ Zǒngguǎn), and local leaders were co-opted under a variety of titles as administrators of the region. 
Some credit the Turkoman governor Sayyid Ajjal Shams al-Din Omar with introducing the system into China. Duan Xingzhi, the last king of Dali, was appointed as the first local ruler, and he accepted the stationing of a pacification commissioner there. Duan Xingzhi offered the Yuan maps of Yunnan and led a considerable army to serve as guides for the Yuan army. By the end of 1256, Yunnan was considered to have been pacified. Under the Yuan dynasty, the native officials, or tusi, were the clients of a patron-client relationship. The patron, the Yuan emperors, exercised jurisdictional control over the client, but not his/her territory itself. The tusi chieftains in Yunnan, Guizhou and Sichuan who submitted to Yuan rule and were allowed to keep their titles. The Han Chinese Yang family ruling the Chiefdom of Bozhou which was recognized by the Song and Tang dynasties also received recognition by the subsequent Yuan and Ming dynasties. The Luo clan in Shuixi led by Ahua were recognized by the Yuan emperors, as they were by the Song emperors when led by Pugui and Tang emperors when led by Apei. They descended from the Shu Han era king Huoji who helped Zhuge Liang against Meng Huo. They were also recognized by the Ming dynasty. Ming dynasty (1368–1644) The lowest administrative unit during the Ming dynasty was the county which was supervised by a prefecture through a subprefecture. Prefectures were organized into provinces and administered by three cooperating agencies: the Provincial Administration Commission (chengxuan buzheng shisi), the Provincial Surveillance Commission (tixing ancha shisi), and the Regional Military Commission (du zhihui shisi). The three agencies were directed by a Grand Coordinator and Supreme Commander. The post of Grand Coordinator was indefinite and could last as long as 10 or even 20 years. A Supreme Commander handled military affairs. Neither posts were governorships and were considered special-purpose representatives of the government. The Provincial Administration Commission was in general charge of all civil matters, especially fiscal matters. The Provincial Administration kept three to eight branch offices in each province. Each branch office was headed by an Intendant (daotai) to exercise administrative authority. Each province also had a Tax Intendant (duliang dao). The Provincial Surveillance Commission was headed by a single Surveillance Commissioner, under whom were various vice and assistant commissioners who held censorial and judicial powers. Regional Military Commissioners were responsible for military garrisons in the provinces. Executive officials of the Three Provincial Commissions were collectively known as Regional Overseers. The purpose of this tripartite administration of provinces was so that no one had supreme power in one region. Recruitment by examination flourished after 1384 in the Ming dynasty. Provincial graduates were sometimes appointed to low-ranking offices or entered the Guozijian for further training, after which they might be considered for better appointments. Before appointment to office, metropolitan graduates were assigned to observe the functions of an office for up to one year. The maximum tenure for an office was nine years, but triennial evaluations were also taken, at which point an official could be reassigned. Magistrates of counties submitted monthly evaluation reports to their prefects and the prefects submitted annual evaluations to provincial authorities. 
Every third year, provincial authorities submitted evaluations to the central government, at which point an "outer evaluation" was conducted, requiring local administration to send representatives to attend a grand audience at the capital. Officials at the capital conducted an evaluation every six years. Capital officials of rank 4 and above were exempted from regular evaluations. Irregular evaluations were conducted by censorial officials. Gaitu guiliu The Ming dynasty continued the Yuan tusi chiefdom system. The Ming tusi were categorized into civil and military ranks. The civilian tusi were given the titles of Tu Zhifu ("native prefecture"), Tu Zhizhou ("native department") and Tu Zhixian ("native county") according to the size and population of their domains. Nominally, they had the same rank as their counterparts in the regular administration system The central government gave more autonomy to those military tusi who controlled areas with fewer Han Chinese people and had underdeveloped infrastructure. They pledged loyalty to the Ming emperor but had almost unfettered power within their domains. All the native chieftains were nominally subordinate to Pacification Commissioners (Xuanfushi, Xuanweishi, Anfushi). The Pacification Commissioners were also native chieftains who received their title from the Ming court. As a way of checking their power, Pacification Commissioners were put under the supervision of the Ministry of War. Throughout its 276 year history, the Ming dynasty bestowed a total of 1608 tusi titles, 960 of which were military-rank and 648 were civilian-rank, the majority of which were in Yunnan, Guizhou and Sichuan. In Tibet, Qinghai and Sichuan, the Ming court sometimes gave both tusi titles and religious titles to leaders. As a result, those tusi had double identities. They played both the role of political leaders and religious leaders within their domains. For example, during the reign of the Yongle Emperor, the leader of the Jinchuan monastery assisted the Ming army in a battle against the Mongols. The leader was later given the title Yanhua Chanshi (演化禅师), or "Evolved Chan Master", and the power to rule 15 villages as his domain as a reward. Under Ming administration, the jurisdictional authority of tusi began to be replaced with state territorial authority. The tusi acted as stop gaps until enough Chinese settlers arrived for a "tipping point" to be reached, and they were then converted into official prefectures and counties to be fully annexed into the central bureaucratic system of the Ming dynasty. This process was known as gaitu guiliu (), or "turning native rule into regular administration". The most notable example of this was the consolidation of southwestern tusi chiefdoms into the province of Guizhou in 1413. In sum, gaitu guiliu was the process of replacing tusi with state-appointed officials, the transition from jurisdictional sovereignty to territorial sovereignty, and the start of formal empire rather than informal. Qing dynasty (1636–1912) The Qing dynasty kept the Ming province system and expanded it to 18 provinces by 1850. However unlike the Ming tripartite provincial administration, Qing provinces were governed by a single Governor (xunfu) who held substantial power. Although all provincial agencies communicated with the central government through him, he himself was subordinate to a Governors-general (zongdu). 
While nominally superior to a Governor, usually the Governors-general cooperated closely with the Governor and acted jointly in reporting to the central government. Governors and Governors-generals did not have to have a Manchu-Han Chinese balance, unlike in the central government. Subordinate to Governors were two kinds of agencies: Provincial Administration Commissions (chengxuan buzheng shisi) and Provincial Surveillance Commissions (tixing ancha shisi). The Provincial Administration Commissioner was a lieutenant-general who bore fiscal responsibilities. The Provincial Surveillance Commissioner was responsible for the administration of judicial and censorial matters. There was also an unofficial Provincial Education Commissioner (tidu xuezheng) in every province who supervised schools and certified candidates for the civil service examinations. Under the provincial administration were Circuit Intendants (daotai) who served as intermediaries between prefectures and provincial administration. Lifan Yuan Peripheral territories such as Mongolia, Xinjiang, and Tibet were supervised by the Lifan Yuan (Court of Colonial Affairs). The people living in these areas were generally able to keep their own way of life so long as they kept the peace and showed deference to the Qing emperor. Many of the Mongols were organized into Manchu-style banners or leagues and it was not until the 19th century that Mongolia was brought under tighter control under a Manchu general or Grand Minister Consultant (canzan dachen) and several Judicial Administrators (banshi siyuan). The people of Xinjiang were treated as tributary vassals and their leaders used Chinese titles. Tibet's religious leaders were relatively autonomous and treated as tributary princes until the 1720s when rebelliousness prompted the Qing government to place the area under the administration of two Grand Minister Residents (zhuzang dachen), who were supported by Qing military garrisons. Citations References Kracke, E. A., Jr. (1967 [1957]). "Region, Family, and Individual in the Chinese Examination System", in Chinese Thoughts & Institutions, John K. Fairbank, editor. Chicago and London: University of Chicago Press. Yu, Pauline (2002). "Chinese Poetry and Its Institutions", in Hsiang Lectures on Chinese Poetry, Volume 2'', Grace S. Fong, editor. (Montreal: Center for East Asian Research, McGill University). Chinese culture Civil services Confucian education Examinations Government recruitment History of Imperial China Public administration |
No, this text is not related with defense topics | During most of the 20th century photography depended mainly upon the photochemical technology of silver halide emulsions on glass plates or roll film. Early in the 21st century this technology was displaced by the electronic technology of digital cameras. The development of digital image sensors, microprocessors, memory cards, miniaturised devices and image editing software enabled these cameras to offer their users a much wider range of operating options than was possible with the older silver halide technology. This has led to a proliferation of new abbreviations, acronyms and initialisms. The commonest of these are listed below. Some are used in related fields of optics and electronics but many are specific to digital photography. Acronyms and initialisms that are not brand-specific Initialisms that are used mainly by specific brands References General references Blair, John G. The Glossary of Digital Photography. Rocky Nook, 2007, . Peres, Michael R. The Focal Encyclopedia of Photography, Fourth Edition. Focal, 2007, . Taylor, Phil. Digital Photographic Imaging Glossary. Trafford, 2006, . Glossary, issued by Nikon, explaining the Nikkor lens codes. Retrieved 2011-01-01. Photography |
No, this text is not related with defense topics | Holistic nursing is a way of treating and caring for the patient as a whole person, taking in physical condition, social environment, psychological state, and cultural and religious beliefs. Many theories support the importance of nurses approaching the patient holistically, and education in this area exists to support the goal of holistic nursing. A key skill in holistic nursing is communication, both with patients and with other practitioners. The emphasis is that patients are treated not only in body but also in mind and spirit. Holistic nursing is a nursing speciality concerning the integration of one's mind, body, and spirit with his or her environment. This speciality has a theoretical basis in a few grand nursing theories, most notably the science of unitary human beings, as published by Martha E. Rogers in An Introduction to the Theoretical Basis of Nursing, and the mid-range theory Empowered Holistic Nursing Education, as published by Dr. Katie Love. Holistic nursing has gained recognition by the American Nurses Association (ANA) as a nursing specialty with a defined scope of practice and standards. Holistic nursing focuses on the mind, body, and spirit working together as a whole and on how spiritual awareness in nursing can help heal illness. Holistic medicine focuses on maintaining optimum well-being and preventing rather than just treating disease. Core values The Holistic philosophy: theory and ethics Holistic nursing is based on the fundamental theories of nursing, such as the works of Florence Nightingale and Jean Watson, as well as alternative theories of world connectedness, wholeness, and healing. Holistic nurses respect the patient as the decision-maker throughout the continuum of care. The holistic nurse and patient relationship is based on a partnership in which the holistic nurse engages the patient in treatment options and healthcare choices. The holistic nurse seeks to establish a professional and ethical relationship with the patient in order to preserve the patient's sense of dignity, wholesomeness, and inner worth. Theories of Holistic Nursing The goal of holistic nursing follows from the definition of holistic: to treat the patient as a whole, not just physically. Various nursing theories have helped frame the importance of holistic nursing. These theories may differ in their views of holistic nursing care, but they share a common goal, which is to treat the patient as a whole in body and mind. One of these theories is the Intersystem Model, which explains that individuals are holistic beings, so that illness interacts with and is adapted to by the person as a whole, not just physically. Health can also hold a different value for each individual, ranging constantly from well-being to disease; for example, despite a chronic condition, a patient may be satisfied with a changed but healthy way of living. In holistic nursing, knowing the theory does not mean it will be implemented in real-life practice; many nurses are not able to apply the theory in real life. Holistic caring process Holistic nursing combines standard nursing interventions with various modalities that are focused on treating the patient in totality. Alternative therapies can include stress management, aroma therapy, and therapeutic touch. The combination of interventions allows the patient to heal in mind, body, and spirit by focusing on the patient's emotions, spirituality, and cultural identity as much as the illness.
The six steps of the holistic caring process occur simultaneously, including assessment, diagnosis, outcomes, therapeutic plan of care, implementation, and evaluation. The holistic assessment of the patient can include spiritual, transpersonal, and energy-field assessments in combination with the standard physical and emotional assessments. The therapeutic plan of care in holistic nursing includes a highly individualized and unique plan for each patient. Holistic nurses recognize that the plan of care will change based on the individual patient, and therefore embrace healing as a process that is always changing and adapting to the individual's personal healing journey. Therapies utilized by holistic nurses include stress management techniques and alternative or complementary practices such as reiki and guided imagery. These therapy modalities are focused on empowering individuals to reduce stress levels and elicit a relaxation response in order to promote healing and well-being. The caring for patients in holistic nursing may differ from other nursing care as some may lack in caring for the patient as a whole, which includes spiritually. In holistic nursing, taking care of the patient does not differ from other nursing, but is focused on mental and spiritual needs as well as physical health. In holistic nursing there should be a therapeutic trust between the patient and nurse, as caring holistically involves knowing the patient’s illness as whole. This can be only done by the patient who is the one to tell the nurse about the social, spiritual and internal illness that they are experiencing. Also as caring could be involved as assertive action, quiet support or even both which assist in understanding a person’s cultural differences, physical and social needs. Through this the nurse is able to give more holistic care to meet the social and spiritual needs of the patient. The attitude of nurse includes helping, sharing and nurturing. In holistic caring there is spiritual care where it needs an understanding of patient’s beliefs and religious views. This is the reason why there should be therapeutic trust between nurse and patient, as in order to understand and respect the patient’s religious beliefs the nurse has to get information from the patient directly which is hard to get when there is not therapeutic trust. There is no specific order or template for how to care holistically, but the principle of holistic caring is to include patient’s social and internal needs and not just focus on treating the physical illness. Holistic Communication Holistic nurses use intentional listening techniques ("Focus completely on the speaker") and unconditional positive regard to communicate with patients. The goal of using these communication techniques is to create authentic, compassionate, and therapeutic relationships with each patient. In holistic nursing having therapeutic trust with patient and nurse gives great advantage of achieving the goal of treating patients as a whole. Therapeutic trust can be developed by having conversation with the patient. In communication the sender can also become a receiver or vice versa which in holistic nursing the nurses are the receiver of patients concern and the pass the information on to the doctor and do the vice versa. As communication is vital element in nursing it is strongly recommended to nurses to understand what is needed and how to communicate with patients. 
Communicating with patients can help in the performances of nurses in holistic nursing as by communicating the nurses are able to understand the cultural, social values and psychological conditions. Through this the nurses are able to satisfy the needs of a patients and as well as protecting the nurse for doing their roles as a nurse. In holistic nursing non-verbal communication is also another skill that is taught to nurses which are expressed by gestures, facial expression, posture and creating physical barriers. In holistic nursing as all individuals are not all the same but their social and psychological illness should be treated it is up to the nurse on how they communicate in order to build a therapeutic trust. To achieve the goal of holistic nursing it is important to communicate with the patient properly and to this successfully between the nurse and patient is freakiness and honesty. Without these communicating skills the nurse would not be able to build therapeutic trust and is likely to fail the goal of holistic nursing. Building a Therapeutic Environment Holistic nursing focuses on creating not only a therapeutic relationship with patients but also on creating a therapeutic environment for patients. Several of the therapies included in holistic nursing rely on therapeutic environments to be successful and effective. A therapeutic environment empowers patients to connect with the holistic nurse and with themselves introspectively. Depending on the environment of where the patient is holistic approach may be different and knowing this will help nurses to achieve better in holistic nursing. For patients with illness, trauma and surgery increasing sleep will benefit in recovery, blood pressure, pain relief and emotional wellbeing. As in hospital there are many disturbances which can effect in patients’ quality of sleep and due to this the patients are lacking in aid for healing, recovery and emotional wellbeing. Nurse being able note or take care of patient’s sleep will determine how closely they are approaching to holistic nursing. Depending on disease some the treatment may differ and may need further check-ups or program for patients to do. For example, there are higher chance for women to get cardiovascular disease but there is less number of enrollment for cardiac rehabilitation program compared to men. This was due to the environment of hospital not being able to support females in completing the CR programs. Some examples are physicians are less likely refer CR programs to women and patient’s thought against safety of the program. In situation like this from the knowledge and education that was done from holistic nursing the nurses will be able to approach the patient as they can relate to what the patient is going thought which gives more comfort and safety to patients in doing the programs. Cultural Diversity Part of any type of nursing includes understanding the patient's comprehension level, ability to cope, social supports, and background or base knowledge. The nurse must use this information to effectively communicate with the patient and the patient's family, to build a trusting relationship, and to comprehensively educate the patient. The ability of a holistic nurse to build a therapeutic relationship with a patient is especially important. Holistic nurses ask themselves how they can culturally care for patients through holistic assessment because holistic nurses engage in ethical practices and the treatment of all aspects of the individual. 
Australia has many different cultures as they are many people who were born overseas and migrated to Australia, which we can experience many cultural diversities. Culture can be defined as how people create collective beliefs and shared practices in order to make sense of their lived experiences which how concepts of language, religion and ethnicity are built in the culture. As the meaning of holistic nursing to heal the person as a whole knowing their cultural identities or backgrounds will help to reach the goal (Mariano, 2007). Understanding peoples culture may help to approach treatment correctly to the patient as it provides knowledge to nurses how patient’s view of the concept of illness and disease are to their values and identity. As in holistic approach culture, beliefs and values are essential components to achieve the goal. People’s actions to promote, maintain and restore health and healing are mainly influenced by their culture which is why knowing other cultures will assist in holistic nursing. By developing knowledge, communication, assessment skills and practices for nurses it guides to provide better experiences to patients who have diverse beliefs, values, and behaviors that respects their social, cultural and linguistic needs. As for most patients and families their decision on having treatment against illness or disease are done from cultural beliefs. This means if the nurses are unable to understand and give information relating to what they believe in the patients will most likely reject the treatment and give hardship on holistic nursing. Holistic Education and Research Holistic registered nurses are responsible for learning the scope of practice established in Holistic Nursing: Scope and Standards of Practice(2007) and for incorporating every core value into daily practice. It is the holistic nurse's responsibility to become familiar with both conventional practices as well as alternative therapies and modalities. Through continuing education and research, the holistic nurse will remain updated on all treatment options for patients. Areas of research completed by holistic nurses includes: measurements of outcomes of holistic therapies, measurements of caring behaviors and spirituality, patient responsiveness to holistic care, and theory development in areas such as intentionality, empowerment, and several other topics. The goal of holistic nursing is treat the patient’s individual’s social, cognitive, emotional and physical problems as well as understanding their spiritual and cultural beliefs. Involving holistic nursing in the education will help future nurses to be more familiar in the terms holistic and how to approach the concept. In the education of holistic nursing all other nursing knowledge is included which once again developed through reflective practice. In holistic nursing the nurses are taught on the five core values in caring, critical thinking, holism, nursing role development and accountability. These values help the nurse to be able to focus on the health care on the clients, their families and the allied health practitioners who is also involved in patient care. Education in holistic nursing is continuous education program which will be ongoing even after graduation to improve in reaching the goal. Education on holistic nursing would be beneficial to nurses if this concept is introduced earlier as repetition of educating holistic nursing could also be the revision of it. 
There is different education on commutating skills and an example would be the non-verbal and verbal communication with patients. This is done to improve when would the right or wrong to use the communication skill and how powerful skills this could be. Holistic Nurse Self-Care Through the holistic nurse's integration of self-care, self-awareness, and self-healing practices, the holistic nurse is living the values that are taught to patients in practice. Holistic "nurses cannot facilitate healing unless they are in the process of healing themselves." In order to provide holistic nursing to patient it is also important for nurses to take care of themselves. There are various ways which the nurses can heal, assess and care for themselves such as self-assessment, meditation, yoga, good nutrition, energy therapies, support and lifelong learning. By nurses being able achieve balance and harmony in their lives it can assist to understand how to take care of patient holistically. In Florida Atlantic University there is a program that focus on all caring aspects and recognize how to take care of others as well as on how to start evaluation on their own mind, body and spirit. Also there is Travis’ Wellness Model which explores the idea of “self-care, wellness results from an ongoing process of self-awareness, exploring options, looking within, receiving from others (education), trying out new options (growth), and constantly re-evaluating the entire process. Self-awareness and education precede personal growth and wellness”. This model of concepts shows being able to understand own status of health can benefit to patients and reach the goal of holistic nursing. Certification National certification for holistic nursing is regulated by the American Holistic Nurses Certification Corporation (AHNCC). There are two levels of certification: one for nurses holding a bachelor's degree and one for nurses holding a master's degree. Accreditation through the AHNCC is approved by the American Nurses Credentialing Center (ANCC). Global initiatives United States American Holistic Nurses Association (AHNA): Mission Statement "The Mission of the American Holistic Nurses Association is to illuminate holism in nursing practice, community, advocacy, research and education." Canada Canadian Holistic Nurses Association (CHNA): Mission Statement "To support the practice of holistic nursing across Canada by: acting as a body of knowledge for its practitioners, by advocating with policy makers and provincial regulatory bodies and by educating Canadians on the benefits of complementary and integrative health care." Australia Australian Holistic Nurses Association (AHNA) "The Mission of the Australian Holistic Nurses Association (AHNA) is to illuminate holism in nursing practice, research, and education; act as a body of knowledge for its practitioners; advocate with policymakers and regulatory bodies; and educate Australians on the benefits of Complementary and Alternative Medicine (CAM) and integrative health care." See also Alternative medicine Alternative Therapies in Health and Medicine Journal of Holistic Nursing Nursing References Nursing specialties Alternative medicine |
No, this text is not related with defense topics | Cognitive skills, also called cognitive functions, cognitive abilities or cognitive capacities, are brain-based skills which are needed in acquisition of knowledge, manipulation of information and reasoning. They have more to do with the mechanisms of how people learn, remember, solve problems and pay attention, rather than with actual knowledge. Cognitive skills or functions encompass the domains of perception, attention, memory, learning, decision making, and language abilities. Specialisation of functions Cognitive science has provided theories of how the brain works, and these have been of great interest to researchers who work in the empirical fields of brain science. A fundamental question is whether cognitive functions, for example visual processing and language, are autonomous modules, or to what extent the functions depend on each other. Research evidence points towards a middle position, and it is now generally accepted that there is a degree of modularity in aspects of brain organisation. In other words, cognitive skills or functions are specialised, but they also overlap or interact with each other. Deductive reasoning, on the other hand, has been shown to be related to either visual or linguistic processing, depending on the task; although there are also aspects that differ from them. All in all, research evidence does not provide strong support for classical models of cognitive psychology. Cognitive functioning Cognitive functioning refers to a person's ability to process thoughts. It is defined as "the ability of an individual to perform the various mental activities most closely associated with learning and problem-solving. Examples include the verbal, spatial, psychomotor, and processing-speed ability." Cognition mainly refers to things like memory, speech, and the ability to learn new information. The brain is usually capable of learning new skills in the aforementioned areas, typically in early childhood, and of developing personal thoughts and beliefs about the world. Old age and disease may affect cognitive functioning, causing memory loss and trouble thinking of the right words while speaking or writing ("drawing a blank"). Multiple sclerosis (MS), for example, can eventually cause memory loss, an inability to grasp new concepts or information, and depleted verbal fluency. Humans generally have a high capacity for cognitive functioning once born, so almost every person is capable of learning or remembering. Intelligence is tested with IQ tests and others, although these have issues with accuracy and completeness. In such tests, patients may be asked a series of questions, or to perform tasks, with each measuring a cognitive skill, such as level of consciousness, memory, awareness, problem-solving, motor skills, analytical abilities, or other similar concepts. Early childhood is when the brain is most malleable to orientate to tasks that are relevant in the person's environment. See also Adaptive behavior Adaptive functioning Intelligence Quotient (IQ) Cognition Cognitive Abilities Test Jungian cognitive functions Further reading Cognitive Functioning at Edublox References NCME - Glossary of Important Assessment and Measurement Terms [cognitive ability] Cognition |
No, this text is not related with defense topics | Enhanced biological phosphorus removal (EBPR) is a sewage treatment configuration applied to activated sludge systems for the removal of phosphate. The common element in EBPR implementations is the presence of an anaerobic tank (in which nitrate and oxygen are absent) prior to the aeration tank. Under these conditions a group of heterotrophic bacteria, called polyphosphate-accumulating organisms (PAOs), is selectively enriched in the bacterial community within the activated sludge. In the subsequent aerobic phase, these bacteria can accumulate large quantities of polyphosphate within their cells, and the removal of phosphorus is said to be enhanced. Generally speaking, all bacteria contain a fraction (1–2%) of phosphorus in their biomass due to its presence in cellular components, such as membrane phospholipids and DNA. Therefore, as bacteria in a wastewater treatment plant consume nutrients in the wastewater, they grow and phosphorus is incorporated into the bacterial biomass. When PAOs grow, they not only consume phosphorus for cellular components but also accumulate large quantities of polyphosphate within their cells, so the phosphorus fraction of phosphorus-accumulating biomass is 5–7%. In mixed bacterial cultures the phosphorus content reaches at most 3–4% of the total organic mass. If additional chemical precipitation takes place, for example to reach discharge limits, the phosphorus content can be higher, but that increase is not an effect of EBPR. This biomass is then separated from the treated (purified) water at the end of the process, and the phosphorus is thus removed. If PAOs are selectively enriched by the EBPR configuration, considerably more phosphorus is therefore removed, compared to the relatively poor phosphorus removal in conventional activated sludge systems. See also List of waste-water treatment technologies References Further reading External links Handbook Biological Waste Water Treatment - Principles, Configuration and Model EPBR Metagenomics: The Solution to Pollution is Biotechnological Revolution - A Review from the Science Creative Quarterly Website of the Technische Universität Darmstadt and the CEEP about Phosphorus Recovery Biotechnology Waste treatment technology |
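To put the phosphorus fractions quoted above into rough numbers, a minimal illustrative sketch in Python follows. The daily sludge production figure, the function name phosphorus_removed, and the particular fraction values chosen from within the quoted ranges are assumptions made for illustration, not plant design data.

```python
# Illustrative comparison of phosphorus removal via wasted sludge,
# using the biomass P fractions quoted in the text above. The daily
# sludge production figure is an assumed example value.

SLUDGE_PRODUCTION_KG_PER_DAY = 1000.0  # assumed dry solids wasted per day

# Phosphorus content of the wasted biomass (mass fraction of dry solids)
P_FRACTION_CONVENTIONAL = 0.02   # ~1-2% in conventional activated sludge
P_FRACTION_EBPR_MIXED   = 0.035  # ~3-4% in mixed cultures with EBPR

def phosphorus_removed(sludge_kg_per_day: float, p_fraction: float) -> float:
    """Phosphorus leaving the plant with the wasted sludge (kg P/day)."""
    return sludge_kg_per_day * p_fraction

conventional = phosphorus_removed(SLUDGE_PRODUCTION_KG_PER_DAY, P_FRACTION_CONVENTIONAL)
ebpr = phosphorus_removed(SLUDGE_PRODUCTION_KG_PER_DAY, P_FRACTION_EBPR_MIXED)

print(f"Conventional sludge wasting: {conventional:.0f} kg P/day")
print(f"EBPR-enriched sludge wasting: {ebpr:.0f} kg P/day")
```

Under these assumed figures, enriching PAOs roughly doubles the phosphorus carried out with the same mass of wasted sludge, which is the practical point of the EBPR configuration.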
No, this text is not related with defense topics | Drafting Tape, also known as artist's tape, is similar to masking tape in that it has a wide variety of uses, but differs in several key areas. Drafting tape should not leave a sticky residue behind. Drafting tape is easily removable, even from delicate surfaces like paper. Drafting tape should not tear the paper during removal; this is the main reason engineers and architects use this kind of tape on their blueprints. Drafting tape should have a neutral pH. Drafting tape is slightly more water resistant, to help with masking for paint. While the obvious use of drafting tape is for drawing, drafting tape, like masking tape, can also be used for labeling and hanging posters. Its white or cream coloring goes well with many other colors, and it can be written on easily with any felt-tipped marker. In addition, drafting tape costs less than conventional labels, and its low cost also makes it more forgiving of errors. Drafting tape can also be used in technical drawing to help keep the paper well positioned and ensure no residue is left behind when removed. Drafting tape is designed to be temporary, so it may disintegrate over time. Drafting tape is not nearly as strong as duct tape or gaffer tape; it will break with minimal effort, has very little odor (smelling faintly of glue and paper), and is not waterproof. Painter's tape, or "blue tape," behaves similarly to artist's tape; however, painter's tape is not acid-free and is meant for household use instead of art use. See also List of adhesive tapes References Visual arts materials Drawing Adhesive tape |
No, this text is not related with defense topics | Design optimization is an engineering design methodology using a mathematical formulation of a design problem to support selection of the optimal design among many alternatives. Design optimization involves the following stages: Variables: Describe the design alternatives Objective: Elected functional combination of variables (to be maximized or minimized) Constraints: Combinations of variables expressed as equalities or inequalities that must be satisfied for any acceptable design alternative Feasibility: Values for the set of variables that satisfy all constraints and minimize/maximize the Objective. Design optimization problem The formal mathematical (standard form) statement of the design optimization problem is to minimize $f(x)$ with respect to $x$, subject to $h_k(x) = 0$ for $k = 1, \dots, m$, $g_j(x) \leq 0$ for $j = 1, \dots, p$, and $x \in X$, where $x$ is a vector of $n$ real-valued design variables, $f(x)$ is the objective function, $h_k(x) = 0$ are equality constraints, $g_j(x) \leq 0$ are inequality constraints, and $X$ is a set constraint that includes additional restrictions on $x$ besides those implied by the equality and inequality constraints. The problem formulation stated above is a convention called the negative null form, since all constraint functions are expressed as equalities and negative inequalities with zero on the right-hand side. This convention is used so that numerical algorithms developed to solve design optimization problems can assume a standard expression of the mathematical problem. We can introduce the vector-valued functions $h = (h_1, \dots, h_m)$ and $g = (g_1, \dots, g_p)$ to rewrite the above statement in the compact expression $\min_{x \in X} f(x)$ subject to $h(x) = 0$ and $g(x) \leq 0$. We call $h(x) = 0$, $g(x) \leq 0$ the set or system of (functional) constraints and $x \in X$ the set constraint. Application Design optimization applies the methods of mathematical optimization to design problem formulations and it is sometimes used interchangeably with the term engineering optimization. When the objective function $f$ is a vector rather than a scalar, the problem becomes a multi-objective optimization one. If the design optimization problem has more than one mathematical solution, the methods of global optimization are used to identify the global optimum. Optimization Checklist Problem Identification Initial Problem Statement Analysis Models Optimal Design Model Model Transformation Local Iterative Techniques Global Verification Final Review A detailed and rigorous description of the stages and practical applications with examples can be found in the book Principles of Optimal Design. Practical design optimization problems are typically solved numerically and many optimization software packages exist in academic and commercial forms. There are several domain-specific applications of design optimization posing their own specific challenges in formulating and solving the resulting problems; these include shape optimization, wing-shape optimization, topology optimization, architectural design optimization, and power optimization. Several books, articles and journal publications are listed below for reference.
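Before the reference lists that follow, here is a minimal numerical sketch of the negative null form described above, using SciPy's SLSQP solver on an invented two-variable problem. The objective, constraints, and bounds are assumptions made up purely for illustration; note also that SciPy expects inequality constraints as fun(x) >= 0, so a negative-null-form constraint g(x) <= 0 is passed with its sign flipped.

```python
# Toy design optimization problem in negative null form:
#   minimize   f(x) = (x1 - 3)^2 + (x2 - 2)^2
#   subject to h(x) = x1 + x2 - 4  = 0      (equality constraint)
#              g(x) = x1^2 - x2   <= 0      (inequality constraint)
#              x in X = [0, 5] x [0, 5]     (set constraint, here simple bounds)
# All problem data are invented for illustration only.
from scipy.optimize import minimize

f = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2
h = lambda x: x[0] + x[1] - 4.0           # h(x) = 0
g = lambda x: x[0] ** 2 - x[1]            # g(x) <= 0 in negative null form

constraints = [
    {"type": "eq", "fun": h},
    {"type": "ineq", "fun": lambda x: -g(x)},  # SciPy convention: fun(x) >= 0
]
bounds = [(0.0, 5.0), (0.0, 5.0)]             # the set constraint X

result = minimize(f, x0=[1.0, 1.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print("optimal design variables:", result.x)
print("optimal objective value: ", result.fun)
```

Any other nonlinear programming package could be substituted; the point is only that once a problem is arranged in the standard form, a generic solver can consume it directly.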
Journals Journal of Engineering for Industry Journal of Mechanical Design Journal of Mechanisms, Transmissions, and Automation in Design Design Science Engineering Optimization Journal of Engineering Design Computer-Aided Design Journal of Optimization Theory and Applications Structural and Multidisciplinary Optimization Journal of Product Innovation Management International Journal of Research in Marketing See also Design Decisions Wiki (DDWiki) : Established by the Design Decisions Laboratory at Carnegie Mellon University in 2006 as a central resource for sharing information and tools to analyze and support decision-making References Further reading Rutherford., Aris, ([2016], ©1961). The optimal design of chemical reactors : a study in dynamic programming. Saint Louis: Academic Press/Elsevier Science. . OCLC 952932441 Jerome., Bracken, ([1968]). Selected applications of nonlinear programming. McCormick, Garth P.,. New York,: Wiley. . OCLC 174465 L., Fox, Richard ([1971]). Optimization methods for engineering design. Reading, Mass.,: Addison-Wesley Pub. Co. . OCLC 150744 Johnson, Ray C. Mechanical Design Synthesis With Optimization Applications. New York: Van Nostrand Reinhold Co, 1971. 1905-, Zener, Clarence, ([1971]). Engineering design by geometric programming. New York,: Wiley-Interscience. . OCLC 197022 H., Mickle, Marlin ([1972]). Optimization in systems engineering. Sze, T. W., 1921-2017,. Scranton,: Intext Educational Publishers. . OCLC 340906. Optimization and design; [papers]. Avriel, M.,, Rijckaert, M. J.,, Wilde, Douglass J.,, NATO Science Committee., Katholieke Universiteit te Leuven (1970- ). Englewood Cliffs, N.J.,: Prentice-Hall. [1973]. . OCLC 618414. J., Wilde, Douglass (1978). Globally optimal design. New York: Wiley. . OCLC 3707693. J., Haug, Edward (1979). Applied optimal design : mechanical and structural systems. Arora, Jasbir S.,. New York: Wiley. . OCLC 4775674. Uri., Kirsch, (1981). Optimum structural design : concepts, methods, and applications. New York: McGraw-Hill. . OCLC 6735289. Uri., Kirsch, (1993). Structural optimization : fundamentals and applications. Berlin: Springer-Verlag. . OCLC 27676129. Structural optimization : recent developments and applications. Lev, Ovadia E., American Society of Civil Engineers. Structural Division., American Society of Civil Engineers. Structural Division. Committee on Electronic Computation. Committee on Optimization. New York, N.Y.: ASCE. 1981. . OCLC 8182361. Foundations of structural optimization : a unified approach. Morris, A. J. Chichester [West Sussex]: Wiley. 1982. . OCLC 8031383. N., Siddall, James (1982). Optimal engineering design : principles and applications. New York: M. Dekker. . OCLC 8389250. 1944-, Ravindran, A., (2006). Engineering optimization : methods and applications. Reklaitis, G. V., 1942-, Ragsdell, K. M. (2nd ed.). Hoboken, N.J.: John Wiley & Sons. . OCLC 61463772. N.,, Vanderplaats, Garret (1984). Numerical optimization techniques for engineering design : with applications. New York: McGraw-Hill. . OCLC 9785595. T., Haftka, Raphael (1990). Elements of Structural Optimization. Gürdal, Zafer., Kamat, Manohar P. (Second rev. edition ed.). Dordrecht: Springer Netherlands. . OCLC 851381183. S., Arora, Jasbir (2011). Introduction to optimum design (3rd ed.). Boston, MA: Academic Press. . OCLC 760173076. S.,, Janna, William. Design of fluid thermal systems (SI edition ; fourth edition ed.). Stamford, Connecticut. . OCLC 881509017. Structural optimization : status and promise. Kamat, Manohar P. 
Washington, DC: American Institute of Aeronautics and Astronautics. 1993. . OCLC 27918651. Mathematical programming for industrial engineers. Avriel, M., Golany, B. New York: Marcel Dekker. 1996. . OCLC 34474279. Hans., Eschenauer, (1997). Applied structural mechanics : fundamentals of elasticity, load-bearing structures, structural optimization : including exercises. Olhoff, Niels., Schnell, W. Berlin: Springer. . OCLC 35184040. 1956-, Belegundu, Ashok D., (2011). Optimization concepts and applications in engineering. Chandrupatla, Tirupathi R., 1944- (2nd ed.). New York: Cambridge University Press. . OCLC 746750296. Okechi., Onwubiko, Chinyere (2000). Introduction to engineering design optimization. Upper Saddle River, NJ: Prentice-Hall. . OCLC 41368373. Optimization in action : proceedings of the Conference on Optimization in Action held at the University of Bristol in January 1975. Dixon, L. C. W. (Laurence Charles Ward), 1935-, Institute of Mathematics and Its Applications. London: Academic Press. 1976. . OCLC 2715969. P., Williams, H. (2013). Model building in mathematical programming (5th ed.). Chichester, West Sussex: Wiley. . OCLC 810039791. Integrated design of multiscale, multifunctional materials and products. McDowell, David L., 1956-. Oxford: Butterworth-Heinemann. 2010. . OCLC 610001448. M.,, Dede, Ercan. Multiphysics simulation : electromechanical system applications and optimization. Lee, Jaewook,, Nomura, Tsuyoshi,. London. . OCLC 881071474. 1962-, Liu, G. P. (Guo Ping), (2001). Multiobjective optimisation and control. Yang, Jian-Bo, 1961-, Whidborne, J. F. (James Ferris), 1960-. Baldock, Hertfordshire: Research Studies Press. . OCLC 54380075. Structural Topology Optimization "Generating optimal topologies in structural design using a homogenization method". Computer Methods in Applied Mechanics and Engineering. 71 (2): 197–224. 1988-11-01. doi:10.1016/0045-7825(88)90086-2. ISSN 0045-7825. Bendsøe, Martin P (1995). Optimization of structural topology, shape, and material. Berlin; New York: Springer. . Behrooz., Hassani, (1999). Homogenization and Structural Topology Optimization : Theory, Practice and Software. Hinton, E. (Ernest). London: Springer London. . OCLC 853262659. P., Bendsøe, Martin (2003). Topology optimization : theory, methods, and applications. Sigmund, O. (Ole), 1966-. Berlin: Springer. . OCLC 50448149. Topology optimization in structural and continuum mechanics. Rozvany, G. I. N.,, Lewiński, T.,. Wien. . OCLC 859524179. Design |
No, this text is not related with defense topics | The history of herbalism is closely tied with the history of medicine from prehistoric times up until the development of the germ theory of disease in the 19th century. Modern medicine from the 19th century to today has been based on evidence gathered using the scientific method. Evidence-based use of pharmaceutical drugs, often derived from medicinal plants, has largely replaced herbal treatments in modern health care. However, many people continue to employ various forms of traditional or alternative medicine. These systems often have a significant herbal component. The history of herbalism also overlaps with food history, as many of the herbs and spices historically used by humans to season food yield useful medicinal compounds, and use of spices with antimicrobial activity in cooking is part of an ancient response to the threat of food-borne pathogens. Prehistory The use of plants as medicines predates written human history. Archaeological evidence indicates that humans were using medicinal plants during the Paleolithic, approximately 60,000 years ago. (Furthermore, other non-human primates are also known to ingest medicinal plants to treat illness) Plant samples gathered from prehistoric burial sites have been thought to support the claim that Paleolithic people had knowledge of herbal medicine. For instance, a 60,000-year-old Neanderthal burial site, "Shanidar IV", in northern Iraq has yielded large amounts of pollen from 8 plant species, 7 of which are used now as herbal remedies. More recently Paul B. Pettitt has written that "A recent examination of the microfauna from the strata into which the grave was cut suggests that the pollen was deposited by the burrowing rodent Meriones tersicus, which is common in the Shanidar microfauna and whose burrowing activity can be observed today". Medicinal herbs were found in the personal effects of Ötzi the Iceman, whose body was frozen in the Ötztal Alps for more than 5,000 years. These herbs appear to have been used to treat the parasites found in his intestines. Ancient history Mesopotamia In Mesopotamia, the written study of herbs dates back over 5,000 years to the Sumerians, who created clay tablets with lists of hundreds of medicinal plants (such as myrrh and opium). Ancient Egypt Ancient Egyptian texts are of particular interest due to the language and translation controversies that accompany texts from this era and region. These differences in conclusions stem from the lack of complete knowledge of the Egyptian language: many translations are composed of mere approximations between Egyptian and modern ideas, and there can never be complete certainty of meaning or context. While physical documents are scarce, texts such as the Papyrus Ebers serve to illuminate and relieve some of the conjecture surrounding ancient herbal practices. The Papyrus consists of lists of ailments and their treatments, ranging from "disease of the limbs" to "diseases of the skin" and has information on over 850 plant medicines, including garlic, juniper, cannabis, castor bean, aloe, and mandrake. Treatments were mainly aimed at ridding the patient of the most prevalent symptoms because the symptoms were largely regarded as the disease itself. Knowledge of the collection and preparation of such remedies are mostly unknown, as many of the texts available for translation assume the physician already has some knowledge of how treatments are conducted and therefore such techniques would not need restating. 
Though modern understanding of Egyptian herbals stems from the translation of ancient texts, there is no doubt that trade and politics carried the Egyptian tradition to regions across the world, influencing and evolving many cultures' medical practices and allowing for a glimpse into the world of ancient Egyptian medicine. Herbs used by Egyptian healers were mostly indigenous in origin, although some were imported from other regions like Lebanon. Other than papyri, evidence of herbal medicine has also been found in tomb illustrations or jars containing traces of herbs. India In India, Ayurveda medicine has used many herbs such as turmeric possibly as early as 4,000 BC. Early Sanskrit writings such as the Rig Veda and Atharva Veda are some of the earliest available documents detailing the medical knowledge that formed the basis of the Ayurveda system. Many other herbs and minerals used in Ayurveda were later described by ancient Indian herbalists such as Charaka and Sushruta during the 1st millennium BC. The Sushruta Samhita, attributed to Sushruta in the 6th century BC, describes 700 medicinal plants, 64 preparations from mineral sources, and 57 preparations based on animal sources. China In China, seeds likely used for herbalism have been found in the archaeological sites of Bronze Age China dating from the Shang Dynasty. The mythological Chinese emperor Shennong is said to have written the first Chinese pharmacopoeia, the "Shennong Ben Cao Jing". The "Shennong Ben Cao Jing" lists 365 medicinal plants and their uses - including Ephedra (the shrub that introduced the drug ephedrine to modern medicine), hemp, and chaulmoogra (one of the first effective treatments for leprosy). Succeeding generations augmented the Shennong Bencao Jing, as in the Yaoxing Lun (Treatise on the Nature of Medicinal Herbs), a 7th-century Tang Dynasty treatise on herbal medicine. Ancient Greece and Rome Hippocrates The Hippocratic Corpus serves as a collection of texts that are associated with the 'Father of Western Medicine', Hippocrates of Kos. Though the actual authorship of some of these texts is disputed, each reflects the general ideals put forth by Hippocrates and his followers. The recipes and remedies included in parts of the Corpus no doubt reveal popular and prevalent treatments of the early ancient Greek period. Though many of the herbals included in the Corpus are similar to those used in the religious sectors of healing, they differ strikingly in the lack of rites, prayers, or chants used in the application of remedies. This distinction is truly indicative of the Hippocratic preference for logic and reason within the practices of medicine. The ingredients mentioned in the Corpus consist of a myriad of herbs, both local to Greece and imported from exotic locales such as Arabia. While many imported goods would have been too expensive for common household use, some of the suggested ingredients include the more common and cheaper elderberries and St. John's Wort. Galen Galen of Pergamon, a Greek physician practicing in Rome, was certainly prolific in his attempt to write down his knowledge on all things medical – and in his pursuit, he wrote many texts regarding herbs and their properties, most notably his Works of Therapeutics. In this text, Galen outlines the merging of the disciplines within medicine that combine to restore health and prevent disease.
While the subject of therapeutics encompasses a wide array of topics, Galen's extensive work on the humors and four basic qualities helped pharmacists to better calibrate their remedies for the individual person and their unique symptoms. Diocles of Carystus The writings of Diocles of Carystus were also extensive and prolific in nature. With enough prestige to be referred to as "the second Hippocrates", his advice in herbalism and treatment was to be taken seriously. Though the original texts no longer exist, many medical scholars throughout the ages have quoted Diocles rather extensively, and it is in these fragments that we gain knowledge of his writings. It is purported that Diocles actually wrote the first comprehensive herbal; this work was then cited numerous times by contemporaries such as Galen, Celsus, and Soranus. Pliny In what is one of the first encyclopedic texts, Pliny the Elder's Natural History serves as a comprehensive guide to nature and also presents an extensive catalog of herbs valuable in medicine. With over 900 drugs and plants listed, Pliny's writings provide a very large knowledge base upon which we may learn more about ancient herbalism and medical practices. Pliny himself referred to ailments as "the greatest of all the operations of nature," and the act of treatment via drugs as impacting the "state of peace or of war which exists between the various departments of nature". Dioscorides Much like Pliny, Pedanius Dioscorides constructed a pharmacopeia, De Materia Medica, consisting of over 1000 medicines produced from herbs, minerals, and animals. The remedies that comprise this work were widely utilized throughout the ancient period and Dioscorides remained the greatest expert on drugs for over 1,600 years. Similarly important for herbalists and botanists of later centuries was Theophrastus' Historia Plantarum, written in the 4th century BC, which was the first systematization of the botanical world. Middle Ages While there are certainly texts from the medieval period that denote the uses of herbs, there has been a long-standing debate between scholars as to the actual motivations and understandings that underlie the creation of herbal documents during the medieval period. The first point of view dictates that the information presented in these medieval texts was merely copied from their classical equivalents without much thought or understanding. The second viewpoint, which is gaining traction among modern scholars, states that herbals were copied for actual use and backed by genuine understanding. Some evidence for the suggestion that herbals were utilized with knowledgeable intent is the addition of several chapters of plants, lists of symptoms, habitat information, and plant synonyms added to texts such as the Herbarium. Notable texts utilized in this time period include Bald's Leechbook, the Lacnunga, the Peri Didaxeon, Herbarium Apulei, De Taxone, and Medicina de Quadrupedibus, while the most popular during this time period were the Ex Herbis Femininis, the Herbarius, and works by Dioscorides. Benedictine monasteries were the primary source of medical knowledge in Europe and England during the Early Middle Ages. However, most of these monastic scholars' efforts were focused on translating and copying ancient Greco-Roman and Arabic works, rather than creating substantial new information and practices. Many Greek and Roman writings on medicine, as on other subjects, were preserved by hand copying of manuscripts in monasteries.
The monasteries thus tended to become local centers of medical knowledge, and their herb gardens provided the raw materials for simple treatment of common disorders. At the same time, folk medicine in the home and village continued uninterrupted, supporting numerous wandering and settled herbalists. Among these were the "wise-women" and "wise men", who prescribed herbal remedies often along with spells, enchantments, divination and advice. One of the most famous women in the herbal tradition was Hildegard of Bingen. A 12th-century Benedictine nun, she wrote a medical text called Causae et Curae. During this time, herbalism was mainly practiced by women, particularly among Germanic tribes. There were three major sources of information on healing at the time including the Arabian School, Anglo-Saxon leechcraft, and Salerno. A great scholar of the Arabian School was Avicenna, who wrote The Canon of Medicine which became the standard medical reference work of the Arab world. "The Canon of Medicine is known for its introduction of systematic experimentation and the study of physiology, the discovery of contagious diseases and sexually transmitted diseases, the introduction of quarantine to limit the spread of infectious diseases, the introduction of experimental medicine, clinical trials, and the idea of a syndrome in the diagnosis of specific diseases. ...The Canon includes a description of some 760 medicinal plants and the medicine that could be derived from them." With Leechcraft, though bringing to mind part of their treatments, leech was the English term for medical practitioner. Salerno was a famous school in Italy centered around health and medicine. A student of the school was Constantine the African, credited with bringing Arab medicine to Europe. Translation of herbals During the Middle Ages, the study of plants began to be based on critical observations. "In the 16th and 17th century an interest in botany revived in Europe and spread to America by way of European conquest and colonization." Philosophers started to act as herbalists and academic professors studied plants with great depth. Herbalists began to explore the use of plants for both medicinal purposes and agricultural uses. Botanists in the Middle Ages were known as herbalists; they collected, grew, dried, stored, and sketched plants. Many became experts in identifying and describing plants according to their morphology and habitats, as well as their usefulness. These books, called herbals included beautiful drawings and paintings of plants as well as their uses. At that time both botany and the art of gardening stressed the utility of plants for man; the popular herbal, described the medical uses of plants. During the Middle Ages, there was an expansion of book culture that spread through the medieval world. The phenomenon of translation is well-documented, from its beginnings as a scholarly endeavor in Baghdad as early as the eighth century to its expansion throughout European Mediterranean centers of scholarship by the eleventh and twelfth centuries. The process of translation is collaborative effort, requiring a variety of people to translate and add to them. However, how the Middle Ages viewed nature seems to be a mystery. Translation of text and image has provided numerous versions and compilations of individual manuscripts from diverse sources, old and new. 
Translation is a dynamic process as well as a scholarly endeavor that contributed greatly to science in the Middle Ages; the process naturally entailed continuous revisions and additions. The Benedictine monasteries were known for their in-depth knowledge of herbals. Their gardens grew the herbs which were considered to be useful for the treatment of the various human ills; the beginnings of modern medical education can be connected with monastic influence. Monastic academies were developed and monks were taught how to translate Greek manuscripts into Latin. Knowledge of medieval botanicals was closely related to medicine because the plant's principal use was for remedies. Herbals were structured by the names of the plants, identifying features, medicinal parts of the plant, and therapeutic properties, and some included instructions on how to prepare and use them. For medical use of herbals to be effective, a manual was developed. Dioscorides' De Materia Medica was a significant herbal designed for practical purposes. Theophrastus wrote more than 200 papers describing the characteristics of over 500 plants. He developed a classification system for plants based on their morphology, such as their form and structure. He described in detail pepper, cinnamon, bananas, asparagus, and cotton. Two of his best-known works, Enquiry into Plants and The Causes of Plants, have survived for many centuries and were translated into Latin. He has been referred to as the "grandfather of botany". Crateuas was the first to produce a pharmacological book for medicinal plants, and his book influenced medicine for many centuries. The Greek physician Pedanius Dioscorides described over 600 different kinds of plants and their useful qualities for herbal medicine, and his illustrations were used for pharmacology and medicine as late as the Renaissance years. Monasteries established themselves as centers for medical care. Information on these herbals and how to use them was passed on from monk to monk, as well as to their patients. These illustrations were of no use to everyday individuals; they were intended to be complex and for people with prior knowledge and understanding of herbals. The usefulness of these herbals has been questioned because they appear to be unrealistic and several plants are depicted claiming to cure the same condition, as "the modern world does not like such impression." When used by experienced healers, however, these plants could serve their many uses. For these medieval healers, no direction was needed; their background allowed them to choose proper plants to use for a variety of medical conditions. The monks' purpose was to collect and organize texts to make them useful in their monasteries. Medieval monks took many remedies from classical works and adapted them to their own needs as well as local needs. This may be why none of the collections of remedies we have presently agrees fully with another. Another form of translation was oral transmission; this was used to pass medical knowledge from generation to generation. A common misconception is that one can know early medieval medicine simply by identifying texts, but it is difficult to compose a clear understanding of herbals without prior knowledge. Many factors influenced the translation of these herbals; the act of writing or illustrating was just a small piece of the puzzle, as these remedies stem from many previous translations that incorporated knowledge from a variety of influences.
Early modern era The 16th and 17th centuries were the great age of herbals, many of them available for the first time in English and other languages rather than Latin or Greek. The 18th and 19th centuries saw more incorporation of plants found in the Americas, but also the advance of modern medicine. 16th century The first herbal to be published in English was the anonymous Grete Herball of 1526. The two best-known herbals in English were The Herball or General History of Plants (1597) by John Gerard and The English Physician Enlarged (1653) by Nicholas Culpeper. Gerard's text was basically a pirated translation of a book by the Belgian herbalist Dodoens and his illustrations came from a German botanical work. The original edition contained many errors due to faulty matching of the two parts. Culpeper's blend of traditional medicine with astrology, magic, and folklore was ridiculed by the physicians of his day, yet his book - like Gerard's and other herbals - enjoyed phenomenal popularity. The Age of Exploration and the Columbian Exchange introduced new medicinal plants to Europe. The Badianus Manuscript was an illustrated Mexican herbal written in Nahuatl and Latin in the 16th century. 17th century The second millennium, however, also saw the beginning of a slow erosion of the pre-eminent position held by plants as sources of therapeutic effects. This began with the Black Death, which the then dominant Four Element medical system proved powerless to stop. A century later, Paracelsus introduced the use of active chemical drugs (like arsenic, copper sulfate, iron, mercury, and sulfur). 18th century In the Americas, herbals were relied upon for most medical knowledge with physicians being few and far between. These books included almanacs, Dodoens' New Herbal, Edinburgh New Dispensatory, Buchan's Domestic Medicine, and other works. Aside from European knowledge on American plants, Native Americans shared some of their knowledge with colonists, but most of these records were not written and compiled until the 19th century. John Bartram was a botanist that studied the remedies that Native Americans would share and often included bits of knowledge of these plants in printed almanacs. 19th century The formalization of pharmacology in the 19th century led to greater understanding of the specific actions drugs have on the body. At that time, Samuel Thompson was an uneducated but well respected herbalist who influenced professional opinions so much that Doctors and Herbalists would refer to themselves as Thompsonians. They distinguished themselves from "regular" doctors of the time who used calomel and bloodletting, and led to a brief renewal of the empirical method in herbal medicine. Modern era Traditional herbalism has been regarded as a method of alternative medicine in the United States since the Flexner Report of 1910 led to the closing of the eclectic medical schools where botanical medicine was exclusively practiced. In China, Mao Zedong reintroduced Traditional Chinese Medicine, which relied heavily on herbalism, into the health care system in 1949. Since then, schools have been training thousands of practitioners – including Americans – in the basics of Chinese medicines to be used in hospitals. While Britain in the 1930s was experiencing turbulence over the practice of herbalism, in the United States, government regulation began to prohibit the practice. 
"The World Health Organization estimated that 80% of people worldwide rely on herbal medicines for some part of their primary health care. In Germany, about 600 to 700 plant based medicines are available and are prescribed by some 70% of German physicians." The practice of prescribing treatments and cures to patients requires a legal medical license in the United States of America, and the licensing of these professions occurs on a state level. "There is currently no licensing or certification for herbalists in any state that precludes the rights of anyone to use, dispense, or recommend herbs." "Traditional medicine is a complex network of interaction of both ideas and practices, the study of which requires a multidisciplinary approach." Many alternative physicians in the 21st century incorporate herbalism in traditional medicine due to the diverse abilities plants have and their low number of side effects. See also Physic garden History of pharmacy Ethnobotany Medieval medicine of Western Europe Traditional African medicine References Further reading Botany History of botany |
No, this text is not related with defense topics | Kimodameshi ("test one's liver"), or test of courage, is a Japanese activity in which people explore frightening, and potentially dangerous, places to build up courage. Kimodameshi is usually played in the summer, in group activities such as school club trips or camping. At night, groups of people visit scary places such as a cemetery, a haunted house, or a forest path to carry out specific missions there. See also Ghost hunting Haunted house Hyakumonogatari Kaidankai Kaidan, Japanese ghost stories References Japanese culture Ghosts Parapsychology Pseudoscience Hobbies |
No, this text is not related with defense topics | Tactical urbanism includes low-cost, temporary changes to the built environment, usually in cities, intended to improve local neighbourhoods and city gathering places. Tactical urbanism is also commonly referred to as guerrilla urbanism, pop-up urbanism, city repair, or D.I.Y. urbanism. Other terms include planning-by-doing, urban acupuncture, and urban prototyping. Terminology The term was popularized around 2010 to refer to a range of existing techniques. The Street Plans Collaborative defines "tactical urbanism" as an approach to urban change that features the following five characteristics: A deliberate, phased approach to instigating change; The offering of local solutions for local planning challenges; Short-term commitment as a first step towards longer-term change; Lower-risk, with potentially high rewards; and The development of social capital between citizens and the building of organizational capacity between public and private institutions, non-profits, and their constituents. While the 1984 English translation of The Practice of Everyday Life by French author Michel de Certeau used the term tactical urbanism, this was in reference to events occurring in Paris in 1968; the "tactical urbanism" that Certeau described was in opposition to "strategic urbanism", which modern concepts of tactical urbanism tend not to distinguish. The modern sense of the term is attributed to New York-based urban planner Mike Lydon. The Project for Public Spaces uses the phrase "Lighter, Quicker, Cheaper", coined by urban designer Eric Reynolds, to describe the same basic approach expressed by tactical urbanism. Origin The tactical urbanist movement takes inspiration from urban experiments including Ciclovía, Paris-Plages, and the introduction of plazas and pedestrian malls in New York City during the tenure of Janette Sadik-Khan as Commissioner of the New York City Department of Transportation. Tactical urbanism formally emerged as a movement following a meeting of the Next Generation of New Urbanist (CNU NextGen) group in November 2010 in New Orleans. A driving force of the movement is to put the onus back on individuals to take personal responsibility in creating sustainable buildings, streets, neighborhoods, and cities. Following the meeting, an open-source project called Tactical Urbanism: Short TermAction | Long Term Change was developed by a group from NextGen to define tactical urbanism and to promote various interventions to improve urban design and promote positive change in neighbourhoods and communities. Types of interventions Tactical urbanism projects vary significantly in scope, size, budget, and support. Projects often begin as grassroots interventions and spread to other cities, and are in some cases later adopted by municipal governments as best practices. Some common interventions are listed below: Better block initiatives Temporarily transforming retail streets using cheap or donated materials and volunteers. Spaces are transformed by introducing food carts, sidewalk tables, temporary bike lanes and narrowing of streets. Chair bombing The act of removing salvageable materials and using it to build public seating. The chairs are placed in areas that either are quiet or lack comfortable places to sit. De-fencing The act of removing unnecessary fences to break down barriers between neighbours, beautify communities, and encourage community building. 
Depaving The act of removing unnecessary pavement to transform driveways and parking into green space so that rainwater can be absorbed and neighbourhoods beautified. Food carts/trucks Food carts and trucks are used to attract people to underused public spaces and offer small business opportunities for entrepreneurs. Guerilla gardening Guerrilla gardening is the act of gardening on land that the gardeners do not have the legal rights to utilize, such as abandoned sites, areas not being cared for, or private property. Open Streets To temporarily provide safe spaces for walking, bicycling, skating, and social activities; promote local economic development; and raise awareness about the impact of cars in urban spaces. "Open Streets" is an anglicized term for the South American 'Ciclovia', which originated in Bogota. PARK(ing) Day An annual event where on street parking is converted into park-like spaces. Park(ing) Day was launched in 2005 by Rebar art and design studio. Pavement to Plazas Popularized in New York City, pavement plazas involve converting space on streets to usable public space. The closure of Times Square to vehicular traffic, and its low-cost conversion to a pedestrian plaza, is a primary example of a pavement plaza. Pop-up cafes Pop-up cafes are temporary patios or terraces built in parking spots to provide overflow seating for a nearby cafe or for passersby. Most common in cities where sidewalks are narrow and where there otherwise is not room for outdoor sitting or eating areas. Pop-up parks Pop-up parks temporarily or permanently transform underused spaces into community gathering areas through beautification. Pop-up retail Pop-up shops are temporary retail stores that are set up in vacant stores or property. Protected bike lanes Bike lane protections are usually done by placing potted plants or other physical barriers to make painted bike lanes feel safer. Sometimes there is no pre-existing bike lane, and the physical protection is the only delineator. Resources The Street Plans Collaborative, in collaboration with Ciudad Emergente and Codesign studio, produces a series of free tactical urbanism e-books. Volumes 1 and 2 focus on North American case studies, Volume 3 is a Spanish-language guide to Latin American projects, and Volume 4 covers Australia and New Zealand, including responses to the 2011 Christchurch earthquake. Street Plans' Mike Lydon and Anthony Garcia published a tactical urbanism book in March 2015. References New Urbanism Urban design Environmentalism Sustainable transport Sustainable urban planning Urban planning Urban studies and planning terminology |
No, this text is not related with defense topics | BioLegend is a global developer and manufacturer of antibodies and reagents used in biomedical research located in San Diego, California. It was incorporated in June 2002 and has since expanded to include BioLegend Japan KK, where it is partnered with Tomy Digital Biology Co., Ltd. in Tokyo, BioLegend Europe in the United Kingdom, BioLegend GmbH in Germany, and BioLegend UK Ltd in the United Kingdom. BioLegend manufactures products in the areas of neuroscience, cell immunophenotyping, cytokines and chemokines, adhesion, cancer research, T regulatory cells, stem cells, innate immunity, cell-cycle analysis, apoptosis, and modification-specific antibodies. Reagents are created for use in flow cytometry, proteogenomics, ELISA, immunoprecipitation, Western blotting, immunofluorescence microscopy, immunohistochemistry, and in vitro or in vivo functional assays. History BioLegend was founded by CEO, Gene Lay, D.V.M., who was also the co-founder of PharMingen. In 2011, BioLegend co-developed and introduced Brilliant Violet(TM)-conjugated antibodies, using a novel fluorophore based on Nobel Prize-winning chemistry developed by Sirigen. In 2018, BioLegend introduced TotalSeq™ antibody-oligonucleotide conjugates for use in single cell proteogenomics analysis. BioLegend continued expansion and moved into a new 8 acre campus at BioLegend Way in 2019 with state of the art facilities designed to accommodate up to 1000 employees. References Biotechnology Antibodies |
No, this text is not related with defense topics | Research in Computational Molecular Biology (RECOMB) is an annual academic conference on the subjects of bioinformatics and computational biology. The conference has been held every year since 1997 and is a major international conference in computational biology, alongside the ISMB and ECCB conferences. The conference is affiliated with the International Society for Computational Biology. Since the first conference, authors of accepted proceedings papers have been invited to submit a revised version to a special issue of the Journal of Computational Biology. RECOMB was established in 1997 by Sorin Istrail, Pavel Pevzner and Michael Waterman. The first conference was held at the Sandia National Laboratories in Santa Fe, New Mexico. A series of RECOMB Satellite meetings was established by Pavel Pevzner in 2001. These meetings cover specialist aspects of bioinformatics, including massively parallel sequencing, comparative genomics, regulatory genomics and bioinformatics education. As of RECOMB 2010, the conference has included a highlights track, modelled on the success of a similar track at the ISMB conference. The highlights track contains presentations for computational biology papers published in the previous 18 months. In 2014 RECOMB and PLOS Computational Biology coordinated to let authors submit papers in parallel to both conference and journal. Papers not selected for publication in PLOS Computational Biology were published in edited form in the Journal of Computational Biology as usual. As of 2016 the conference started a partnership with Cell Systems. Each year, a subset of work accepted at RECOMB is also considered for publication in a special issue of Cell Systems devoted to RECOMB. Other RECOMB papers are invited for a short synopsis (Cell Systems Calls) in the same issue. RECOMB steering committee is chaired by Bonnie Berger. List of conferences See also Intelligent Systems for Molecular Biology (ISMB) References Computational science Bioinformatics Computer science conferences Biology conferences + |
No, this text is not related with defense topics | Sustainable living describes a lifestyle that attempts to reduce an individual's or society's use of the Earth's natural resources, and one's personal resources. It is often called "earth harmony living" or "net zero living". Its practitioners often attempt to reduce their ecological footprint (including their carbon footprint) by altering their home designs and methods of transportation, energy consumption and diet. Its proponents aim to conduct their lives in ways that are consistent with sustainability, naturally balanced, and respectful of humanity's symbiotic relationship with the Earth's natural ecology. The practice and general philosophy of ecological living closely follows the overall principles of sustainable development. One approach to sustainable living, exemplified by small-scale urban transition towns and rural ecovillages, seeks to create self-reliant communities based on principles of simple living, which maximize self-sufficiency particularly in food production. These principles, on a broader scale, underpin the concept of a bioregional economy. Additionally, practical ecovillage builders like Living Villages maintain that the shift to alternative technologies will only be successful if the resultant built environment is attractive to a local culture and can be maintained and adapted as necessary over multiple generations. Definition Sustainable living is fundamentally the application of sustainability to lifestyle choices and decisions. One conception of sustainable living expresses what it means in triple-bottom-line terms as meeting present ecological, societal, and economical needs without compromising these factors for future generations. Another broader conception describes sustainable living in terms of four interconnected social domains: economics, ecology, politics, and culture. In the first conception, sustainable living can be described as living within the innate carrying capacities defined by these factors. In the second or Circles of Sustainability conception, sustainable living can be described as negotiating the relationships of needs within limits across all the interconnected domains of social life, including consequences for future human generations and non-human species. Sustainable design and sustainable development are critical factors to sustainable living. Sustainable design encompasses the development of appropriate technology, which is a staple of sustainable living practices. Sustainable development in turn is the use of these technologies in infrastructure. Sustainable architecture and agriculture are the most common examples of this practice. Lester R. Brown, a prominent environmentalist and founder of the Worldwatch Institute and Earth Policy Institute, describes sustainable living in the twenty-first century as "shifting to a renewable energy-based, reuse/recycle economy with a diversified transport system." Derrick Jensen ("the poet-philosopher of the ecological movement"), a celebrated American author, radical environmentalist and prominent critic of mainstream environmentalism argues that "industrial civilization is not and can never be sustainable". From this statement, the natural conclusion is that sustainable living is at odds with industrialization. Thus, practitioners of the philosophy potentially face the challenge of living in an industrial society and adapting alternative norms, technologies, or practices. 
History 1954 The publication of Living the Good Life by Helen and Scott Nearing marked the beginning of the modern day sustainable living movement. The publication paved the way for the "back-to-the-land movement" in the late 1960s and early 1970s. 1962 The publication of Silent Spring by Rachel Carson marked another major milestone for the sustainability movement. 1972 Donella Meadows wrote the international bestseller The Limits to Growth, which reported on a study of long-term global trends in population, economics and the environment. It sold millions of copies and was translated into 28 languages. 1973 E. F. Schumacher published a collection of essays on shifting towards sustainable living through the appropriate use of technology in his book Small Is Beautiful. 1992–2002 The United Nations held a series of conferences, which focused on increasing sustainability within societies to conserve the Earth's natural resources. The Earth Summit conferences were held in 1992, 1972 and 2002. 2007 the United Nations published Sustainable Consumption and Production, Promoting Climate-Friendly Household Consumption Patterns, which promoted sustainable lifestyles in communities and homes. Shelter On a global scale, shelter is associated with about 25% of the greenhouse gas emissions embodied in household purchases and 26% of households' land use. Sustainable homes are built using sustainable methods, materials, and facilitate green practices, enabling a more sustainable lifestyle. Their construction and maintenance have neutral impacts on the Earth. Often, if necessary, they are close in proximity to essential services such as grocery stores, schools, daycares, work, or public transit making it possible to commit to sustainable transportation choices. Sometimes, they are off-the-grid homes that do not require any public energy, water, or sewer service. If not off-the-grid, sustainable homes may be linked to a grid supplied by a power plant that is using sustainable power sources, buying power as is normal convention. Additionally, sustainable homes may be connected to a grid, but generate their own electricity through renewable means and sell any excess to a utility. There are two common methods to approaching this option: net metering and double metering. Net metering uses the common meter that is installed in most homes, running forward when power is used from the grid, and running backward when power is put into the grid (which allows them to “net“ out their total energy use, putting excess energy into the grid when not needed, and using energy from the grid during peak hours, when you may not be able to produce enough immediately). Power companies can quickly purchase the power that is put back into the grid, as it is being produced. Double metering involves installing two meters: one measuring electricity consumed, the other measuring electricity created. Additionally, or in place of selling their renewable energy, sustainable home owners may choose to bank their excess energy by using it to charge batteries. This gives them the option to use the power later during less favorable power-generating times (i.e.: night-time, when there has been no wind, etc.), and to be completely independent of the electrical grid. 
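The billing consequence of the two metering arrangements described above can be sketched with a short calculation. The tariff rates and the monthly import/export figures below are hypothetical assumptions, chosen only to show how the arithmetic differs between netting everything through one meter and pricing imports and exports separately.

```python
# Hypothetical monthly comparison of net metering and double metering.
energy_imported_kwh = 450.0   # drawn from the grid over the month (assumed)
energy_exported_kwh = 300.0   # fed back into the grid over the month (assumed)

retail_rate = 0.15            # $/kWh charged for electricity used (assumed)
export_rate = 0.08            # $/kWh credited for exports under double metering (assumed)

# Net metering: one meter runs forward on import and backward on export,
# so the bill is based on net consumption at the retail rate.
net_consumption_kwh = energy_imported_kwh - energy_exported_kwh
net_metering_bill = net_consumption_kwh * retail_rate

# Double metering: imports and exports are measured (and priced) separately.
double_metering_bill = (energy_imported_kwh * retail_rate
                        - energy_exported_kwh * export_rate)

print(f"Net metering bill:    ${net_metering_bill:.2f}")
print(f"Double metering bill: ${double_metering_bill:.2f}")
```

Whether one arrangement or the other favors the household depends entirely on how the export rate compares with the retail rate, which varies by utility and jurisdiction.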
Sustainably designed (see Sustainable Design) houses are generally sited so as to create as little of a negative impact on the surrounding ecosystem as possible, oriented to the sun so that it creates the best possible microclimate (typically, the long axis of the house or building should be oriented east-west), and provide natural shading or wind barriers where and when needed, among many other considerations. The design of a sustainable shelter affords the options it has later (i.e.: using passive solar lighting and heating, creating temperature buffer zones by adding porches, deep overhangs to help create favorable microclimates, etc.) Sustainably constructed houses involve environmentally friendly management of waste building materials such as recycling and composting, use non-toxic and renewable, recycled, reclaimed, or low-impact production materials that have been created and treated in a sustainable fashion (such as using organic or water-based finishes), use as much locally available materials and tools as possible so as to reduce the need for transportation, and use low-impact production methods (methods that minimize effects on the environment). In April 2019, New York City passed a bill to cut greenhouse gas emissions. The bill's goal was to minimize the climate pollution stemming from the hub that is New York City. It was approved in a 42 to 5 vote, showing a strong favor of the bill. The bill will restrict energy use in larger buildings. The bill imposes greenhouse gas caps on buildings that are over 25,000 square feet. The calculation of the exact cap is done by square feet per building. A similar emission cap had existed already for buildings of 50,000 square feet or more. This bill expands the legislation to cover more large buildings. The bill protects rent-regulated buildings of which there are around 990,000. Due to the implementation of the bill, around 23,000 new green jobs will be created. The bill received support from Mayor Bill de Blasio. New York is taking action based on the recognition that their climate pollution has effects far beyond the city limits of New York. In discussion of a possible new Amazon headquarters in NYC, De Blasio specified that the bill applies to everyone, regardless of prestige. Mayor de Blasio also announced a lawsuit by the city (of New York) to five major oil companies due to their harm on the environment and climate pollution. This also raises the question of the possible closing of the 24 oil and gas burning power plants in New York City, due to the aimed declining use of these sources of energy. With the emission cap, New York will likely see a turn to renewable energy sources. It is possible that these plants will be transitioned to hubs of renewable energy to power the city. This new bill will go into action in three years (2022) and is estimated to cut climate pollution by 40% in eight years (by 2030). Many materials can be considered a “green” material until its background is revealed. Any material that has used toxic or carcinogenic chemicals in its treatment or manufacturing (such as formaldehyde in glues used in woodworking), has traveled extensively from its source or manufacturer, or has been cultivated or harvested in an unsustainable manner might not be considered green. In order for any material to be considered green, it must be resource efficient, not compromise indoor air quality or water conservation, and be energy efficient (both in processing and when in use in the shelter). 
Resource efficiency can be achieved by using as much recycled content, reusable or recyclable content, materials that employ recycled or recyclable packaging, locally available material, salvaged or remanufactured material, material that employs resource efficient manufacturing, and long-lasting material as possible. Sustainable building materials Some building materials might be considered "sustainable" by some definitions and under some conditions. For example, wood might be thought of as sustainable if it is grown using sustainable forest management, processed using sustainable energy. delivered by sustainable transport, etc.: Under different conditions, however, it might not be considered as sustainable. The following materials might be considered as sustainable under certain conditions, based on a Life-cycle assessment. Adobe Bamboo Cellulose insulation Clay Cob Composite wood (when made from reclaimed hardwood sawdust and reclaimed or recycled plastic) Compressed earth block Cordwood Cork Hemp Insulating concrete forms Lime render Linoleum Lumber from Forest Stewardship Council approved sources Natural Rubber Natural fiber (coir, wool, jute, etc.) Organic cotton insulation Papercrete Rammed earth Reclaimed stone Reclaimed brick Recycled metal Recycled concrete Recycled paper Soy-based adhesive Soy insulation Straw Bale Structural insulated panel Wood Insulation of a sustainable home is important because of the energy it conserves throughout the life of the home. Well insulated walls and lofts using green materials are a must as it reduces or, in combination with a house that is well designed, eliminates the need for heating and cooling altogether. Installation of insulation varies according to the type of insulation being used. Typically, lofts are insulated by strips of insulating material laid between rafters. Walls with cavities are done in much the same manner. For walls that do not have cavities behind them, solid-wall insulation may be necessary which can decrease internal space and can be expensive to install. Energy-efficient windows are another important factor in insulation. Simply assuring that windows (and doors) are well sealed greatly reduces energy loss in a home. Double or Triple glazed windows are the typical method to insulating windows, trapping gas or creating a vacuum between two or three panes of glass allowing heat to be trapped inside or out. Low-emissivity or Low-E glass is another option for window insulation. It is a coating on windowpanes of a thin, transparent layer of metal oxide and works by reflecting heat back to its source, keeping the interior warm during the winter and cool during the summer. Simply hanging heavy-backed curtains in front of windows may also help their insulation. “Superwindows,” mentioned in Natural Capitalism: Creating the Next Industrial Revolution, became available in the 1980s and use a combination of many available technologies, including two to three transparent low-e coatings, multiple panes of glass, and a heavy gas filling. Although more expensive, they are said to be able to insulate four and a half times better than a typical double-glazed windows. Equipping roofs with highly reflective material (such as aluminum) increases a roof's albedo and will help reduce the amount of heat it absorbs, hence, the amount of energy needed to cool the building it is on. Green roofs or “living roofs” are a popular choice for thermally insulating a building. 
They are also popular for their ability to catch storm-water runoff and, when in the broader picture of a community, reduce the heat island effect (see urban heat island) thereby reducing energy costs of the entire area. It is arguable that they are able to replace the physical “footprint” that the building creates, helping reduce the adverse environmental impacts of the building's presence. Energy efficiency and water conservation are also major considerations in sustainable housing. If using appliances, computers, HVAC systems, electronics, or lighting the sustainable-minded often look for an Energy Star label, which is government-backed and holds stricter regulations in energy and water efficiency than is required by law. Ideally, a sustainable shelter should be able to completely run the appliances it uses using renewable energy and should strive to have a neutral impact on the Earth's water sources Greywater, including water from washing machines, sinks, showers, and baths may be reused in landscape irrigation and toilets as a method of water conservation. Likewise, rainwater harvesting from storm-water runoff is also a sustainable method to conserve water use in a sustainable shelter. Sustainable Urban Drainage Systems replicate the natural systems that clean water in wildlife and implement them in a city's drainage system so as to minimize contaminated water and unnatural rates of runoff into the environment. See related articles in: LEED (Leadership in Energy and Environmental Design) and also it is one of the most important factor of sustainable lifestyle. Power As mentioned under Shelter, some sustainable households may choose to produce their own renewable energy, while others may choose to purchase it through the grid from a power company that harnesses sustainable sources (also mentioned previously are the methods of metering the production and consumption of electricity in a household). Purchasing sustainable energy, however, may simply not be possible in some locations due to its limited availability. 6 out of the 50 states in the US do not offer green energy, for example. For those that do, its consumers typically buy a fixed amount or a percentage of their monthly consumption from a company of their choice and the bought green energy is fed into the entire national grid. Technically, in this case, the green energy is not being fed directly to the household that buys it. In this case, it is possible that the amount of green electricity that the buying household receives is a small fraction of their total incoming electricity. This may or may not depend on the amount being purchased. The purpose of buying green electricity is to support their utility's effort in producing sustainable energy. Producing sustainable energy on an individual household or community basis is much more flexible, but can still be limited in the richness of the sources that the location may afford (some locations may not be rich in renewable energy sources while others may have an abundance of it). When generating renewable energy and feeding it back into the grid (in participating countries such as the US and Germany), producing households are typically paid at least the full standard electricity rate by their utility and are also given separate renewable energy credits that they can then sell to their utility, additionally (utilities are interested in buying these renewable energy credits because it allows them to claim that they produce renewable energy). 
In some special cases, producing households may be paid up to four times the standard electricity rate, but this is not common. Solar power harnesses the energy of the sun to make electricity. Two typical methods for converting solar energy into electricity are photo-voltaic cells that are organized into panels and concentrated solar power, which uses mirrors to concentrate sunlight to either heat a fluid that runs an electrical generator via a steam turbine or heat engine, or to simply cast it onto photo-voltaic cells. The energy created by photo-voltaic cells is a direct current and has to be converted to alternating current before it can be used in a household. At this point, users can choose to either store this direct current in batteries for later use, or use a DC-to-AC inverter for immediate use. To get the best out of a solar panel, the angle of incidence of the sun should be between 20 and 50 degrees. Solar power via photo-voltaic cells is usually the most expensive method of harnessing renewable energy, but it is falling in price as technology advances and public interest increases. It has the advantages of being portable, easy to use on an individual basis, readily eligible for government grants and incentives, and flexible regarding location (though it is most efficient in hot, arid areas, since these tend to be the sunniest). In some areas, affordable rental schemes may also be available. Concentrated solar power plants are typically used on more of a community scale than an individual household scale because of the amount of energy they are able to harness, but the approach can be applied on an individual scale with a parabolic reflector. Solar thermal energy is harnessed by collecting direct heat from the sun. One of the most common ways that this method is used by households is through solar water heating. In a broad perspective, these systems involve well-insulated tanks for storage and collectors, are either passive or active systems (active systems have pumps that continuously circulate water through the collectors and storage tank) and, in active systems, involve either directly heating the water that will be used or heating a non-freezing heat-transfer fluid that then heats the water that will be used. Passive systems are cheaper than active systems since they do not require a pumping system (instead, they take advantage of the natural movement of hot water rising above cold water to cycle the water being used through the collector and storage tank). Other methods of harnessing solar power are solar space heating (for heating internal building spaces), solar drying (for drying wood chips, fruits, grains, etc.), solar cookers, solar distillers, and other passive solar technologies (simply, harnessing sunlight without any mechanical means). Wind power is harnessed through turbines, set on tall towers (typically 20' or 6 m, with 10' or 3 m diameter blades for an individual household's needs) that power a generator that creates electricity. They typically require an average wind speed of 9 mi/hr (14 km/hr) to be worth their investment (as prescribed by the US Department of Energy), and are capable of paying for themselves within their lifetimes. Wind turbines in urban areas usually need to be mounted at least 30' (10 m) in the air to receive enough wind and to be clear of nearby obstructions (such as neighboring buildings). Mounting a wind turbine may also require permission from authorities. 
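To give a feel for how strongly a household turbine's output depends on wind speed, here is a minimal sketch of the standard wind power relation P = ½·ρ·A·v³·Cp, using the roughly 3 m rotor diameter and 9 mph (about 4 m/s) threshold mentioned above. The power coefficient and air density are illustrative assumptions, not figures from the source.

import math

def wind_power_watts(diameter_m: float, wind_speed_ms: float,
                     power_coefficient: float = 0.35,   # assumed; the theoretical Betz limit is ~0.59
                     air_density: float = 1.225) -> float:
    """Estimate instantaneous power (W) from a horizontal-axis turbine."""
    swept_area = math.pi * (diameter_m / 2) ** 2          # A = pi * r^2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * power_coefficient

# Household-scale rotor from the text: ~3 m diameter; 9 mph is roughly 4.0 m/s
for v in (4.0, 6.0, 8.0):
    print(f"{v:.1f} m/s -> {wind_power_watts(3.0, v):.0f} W")

Because power scales with the cube of wind speed, doubling the wind speed yields roughly eight times the output, which is why siting and the average-speed threshold matter so much.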
Wind turbines have been criticized for the noise they produce, their appearance, and the argument that they can affect the migratory patterns of birds (their blades obstruct passage in the sky). Wind turbines are much more feasible for those living in rural areas and are one of the most cost-effective forms of renewable energy per kilowatt, approaching the cost of fossil fuels, and have quick paybacks. For those that have a body of water flowing at an adequate speed (or falling from an adequate height) on their property, hydroelectricity may be an option. On a large scale, hydroelectricity, in the form of dams, has adverse environmental and social impacts. When on a small scale, however, in the form of single turbines, hydroelectricity is very sustainable. Single water turbines or even a group of single turbines are not environmentally or socially disruptive. On an individual household basis, single turbines are probably the only economically feasible route (but can have high paybacks and are among the most efficient methods of renewable energy production). It is more common for an eco-village to use this method than a single household. Geothermal energy production involves harnessing the hot water or steam below the earth's surface, in reservoirs, to produce energy. Because the hot water or steam that is used is reinjected back into the reservoir, this source is considered sustainable. However, those that plan on getting their electricity from this source should be aware that there is controversy over the lifespan of each geothermal reservoir, as some believe that their lifespans are naturally limited (they cool down over time, making geothermal energy production there eventually impossible). This method is often large scale, as the system required to harness geothermal energy can be complex and requires deep drilling equipment. There do exist small individual-scale geothermal operations, however, which harness reservoirs very close to the Earth's surface, avoiding the need for extensive drilling and sometimes even taking advantage of lakes or ponds where there is already a depression. In this case, the heat is captured and sent to a geothermal heat pump system located inside the shelter or facility that needs it (often, this heat is used directly to warm a greenhouse during the colder months). Although geothermal energy is available everywhere on Earth, practicality and cost-effectiveness vary, directly related to the depth required to reach reservoirs. Places such as the Philippines, Hawaii, Alaska, Iceland, California, and Nevada have geothermal reservoirs closer to the Earth's surface, making its production cost-effective. Biomass power is created when any biological matter is burned as fuel. As with the case of using green materials in a household, it is best to use as much locally available material as possible so as to reduce the carbon footprint created by transportation. Although burning biomass for fuel releases carbon dioxide, sulfur compounds, and nitrogen compounds into the atmosphere, a major concern in a sustainable lifestyle, the amount that is released is sustainable (it will not contribute to a rise in carbon dioxide levels in the atmosphere). This is because the biological matter that is being burned releases the same amount of carbon dioxide that it consumed during its lifetime. 
However, burning biodiesel and bioethanol (see biofuel) created from virgin material is increasingly controversial and may or may not be considered sustainable, because it can inadvertently increase global poverty, drive the clearing of more land for new agricultural fields (the source of the biofuel competes with the source of food), and rely on unsustainable growing methods (such as the use of environmentally harmful pesticides and fertilizers). Organic matter that can be burned for fuel includes bagasse, biogas, manure, stover, straw, used vegetable oil, and wood. Digestion of organic material to produce methane is becoming an increasingly popular method of biomass energy production. Materials such as waste sludge can be digested to release methane gas that can then be burnt to produce electricity. Methane gas is also a natural by-product of landfills, full of decomposing waste, and can be harnessed there to produce electricity as well. The advantage of burning methane gas is that it prevents the methane from being released into the atmosphere, where it would exacerbate the greenhouse effect. Although this method of biomass energy production is typically large scale (done in landfills), it can be done on a smaller individual or community scale as well. Food Globally, food accounts for 48% and 90% of household environmental impacts on land and water resources respectively, with consumption of meat, dairy and processed food rising quickly with income. Environmental impacts of industrial agriculture Industrial agricultural production is highly resource and energy intensive. Industrial agriculture systems typically require heavy irrigation, extensive pesticide and fertilizer application, intensive tillage, concentrated monoculture production, and other continual inputs. As a result of these industrial farming conditions, today's mounting environmental stresses are further exacerbated. These stresses include: declining water tables, chemical leaching, chemical runoff, soil erosion, land degradation, loss in biodiversity, and other ecological concerns. Conventional food distribution and long distance transport Conventional food distribution and long-distance transport are additionally resource- and energy-intensive. Substantial climate-disrupting carbon emissions, boosted by the transport of food over long distances, are of growing concern as the world faces such global crises as natural resource depletion, peak oil and climate change. “The average American meal currently travels about 1,500 miles, and takes about 10 calories of oil and other fossil fuels to produce a single calorie of food.” Local and seasonal foods A more sustainable means of acquiring food is to purchase locally and seasonally. Buying food from local farmers reduces carbon output caused by long-distance food transport and stimulates the local economy. Local, small-scale farming operations also typically utilize more sustainable methods of agriculture than conventional industrial farming systems, such as decreased tillage, nutrient cycling, fostered biodiversity and reduced chemical pesticide and fertilizer applications. Adopting a more regional, seasonally based diet is more sustainable, as it entails purchasing less energy- and resource-demanding produce that naturally grows within a local area and requires no long-distance transport. These vegetables and fruits are also grown and harvested within their suitable growing season. 
Thus, seasonal food farming does not require energy-intensive greenhouse production, extensive irrigation, plastic packaging and long-distance transport from importing non-regional foods, and other environmental stressors. Local, seasonal produce is typically fresher, unprocessed and argued to be more nutritious. Local produce also contains little to no chemical residue from applications required for long-distance shipping and handling. Farmers' markets, public events where local small-scale farmers gather and sell their produce, are a good source for obtaining local food and knowledge about local farming practices. As well as promoting localization of food, farmers' markets are a central gathering place for community interaction. Another way to become involved in regional food distribution is by joining a local community-supported agriculture (CSA) scheme. A CSA consists of a community of growers and consumers who pledge to support a farming operation while equally sharing the risks and benefits of food production. CSAs usually involve a system of weekly pick-ups of locally farmed vegetables and fruits, sometimes including dairy products, meat and special food items such as baked goods. Considering the previously noted rising environmental crisis, the United States and much of the world face immense vulnerability to famine. Local food production ensures food security if transportation disruptions or climatic, economic, and sociopolitical disasters were to occur. Reducing meat consumption Industrial meat production also involves high environmental costs such as land degradation, soil erosion and depletion of natural resources, especially pertaining to water and food. Mass meat production increases the amount of methane in the atmosphere. For more information on the environmental impact of meat production and consumption, see the ethics of eating meat. Reducing meat consumption, perhaps to a few meals a week, or adopting a vegetarian or vegan diet, alleviates the demand for environmentally damaging industrial meat production. Buying and consuming organically raised, free-range or grass-fed meat is another alternative towards more sustainable meat consumption. Organic farming Purchasing and supporting organic products is another fundamental contribution to sustainable living. Organic farming is a rapidly emerging trend in the food industry and in the web of sustainability. According to the USDA National Organic Standards Board (NOSB), organic agriculture is defined as "an ecological production management system that promotes and enhances biodiversity, biological cycles, and soil biological activity. It is based on minimal use of off-farm inputs and on management practices that restore, maintain, or enhance ecological harmony. The primary goal of organic agriculture is to optimize the health and productivity of interdependent communities of soil life, plants, animals and people." To pursue these goals, organic agriculture uses techniques such as crop rotation, permaculture, compost, green manure and biological pest control. In addition, organic farming prohibits or strictly limits the use of manufactured fertilizers and pesticides, plant growth regulators such as hormones, livestock antibiotics, food additives and genetically modified organisms. Organically farmed products include vegetables, fruit, grains, herbs, meat, dairy, eggs, fibers, and flowers. See organic certification for more information. 
Urban gardening In addition to local, small-scale farms, there has been a recent emergence in urban agriculture expanding from community gardens to private home gardens. With this trend, both farmers and ordinary people are becoming involved in food production. A network of urban farming systems helps to further ensure regional food security and encourages self-sufficiency and cooperative interdependence within communities. With every bite of food raised from urban gardens, negative environmental impacts are reduced in numerous ways. For instance, vegetables and fruits raised within small-scale gardens and farms are not grown with tremendous applications of nitrogen fertilizer required for industrial agricultural operations. The nitrogen fertilizers cause toxic chemical leaching and runoff that enters our water tables. Nitrogen fertilizer also produces nitrous oxide, a more damaging greenhouse gas than carbon dioxide. Local, community-grown food also requires no imported, long-distance transport which further depletes our fossil fuel reserves. In developing more efficiency per land acre, urban gardens can be started in a wide variety of areas: in vacant lots, public parks, private yards, church and school yards, on roof tops (roof-top gardens), and many other places. Communities can work together in changing zoning limitations in order for public and private gardens to be permissible. Aesthetically pleasing edible landscaping plants can also be incorporated into city landscaping such as blueberry bushes, grapevines trained on an arbor, pecan trees, etc. With as small a scale as home or community farming, sustainable and organic farming methods can easily be utilized. Such sustainable, organic farming techniques include: composting, biological pest control, crop rotation, mulching, drip irrigation, nutrient cycling and permaculture. For more information on sustainable farming systems, see sustainable agriculture. Food preservation and storage Preserving and storing foods reduces reliance on long-distance transported food and the market industry. Home-grown foods can be preserved and stored outside of their growing season and continually consumed throughout the year, enhancing self-sufficiency and independence from the supermarket. Food can be preserved and saved by dehydration, freezing, vacuum packing, canning, bottling, pickling and jellying. For more information, see food preservation. Transportation With rising concerns over non-renewable energy source usage and climate change caused by carbon emissions, the phase-out of fossil fuel vehicles is becoming more and more important to the conversation of sustainability. Zero-emission urban transport systems that foster mobility, accessible public transportation and healthier urban environments are needed. Such urban transport systems should consist of rail transport, electric buses, bicycle pathways, provision for human-powered transport and pedestrian walkways. Public transport systems such as underground rail systems and bus transit systems shift huge numbers of people away from reliance on car dependency and dramatically reduce the rate of carbon emissions caused by automobile transport. In comparison to automobiles, bicycles are a paragon of energy efficient personal transportation with the bicycle roughly 50 times more energy efficient than driving. Bicycles increase mobility while alleviating congestion, lowering air and noise pollution, and increasing physical exercise. 
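As a rough, order-of-magnitude illustration of the efficiency gap claimed above, the sketch below compares energy used per kilometre by a cyclist and by a typical petrol car. The specific figures (about 0.06 MJ/km of food energy for cycling and about 2.5 MJ/km of fuel for an average single-occupant car) are illustrative assumptions rather than values from the source.

CYCLING_MJ_PER_KM = 0.06   # food energy for a moderate-pace cyclist (assumed)
CAR_MJ_PER_KM = 2.5        # petrol energy for an average single-occupant car (assumed)

ratio = CAR_MJ_PER_KM / CYCLING_MJ_PER_KM
print(f"A car uses roughly {ratio:.0f}x the energy of a bicycle per km")
# Under these assumptions the ratio is ~40x, the same order of magnitude
# as the "roughly 50 times" figure quoted in the text.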
Most importantly, they do not emit climate-damaging carbon dioxide. Bike-sharing programs are beginning to boom throughout the world and are modeled in leading cities such as Paris, Amsterdam and London. Bike-sharing programs offer kiosks and docking stations that supply hundreds to thousands of bikes for rental throughout a city through small deposits or affordable memberships. A recent boom has occurred in electric bikes especially in China and other Asian countries. Electric bikes are similar to electric cars in that they are battery-powered and can be plugged into the provincial electric grid for recharging as needed. In contrast to electric cars, electric bikes do not directly use any fossil fuels. Adequate sustainable urban transportation is dependent upon proper city transport infrastructure and planning that incorporates efficient public transit along with bicycle and pedestrian-friendly pathways. Water A major factor of sustainable living involves that which no human can live without, water. Unsustainable water use has far reaching implications for humankind. Currently, humans use one-fourth of the Earth's total fresh water in natural circulation, and over half the accessible runoff. Additionally, population growth and water demand is ever increasing. Thus, it is necessary to use available water more efficiently. In sustainable living, one can use water more sustainably through a series of simple, everyday measures. These measures involve considering indoor home appliance efficiency, outdoor water use, and daily water use awareness. Indoor home appliances Housing and commercial buildings account for 12 percent of America's freshwater withdrawals. A typical American single family home uses about per person per day indoors. This use can be reduced by simple alterations in behavior and upgrades to appliance quality. Toilets Toilets accounted for almost 30% of residential indoor water use in the United States in 1999. One flush of a standard U.S. toilet requires more water than most individuals, and many families, in the world use for all their needs in an entire day. A home's toilet water sustainability can be improved in one of two ways: improving the current toilet or installing a more efficient toilet. To improve the current toilet, one possible method is to put weighted plastic bottles in the toilet tank. Also, there are inexpensive tank banks or float booster available for purchase. A tank bank is a plastic bag to be filled with water and hung in the toilet tank. A float booster attaches underneath the float ball of pre-1986 three and a half gallon capacity toilets. It allows these toilets to operate at the same valve and float setting but significantly reduces their water level, saving between one and one and a third gallons of water per flush. A major waste of water in existing toilets is leaks. A slow toilet leak is undetectable to the eye, but can waste hundreds of gallons each month. One way to check this is to put food dye in the tank, and to see if the water in the toilet bowl turns the same color. In the event of a leaky flapper, one can replace it with an adjustable toilet flapper, which allows self-adjustment of the amount of water per flush. In installing a new toilet there are a number of options to obtain the most water efficient model. A low flush toilet uses one to two gallons per flush. Traditionally, toilets use three to five gallons per flush. 
If an eighteen-liter per flush toilet is removed and a six-liter per flush toilet is put in its place, about 70% of the water flushed will be saved, while overall indoor water use will be reduced by about 30% (see the worked arithmetic below). It is possible to have a toilet that uses no water. A composting toilet treats human waste through composting and dehydration, producing a valuable soil additive. These toilets feature a two-compartment bowl to separate urine from feces. The urine can be collected or sold as fertilizer. The feces can be dried and bagged or composted. These toilets cost scarcely more than regularly installed toilets and do not require a sewer hookup. In addition to providing valuable fertilizer, these toilets are highly sustainable because they save sewage collection and treatment, as well as lessen agricultural costs and improve topsoil. Additionally, one can reduce toilet water use by limiting total toilet flushing. For instance, instead of flushing small wastes, such as tissues, one can dispose of these items in the trash or compost. Showers On average, showers accounted for 18% of U.S. indoor water use in 1999, at the flow rates traditional in America. A simple method to reduce this use is to switch to low-flow, high-performance showerheads. These showerheads use only 1.0–1.5 gpm. An alternative to replacing the showerhead is to install a converter. This device arrests a running shower upon reaching the desired temperature. Solar water heaters can be used to obtain optimal water temperature, and are more sustainable because they reduce dependence on fossil fuels. To lessen excess water use, water pipes can be insulated with pre-slit foam pipe insulation. This insulation decreases the time needed for hot water to reach the tap. A simple, straightforward method to conserve water when showering is to take shorter showers. One method to accomplish this is to turn off the water when it is not necessary (such as while lathering) and resume the shower when water is necessary. This can be facilitated when the plumbing or showerhead allows turning off the water without disrupting the desired temperature setting (common in the UK but not the United States). Dishwashers and sinks On average, sinks were 15% of U.S. indoor water use in 1999. There are, however, easy methods to rectify excessive water loss. Available for purchase is a screw-on aerator. This device works by combining water with air, thus generating a frothy stream with greater perceived volume and reducing water use by half. Additionally, there is a flip-valve available that allows flow to be turned off and back on at the previously reached temperature. Finally, a laminar flow device creates a 1.5–2.4 gpm stream of water that reduces water use by half, but can be switched to normal flow when needed. In addition to buying the above devices, one can live more sustainably by checking sinks for leaks, and fixing these leaks if they exist. According to the EPA, "A small drip from a worn faucet washer can waste 20 gallons of water per day, while larger leaks can waste hundreds of gallons". When washing dishes by hand, it is not necessary to leave the water running for rinsing, and it is more efficient to rinse dishes simultaneously. On average, dishwashing consumes 1% of indoor water use. When using a dishwasher, water can be conserved by only running the machine when it is full. Some have a "low flow" setting to use less water per wash cycle. 
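Returning to the toilet-replacement figures quoted at the start of this subsection, a minimal sketch of the arithmetic. The 18 L and 6 L flush volumes come from the text; the daily flush count is an assumption used only for illustration.

OLD_FLUSH_L = 18.0   # pre-retrofit flush volume (from the text)
NEW_FLUSH_L = 6.0    # low-flush replacement (from the text)
FLUSHES_PER_PERSON_PER_DAY = 5   # assumed, for illustration only

saved_fraction = (OLD_FLUSH_L - NEW_FLUSH_L) / OLD_FLUSH_L
litres_saved_per_year = (OLD_FLUSH_L - NEW_FLUSH_L) * FLUSHES_PER_PERSON_PER_DAY * 365

print(f"Flush water saved: {saved_fraction:.0%}")              # ~67%, i.e. roughly the 70% cited
print(f"Per person per year: {litres_saved_per_year:,.0f} L")  # 21,900 L under these assumptions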
Enzymatic detergents clean dishes more efficiently and more successfully with a smaller amount of water at a lower temperature. Washing machines On average, 23% of U.S. indoor water use in 1999 was due to clothes washing. In contrast to other machines, American washing machines have changed little to become more sustainable. A typical washing machine has a vertical-axis design, in which clothes are agitated in a tubful of water. Horizontal-axis machines, in contrast, put less water into the bottom of the tub and rotate clothes through it. These machines are more efficient in terms of soap use and clothing stability. Outdoor water use There are a number of ways one can incorporate a personal yard, roof, and garden in more sustainable living. While conserving water is a major element of sustainability, so is sequestering water. Conserving water In planning a yard and garden space, it is most sustainable to consider the plants, soil, and available water. Drought-resistant shrubs, plants, and grasses require a smaller amount of water in comparison to more traditional species. Additionally, native plants (as opposed to herbaceous perennials) will use a smaller supply of water and have a heightened resistance to plant diseases of the area. Xeriscaping is a technique that selects drought-tolerant plants and accounts for endemic features such as slope, soil type, and native plant range. It can reduce landscape water use by 50–70%, while providing habitat space for wildlife. Plants on slopes help reduce runoff by slowing and absorbing accumulated rainfall. Grouping plants by watering needs further reduces water waste. After planting, placing a ring of mulch around plants helps to lessen evaporation. To do this, firmly press two to four inches of organic matter along the plant's dripline. This prevents water runoff. When watering, consider the range of sprinklers; watering paved areas is unnecessary. Additionally, to conserve the maximum amount of water, watering should be carried out during early mornings on non-windy days to reduce water loss to evaporation. Drip-irrigation systems and soaker hoses are a more sustainable alternative to the traditional sprinkler system. Drip-irrigation systems employ small gaps at standard distances in a hose, leading to the slow trickle of water droplets which percolate into the soil over a protracted period. These systems use 30–50% less water than conventional methods. Soaker hoses help to reduce water use by up to 90%. They connect to a garden hose and lie along the row of plants under a layer of mulch. A layer of organic material added to the soil helps to increase its absorption and water retention; previously planted areas can be covered with compost. In caring for a lawn, there are a number of measures that can increase the sustainability of lawn maintenance techniques. A primary aspect of lawn care is watering. To conserve water, it is important to only water when necessary, and to deep soak when watering. Additionally, a lawn may be left to go dormant, renewing after a dry spell to its original vitality. Sequestering water A common method of water sequestration is rainwater harvesting, which incorporates the collection and storage of rain. Primarily, the rain is obtained from a roof, and stored on the ground in catchment tanks. Water sequestration varies based on extent, cost, and complexity. A simple method involves a single barrel at the bottom of a downspout, while a more complex method involves multiple tanks. 
It is highly sustainable to use stored water in place of purified water for activities such as irrigation and flushing toilets. Additionally, using stored rainwater reduces the amount of runoff pollution picked up from roofs and pavements, which would normally enter streams through storm drains. The following equation can be used to estimate annual water supply: collection area (square feet) × rainfall (inches/year) ÷ 12 (inches/foot) = cubic feet of water/year; cubic feet/year × 7.48 (gallons/cubic foot) = gallons/year (a short worked example follows below). Note, however, this calculation does not account for losses such as evaporation or leakage. Greywater systems function by sequestering used indoor water, such as laundry, bath and sink water, and filtering it for reuse. Greywater can be reused in irrigation and toilet flushing. There are two types of greywater systems: gravity-fed manual systems and package systems. The manual systems do not require electricity but may require a larger yard space. The package systems require electricity but are self-contained and can be installed indoors. Waste As populations and resource demands climb, waste production contributes to emissions of carbon dioxide, leaching of hazardous materials into the soil and waterways, and methane emissions. In America alone, over the course of a decade, vast quantities of resources will have been transformed into nonproductive wastes and gases. Thus, a crucial component of sustainable living is being waste conscious. One can do this by reducing waste, reusing commodities, and recycling. There are a number of ways to reduce waste in sustainable living. Two methods to reduce paper waste are canceling junk mail (such as credit card and insurance offers and direct-mail marketing) and changing monthly paper statements to paperless emails. Junk mail alone accounted for 1.72 million tons of landfill waste in 2009. Another method to reduce waste is to buy in bulk, reducing packaging materials. Preventing food waste can limit the amount of organic waste sent to landfills, where it produces the powerful greenhouse gas methane. Another example of waste reduction involves being mindful of purchasing excessive amounts when buying materials with limited use, like cans of paint. Non-hazardous or less hazardous alternatives can also limit the toxicity of waste. By reusing materials, one lives more sustainably by not contributing to the addition of waste to landfills. Reusing saves natural resources by decreasing the necessity of raw material extraction. For example, reusable bags can reduce the amount of waste created by grocery shopping by eliminating the need to create and ship plastic bags and to manage their disposal, recycling, or polluting effects. Recycling, a process that breaks down used items into raw materials to make new materials, is a particularly useful means of contributing to the renewal of goods. Recycling incorporates three primary processes: collection and processing, manufacturing, and purchasing recycled products. A natural example of recycling involves using food waste as compost to enrich the quality of soil, which can be carried out at home or locally with community composting. An offshoot of recycling, upcycling, strives to convert material into something of similar or greater value in its second life. By integrating measures of reusing, reducing, and recycling, one can effectively reduce personal waste and use materials in a more sustainable manner. 
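The annual-supply formula given in the Sequestering water subsection above translates directly into code. A minimal sketch follows; the roof area and local rainfall used in the example call are assumptions for illustration, not values from the source.

GALLONS_PER_CUBIC_FOOT = 7.48

def annual_rainwater_gallons(collection_area_sqft: float, rainfall_in_per_year: float) -> float:
    """Estimate annual rainwater yield, ignoring evaporation and leakage losses."""
    cubic_feet_per_year = collection_area_sqft * rainfall_in_per_year / 12   # inches -> feet
    return cubic_feet_per_year * GALLONS_PER_CUBIC_FOOT

# Example: a 1,000 sq ft roof in a 30 in/yr rainfall climate (illustrative numbers)
print(f"{annual_rainwater_gallons(1000, 30):,.0f} gallons per year")   # ~18,700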
Reproductive choices Though it is not always included in discussions of sustainable living, some consider reproductive choices to be a key part of sustainable living. Reproductive choices refers, in this case, to the number of children that an individual has, whether they are conceived biologically or adopted. Some researchers have claimed that for people living in wealthy, high-consumption countries such as the United States, having fewer children is by far the most effective way to decrease one's carbon footprint, and one's ecological footprint more broadly. However, the scholarship that has led to this claim has been questioned, as has the misleading way that it has often been presented in popular newspaper and web articles. Some ethicists and environmental activists have made similar arguments about the need for a "small family ethic", and research has found that in some countries these ecological concerns are leading some people to report having fewer children than they would otherwise, or no children at all. However, there have been multiple critiques of the idea that having fewer children is part of a sustainable lifestyle. Some argue that it is an example of the kind of Malthusian thinking that has led to coercion and violence in the past (including forced sterilizations and forced abortions), and that it might lead to similar policies that deny women reproductive freedom in the future. Additionally, research has found that some environmentalists consider having children, and even having more children than they might otherwise, to be a part of sustainable living. They assert that parenting can be an important way for individuals to exert a positive environmental influence, by educating the next generation and as a way to remain engaged in one's commitment to environmental action. Provision, supply and expenditure in general A study that reviewed 217 analyses of on-the-market products and services and analyzed existing alternatives to mainstream food, holidays, and furnishings concluded that, as of 2021, total greenhouse gas emissions by Swedes could be lowered by up to 36–38% if consumers – without a decrease in total estimated expenditure or considerations of self-interest – were instead to obtain the alternatives they could, using available data, assess to be more sustainable. Provision, supply and availability, product development, success and price, comparative benefits, incentives, and the purposes, demands and effects of expenditure choices are part of, or embedded in, the broader socioeconomic system, and are therefore largely beyond the control of an individual seeking to make rational and ethical choices within it, even if all relevant life-cycle assessment and manufacturing information were available to that consumer. See also Buddhist economics Circles of Sustainability Citizen Science, cleanup projects that people can take part in. Cradle-to-cradle design Circular economy Climate-friendly gardening Downshifting Eco-communalism Ecodesign Ecological economics Ethical consumerism Foodscaping Frugality Simple living Sufficiency economy Sustainability Sustainable architecture Sustainable design Sustainable development Sustainable event management Sustainable landscaping Sustainable House Day (in Australia) Permaculture The Venus Project Transition Towns References External links INHERIT Project, a Horizon 2020 Project to identify ways of living, moving and consuming that protect the environment and promote health and health equity. 
Environmentalism Intentional living Simple living Living Sustainable design |
No, this text is not related with defense topics | Repetition blindness (RB) is a phenomenon observed in rapid serial visual presentation. People are sometimes poor at recognizing when things happen twice. Repetition blindness is the failure to recognize the second occurrence of a visual stimulus. The two displays are shown sequentially, possibly with other stimulus displays in between. Each display is shown only briefly, usually for about 150 milliseconds (Kanwisher, 1987). If stimuli are shown in between, RB can occur over a time interval of up to 600 milliseconds. Without other stimuli displayed in between the two repeated stimuli, RB only lasts about 250 milliseconds (Luo & Caramazza, 1995). Repetition blindness tasks usually involve words in lists and in sentences, and the effect also occurs for phonologically similar items (Bavelier & Potter, 1992). There are also tasks using pictures, and mixtures of words and pictures; an example of this is a picture of the sun paired with the word sun (Bavelier, 1994). The most popular task used to examine repetition blindness is to show words one after another on a screen in rapid succession, after which participants must recall the words that they saw. This task is known as rapid serial visual presentation (RSVP). Repetition blindness is present if missing the second word creates an inaccurate report of the sentence. An example of this is "When she spilled the ink there was ink all over." In such an RSVP sequence, participants will recall seeing "When she spilled the ink there was all over." However, they are missing the second occurrence of "ink" (Kanwisher, 1987). This finding supports the idea that people are "blind" to the second occurrence of a repeated item in an RSVP series. For example, a subject's chances of correctly reporting both appearances of the word "cat" in the RSVP stream "dog mouse cat elephant cat snake" are lower than their chances of reporting the third and fifth words in the stream "dog mouse cat elephant pig snake". The precise mechanism underlying RB has been extensively debated. Nancy Kanwisher has argued that it involves failure to tokenize the second appearance of a repeated stimulus. Tokenization, here, means the ability to identify the second stimulus as a second individual, or token. Lack of tokenization means that the second appearance of the stimulus is dropped from short-term memory before it can be identified and, hence, remains unreportable. However, Whittlesea and colleagues have argued that repetition blindness arises from a failure to properly reconstruct the list, both online and after the list has been presented. This failure to properly reconstruct the list arises from the poor encoding cues that result from the RSVP task. See also Attentional blink Semantic satiation References Cognition
No, this text is not related with defense topics | Ozone (), or trioxygen, is an inorganic molecule with the chemical formula . It is a pale blue gas with a distinctively pungent smell. It is an allotrope of oxygen that is much less stable than the diatomic allotrope , breaking down in the lower atmosphere to (dioxygen). Ozone is formed from dioxygen by the action of ultraviolet (UV) light and electrical discharges within the Earth's atmosphere. It is present in very low concentrations throughout the latter, with its highest concentration high in the ozone layer of the stratosphere, which absorbs most of the Sun's ultraviolet (UV) radiation. Ozone's odour is reminiscent of chlorine, and detectable by many people at concentrations of as little as in air. Ozone's O3 structure was determined in 1865. The molecule was later proven to have a bent structure and to be weakly diamagnetic. In standard conditions, ozone is a pale blue gas that condenses at cryogenic temperatures to a dark blue liquid and finally a violet-black solid. Ozone's instability with regard to more common dioxygen is such that both concentrated gas and liquid ozone may decompose explosively at elevated temperatures, physical shock or fast warming to the boiling point. It is therefore used commercially only in low concentrations. Ozone is a powerful oxidant (far more so than dioxygen) and has many industrial and consumer applications related to oxidation. This same high oxidizing potential, however, causes ozone to damage mucous and respiratory tissues in animals, and also tissues in plants, above concentrations of about . While this makes ozone a potent respiratory hazard and pollutant near ground level, a higher concentration in the ozone layer (from two to eight ppm) is beneficial, preventing damaging UV light from reaching the Earth's surface. Nomenclature The trivial name ozone is the most commonly used and preferred IUPAC name. The systematic names 2λ4-trioxidiene and catena-trioxygen, valid IUPAC names, are constructed according to the substitutive and additive nomenclatures, respectively. The name ozone derives from ozein (ὄζειν), the Greek verb for smell, referring to ozone's distinctive smell. In appropriate contexts, ozone can be viewed as trioxidane with two hydrogen atoms removed, and as such, trioxidanylidene may be used as a systematic name, according to substitutive nomenclature. By default, these names pay no regard to the radicality of the ozone molecule. In an even more specific context, this can also name the non-radical singlet ground state, whereas the diradical state is named trioxidanediyl. Trioxidanediyl (or ozonide) is used, non-systematically, to refer to the substituent group (-OOO-). Care should be taken to avoid confusing the name of the group for the context-specific name for the ozone given above. History In 1785, the Dutch chemist Martinus van Marum was conducting experiments involving electrical sparking above water when he noticed an unusual smell, which he attributed to the electrical reactions, failing to realize that he had in fact created ozone. A half century later, Christian Friedrich Schönbein noticed the same pungent odour and recognized it as the smell often following a bolt of lightning. In 1839, he succeeded in isolating the gaseous chemical and named it "ozone", from the Greek word () meaning "to smell". For this reason, Schönbein is generally credited with the discovery of ozone. 
The formula for ozone, O3, was not determined until 1865 by Jacques-Louis Soret and confirmed by Schönbein in 1867. For much of the second half of the nineteenth century and well into the twentieth, ozone was considered a healthy component of the environment by naturalists and health-seekers. Beaumont, California had as its official slogan "Beaumont: Zone of Ozone", as evidenced on postcards and Chamber of Commerce letterhead. Naturalists working outdoors often considered the higher elevations beneficial because of their ozone content. "There is quite a different atmosphere [at higher elevation] with enough ozone to sustain the necessary energy [to work]", wrote naturalist Henry Henshaw, working in Hawaii. Seaside air was considered to be healthy because of its believed ozone content; but the smell giving rise to this belief is in fact that of halogenated seaweed metabolites. Much of ozone's appeal seems to have resulted from its "fresh" smell, which evoked associations with purifying properties. Scientists, however, noted its harmful effects. In 1873 James Dewar and John Gray McKendrick documented that frogs grew sluggish, birds gasped for breath, and rabbits' blood showed decreased levels of oxygen after exposure to "ozonized air", which "exercised a destructive action". Schönbein himself reported that chest pains, irritation of the mucous membranes and difficulty breathing occurred as a result of inhaling ozone, and small mammals died. In 1911, Leonard Hill and Martin Flack stated in the Proceedings of the Royal Society B that ozone's healthful effects "have, by mere iteration, become part and parcel of common belief; and yet exact physiological evidence in favour of its good effects has been hitherto almost entirely wanting ... The only thoroughly well-ascertained knowledge concerning the physiological effect of ozone, so far attained, is that it causes irritation and œdema of the lungs, and death if inhaled in relatively strong concentration for any time." During World War I, ozone was tested at Queen Alexandra Military Hospital in London as a possible disinfectant for wounds. The gas was applied directly to wounds for as long as 15 minutes. This resulted in damage to both bacterial cells and human tissue. Other sanitizing techniques, such as irrigation with antiseptics, were found preferable. Until the 1920s, it was still not certain whether small amounts of oxozone, , were also present in ozone samples due to the difficulty of applying analytical chemistry techniques to the explosive concentrated chemical. In 1923, Georg-Maria Schwab (working for his doctoral thesis under Ernst Hermann Riesenfeld) was the first to successfully solidify ozone and perform accurate analysis which conclusively refuted the oxozone hypothesis. Further hitherto unmeasured physical properties of pure concentrated ozone were determined by the Riesenfeld group in the 1920s. Physical & Magnetic properties Ozone is a colourless or pale blue gas, slightly soluble in water and much more soluble in inert non-polar solvents such as carbon tetrachloride or fluorocarbons, in which it forms a blue solution. At , it condenses to form a dark blue liquid. It is dangerous to allow this liquid to warm to its boiling point, because both concentrated gaseous ozone and liquid ozone can detonate. At temperatures below , it forms a violet-black solid. Most people can detect about 0.01 μmol/mol of ozone in air where it has a very specific sharp odour somewhat resembling chlorine bleach. 
Exposure of 0.1 to 1 μmol/mol produces headaches, burning eyes and irritation to the respiratory passages. Even low concentrations of ozone in air are very destructive to organic materials such as latex, plastics and animal lung tissue. Ozone is weakly diamagnetic. Structure According to experimental evidence from microwave spectroscopy, ozone is a bent molecule, with C2v symmetry (similar to the water molecule). The two O – O distances are equal, and the O – O – O angle is 116.78°. The central atom is sp² hybridized with one lone pair. Ozone is a polar molecule with a dipole moment of 0.53 D. The molecule can be represented as a resonance hybrid with two contributing structures, each with a single bond on one side and a double bond on the other. The arrangement possesses an overall bond order of 1.5 for both sides. It is isoelectronic with the nitrite anion. Naturally occurring ozone can be composed of substituted isotopes (16O, 17O, 18O). Reactions Ozone is among the most powerful oxidizing agents known, far stronger than O2. It is also unstable at high concentrations, decaying into ordinary oxygen. Its half-life varies with atmospheric conditions such as temperature, humidity, and air movement. Under laboratory conditions, half-life time (HLT) will average ~1500 minutes (25 hours) in still air at room temperature (24 °C), zero humidity and zero air changes per hour (ACH). As such, in a typical office or home environment, where air changes per hour vary between 5 and 8 ACH, ozone has a half-life as short as thirty minutes. 2 O3 → 3 O2 This reaction proceeds more rapidly with increasing temperature. Deflagration of ozone can be triggered by a spark and can occur in ozone concentrations of 10 wt% or higher. Ozone can also be produced from oxygen at the anode of an electrochemical cell. This reaction can create smaller quantities of ozone for research purposes. O3(g) + 2 H+ + 2 e− → O2(g) + H2O (E° = 2.075 V) This can be observed as an unwanted reaction in a Hofmann gas apparatus during the electrolysis of water when the voltage is set above the necessary voltage. With metals Ozone will oxidize most metals (except gold, platinum, and iridium) to oxides of the metals in their highest oxidation state. With nitrogen and carbon compounds Ozone also oxidizes nitric oxide to nitrogen dioxide: NO + O3 → NO2 + O2 This reaction is accompanied by chemiluminescence. The NO2 can be further oxidized to the nitrate radical: NO2 + O3 → NO3 + O2 The NO3 formed can react with NO2 to form N2O5. Solid nitronium perchlorate can be made from NO2, ClO2, and O3 gases: NO2 + ClO2 + 2 O3 → NO2ClO4 + 2 O2 Ozone does not react with ammonium salts, but it oxidizes ammonia to ammonium nitrate: 2 NH3 + 4 O3 → NH4NO3 + 4 O2 + H2O Ozone reacts with carbon to form carbon dioxide, even at room temperature: C + 2 O3 → CO2 + 2 O2 With sulfur compounds Ozone oxidizes sulfides to sulfates. For example, lead(II) sulfide is oxidized to lead(II) sulfate: PbS + 4 O3 → PbSO4 + 4 O2 Sulfuric acid can be produced from ozone, water and either elemental sulfur or sulfur dioxide: S + H2O + O3 → H2SO4 3 SO2 + 3 H2O + O3 → 3 H2SO4 In the gas phase, ozone reacts with hydrogen sulfide to form sulfur dioxide: H2S + O3 → SO2 + H2O In an aqueous solution, however, two competing simultaneous reactions occur, one to produce elemental sulfur, and one to produce sulfuric acid: H2S + O3 → S + O2 + H2O 3 H2S + 4 O3 → 3 H2SO4 With alkenes and alkynes Alkenes can be oxidatively cleaved by ozone, in a process called ozonolysis, giving alcohols, aldehydes, ketones, and carboxylic acids, depending on the second step of the workup. 
Ozone can also cleave alkynes to form an acid anhydride or diketone product. If the reaction is performed in the presence of water, the anhydride hydrolyzes to give two carboxylic acids. Usually ozonolysis is carried out in a solution of dichloromethane, at a temperature of −78 °C. After a sequence of cleavage and rearrangement, an organic ozonide is formed. With reductive workup (e.g. zinc in acetic acid or dimethyl sulfide), ketones and aldehydes will be formed; with oxidative workup (e.g. aqueous or alcoholic hydrogen peroxide), carboxylic acids will be formed. Other substrates All three atoms of ozone may also react, as in the reaction of tin(II) chloride with hydrochloric acid and ozone: 3 SnCl2 + 6 HCl + O3 → 3 SnCl4 + 3 H2O Iodine perchlorate can be made by treating iodine dissolved in cold anhydrous perchloric acid with ozone: I2 + 6 HClO4 + O3 → 2 I(ClO4)3 + 3 H2O Ozone can also react with potassium iodide to give oxygen and iodine: 2 KI + O3 + H2O → 2 KOH + O2 + I2 Combustion Ozone can be used in combustion reactions with combustible gases; ozone provides higher temperatures than burning in dioxygen (O2). The following is a reaction for the combustion of carbon subnitride, which can also produce very high temperatures: 3 C4N2 + 4 O3 → 12 CO + 3 N2 Ozone can also react at cryogenic temperatures. At very low temperatures, atomic hydrogen reacts with liquid ozone to form a hydrogen superoxide radical, which dimerizes: H + O3 → HO2 + O 2 HO2 → H2O4 Ozone decomposition Types of ozone decomposition Ozone is a toxic substance, commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers, etc.), and its catalytic decomposition is very important to reduce pollution. This type of decomposition is the most widely used, especially with solid catalysts, and it has many advantages, such as higher conversion at a lower temperature. Furthermore, the product and the catalyst can be instantaneously separated, so the catalyst can be easily recovered without any separation operation. Moreover, the most commonly used materials in the catalytic decomposition of ozone in the gas phase are noble metals like Pt, Rh or Pd and transition metals such as Mn, Co, Cu, Fe, Ni or Ag. There are two other possibilities for ozone decomposition in the gas phase. The first is thermal decomposition, in which ozone is decomposed using only the action of heat. The problem is that this type of decomposition is very slow at temperatures below 250 °C. The decomposition rate can be increased by working at higher temperatures, but this involves a high energy cost. The second is photochemical decomposition, which consists of irradiating ozone with ultraviolet (UV) radiation and gives rise to oxygen and peroxide radicals. Kinetics of ozone decomposition into molecular oxygen The process of ozone decomposition is a complex reaction involving two elementary reactions that finally lead to molecular oxygen, and this means that the reaction order and the rate law cannot be determined from the stoichiometry of the balanced equation. Overall reaction: 2 O3 → 3 O2 Rate law (observed): V = K · [O3]² · [O2]⁻¹ It has been determined that the ozone decomposition follows first-order kinetics overall; from the rate law above, the partial order with respect to molecular oxygen is −1 and with respect to ozone is 2, so the global reaction order is 1. 
The ozone decomposition consists of two elementary steps: The first corresponds to a unimolecular reaction, because a single molecule of ozone decomposes into two products (molecular oxygen and atomic oxygen). The atomic oxygen from the first step is an intermediate, because it participates as a reactant in the second step, which is a bimolecular reaction: two different reactants (ozone and atomic oxygen) give rise to a single product, molecular oxygen, in the gas phase. Step 1: Unimolecular reaction O3 → O2 + O Step 2: Bimolecular reaction O3 + O → 2 O2 These two steps have different reaction rates: the first is reversible and fast, while the second is slower, so the second reaction is the rate-determining step and is used to derive the observed reaction rate. The rate laws for the two steps are V1 = K1 · [O3] and V2 = K2 · [O] · [O3]. This mechanism explains the rate law of ozone decomposition observed experimentally, and it also allows the reaction orders with respect to ozone and oxygen, and hence the overall reaction order, to be determined. The slower step, the bimolecular reaction, is the one that determines the rate of product formation, and considering that this step gives rise to two oxygen molecules, the rate law has this form: V = 2 K2 · [O] · [O3] However, this equation depends on the concentration of the intermediate, atomic oxygen, which can be determined by considering the first step. Since the first step is faster and reversible and the second step is slower, the reactants and products of the first step are in equilibrium, so the concentration of the intermediate can be determined from the equilibrium condition K1 · [O3] = K−1 · [O2] · [O], giving [O] = (K1/K−1) · [O3]/[O2]. Then, using these equations, the formation rate of molecular oxygen is V = 2 K2 · (K1/K−1) · [O3]²/[O2]. Finally, the mechanism reproduces the rate law observed experimentally, V = Kobs · [O3]² · [O2]⁻¹, where Kobs = 2 K2 · K1/K−1, corresponding to overall first-order kinetics. Reduction to ozonides Reduction of ozone gives the ozonide anion, O3−. Derivatives of this anion are explosive and must be stored at cryogenic temperatures. Ozonides for all the alkali metals are known. KO3, RbO3, and CsO3 can be prepared from their respective superoxides: KO2 + O3 → KO3 + O2 Although KO3 can be formed as above, it can also be formed from potassium hydroxide and ozone: 2 KOH + 5 O3 → 2 KO3 + 5 O2 + H2O NaO3 and LiO3 must be prepared by action of CsO3 in liquid NH3 on an ion-exchange resin containing Na+ or Li+ ions: CsO3 + Na+ → Cs+ + NaO3 A solution of calcium in ammonia reacts with ozone to give ammonium ozonide and not calcium ozonide: 3 Ca + 10 NH3 + 6 O3 → Ca·6NH3 + Ca(OH)2 + Ca(NO3)2 + 2 NH4O3 + 2 O2 + H2 Applications Ozone can be used to remove iron and manganese from water, forming a precipitate which can be filtered: 2 Fe2+ + O3 + 5 H2O → 2 Fe(OH)3(s) + O2 + 4 H+ 2 Mn2+ + 2 O3 + 4 H2O → 2 MnO(OH)2(s) + 2 O2 + 4 H+ Ozone will also oxidize dissolved hydrogen sulfide in water to sulfurous acid: 3 O3 + H2S → H2SO3 + 3 O2 These three reactions are central in the use of ozone-based well water treatment. Ozone will also detoxify cyanides by converting them to cyanates: CN− + O3 → CNO− + O2 Ozone will also completely decompose urea: (NH2)2CO + O3 → N2 + CO2 + 2 H2O Spectroscopic properties Ozone is a bent triatomic molecule with three vibrational modes: the symmetric stretch (1103.157 cm−1), bend (701.42 cm−1) and antisymmetric stretch (1042.096 cm−1). 
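For readers who want to relate these wavenumbers to wavelengths, the conversion λ = 1/ν̃ is straightforward. A minimal sketch follows; the mode list simply reuses the three values quoted above.

MODES_CM_1 = {
    "symmetric stretch": 1103.157,
    "bend": 701.42,
    "antisymmetric stretch": 1042.096,
}

for name, wavenumber in MODES_CM_1.items():
    wavelength_um = 1e4 / wavenumber   # 1 cm = 1e4 micrometres, so lambda(um) = 1e4 / (cm^-1)
    print(f"{name}: {wavenumber:.2f} cm^-1 -> {wavelength_um:.2f} um")
# The antisymmetric stretch comes out near 9.6 um, in the thermal infrared,
# which is why this band matters for ozone's role as a greenhouse gas.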
The symmetric stretch and bend are weak absorbers, but the antisymmetric stretch is strong and responsible for ozone being an important minor greenhouse gas. This IR band is also used to detect ambient and atmospheric ozone, although UV-based measurements are more common. The electromagnetic spectrum of ozone is quite complex. An overview can be seen at the MPI Mainz UV/VIS Spectral Atlas of Gaseous Molecules of Atmospheric Interest. All of the bands are dissociative, meaning that the molecule falls apart to O + O2 after absorbing a photon. The most important absorption is the Hartley band, extending from slightly above 300 nm down to slightly above 200 nm. It is this band that is responsible for absorbing UV-C in the stratosphere. On the high wavelength side, the Hartley band transitions to the so-called Huggins band, which falls off rapidly until disappearing by ~360 nm. Above 400 nm, extending well out into the NIR, are the Chappuis and Wulf bands. These unstructured absorption bands are useful for detecting high ambient concentrations of ozone, but are so weak that they do not have much practical effect. There are additional absorption bands in the far UV, which increase slowly from 200 nm down to a maximum at ~120 nm. Ozone in Earth's atmosphere The standard way to express total ozone levels (the amount of ozone in a given vertical column) in the atmosphere is by using Dobson units. Point measurements are reported as mole fractions in nmol/mol (parts per billion, ppb) or as concentrations in μg/m3. The study of ozone concentration in the atmosphere started in the 1920s. Ozone layer Location and production The highest levels of ozone in the atmosphere are in the stratosphere, in a region also known as the ozone layer between about 10 km and 50 km above the surface (or between about 6 and 31 miles). However, even in this "layer", the ozone concentrations are only two to eight parts per million, so most of the oxygen there is dioxygen, O2, at about 210,000 parts per million by volume. Ozone in the stratosphere is mostly produced from short-wave ultraviolet rays between 240 and 160 nm. Oxygen starts to absorb weakly at 240 nm in the Herzberg bands, but most of the oxygen is dissociated by absorption in the strong Schumann–Runge bands between 200 and 160 nm, where ozone does not absorb. While shorter wavelength light, extending to even the X-ray limit, is energetic enough to dissociate molecular oxygen, there is relatively little of it, and the strong solar emission at Lyman-alpha, 121 nm, falls at a point where molecular oxygen absorption is a minimum. The process of ozone creation and destruction is called the Chapman cycle and starts with the photolysis of molecular oxygen: O2 + photon (radiation of λ < 240 nm) → 2 O, followed by reaction of the oxygen atom with another molecule of oxygen to form ozone: O + O2 + M → O3 + M, where "M" denotes the third body that carries off the excess energy of the reaction. The ozone molecule can then absorb a UV-C photon and dissociate: O3 + photon → O + O2 + kinetic energy. The excess kinetic energy heats the stratosphere when the O atoms and the molecular oxygen fly apart and collide with other molecules. This conversion of UV light into kinetic energy warms the stratosphere. The oxygen atoms produced in the photolysis of ozone then react with another oxygen molecule as in the previous step to form more ozone. 
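A short calculation shows why the 240 nm threshold in the first Chapman step is where it is: the energy of a mole of 240 nm photons is almost exactly the O=O bond dissociation energy of molecular oxygen (about 498 kJ/mol, a standard literature value). A minimal sketch, using E = hc/λ:

H = 6.626e-34        # Planck constant, J s
C = 2.998e8          # speed of light, m/s
AVOGADRO = 6.022e23  # photons per mole

def photon_energy_kj_per_mol(wavelength_nm: float) -> float:
    """Energy carried by one mole of photons at the given wavelength."""
    energy_per_photon = H * C / (wavelength_nm * 1e-9)   # joules
    return energy_per_photon * AVOGADRO / 1000           # kJ/mol

print(f"{photon_energy_kj_per_mol(240):.0f} kJ/mol")   # ~498 kJ/mol
# Only photons with wavelengths shorter than about 240 nm carry enough energy
# per mole to break the O=O bond and start the cycle.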
In the clear atmosphere, with only nitrogen and oxygen, ozone can react with the atomic oxygen to form two molecules of O2: O3 + O → 2 O2 An estimate of the rate of this termination step relative to the cycling of atomic oxygen back to ozone can be found simply by taking the ratio of the concentrations of O2 and O3. The termination reaction is catalysed by the presence of certain free radicals, of which the most important are hydroxyl (OH), nitric oxide (NO) and atomic chlorine (Cl) and bromine (Br). In the second half of the 20th century, the amount of ozone in the stratosphere was discovered to be declining, mostly because of increasing concentrations of chlorofluorocarbons (CFCs) and similar chlorinated and brominated organic molecules. Concern over the health effects of the decline led to the 1987 Montreal Protocol and the ban on the production of many ozone-depleting chemicals, followed in the first and second decades of the 21st century by the beginning of the recovery of stratospheric ozone concentrations. Importance to surface-dwelling life on Earth Ozone in the ozone layer filters out ultraviolet sunlight with wavelengths from about 200 nm to 315 nm, with ozone peak absorption at about 250 nm. This ozone UV absorption is important to life, since it extends the absorption of UV by ordinary oxygen and nitrogen in air (which absorb all wavelengths < 200 nm) through the lower UV-C (200–280 nm) and the entire UV-B band (280–315 nm). The small unabsorbed part that remains of UV-B after passage through ozone causes sunburn in humans, and direct DNA damage in living tissues in both plants and animals. Ozone's effect on mid-range UV-B rays is illustrated by its effect on UV-B at 290 nm, which has a radiation intensity 350 million times as powerful at the top of the atmosphere as at the surface. Nevertheless, enough UV-B radiation at similar wavelengths reaches the ground to cause some sunburn, and these same wavelengths are also among those responsible for the production of vitamin D in humans. The ozone layer has little effect on the longer UV wavelengths called UV-A (315–400 nm), but this radiation does not cause sunburn or direct DNA damage, and while it probably does cause long-term skin damage in certain humans, it is not as dangerous to plants and to the health of surface-dwelling organisms on Earth in general (see ultraviolet for more information on near ultraviolet). Low level ozone Low level ozone (or tropospheric ozone) is an atmospheric pollutant. It is not emitted directly by car engines or by industrial operations, but formed by the reaction of sunlight on air containing hydrocarbons and nitrogen oxides that react to form ozone directly at the source of the pollution or many kilometers downwind. Ozone reacts directly with some hydrocarbons such as aldehydes and thus begins their removal from the air, but the products are themselves key components of smog. Ozone photolysis by UV light leads to production of the hydroxyl radical HO• and this plays a part in the removal of hydrocarbons from the air, but is also the first step in the creation of components of smog such as peroxyacyl nitrates, which can be powerful eye irritants. The atmospheric lifetime of tropospheric ozone is about 22 days; its main removal mechanisms are deposition to the ground, the above-mentioned photolysis reaction giving HO•, and reactions with OH and the peroxy radical HO2•.
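When each removal mechanism listed above is treated as a first-order loss, the individual lifetimes combine reciprocally into the overall atmospheric lifetime. The sketch below shows the arithmetic; the individual lifetimes are hypothetical placeholders chosen so that they combine to roughly the ~22-day figure quoted in the text, which is the only number taken from the source.

# Parallel first-order losses: 1/tau_total = sum over processes of 1/tau_i.
removal_lifetimes_days = {
    "deposition to the ground": 60.0,              # assumed
    "photolysis followed by HO* chemistry": 60.0,  # assumed
    "reaction with OH and HO2*": 80.0,             # assumed
}
total_loss_rate = sum(1.0 / tau for tau in removal_lifetimes_days.values())
print(f"combined lifetime ~ {1.0 / total_loss_rate:.1f} days")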
There is evidence of significant reduction in agricultural yields because of increased ground-level ozone and pollution which interferes with photosynthesis and stunts overall growth of some plant species. The United States Environmental Protection Agency is proposing a secondary regulation to reduce crop damage, in addition to the primary regulation designed for the protection of human health. Low level ozone in urban areas Certain examples of cities with elevated ozone readings are Denver, Colorado; Houston, Texas; and Mexico City, Mexico. Houston has a reading of around 41 nmol/mol, while Mexico City is far more hazardous, with a reading of about 125 nmol/mol. Low level ozone, or tropospheric ozone, is the most concerning type of ozone pollution in urban areas and is increasing in general. Ozone pollution in urban areas affects denser populations, and is worsened by high populations of vehicles, which emit pollutants NO2 and VOCs, the main contributors to problematic ozone levels. Ozone pollution in urban areas is especially concerning with increasing temperatures, raising heat-related mortality during heat waves. During heat waves in urban areas, ground level ozone pollution can be 20% higher than usual. Ozone pollution in urban areas reaches higher levels of exceedance in the summer and autumn, which may be explained by weather patterns and traffic patterns. More research needs to be done specifically concerning which populations in urban areas are most affected by ozone, as people of color and people experiencing poverty are more affected by pollution in general, even though these populations are less likely to be contributing to pollution levels. As mentioned above, Denver, Colorado, is one of the many cities in the United States that have high amounts of ozone. According to the American Lung Association, the Denver-Aurora area is the 14th most ozone-polluted area in the United States. The problem of high ozone levels is not new to this area. In 2004, "the US Environmental Protection Agency designated the Denver Metro/North Front Range (Adams, Arapahoe, Boulder, Broomfield, Denver, Douglas, Jefferson, and parts of Larimer and Weld counties) as nonattainment for the 1997 8-hour ozone standard", but later deferred this nonattainment status until 2007. The nonattainment standard indicates that an area does not meet the EPA's air quality standards. The Colorado Ozone Action Plan was created in response, and numerous changes were implemented from this plan. The first major change was that car emission testing was expanded across the state to more counties that did not previously mandate emissions testing, like areas of Larimer and Weld County. There have also been changes made to decrease Nitrogen Oxides (NOx) and Volatile Organic Compound (VOC) emissions, which should help lower ozone levels. One large contributor to high ozone levels in the area is the oil and natural gas industry situated in the Denver-Julesburg Basin (DJB) which overlaps with a majority of Colorado's metropolitan areas. Ozone is created naturally in the Earth's stratosphere, but is also created in the troposphere from human efforts. Briefly mentioned above, NOx and VOCs react with sunlight to create ozone through a process called photochemistry. One hour elevated ozone events (<75 ppb) "occur during June–August indicating that elevated ozone levels are driven by regional photochemistry". 
According to an article from the University of Colorado-Boulder, "Oil and natural gas VOC emission have a major role in ozone production and bear the potential to contribute to elevated O3 levels in the Northern Colorado Front Range (NCFR)". Using complex analyses to research wind patterns and emissions from large oil and natural gas operations, the authors concluded that "elevated O3 levels in the NCFR are predominantly correlated with air transport from N– ESE, which are the upwind sectors where the O&NG operations in the Wattenberg Field area of the DJB are located". Contained in the Colorado Ozone Action Plan, created in 2008, plans exist to evaluate "emission controls for large industrial sources of NOx" and "statewide control requirements for new oil and gas condensate tanks and pneumatic valves". In 2011, the Regional Haze Plan was released that included a more specific plan to help decrease NOx emissions. These efforts are increasingly difficult to implement and take many years to come to pass. Of course there are also other reasons that ozone levels remain high. These include: a growing population meaning more car emissions, and the mountains along the NCFR that can trap emissions. If interested, daily air quality readings can be found at the Colorado Department of Public Health and Environment's website. As noted earlier, Denver continues to experience high levels of ozone to this day. It will take many years and a systems-thinking approach to combat this issue of high ozone levels in the Front Range of Colorado. Ozone cracking Ozone gas attacks any polymer possessing olefinic or double bonds within its chain structure, such as natural rubber, nitrile rubber, and styrene-butadiene rubber. Products made using these polymers are especially susceptible to attack, which causes cracks to grow longer and deeper with time, the rate of crack growth depending on the load carried by the rubber component and the concentration of ozone in the atmosphere. Such materials can be protected by adding antiozonants, such as waxes, which bond to the surface to create a protective film or blend with the material and provide long term protection. Ozone cracking used to be a serious problem in car tires, for example, but it is not an issue with modern tires. On the other hand, many critical products, like gaskets and O-rings, may be attacked by ozone produced within compressed air systems. Fuel lines made of reinforced rubber are also susceptible to attack, especially within the engine compartment, where some ozone is produced by electrical components. Storing rubber products in close proximity to a DC electric motor can accelerate ozone cracking. The commutator of the motor generates sparks which in turn produce ozone. Ozone as a greenhouse gas Although ozone was present at ground level before the Industrial Revolution, peak concentrations are now far higher than the pre-industrial levels, and even background concentrations well away from sources of pollution are substantially higher. Ozone acts as a greenhouse gas, absorbing some of the infrared energy emitted by the earth. Quantifying the greenhouse gas potency of ozone is difficult because it is not present in uniform concentrations across the globe. However, the most widely accepted scientific assessments relating to climate change (e.g. the Intergovernmental Panel on Climate Change Third Assessment Report) suggest that the radiative forcing of tropospheric ozone is about 25% that of carbon dioxide. 
The annual global warming potential of tropospheric ozone is between 918–1022 tons carbon dioxide equivalent/tons tropospheric ozone. This means on a per-molecule basis, ozone in the troposphere has a radiative forcing effect roughly 1,000 times as strong as carbon dioxide. However, tropospheric ozone is a short-lived greenhouse gas, which decays in the atmosphere much more quickly than carbon dioxide. This means that over a 20-year span, the global warming potential of tropospheric ozone is much less, roughly 62 to 69 tons carbon dioxide equivalent / ton tropospheric ozone. Because of its short-lived nature, tropospheric ozone does not have strong global effects, but has very strong radiative forcing effects on regional scales. In fact, there are regions of the world where tropospheric ozone has a radiative forcing up to 150% of carbon dioxide. Health effects For the last few decades, scientists studied the effects of acute and chronic ozone exposure on human health. Hundreds of studies suggest that ozone is harmful to people at levels currently found in urban areas. Ozone has been shown to affect the respiratory, cardiovascular and central nervous system. Early death and problems in reproductive health and development are also shown to be associated with ozone exposure. Vulnerable populations The American Lung Association has identified five populations who are especially vulnerable to the effects of breathing ozone: Children and teens People 65 years old and older People who work or exercise outdoors People with existing lung diseases, such as asthma and chronic obstructive pulmonary disease (also known as COPD, which includes emphysema and chronic bronchitis) People with cardiovascular disease Additional evidence suggests that women, those with obesity and low-income populations may also face higher risk from ozone, although more research is needed. Acute ozone exposure Acute ozone exposure ranges from hours to a few days. Because ozone is a gas, it directly affects the lungs and the entire respiratory system. Inhaled ozone causes inflammation and acute—but reversible—changes in lung function, as well as airway hyperresponsiveness. These changes lead to shortness of breath, wheezing, and coughing which may exacerbate lung diseases, like asthma or chronic obstructive pulmonary disease (COPD) resulting in the need to receive medical treatment. Acute and chronic exposure to ozone has been shown to cause an increased risk of respiratory infections, due to the following mechanism. Multiple studies have been conducted to determine the mechanism behind ozone's harmful effects, particularly in the lungs. These studies have shown that exposure to ozone causes changes in the immune response within the lung tissue, resulting in disruption of both the innate and adaptive immune response, as well as altering the protective function of lung epithelial cells. It is thought that these changes in immune response and the related inflammatory response are factors that likely contribute to the increased risk of lung infections, and worsening or triggering of asthma and reactive airways after exposure to ground-level ozone pollution. The innate (cellular) immune system consists of various chemical signals and cell types that work broadly and against multiple pathogen types, typically bacteria or foreign bodies/substances in the host. 
The cells of the innate system include phagocytes and neutrophils, both of which are thought to contribute to the mechanism of ozone pathology in the lungs, as the functioning of these cell types has been shown to change after exposure to ozone. Macrophages, cells that serve the purpose of eliminating pathogens or foreign material through the process of "phagocytosis", have been shown to change the level of inflammatory signals they release in response to ozone, either up-regulating and resulting in an inflammatory response in the lung, or down-regulating and reducing immune protection. Neutrophils, another important cell type of the innate immune system that primarily targets bacterial pathogens, are found to be present in the airways within 6 hours of exposure to high ozone levels. Despite high levels in the lung tissues, however, their ability to clear bacteria appears impaired by exposure to ozone. The adaptive immune system is the branch of immunity that provides long-term protection via the development of antibodies targeting specific pathogens and is also impacted by high ozone exposure. Lymphocytes, a cellular component of the adaptive immune response, produce an increased amount of inflammatory chemicals called "cytokines" after exposure to ozone, which may contribute to airway hyperreactivity and worsening asthma symptoms. The airway epithelial cells also play an important role in protecting individuals from pathogens. In normal tissue, the epithelial layer forms a protective barrier, and also contains specialized ciliary structures that work to clear foreign bodies, mucus and pathogens from the lungs. When exposed to ozone, the cilia become damaged and mucociliary clearance of pathogens is reduced. Furthermore, the epithelial barrier becomes weakened, allowing pathogens to cross the barrier, proliferate and spread into deeper tissues. Together, these changes in the epithelial barrier help make individuals more susceptible to pulmonary infections. Inhaling ozone not only affects the immune system and lungs, but may also affect the heart. Ozone causes short-term autonomic imbalance, leading to changes in heart rate and a reduction in heart rate variability, and high-level exposure for as little as one hour results in supraventricular arrhythmia in the elderly; both increase the risk of premature death and stroke. Ozone may also lead to vasoconstriction, resulting in increased systemic arterial pressure and contributing to an increased risk of cardiac morbidity and mortality in patients with pre-existing cardiac diseases. Chronic ozone exposure Breathing ozone for periods longer than eight hours at a time for weeks, months or years defines chronic exposure. Numerous studies suggest a serious impact on the health of various populations from this exposure. One study finds significant positive associations between chronic ozone and all-cause, circulatory, and respiratory mortality with 2%, 3%, and 12% increases in risk per 10 ppb, and reports an association (95% CI) of annual ozone and all-cause mortality with a hazard ratio of 1.02 (1.01–1.04), and with cardiovascular mortality of 1.03 (1.01–1.05). A similar study finds similar associations with all-cause mortality and even larger effects for cardiovascular mortality. An increased risk of mortality from respiratory causes is associated with long-term chronic exposure to ozone. Chronic ozone exposure has detrimental effects on children, especially those with asthma.
The risk for hospitalization in children with asthma increases with chronic exposure to ozone; younger children and those with low-income status are at even greater risk. Adults suffering from respiratory diseases (asthma, COPD, lung cancer) are at a higher risk of mortality and morbidity, and critically ill patients have an increased risk of developing acute respiratory distress syndrome with chronic ozone exposure as well. Ozone produced by air cleaners Ozone generators sold as air cleaners intentionally produce the gas ozone. These are often marketed to control indoor air pollution, and use misleading terms to describe ozone. Some examples are describing it as "energized oxygen" or "pure air", suggesting that ozone is a healthy or "better" kind of oxygen. However, according to the EPA, "ozone is not effective at removing many odor-causing chemicals" and "does not effectively remove viruses, bacteria, mold, or other biological pollutants". Furthermore, another report states that "results of some controlled studies show that concentrations of ozone considerably higher than these [human safety] standards are possible even when a user follows the manufacturer's operating instructions". The California Air Resources Board has a page listing air cleaners (many with ionizers) meeting their indoor ozone limit of 0.050 parts per million. Ozone air pollution Ozone precursors are a group of pollutants, predominantly those emitted during the combustion of fossil fuels. Ground-level ozone pollution (tropospheric ozone) is created near the Earth's surface by the action of daylight UV rays on these precursors. The ozone at ground level is primarily from fossil fuel precursors, but methane is a natural precursor, and the very low natural background level of ozone at ground level is considered safe. This section examines the health impacts of fossil fuel burning, which raises ground level ozone far above background levels. There is a great deal of evidence to show that ground-level ozone can harm lung function and irritate the respiratory system. Exposure to ozone (and the pollutants that produce it) is linked to premature death, asthma, bronchitis, heart attack, and other cardiopulmonary problems. Long-term exposure to ozone has been shown to increase the risk of death from respiratory illness. A study of 450,000 people living in United States cities found a significant correlation between ozone levels and respiratory illness over the 18-year follow-up period. The study revealed that people living in cities with high ozone levels, such as Houston or Los Angeles, had an over 30% increased risk of dying from lung disease. Air quality guidelines such as those from the World Health Organization, the United States Environmental Protection Agency (EPA) and the European Union are based on detailed studies designed to identify the levels that can cause measurable ill health effects. According to scientists with the US EPA, susceptible people can be adversely affected by ozone levels as low as 40 nmol/mol. In the EU, the current target value for ozone concentrations is 120 µg/m3, which is about 60 nmol/mol. This target applies to all member states in accordance with Directive 2008/50/EC. Ozone concentration is measured as a maximum daily mean of 8-hour averages and the target should not be exceeded on more than 25 calendar days per year, starting from January 2010. Whilst the directive requires in the future strict compliance with the 120 µg/m3 limit (i.e.
mean ozone concentration not to be exceeded on any day of the year), there is no date set for this requirement and this is treated as a long-term objective. In the US, the Clean Air Act directs the EPA to set National Ambient Air Quality Standards for several pollutants, including ground-level ozone, and counties out of compliance with these standards are required to take steps to reduce their levels. In May 2008, under a court order, the EPA lowered its ozone standard from 80 nmol/mol to 75 nmol/mol. The move proved controversial, since the Agency's own scientists and advisory board had recommended lowering the standard to 60 nmol/mol. Many public health and environmental groups also supported the 60 nmol/mol standard, and the World Health Organization recommends 100 µg/m3 (51 nmol/mol). On January 7, 2010, the U.S. Environmental Protection Agency (EPA) announced proposed revisions to the National Ambient Air Quality Standard (NAAQS) for the pollutant ozone, the principal component of smog: ... EPA proposes that the level of the 8-hour primary standard, which was set at 0.075 μmol/mol in the 2008 final rule, should instead be set at a lower level within the range of 0.060 to 0.070 μmol/mol, to provide increased protection for children and other at-risk populations against an array of O3-related adverse health effects that range from decreased lung function and increased respiratory symptoms to serious indicators of respiratory morbidity including emergency department visits and hospital admissions for respiratory causes, and possibly cardiovascular-related morbidity as well as total non-accidental and cardiopulmonary mortality ... On October 26, 2015, the EPA published a final rule with an effective date of December 28, 2015, that revised the 8-hour primary NAAQS from 0.075 ppm to 0.070 ppm. The EPA has developed an air quality index (AQI) to help explain air pollution levels to the general public. Under the current standards, eight-hour average ozone mole fractions of 85 to 104 nmol/mol are described as "unhealthy for sensitive groups", 105 nmol/mol to 124 nmol/mol as "unhealthy", and 125 nmol/mol to 404 nmol/mol as "very unhealthy". Ozone can also be present in indoor air pollution, partly as a result of electronic equipment such as photocopiers. A connection has also been known to exist between the increased pollen, fungal spores, and ozone caused by thunderstorms and hospital admissions of asthma sufferers. In the Victorian era, one British folk myth held that the smell of the sea was caused by ozone. In fact, the characteristic "smell of the sea" is caused by dimethyl sulfide, a chemical generated by phytoplankton. Victorian Britons considered the resulting smell "bracing". Heat waves An investigation to assess the joint mortality effects of ozone and heat during the European heat waves in 2003 concluded that these appear to be additive. Physiology Ozone, along with reactive forms of oxygen such as superoxide, singlet oxygen, hydrogen peroxide, and hypochlorite ions, is produced by white blood cells and other biological systems (such as the roots of marigolds) as a means of destroying foreign bodies. Ozone reacts directly with organic double bonds. Also, when ozone breaks down to dioxygen, it gives rise to oxygen free radicals, which are highly reactive and capable of damaging many organic molecules. Moreover, it is believed that the powerful oxidizing properties of ozone may be a contributing factor to inflammation.
The cause-and-effect relationship of how the ozone is created in the body and what it does is still under consideration and still subject to various interpretations, since other body chemical processes can trigger some of the same reactions. There is evidence linking the antibody-catalyzed water-oxidation pathway of the human immune response to the production of ozone. In this system, ozone is produced by antibody-catalyzed production of trioxidane from water and neutrophil-produced singlet oxygen. When inhaled, ozone reacts with compounds lining the lungs to form specific, cholesterol-derived metabolites that are thought to facilitate the build-up and pathogenesis of atherosclerotic plaques (a form of heart disease). These metabolites have been confirmed as naturally occurring in human atherosclerotic arteries and are categorized into a class of secosterols termed atheronals, generated by ozonolysis of cholesterol's double bond to form a 5,6 secosterol as well as a secondary condensation product via aldolization. Impact on plant growth and crop yields Ozone has been implicated in adverse effects on plant growth: "... ozone reduced total chlorophylls, carotenoid and carbohydrate concentration, and increased 1-aminocyclopropane-1-carboxylic acid (ACC) content and ethylene production. In treated plants, the ascorbate leaf pool was decreased, while lipid peroxidation and solute leakage were significantly higher than in ozone-free controls. The data indicated that ozone triggered protective mechanisms against oxidative stress in citrus." Studies that have used pepper plants as a model have shown that ozone decreased fruit yield and changed fruit quality. Furthermore, a decrease in chlorophyll levels and antioxidant defences in the leaves was also observed, as well as increased reactive oxygen species (ROS) levels and lipid and protein damage. A 2022 study concludes that East Asia loses 63 billion dollars in crops per year due to ozone pollution, a by-product of fossil fuel combustion. China loses about one third of its potential wheat production and one fourth of its rice production. Safety regulations Because of its strongly oxidizing properties, ozone is a primary irritant, affecting especially the eyes and respiratory system, and can be hazardous at even low concentrations. The Canadian Centre for Occupational Safety and Health reports that: "Even very low concentrations of ozone can be harmful to the upper respiratory tract and the lungs. The severity of injury depends on both the concentration of ozone and the duration of exposure. Severe and permanent lung injury or death could result from even a very short-term exposure to relatively low concentrations." To protect workers potentially exposed to ozone, the U.S. Occupational Safety and Health Administration has established a permissible exposure limit (PEL) of 0.1 μmol/mol (29 CFR 1910.1000 table Z-1), calculated as an 8-hour time weighted average. Higher concentrations are especially hazardous and NIOSH has established an Immediately Dangerous to Life and Health Limit (IDLH) of 5 μmol/mol. Work environments where ozone is used or where it is likely to be produced should have adequate ventilation, and it is prudent to have a monitor for ozone that will alarm if the concentration exceeds the OSHA PEL. Continuous monitors for ozone are available from several suppliers. Elevated ozone exposure can occur on passenger aircraft, with levels depending on altitude and atmospheric turbulence.
United States Federal Aviation Administration regulations set a limit of 250 nmol/mol with a maximum four-hour average of 100 nmol/mol. Some planes are equipped with ozone converters in the ventilation system to reduce passenger exposure. Production Ozone generators, or ozonators, are used to produce ozone for cleaning air or removing smoke odours in unoccupied rooms. These ozone generators can produce over 3 g of ozone per hour. Ozone often forms in nature under conditions where O2 will not react. Ozone used in industry is measured in μmol/mol (ppm, parts per million), nmol/mol (ppb, parts per billion), μg/m3, mg/h (milligrams per hour) or weight percent. The regime of applied concentrations ranges from 1% to 5% (in air) and from 6% to 14% (in oxygen) for older generation methods. New electrolytic methods can achieve dissolved ozone concentrations of up to 20% to 30% in output water. Temperature and humidity play a large role in how much ozone is being produced using traditional generation methods (such as corona discharge and ultraviolet light). Old generation methods will produce less than 50% of nominal capacity if operated with humid ambient air, as opposed to very dry air. New generators, using electrolytic methods, can achieve higher purity and dissolution through using water molecules as the source of ozone production. Corona discharge method This is the most common type of ozone generator for most industrial and personal uses. While variations of the "hot spark" corona discharge method of ozone production exist, including medical grade and industrial grade ozone generators, these units usually work by means of a corona discharge tube or ozone plate. They are typically cost-effective and do not require an oxygen source other than the ambient air to produce ozone concentrations of 3–6%. Fluctuations in ambient air, due to weather or other environmental conditions, cause variability in ozone production. However, they also produce nitrogen oxides as a by-product. Use of an air dryer can reduce or eliminate nitric acid formation by removing water vapor and increase ozone production. At room temperature, nitric acid will form a vapour that is hazardous if inhaled. Symptoms can include chest pain, shortness of breath, headaches and a dry nose and throat causing a burning sensation. Use of an oxygen concentrator can further increase the ozone production and further reduce the risk of nitric acid formation by removing not only the water vapor, but also the bulk of the nitrogen. Ultraviolet light UV ozone generators, or vacuum-ultraviolet (VUV) ozone generators, employ a light source that generates a narrow-band ultraviolet light, a subset of that produced by the Sun. The Sun's UV sustains the ozone layer in the stratosphere of Earth. UV ozone generators use ambient air for ozone production and no air preparation systems are used (air dryer or oxygen concentrator), so these generators tend to be less expensive. However, UV ozone generators usually produce ozone with a concentration of about 0.5% or lower, which limits the potential ozone production rate. Another disadvantage of this method is that it requires the ambient air (oxygen) to be exposed to the UV source for a longer amount of time, and any gas that is not exposed to the UV source will not be treated. This makes UV generators impractical for use in situations that deal with rapidly moving air or water streams (in-duct air sterilization, for example).
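To connect the concentration figures quoted in this section (weight percent of the feed gas) with production rates quoted in g/h, the following sketch does the unit bookkeeping for a hypothetical oxygen-fed generator. The flow rate and concentration are assumed example values, and the small change in gas density caused by the ozone fraction is ignored.

# Convert an ozone concentration in weight percent of the feed gas into a
# production rate in grams per hour (illustrative numbers only).
feed_flow_l_per_min = 5.0     # assumed oxygen feed flow
o2_density_g_per_l = 1.43     # approximate density of O2 near 0 degC, 1 atm
ozone_weight_percent = 3.0    # assumed generator output concentration

feed_mass_g_per_h = feed_flow_l_per_min * 60.0 * o2_density_g_per_l
ozone_g_per_h = feed_mass_g_per_h * ozone_weight_percent / 100.0
print(f"roughly {ozone_g_per_h:.1f} g of ozone per hour")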
Production of ozone is one of the potential dangers of ultraviolet germicidal irradiation. VUV ozone generators are used in swimming pools and spa applications ranging to millions of gallons of water. VUV ozone generators, unlike corona discharge generators, do not produce harmful nitrogen by-products and, also unlike corona discharge systems, work extremely well in humid air environments. There is also not normally a need for expensive off-gas mechanisms, and no need for air dryers or oxygen concentrators which require extra costs and maintenance. Cold plasma In the cold plasma method, pure oxygen gas is exposed to a plasma created by dielectric barrier discharge. The diatomic oxygen is split into single atoms, which then recombine in triplets to form ozone. Cold plasma machines utilize pure oxygen as the input source and produce a maximum concentration of about 5% ozone. They produce far greater quantities of ozone in a given space of time compared to ultraviolet production. However, because cold plasma ozone generators are very expensive, they are found less frequently than the previous two types. The discharges manifest as filamentary transfer of electrons (micro discharges) in a gap between two electrodes. In order to evenly distribute the micro discharges, a dielectric insulator must be used to separate the metallic electrodes and to prevent arcing. Some cold plasma units also have the capability of producing short-lived allotropes of oxygen which include O4, O5, O6, O7, etc. These species are even more reactive than ordinary O3. Electrolytic Electrolytic ozone generation (EOG) splits water molecules into H2, O2, and O3. In most EOG methods, the hydrogen gas will be removed to leave oxygen and ozone as the only reaction products. Therefore, EOG can achieve higher dissolution in water without other competing gases found in the corona discharge method, such as nitrogen gases present in ambient air. This method of generation can achieve concentrations of 20–30% and is independent of air quality because water is used as the source material. Production of ozone electrolytically is typically unfavorable because of the high overpotential required to produce ozone as compared to oxygen. This is why ozone is not produced during typical water electrolysis. However, it is possible to increase the overpotential of oxygen by careful catalyst selection such that ozone is preferentially produced under electrolysis. Catalysts typically chosen for this approach are lead dioxide or boron-doped diamond. The ozone to oxygen ratio is improved by increasing current density at the anode, cooling the electrolyte around the anode close to 0 °C, using an acidic electrolyte (such as dilute sulfuric acid) instead of a basic solution, and by applying pulsed current instead of DC. Special considerations Ozone cannot be stored and transported like other industrial gases (because it quickly decays into diatomic oxygen) and must therefore be produced on site. Available ozone generators vary in the arrangement and design of the high-voltage electrodes. At production capacities higher than 20 kg per hour, a gas/water tube heat-exchanger may be utilized as ground electrode and assembled with tubular high-voltage electrodes on the gas-side. The regime of typical gas pressures is around absolute in oxygen and absolute in air. Several megawatts of electrical power may be installed in large facilities, applied as single-phase AC current at 50 to 8000 Hz and peak voltages between 3,000 and 20,000 volts.
Applied voltage is usually inversely related to the applied frequency. The dominating parameter influencing ozone generation efficiency is the gas temperature, which is controlled by cooling water temperature and/or gas velocity. The cooler the water, the better the ozone synthesis. The lower the gas velocity, the higher the concentration (but the lower the net ozone produced). At typical industrial conditions, almost 90% of the effective power is dissipated as heat and needs to be removed by a sufficient cooling water flow. Because of the high reactivity of ozone, only a few materials may be used, such as stainless steel (quality 316L), titanium, aluminium (as long as no moisture is present), glass, polytetrafluoroethylene, or polyvinylidene fluoride. Viton may be used with the restriction of constant mechanical forces and absence of humidity (humidity limitations apply depending on the formulation). Hypalon may be used with the restriction that no water comes in contact with it, except for normal atmospheric levels. Embrittlement or shrinkage is the common mode of failure of elastomers with exposure to ozone. Ozone cracking is the common mode of failure of elastomer seals like O-rings. Silicone rubbers are usually adequate for use as gaskets in ozone concentrations below 1 wt%, such as in equipment for accelerated aging of rubber samples. Incidental production Ozone may be formed from O2 by electrical discharges and by the action of high-energy electromagnetic radiation. Unsuppressed arcing in electrical contacts, motor brushes, or mechanical switches breaks down the chemical bonds of the atmospheric oxygen surrounding the contacts [O2 → 2 O]. Free radicals of oxygen in and around the arc recombine to create ozone [O3]. Certain electrical equipment generates significant levels of ozone. This is especially true of devices using high voltages, such as ionic air purifiers, laser printers, photocopiers, tasers and arc welders. Electric motors using brushes can generate ozone from repeated sparking inside the unit. Large motors that use brushes, such as those used by elevators or hydraulic pumps, will generate more ozone than smaller motors. Ozone is similarly formed in the Catatumbo lightning storms phenomenon on the Catatumbo River in Venezuela, though ozone's instability makes it dubious that it has any effect on the ozonosphere. It is the world's largest single natural generator of ozone, lending support to calls for it to be designated a UNESCO World Heritage Site. Laboratory production In the laboratory, ozone can be produced by electrolysis using a 9 volt battery, a pencil graphite rod cathode, a platinum wire anode and a 3 molar sulfuric acid electrolyte. The half cell reactions taking place are: 3 H2O → O3 + 6 H+ + 6 e− (ΔE° = −1.53 V) 6 H+ + 6 e− → 3 H2 (ΔE° = 0 V) 2 H2O → O2 + 4 H+ + 4 e− (ΔE° = 1.23 V) In the net reaction, three equivalents of water are converted into one equivalent of ozone and three equivalents of hydrogen. Oxygen formation is a competing reaction. It can also be generated by a high voltage arc. In its simplest form, high voltage AC, such as the output of a neon-sign transformer, is connected to two metal rods with the ends placed sufficiently close to each other to allow an arc. The resulting arc will convert atmospheric oxygen to ozone. It is often desirable to contain the ozone. This can be done with an apparatus consisting of two concentric glass tubes sealed together at the top with gas ports at the top and bottom of the outer tube.
The inner core should have a length of metal foil inserted into it connected to one side of the power source. The other side of the power source should be connected to another piece of foil wrapped around the outer tube. A source of dry O2 is applied to the bottom port. When high voltage is applied to the foil leads, electricity will discharge between the foils through the dry dioxygen in the middle and form O3 and O2, which will flow out of the top port. This is called a Siemens ozoniser. The reaction can be summarized as follows: 3 O2 → 2 O3 (in the electrical discharge) Applications Industry The largest use of ozone is in the preparation of pharmaceuticals, synthetic lubricants, and many other commercially useful organic compounds, where it is used to sever carbon-carbon bonds. It can also be used for bleaching substances and for killing microorganisms in air and water sources. Many municipal drinking water systems kill bacteria with ozone instead of the more common chlorine. Ozone has a very high oxidation potential. Ozone does not form organochlorine compounds, nor does it remain in the water after treatment. Ozone can form the suspected carcinogen bromate in source water with high bromide concentrations. The U.S. Safe Drinking Water Act mandates that these systems introduce an amount of chlorine to maintain a minimum of 0.2 μmol/mol residual free chlorine in the pipes, based on results of regular testing. Where electrical power is abundant, ozone is a cost-effective method of treating water, since it is produced on demand and does not require transportation and storage of hazardous chemicals. Once it has decayed, it leaves no taste or odour in drinking water. Although low levels of ozone have been advertised to be of some disinfectant use in residential homes, the concentration of ozone in dry air required to have a rapid, substantial effect on airborne pathogens exceeds safe levels recommended by the U.S. Occupational Safety and Health Administration and Environmental Protection Agency. Humidity control can vastly improve both the killing power of the ozone and the rate at which it decays back to oxygen (more humidity allows more effectiveness). Spore forms of most pathogens are very tolerant of atmospheric ozone in concentrations at which asthma patients start to have issues. In 1908, artificial ozonisation of the Central Line of the London Underground was introduced as an aerial disinfectant. The process was found to be worthwhile, but was phased out by 1956. However, the beneficial effect was maintained by the ozone created incidentally from the electrical discharges of the train motors (see above: Incidental production). Ozone generators were made available to schools and universities in Wales for the autumn term of 2021, to disinfect classrooms after Covid outbreaks. Industrially, ozone is used to: Disinfect laundry in hospitals, food factories, care homes, etc.; Disinfect water in place of chlorine; Deodorize air and objects, such as after a fire (this process is extensively used in fabric restoration); Kill bacteria on food or on contact surfaces; Water-intensive industries such as breweries and dairy plants can make effective use of dissolved ozone as a replacement for chemical sanitizers such as peracetic acid, hypochlorite or heat; Disinfect cooling towers and control Legionella with reduced chemical consumption, water bleed-off and increased performance;
Sanitize swimming pools and spas; Kill insects in stored grain; Scrub yeast and mold spores from the air in food processing plants; Wash fresh fruits and vegetables to kill yeast, mold and bacteria; Chemically attack contaminants in water (iron, arsenic, hydrogen sulfide, nitrites, and complex organics lumped together as "colour"); Provide an aid to flocculation (agglomeration of molecules, which aids in filtration, where the iron and arsenic are removed); Manufacture chemical compounds via chemical synthesis; Clean and bleach fabrics (the former use is utilized in fabric restoration; the latter use is patented); Act as an antichlor in chlorine-based bleaching; Assist in processing plastics to allow adhesion of inks; Age rubber samples to determine the useful life of a batch of rubber; Eradicate water-borne parasites such as Giardia lamblia and Cryptosporidium in surface water treatment plants. Ozone is a reagent in many organic reactions in the laboratory and in industry. Ozonolysis is the cleavage of an alkene to carbonyl compounds. Many hospitals around the world use large ozone generators to decontaminate operating rooms between surgeries. The rooms are cleaned and then sealed airtight before being filled with ozone, which effectively kills or neutralizes all remaining bacteria. Ozone is used as an alternative to chlorine or chlorine dioxide in the bleaching of wood pulp. It is often used in conjunction with oxygen and hydrogen peroxide to eliminate the need for chlorine-containing compounds in the manufacture of high-quality, white paper. Ozone can be used to detoxify cyanide wastes (for example from gold and silver mining) by oxidizing cyanide to cyanate and eventually to carbon dioxide. Water disinfection Since the invention of Dielectric Barrier Discharge (DBD) plasma reactors, they have been employed for water treatment with ozone. However, with cheaper alternative disinfectants like chlorine, such applications of DBD ozone water decontamination have been limited by high power consumption and bulky equipment. Despite this, with research revealing the negative impacts of common disinfectants like chlorine with respect to toxic residuals and ineffectiveness in killing certain micro-organisms, DBD plasma-based ozone decontamination is of interest among currently available technologies. Although ozonation of water with a high concentration of bromide does lead to the formation of undesirable brominated disinfection byproducts, unless drinking water is produced by desalination, ozonation can generally be applied without concern for these byproducts. Advantages of ozone include its high thermodynamic oxidation potential, lower sensitivity to organic material and better tolerance for pH variations, while retaining the ability to kill bacteria, fungi, viruses, as well as spores and cysts. Although ozone has been widely accepted in Europe for decades, it is sparingly used for decontamination in the U.S. due to the limitations of high power consumption, bulky installation and the stigma attached to ozone toxicity. Considering this, recent research efforts have been directed towards the study of effective ozone water treatment systems. Researchers have looked into lightweight and compact low-power surface DBD reactors, energy-efficient volume DBD reactors and low-power micro-scale DBD reactors. Such studies can help pave the path to re-acceptance of DBD plasma-based ozone decontamination of water, especially in the U.S.
Consumers Devices generating high levels of ozone, some of which use ionization, are used to sanitize and deodorize uninhabited buildings, rooms, ductwork, woodsheds, boats and other vehicles. Ozonated water is used to launder clothes and to sanitize food, drinking water, and surfaces in the home. According to the U.S. Food and Drug Administration (FDA), it is "amending the food additive regulations to provide for the safe use of ozone in gaseous and aqueous phases as an antimicrobial agent on food, including meat and poultry." Studies at California Polytechnic University demonstrated that 0.3 μmol/mol levels of ozone dissolved in filtered tapwater can produce a reduction of more than 99.99% in such food-borne microorganisms as salmonella, E. coli O157:H7 and Campylobacter. This quantity is 20,000 times the WHO-recommended limits stated above. Ozone can be used to remove pesticide residues from fruits and vegetables. Ozone is used in homes and hot tubs to kill bacteria in the water and to reduce the amount of chlorine or bromine required by reactivating them to their free state. Since ozone does not remain in the water long enough, ozone by itself is ineffective at preventing cross-contamination among bathers and must be used in conjunction with halogens. Gaseous ozone created by ultraviolet light or by corona discharge is injected into the water. Ozone is also widely used in the treatment of water in aquariums and fishponds. Its use can minimize bacterial growth, control parasites, eliminate transmission of some diseases, and reduce or eliminate "yellowing" of the water. Ozone must not come in contact with fishes' gill structures. Natural saltwater (with life forms) provides enough "instantaneous demand" that controlled amounts of ozone activate bromide ions to hypobromous acid, and the ozone entirely decays in a few seconds to minutes. If oxygen-fed ozone is used, the water will be higher in dissolved oxygen and fishes' gill structures will atrophy, making them dependent on oxygen-enriched water. Aquaculture Ozonation – a process of infusing water with ozone – can be used in aquaculture to facilitate organic breakdown. Ozone is also added to recirculating systems to reduce nitrite levels through conversion into nitrate. If nitrite levels in the water are high, nitrites will also accumulate in the blood and tissues of fish, where they interfere with oxygen transport (nitrite causes oxidation of the heme group of haemoglobin from ferrous (Fe2+) to ferric (Fe3+), making haemoglobin unable to bind O2). Despite these apparent positive effects, ozone use in recirculation systems has been linked to reducing the level of bioavailable iodine in salt water systems, resulting in iodine deficiency symptoms such as goitre and decreased growth in Senegalese sole (Solea senegalensis) larvae. Ozonated seawater is used for surface disinfection of haddock and Atlantic halibut eggs against nodavirus. Nodavirus is a lethal and vertically transmitted virus which causes severe mortality in fish. Haddock eggs should not be treated with high ozone levels, as eggs so treated did not hatch and died after 3–4 days. Agriculture Ozone application on freshly cut pineapple and banana shows an increase in flavonoid and total phenol contents when exposure is up to 20 minutes. A decrease in ascorbic acid (one form of vitamin C) content is observed, but the positive effect on total phenol content and flavonoids can overcome the negative effect. Tomatoes upon treatment with ozone show an increase in β-carotene, lutein and lycopene.
However, ozone application on strawberries in the pre-harvest period shows a decrease in ascorbic acid content. Ozone facilitates the extraction of some heavy metals from soil using EDTA. EDTA forms strong, water-soluble coordination compounds with some heavy metals (Pb, Zn), thereby making it possible to dissolve them out of contaminated soil. If contaminated soil is pre-treated with ozone, the extraction efficacy of Pb, Am and Pu increases by 11.0–28.9%, 43.5% and 50.7% respectively. Alternative medicine The use of ozone for the treatment of medical conditions is not supported by high-quality evidence, and is generally considered alternative medicine. See also Cyclic ozone Global Ozone Monitoring by Occultation of Stars (GOMOS) Global warming Greenhouse gas Chappuis absorption International Day for the Preservation of the Ozone Layer (September 16) Nitrogen oxides Ozone Action Day Ozone depletion, including the phenomenon known as the ozone hole Ozone therapy Ozoneweb Ozonolysis Polymer degradation Sterilization (microbiology) Further reading Becker, K. H., U. Kogelschatz, K. H. Schoenbach, R. J. Barker (eds.). Non-Equilibrium Air Plasmas at Atmospheric Pressure. Series in Plasma Physics. Bristol and Philadelphia: Institute of Physics Publishing Ltd; 2005. United States Environmental Protection Agency, Risk and Benefits Group (August 2014). Health Risk and Exposure Assessment for Ozone: Final Report. External links International Ozone Association European Environment Agency's near real-time ozone map (ozoneweb) NASA's Ozone Resource Page OSHA Ozone Information Paul Crutzen Interview—Video of Nobel Laureate Paul Crutzen talking to Nobel Laureate Harry Kroto by the Vega Science Trust NASA's Earth Observatory article on Ozone International Chemical Safety Card 0068 NIOSH Pocket Guide to Chemical Hazards National Institute of Environmental Health Sciences, Ozone Information Ground-level Ozone Air Pollution NASA Study Links "Smog" to Arctic Warming—NASA Goddard Institute for Space Studies (GISS) study shows the warming effect of ozone in the Arctic during winter and spring. US EPA report questioning effectiveness or safety of ozone generators sold as air cleaners Ground-level ozone information from the American Lung Association of New England
No, this text is not related with defense topics | The Maxwell–Bloch equations, also called the optical Bloch equations describe the dynamics of a two-state quantum system interacting with the electromagnetic mode of an optical resonator. They are analogous to (but not at all equivalent to) the Bloch equations which describe the motion of the nuclear magnetic moment in an electromagnetic field. The equations can be derived either semiclassically or with the field fully quantized when certain approximations are made. Semi-classical formulation The derivation of the semi-classical optical Bloch equations is nearly identical to solving the two-state quantum system (see the discussion there). However, usually one casts these equations into a density matrix form. The system we are dealing with can be described by the wave function: The density matrix is (other conventions are possible; this follows the derivation in Metcalf (1999)). One can now solve the Heisenberg equation of motion, or translate the results from solving the Schrödinger equation into density matrix form. One arrives at the following equations, including spontaneous emission: In the derivation of these formulae, we define and . It was also explicitly assumed that spontaneous emission is described by an exponential decay of the coefficient with decay constant . is the Rabi frequency, which is , and is the detuning and measures how far the light frequency, , is from the transition, . Here, is the transition dipole moment for the transition and is the vector electric field amplitude including the polarization (in the sense ). Derivation from cavity quantum electrodynamics Beginning with the Jaynes–Cummings Hamiltonian under coherent drive where is the lowering operator for the cavity field, and is the atomic lowering operator written as a combination of Pauli matrices. The time dependence can be removed by transforming the wavefunction according to , leading to a transformed Hamiltonian where . As it stands now, the Hamiltonian has four terms. The first two are the self energy of the atom (or other two level system) and field. The third term is an energy conserving interaction term allowing the cavity and atom to exchange population and coherence. These three terms alone give rise to the Jaynes-Cummings ladder of dressed states, and the associated anharmonicity in the energy spectrum. The last term models coupling between the cavity mode and a classical field, i.e. a laser. The drive strength is given in terms of the power transmitted through the empty two-sided cavity as , where is the cavity linewidth. This brings to light a crucial point concerning the role of dissipation in the operation of a laser or other CQED device; dissipation is the means by which the system (coupled atom/cavity) interacts with its environment. To this end, dissipation is included by framing the problem in terms of the master equation, where the last two terms are in the Lindblad form The equations of motion for the expectation values of the operators can be derived from the master equation by the formulas and . The equations of motion for , , and , the cavity field, atomic coherence, and atomic inversion respectively, are At this point, we have produced three of an infinite ladder of coupled equations. As can be seen from the third equation, higher order correlations are necessary. 
The differential equation for the time evolution of will contain expectation values of higher order products of operators, thus leading to an infinite set of coupled equations. We heuristically make the approximation that the expectation value of a product of operators is equal to the product of expectation values of the individual operators. This is akin to assuming that the operators are uncorrelated, and is a good approximation in the classical limit. It turns out that the resulting equations give the correct qualitative behavior even in the single excitation regime. Additionally, to simplify the equations we make the following replacements And the Maxwell–Bloch equations can be written in their final form Application: Atom-Laser Interaction Within the dipole approximation and rotating-wave approximation, the dynamics of the atomic density matrix, when interacting with a laser field, is described by the optical Bloch equations, whose effect can be divided into two parts: the optical dipole force and the scattering force. See also Atomic electron transition Lorenz system Semiconductor Bloch equations
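A minimal numerical sketch of the standard semiclassical optical Bloch equations for a driven, spontaneously decaying two-level atom is given below, written in terms of the Bloch vector (u, v, w) with w = ρee − ρgg. The convention (detuning δ, Rabi frequency Ω, decay rate Γ) and the parameter values are illustrative assumptions in one common textbook form, not a reconstruction of the notation used above.

# Optical Bloch equations in the rotating frame (one common convention):
#   du/dt =  delta*v - (Gamma/2)*u
#   dv/dt = -delta*u + Omega*w - (Gamma/2)*v
#   dw/dt = -Omega*v - Gamma*(w + 1)
gamma = 1.0    # spontaneous decay rate (sets the time unit)
omega = 2.0    # Rabi frequency
delta = 0.0    # detuning of the drive from the atomic resonance

u, v, w = 0.0, 0.0, -1.0    # atom starts in the ground state
dt, t_end = 1e-4, 10.0
for _ in range(int(t_end / dt)):
    du = delta * v - 0.5 * gamma * u
    dv = -delta * u + omega * w - 0.5 * gamma * v
    dw = -omega * v - gamma * (w + 1.0)
    u, v, w = u + du * dt, v + dv * dt, w + dw * dt

rho_ee = 0.5 * (1.0 + w)
steady = (omega**2 / 4) / (delta**2 + omega**2 / 2 + gamma**2 / 4)
print(f"excited-state population: integrated {rho_ee:.4f} vs steady-state formula {steady:.4f}")

At resonance the population undergoes damped Rabi oscillations before settling at the saturation value checked by the closing formula; changing delta and omega lets the same sketch explore detuned or weak driving.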
No, this text is not related with defense topics | A flow tracer is any fluid property used to track flow, magnitude, direction, and circulation patterns. Tracers can be chemical properties, such as radioactive material or chemical compounds, or physical properties, such as density, temperature, salinity, or dyes, and can be natural or artificially induced. Flow tracers are used in many fields, such as physics, hydrology, limnology, oceanography, environmental studies and atmospheric studies. Conservative tracers remain constant following fluid parcels, whereas reactive tracers (such as compounds undergoing a mutual chemical reaction) grow or decay with time. Active tracers dynamically alter the flow of the fluid by changing fluid properties which appear in the equation of motion, such as density or viscosity, while passive tracers have no influence on flow. Uses in oceanography Ocean tracers are used to deduce small-scale flow patterns, large-scale ocean circulation, water mass formation and changes, "dating" of water masses, and carbon dioxide storage and uptake. Tracers such as temperature, salinity, density, and other conservative tracers are often used to track currents, circulation and water mass mixing. An interesting example occurred when 28,000 plastic ducks fell overboard from a container ship in the middle of the Pacific Ocean. Over the following twelve years, oceanographers recorded where the ducks washed ashore, some thousands of miles from the spill site, and this data was used to calibrate and verify the circulation patterns of the North Pacific Gyre. Transient tracers change over time, such as radioactive material (tritium and cesium-137) and chemical concentrations (CFCs and SF6), which are used to date water masses and can also track mixing. In the mid-1900s, nuclear weapons testing and chemical production released tons of compounds that are not naturally found in the environment. While this was extremely unfortunate, scientists were able to use the concentrations of anthropogenic compounds and the half-lives of radioactive material to determine how old a water body is. The Fukushima nuclear disaster was studied extensively by oceanographers, who tracked the spread of radioactive material throughout the Pacific Ocean and used it to better understand ocean currents and mixing patterns. Biological tracers can also be used to track water masses in the ocean. Phytoplankton blooms can be seen by satellites and move with the changing currents. They can be used as a "checkpoint" to see how well water masses are mixing. Subtropical water is often warm, which is ideal for phytoplankton, but nutrient poor, which inhibits their growth, while subpolar water is cold and nutrient rich. When these two types of water masses mix, as in the Kuroshio Current in the North Pacific, huge phytoplankton blooms often result, because the phytoplankton now have the conditions they need to grow: warm temperatures and high nutrients. Vertical mixing and eddy formation can also cause phytoplankton blooms, and these blooms are tracked by satellites to observe current patterns and mixing. See also Perfluorocarbon tracer External links ctraj Library of advection codes, including passive tracer modelling.
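As a toy illustration of the tracer "dating" idea mentioned above, the sketch below turns the measured fraction of a decaying tracer into an apparent age, assuming radioactive decay is the only process changing its concentration. Real tracer dating (for example tritium–helium dating) must also account for mixing and sources; the half-life is the commonly cited value for tritium, and the concentrations are made-up example numbers.

# Apparent age from radioactive decay alone:
#   N(t) = N0 * (1/2)**(t / t_half)  =>  t = t_half * log2(N0 / N)
import math

t_half_years = 12.32            # approximate tritium half-life
surface_concentration = 1.0     # hypothetical concentration when the water left the surface
measured_concentration = 0.4    # hypothetical concentration in the sampled water mass

apparent_age = t_half_years * math.log2(surface_concentration / measured_concentration)
print(f"apparent age of the water parcel: about {apparent_age:.1f} years")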
No, this text is not related with defense topics | Secondary contact is the process in which two allopatrically distributed populations of a species are geographically reunited. This contact allows for the potential exchange of genes, depending on how reproductively isolated the two populations have become. There are several primary outcomes of secondary contact: extinction of one species, fusion of the two populations back into one, reinforcement, the formation of a hybrid zone, and the formation of a new species through hybrid speciation. Extinction One of the two populations may go extinct due to competitive exclusion after secondary contact. This tends to happen when the two populations have strong reproductive isolation and significant overlap in their niche. Extinction may be prevented if there is an advantage to being rare. For example, sexual imprinting and male-male competition may prevent extinction. The population that goes extinct may leave behind some of its genes in the surviving population if they hybridize. For example, the secondary contact between Homo sapiens and Neanderthals, as well as the Denisovans, left traces of their genes in modern humans. However, if hybridization is so common that the resulting population receives a significant amount of genetic contribution from both populations, the result should be considered a fusion. Fusion The two populations may fuse back into one population. This tends to occur when there is little to no reproductive isolation between the two. During the process of fusion a hybrid zone may occur. This is sometimes called introgressive hybridization or reverse speciation. Concerns have been raised that the homogenizing of the environment may contribute to more and more fusion, leading to the loss of biodiversity. Hybrid zones A hybrid zone may appear during secondary contact, meaning there would be an area where the two populations cohabitate and produce hybrids, often arranged in a cline. The width of the zone may vary from tens of meters to several hundred kilometers. A hybrid zone may be stable, or it may not. Some shift in one direction, which may eventually lead to the extinction of the receding population. Some expand over time until the two populations fuse. Reinforcement may occur in hybrid zones. Hybrid zones are important study systems for speciation. Reinforcement Reinforcement is the evolution towards increased reproductive isolation due to selection against hybridization. This occurs when the populations already have some reproductive isolation, but still hybridize to some extent. Because hybridization is costly (e.g. giving birth to and raising a weak offspring), natural selection favors strong isolation mechanisms that can avoid such an outcome, such as assortative mating. Evidence for speciation by reinforcement has been accumulating since the 1990s. Hybrid speciation Occasionally, the hybrids may be able to survive and reproduce, but not backcross with either of the two parental lineages, thus becoming a new species. This often occurs in plants through polyploidy, including in many important food crops. Occasionally, the hybrids may lead to the extinction of one or both parental lineages. References Ecology Evolutionary biology Speciation
No, this text is not related with defense topics | Koch's postulates () are four criteria designed to establish a causative relationship between a microbe and a disease. The postulates were formulated by Robert Koch and Friedrich Loeffler in 1884, based on earlier concepts described by Jakob Henle, and refined and published by Koch in 1890. Koch applied the postulates to describe the etiology of cholera and tuberculosis, both of which are now ascribed to bacteria. The postulates have been controversially generalized to other diseases. More modern concepts in microbial pathogenesis cannot be examined using Koch's postulates, including viruses (which are obligate intracellular parasites) and asymptomatic carriers. They have largely been supplanted by other criteria such as the Bradford Hill criteria for infectious disease causality in modern public health, and Falkow's criteria for microbial pathogenesis. The postulates Koch's postulates are the following: The microorganism must be found in abundance in all organisms suffering from the disease, but should not be found in healthy organisms. The microorganism must be isolated from a diseased organism and grown in pure culture. The cultured microorganism should cause disease when introduced into a healthy organism. The microorganism must be reisolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent. However, Koch later abandoned the universalist requirement of the first postulate altogether when he discovered asymptomatic carriers of cholera and, later, of typhoid fever. Asymptomatic or subclinical infection carriers are now known to be a common feature of many infectious diseases, especially viral diseases such as polio, herpes simplex, HIV/AIDS, hepatitis C, and COVID-19. As a specific example, all doctors and virologists agree that poliovirus causes paralysis in just a few infected subjects. The second postulate may also be suspended for certain microorganisms or entities that cannot (at the present time) be grown in pure culture. Viruses also require host cells to grow and reproduce and therefore cannot be grown in pure cultures. The third postulate specifies "should" not "must" because, as Koch himself proved in regard to both tuberculosis and cholera, not all organisms exposed to an infectious agent will acquire the infection. Noninfection may be due to such factors as general health and proper immune functioning; acquired immunity from previous exposure or vaccination; or genetic immunity, as with the resistance to malaria conferred by possessing at least one sickle cell allele. There are a few other exceptions to Koch's postulates. A single pathogen can cause several disease conditions. Additionally, a single disease condition can be caused by several different microorganisms. Some pathogens cannot be cultured in the lab, and some pathogens only cause disease in humans. In summary, an infectious agent can be considered to be a sufficient cause for a disease if it satisfies Koch's postulates. Failing that, the postulates suggest that the infectious agent is a necessary, but insufficient, cause for a disease. History Koch's postulates were developed in the 19th century as general guidelines to identify pathogens that could be isolated with the techniques of the day. Even in Koch's time, it was recognized that some infectious agents were clearly responsible for disease even though they did not fulfill all of the postulates. 
Attempts to apply Koch's postulates rigidly to the diagnosis of viral diseases in the late 19th century, at a time when viruses could not be seen or isolated in culture, may have impeded the early development of the field of virology. Koch's postulates have been recognized as largely obsolete by epidemiologists since the 1950s, so, while retaining historical importance and continuing to inform the approach to microbiologic diagnosis, they are not routinely used to demonstrate causality. Koch's postulates have also influenced scientists who examine microbial pathogenesis from a molecular point of view. In 1988, a molecular version of Koch's postulates was developed to guide the identification of microbial genes encoding virulence factors. The claim that HIV causes AIDS cannot be established through Koch's postulates, a limitation that may have lent support to HIV/AIDS denialism. The role of oncoviruses in causing some cancers also does not satisfy Koch's postulates. New discoveries about methods of infection, building on the work of Koch and many others, have shown that some diseases and conditions are not always caused by a single microbe species. According to a 2019 study by Todd and Peters, a newly discovered interaction between the pathogen Staphylococcus aureus and the "fungal opportunist" Candida albicans is being considered a co-infection found in the bodies of sick patients suffering from different conditions. This kind of synergism was found to be lethal in a separate study conducted by Carlson on mice. When mice were infected with one pathogen independently of the other, sickness resulted but the mice were able to recover. When infected with both pathogens together, the mice had a near-100% mortality rate, showing that some pathogens cannot be so easily isolated and may require extra techniques and steps to better prove causation of the disease. For the 21st century Koch's postulates have played an important role in microbiology, yet they have major limitations. For example, Koch was well aware in the case of cholera that the causal agent, Vibrio cholerae, could be found in both sick and healthy people, invalidating his first postulate. Furthermore, viral diseases were not yet discovered when Koch formulated his postulates, and there are many viruses that do not cause illness in all infected individuals, a requirement of the first postulate. Additionally, it was known through experimentation that Helicobacter pylori caused inflammation of the gastric lining when ingested. As evident as the inflammation was, it still did not immediately convince skeptics that H. pylori was associated with stomach ulcers. Eventually, skeptics were silenced when a newly developed antibiotic treatment eliminated the bacteria and ultimately cured the disease. Koch's postulates are also of limited effectiveness when evaluating biofilms, Somni cells, and viruses. Biofilms require cultivation by molecular methods rather than traditional methods, and these alternative methods do not detect the cause of infection, which therefore interferes with the third postulate, that microorganisms should cause disease. For example, Somni cells and viruses cannot be cultured. The Somni cells, also called sleeping cells, become dormant due to strain on the cell. This state of sleep prevents the cell from growing in the culture. This is similar to how viruses cannot grow in axenic culture: viruses need living host cells to replicate, so such a culture is not a suitable host.
Byrd and Segre have proposed changes to the postulates to make them more accurate for today's world. Their revisions involve the third postulate: they disagree that a pathogen will always cause disease. Their first revision involves colonization resistance. Colonization resistance allows an organism to feed off of the host and protect it from pathogens that would have caused disease if the organism was not attached to the host. Their second revision is that a community of microbes could help inhibit pathogens even further, preventing the pathogen from spreading disease as it is supposed to. Similar to Byrd and Segre, Rivers suggested revisions to Koch's postulates. He believed that, although the original postulates were made as a guide, they were actually an obstacle. Rivers wanted to show the link between viruses and diseases. Rivers' own postulates are: the virus must be connected to disease consistently; the outcome of experimentation must indicate that the virus is directly responsible for the disease. Contradictions and occurrences such as these have led many to believe that a fifth postulate may be required. If accepted, this postulate would state that sufficient microbial data should allow scientists to treat, cure, or prevent the particular disease. More recently, modern nucleic-acid-based microbial detection methods have made Koch's original postulates even less relevant. These methods enable the identification of microbes that are associated with a disease, but which cannot be cultured. Also, these methods are very sensitive, and can often detect very low levels of viruses in healthy people. These new methods have led to revised versions of Koch's postulates. Fredricks and Relman have suggested a set of postulates for the novel field of microbial pathogenesis. These modifications are still controversial in that they do not account well for established disease associations, such as papillomavirus and cervical cancer, nor do they take into account prion diseases, which have no nucleic acid sequences of their own. See also Bradford Hill criteria Causal inference Mill's Methods Molecular Koch's postulates Willoughby D. Miller References Further reading Contagion: Historical Views of Diseases and Epidemics from Harvard Library Epidemiology Infectious diseases Robert Koch 1884 in biology 1884 in Germany |
No, this text is not related with defense topics | The Aero Club of India (ACI) is the apex body of all flying clubs and institutions involved in flight training, and also the national sports federation for air sports in India. Legally, it is registered as a non-profit, non-commercial organization. The ACI was founded in 1927 as the Royal Aero Club of India and Burma Ltd. Prior to India's independence in 1947, the organization had vast regulatory powers including the authority to issue flying licences to pilots and to approve certified flight instructors, and to issue licences for arms and wireless facilities to foreign aviators. However, most of these powers were transferred to government agencies after independence. The ACI lost nearly all of its regulatory powers after the formation of the Directorate General of Civil Aviation (DGCA). History The ACI was founded by businessman and hotelier Victor Sassoon as the Royal Aero Club of India and Burma Ltd. (RACIB) on 19 September 1927. The club's primary objectives were to create awareness of air sports in the country, and to provide training to people seeking employment in commercial aviation. The club had been patronized by the British Indian government since its inception, with the Viceroy of India and Burma serving as its Patron-in-Chief, the Commander-in-chief of India serving as its President, and the Director General of Posts and Telegraphs serving as the Vice President. RACIB's constitution was very similar to that of the Royal Aero Club of Great Britain. RACIB received affiliation from the Royal Aero Club and the Societe Aviation Internationale. RACIB sought to establish flying clubs across the country in order to achieve its founding objectives. The first such club, the Delhi Flying Club, was formed in May 1928. RACIB subsequently established flying clubs in Karachi (in present-day Pakistan), Allahabad, Calcutta and Bombay. RACIB received financial assistance from the government to acquire two Pussmoth aircraft for each flying club. The government also assisted in the flying clubs' financial operations. The first flying licence was issued by RACIB to J.R.D. Tata in 1929. Tata would go on to make India's first commercial flight on 15 October 1932. Tata donated the plane used to make the flight to the Aero Club of India in 1985. Today, it is displayed, suspended from the ceiling, at the ACI's headquarters at Safdarjung Airport. RACIB essentially operated as a branch of the Royal Aero Club of Great Britain until India's independence. In the years preceding independence, RACIB had suspended all of its operations due to World War II. Post-independence in 1947, RACIB was re-constituted as the Aero Club of India Ltd. (ACI). India's first Prime Minister Jawaharlal Nehru served as the organization's first President, and Constituent Assembly member H.N. Kunzru served as its Vice President. The ACI became a full member of the Fédération Aéronautique Internationale in 1950. The organization assumed its current name in 1963 by dropping the word "Ltd." from its official name. Rajiv Gandhi became ACI President in 1984 and held the position until becoming Prime Minister of India in October 1984. The Rajiv Gandhi administration later allotted 30 acres of land near the Safdarjung Airport to the ACI for a period of 30 years at a concessional annual rate. The ACI moved into its new headquarters at Safdarjung Airport in September 1985.
After the licence expired in September 2013, the ACI attempted to renew the licence for another 30 years and sent a cheque worth as a licence fee to the Airports Authority of India (AAI), the current owner of the land. However, the renewal was denied by AAI who instead issued an eviction notice to the ACI. The ACI challenged the eviction in the Delhi High Court. The petition was dismissed by the Court which observed that the facility was being used "more as a marriage/party venue than a flying club", and that "no injustice had been meted out" by evicting the plaintiff. References Flying clubs Sports governing bodies in India 1927 establishments in India Organisations based in Delhi Air sports Aviation organizations Aeronautics organizations Aerobatic organizations Sports organizations established in 1927 |
No, this text is not related with defense topics | Hyperhydricity (previously known as vitrification) is a physiological malformation that results in excessive hydration, low lignification, impaired stomatal function and reduced mechanical strength of tissue culture-generated plants. The consequence is poor regeneration of such plants without intensive greenhouse acclimation for outdoor growth. It may also lead to leaf-tip and bud necrosis in some cases, which often leads to loss of apical dominance in the shoots. In general, the main symptom of hyperhydricity is a translucent appearance, signified by a shortage of chlorophyll and high water content. Specifically, the presence of a thin or absent cuticular layer, a reduced number of palisade cells, irregular stomata, a less developed cell wall and large intracellular spaces in the mesophyll cell layer have been described as some of the anatomic changes associated with hyperhydricity. Causes The main causes of hyperhydricity in plant tissue culture are those factors triggering oxidative stress, such as high salt concentration, high relative humidity, low light intensity, gas accumulation in the atmosphere of the jar, the length of time intervals between subcultures, the number of subcultures, the concentration and type of gelling agent, the type of explants used, microelement concentrations and hormonal imbalances. Hyperhydricity is commonly apparent in liquid culture-grown plants or when there is a low concentration of gelling agent. High ammonium concentration also contributes to hyperhydricity. Control Hyperhydricity can be controlled by modifying the atmosphere of the culture vessels. Adjusting the relative humidity in the vessel is one of the most important parameters to be controlled. Use of gas-permeable membranes may help in this regard as this allows increased exchange of water vapor and other gases such as ethylene with the surrounding environment. Using a higher concentration of gelling agent, together with a higher-strength gelling agent, may reduce the risk of hyperhydricity. Hyperhydricity can also be controlled by bottom cooling, which allows water to condense on the medium, the use of the cytokinin meta-topolin (6-(3-Hydroxybenzylamino)purine), the combination of lower cytokinin and ammonium nitrate in the medium, use of nitrate or glutamine as the sole nitrogen source and decreasing the ratio of NH4+:NO3- in the medium. In studies on calcium deficiency in tissue cultures of Lavandula angustifolia, it was shown that an increase in calcium in the medium reduced hyperhydricity. See also Callus (cell biology) Chimera (genetics) Somatic embryogenesis Embryo rescue Notes and references External links http://users.ugent.be/~pdebergh/tro/tro3_d01.htm Cell culture Biotechnology Cell biology Plant physiology
No, this text is not related with defense topics | In art, a study is a drawing, sketch or painting done in preparation for a finished piece, as visual notes, or as practice. Studies are often used to understand the problems involved in rendering subjects and to plan the elements to be used in finished works, such as light, color, form, perspective and composition. Studies can have more impact than more-elaborately planned work, due to the fresh insights the artist gains while exploring the subject. The excitement of discovery can give a study vitality. When layers of the work show changes the artist made as more was understood, the viewer shares more of the artist's sense of discovery. Written notes alongside visual images add to the import of the piece as they allow the viewer to share the artist's process of getting to know the subject. Studies inspired some of the first 20th-century conceptual art, where the creative process itself becomes the subject of the piece. Since the process is what is all-important in studies and conceptual art, the viewer may be left with no material object of art. Studies can be traced back at least as far as the Italian Renaissance, from which art historians have preserved some of Michelangelo's studies. One in particular, his study for the Libyan Sibyl on the ceiling of the Sistine Chapel, is based on a male model, though the finished painting is of a woman. Such details help to reveal the thought processes and techniques of many artists. References Drawing
No, this text is not related with defense topics | Respiratory inductance plethysmography (RIP) is a method of evaluating pulmonary ventilation by measuring the movement of the chest and abdominal wall. Accurate measurement of pulmonary ventilation or breathing often requires the use of devices such as masks or mouthpieces coupled to the airway opening. These devices are often both encumbering and invasive, and thus ill suited for continuous or ambulatory measurements. As an alternative RIP devices that sense respiratory excursions at the body surface can be used to measure pulmonary ventilation. According to a paper by Konno and Mead "the chest can be looked upon as a system of two compartments with only one degree of freedom each". Therefore, any volume change of the abdomen must be equal and opposite to that of the rib cage. The paper suggests that the volume change is close to being linearly related to changes in antero-posterior (front to back of body) diameter. When a known air volume is inhaled and measured with a spirometer, a volume-motion relationship can be established as the sum of the abdominal and rib cage displacements. Therefore, according to this theory, only changes in the antero-posterior diameter of the abdomen and the rib cage are needed to estimate changes in lung volume. Several sensor methodologies based on this theory have been developed. RIP is the most frequently used, established and accurate plethysmography method to estimate lung volume from respiratory movements. RIP has been used in many clinical and academic research studies in a variety of domains including polysomnographic (sleep), psychophysiology, psychiatric research, anxiety and stress research, anesthesia, cardiology and pulmonary research (asthma, COPD, dyspnea). Technology A respiratory inductance plethysmograph consists of two sinusoid wire coils insulated and placed within two 2.5 cm (about 1 inch) wide, lightweight elastic and adhesive bands. The transducer bands are placed around the rib cage under the armpits and around the abdomen at the level of the umbilicus (belly button). They are connected to an oscillator and subsequent frequency demodulation electronics to obtain digital waveforms. During inspiration the cross-sectional area of the rib cage and abdomen increases altering the self-inductance of the coils and the frequency of their oscillation, with the increase in cross-sectional area proportional to lung volumes. The electronics convert this change in frequency to a digital respiration waveform where the amplitude of the waveform is proportional to the inspired breath volume. A typical pitch of the wire sinusoid is in the range 1-2 cm and the inductance of the belt is ~ 2-4 microhenries per metre of belt. The inductance can be measured by making it part of the tuned circuit of an oscillator and then measuring the oscillation frequency. Single Vs. Dual Band Respiration Dual Band Respiration Konno and Mead extensively evaluated a two-degrees-of-freedom model of chest wall motion, whereby ventilation could be derived from measurements of rib cage and abdomen displacements. With this model, tidal volume (Vt) was calculated as the sum of the anteroposterior dimensions of the rib cage and abdomen, and could be measured to within 10% of actual Vt as long as a given posture was maintained. Single Band Respiration Changes in volume of the thoracic cavity can also be inferred from displacements of the rib cage and diaphragm. 
Motion of the rib cage can be directly assessed, whereas the motion of the diaphragm is indirectly assessed as the outward movement of the anterolateral abdominal wall. However, accuracy issues arise when trying to obtain respiratory volumes from a single respiration band placed either at the thorax, abdomen or midline. Due to differences in posture and thoraco-abdominal respiratory synchronization it is not possible to obtain accurate respiratory volumes with a single band. Furthermore, the shape of the acquired waveform tends to be non-linear due to the non-exact co-ordination of the two respiratory compartments. This further limits quantification of many useful respiratory indices and limits utility to only respiration rates and other basic timing indices. Therefore, to accurately perform volumetric respiratory measurements, a dual band respiratory sensor system is required. RIP Data Analysis Dual band respiratory inductance plethysmography can be used to describe various measures of complex respiratory patterns. The measures commonly analyzed include the following (a short computational sketch of several of these indices follows at the end of this article). Respiratory rate is the number of breaths per minute, a non-specific measure of respiratory disorder. Tidal volume (Vt) is the volume inspired and expired with each breath. Variability in the waveform can be used to differentiate between restrictive (less) and obstructive pulmonary diseases as well as acute anxiety. Minute ventilation is equivalent to tidal volume multiplied by respiratory rate and is used to assess metabolic activity. Peak inspiratory flow (PifVt) is a measure that reflects respiratory drive: the higher its value, the greater the respiratory drive, in the presence of coordinated or even moderately discoordinated thoraco-abdominal movements. Fractional inspiratory time (Ti/Tt) is the "Duty cycle" (Ti/Tt, the ratio of time of inspiration to total breath time). Low values may reflect severe airways obstruction and can also occur during speech. Higher values are observed when snoring. Work of breathing is a measure of a "Rapid shallow breathing index". Peak/mean inspiratory and expiratory flow measures the presence of upper airway flow limitations during inspiration and expiration. %RCi is the percent contribution of the rib cage excursions to the tidal volume Vt. The %RCi contribution to Tidal Volume ratio is obtained by dividing the inspired volume in the RC band by the inspired volume in the algebraic sum of RC + AB at the point of the peak of inspiratory tidal volume. This value is higher in women than in men. The values are also generally higher during acute hyperventilation. Phase Angle - Phi - Normal breathing involves a combination of both thoracic and abdominal (diaphragmatic) movements. During inhalation, both the thoracic and abdominal cavities simultaneously expand in volume, and thus in girth as well. If there is a blockage in the trachea or nasopharynx, the phasing of these movements will shift in relation to the degree of the obstruction. In the case of a total obstruction, the strong chest muscles force the thorax to expand, pulling the diaphragm upward in what is referred to as "paradoxical" breathing – paradoxical in that the normal phases of thoracic and abdominal motion are reversed. This is commonly referred to as the Phase Angle. Apnea & hypopnea detection - Diagnostic components of sleep apnea/hypopnea syndrome and periodic breathing.
Apnea & hypopnea classification - Phase relation between thorax and abdomen classifies apnea/hypopnea events into central, mixed, and obstructive types. qDEEL quantitative difference of end expiratory lung volume is a change in the level of end expiratory lung volume and may be elevated in Cheyne-Stokes respiration and periodic breathing. Accuracy Dual band respiratory inductance plethysmography was validated in determining tidal volume during exercise and shown to be accurate. A version of RIP embedded in a garment called the LifeShirt was used for these validation studies. References External links Use of RIP for preclinical research in freely moving animals : emkaPACK4G Jacketed External Telemetry (large animals) Jacketed External Telemetry JET(large animals) DECRO(small animals) Biotechnology Biomedicine |
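The index definitions above lend themselves to a simple calculation. The following is a minimal sketch, assuming per-breath inspired volumes and timings have already been extracted from calibrated rib-cage (RC) and abdomen (AB) band signals; the function and variable names are illustrative and not taken from any particular device or from the source.

```python
# Minimal sketch (illustrative only): derive a few RIP indices from
# per-breath values assumed to be extracted from calibrated rib-cage (RC)
# and abdomen (AB) band signals. Units: volumes in mL, times in seconds.

def breath_indices(rc_insp_ml, ab_insp_ml, t_insp_s, t_total_s):
    vt = rc_insp_ml + ab_insp_ml                  # tidal volume: RC + AB contributions
    resp_rate = 60.0 / t_total_s                  # breaths per minute
    minute_ventilation = vt * resp_rate / 1000.0  # litres per minute
    duty_cycle = t_insp_s / t_total_s             # fractional inspiratory time Ti/Tt
    pct_rc = 100.0 * rc_insp_ml / vt              # %RCi: rib-cage share of Vt
    return {
        "Vt_mL": vt,
        "rate_bpm": resp_rate,
        "VE_L_per_min": minute_ventilation,
        "Ti_over_Tt": duty_cycle,
        "pct_RC": pct_rc,
    }

# Example breath: 250 mL from the rib cage, 200 mL from the abdomen,
# 1.5 s inspiration within a 4 s breath.
print(breath_indices(rc_insp_ml=250, ab_insp_ml=200, t_insp_s=1.5, t_total_s=4.0))
```

In practice the per-breath volumes would come from a spirometer-calibrated volume-motion relationship as described above; the sketch only shows how the derived indices combine once those values are available.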
No, this text is not related with defense topics | Attention inequality is a term used to describe the unequal distribution of attention across users of social networks, people in general, and scientific papers. The Yun Family Foundation introduced the "Attention Inequality Coefficient" as a measure of inequality in attention and argues for it on the basis of its close interconnection with wealth inequality. Relationship to economic inequality Attention inequality is related to economic inequality since attention is an economically scarce good. The same measures and concepts as in classical economics can be applied to the attention economy. The relationship also extends beyond the conceptual level: considering the AIDA process, attention is the prerequisite for real monetary income on the Internet. Using data from 2018, a significant relationship between likes and comments on Facebook and donations has been demonstrated for non-profit organizations. Extent As data from 2008 show, 50% of the attention is concentrated on approximately 0.2% of all hostnames, and 80% on 5% of hostnames. In 2008 the Gini coefficient of the attention distribution was over 0.921 for domain names such as ac.jp and 0.985 for .org domains. The Gini coefficient was measured on Twitter in 2016 for the number of followers as 0.9412, for the number of mentions as 0.9133, and for the number of retweets as 0.9034. For comparison, the world's income Gini coefficient was 0.68 in 2005 and 0.904 in 2018. More than 96% of all followers, 93% of the retweets, and 93% of all mentions are owned by 20% of Twitter users. Causes At least for scientific papers, today's consensus is that the inequality cannot be explained by variations in quality and individual talent. The Matthew effect plays a significant role in the emergence of attention inequality: those who already enjoy a lot of attention get even more, while those who do not lose even more. There is significant evidence that ranking algorithms can alleviate the inequality in the number of posts across topics. See also Attention economy Famous for being famous Kardashian index Knowledge gap hypothesis Ortega hypothesis Pareto distribution External links Attention inequality by Yun Family Foundation References Social inequality Economic inequality Information Age Matthew effect
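To make the quoted figures concrete, here is a minimal sketch of how a Gini coefficient can be computed for an attention distribution such as follower counts. The follower numbers below are hypothetical; the coefficients cited above come from the referenced studies, not from this code.

```python
# Minimal sketch: Gini coefficient of an "attention" distribution
# (e.g. follower counts). Illustrative only.

def gini(values):
    """Gini coefficient of non-negative values: 0 means perfect equality,
    values close to 1 mean extreme concentration on a few items."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard rank-weighted formula over the sorted values.
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * weighted_sum) / (n * total) - (n + 1.0) / n

# Hypothetical follower counts: a few extremely popular accounts and many small ones.
followers = [5_000_000, 200_000, 10_000] + [150] * 500 + [5] * 2000
print(round(gini(followers), 3))  # close to 1, i.e. highly unequal
```

The same function can be applied to mentions or retweets per account, which is how the per-metric coefficients quoted for Twitter are typically derived.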
No, this text is not related with defense topics | Detoxification foot baths, also known as foot detox, ionic cleansing, ionic foot bath and aqua/water detox, are pseudoscientific devices marketed as being able to remove toxins from the human body. They work by providing an electric current to an electrode array immersed in a salt water solution. When switched on, the electrodes rapidly rust in a chemical process called electrolysis, which quickly turns the water brown. This reaction happens regardless of whether or not a person's feet are immersed in the water, and no toxins from the human body have ever been detected in the water after use. Description Detoxification foot baths first became popular with consumers in the early 2000s and quickly became popular in spas due to the theatre of the visible brown water and sludge produced by the devices. One manufacturer of the device, known as Aqua Detox, states that the concept is based on research from the 1920s to 1930s by Royal Rife, an inventor who claimed his Rife Devices could "devitalize disease organisms" by vibrating them at certain frequencies. Detoxification foot baths consist of two major components: a simple container in which to place the feet and an electrode array. Usually a fragrant, warm salted water is used as the electrolyte, and the customer's feet, along with the array, are immersed in this water. Inside the array are two metal electrodes, between which a current flows, causing the electrodes to rust rapidly due to electrolysis. This reaction quickly turns the salt water solution brown, and flakes of rust may also be visible in the water. Electrode arrays used in this application degrade quickly, and usually need to be replaced after roughly 16 hours of use. Claims Proponents of detoxification foot baths claim they are capable of helping the human body in numerous ways. Claimed effects range from "re-balancing the cellular energy" of the body and helping with headaches and sleeplessness to improving kidney, liver and immune system function. More serious claims, such as helping with heavy metal toxicity and autism spectrum disorder, have been made by various proponents. Some spas and manufacturers provide charts to show their customers the different areas of their bodies that toxins can come from. In these charts the different color of the water in the foot bath, after treatment, purportedly defines where in the body the toxins have come from. There is currently no scientific basis to the claims these charts make. Criticism Inside Edition visited several spas in New York City in 2011 to investigate detox foot treatments. At each spa they visited, they were told that the treatments would improve their overall health, and that the change in the color of the water was due to the release of toxins from their bodies. Inside Edition then purchased their own detox foot bath and had it examined by electrical engineer Steve Fowler at his lab. After examining the device, he concluded that "Everything you see here is just rust, this is nothing more than two pieces of metal rusting, it has nothing to do with toxins. It is just a simple chemistry experiment." In his 2008 book Bad Science, Ben Goldacre discussed his experiences investigating the science behind detox foot baths. After reading an article in The Daily Telegraph about them, he suspected that the brown water could be a result of rust. He then set up his own experiment using a bucket of water, a car battery and two large nails.
His experiment quickly changed the color of the water in the bucket to a dark brown with a sludge on top. With this information in mind, he sent a friend along to a local spa to get a treatment and to collect samples of the water before and after. The samples were sent to the Medical Toxicology Unit at St Mary's Hospital in London to be analyzed. The water sampled before the detox foot bath was activated contained only 0.54mg per liter of iron and after the treatment was complete it contained 23.6mg per liter. For reference, Goldacre's water sample from his original experiment contained 97mg per liter. Goldacre approached a number of manufacturers of the devices regarding their claims about removing toxins from the body. None were able to say exactly which toxins were being removed from the body or even if any were at all. With that information, he decided to have his water samples tested for creatinine and urea, two of the smallest breakdown molecules that the human body creates. Neither of these molecules were found in the samples, just the iron oxide rust. Joe Schwarcz also explained that putting the iron and aluminum electrodes in water will produce iron oxide, showing as various shades of brownish residue. The magnesium and calcium naturally present in human sweat increase the electrolytic reaction. After trying the apparatus and getting the brown residue even when the bath is running without the presence of human feet, Timothy Caulfield concluded that "this is a really good example of what's ultimately nothing but a marketing scam." See also References External links Health fraud Deception Pseudoscience Alternative detoxification Fringe science Unnecessary health care Scientific skepticism Alternative medicine Energy therapies |
No, this text is not related with defense topics | A subclinical infection — sometimes called a preinfection or inapparent infection — is an infection that is nearly or completely asymptomatic (no signs or symptoms). A subclinically infected person is thus a paucisymptomatic or asymptomatic carrier of a microbe, intestinal parasite, or virus that usually is a pathogen causing illness, at least in some individuals. Many pathogens spread by being silently carried in this way by some of their host population. Such infections occur both in humans and animals. An example of an asymptomatic infection is a mild common cold that is not noticed by the infected individual. Since subclinical infections often occur without eventual overt signs, their existence is only identified by microbiological culture or DNA techniques such as polymerase chain reaction. Infection transmission/signs An individual may only develop signs of an infection after a period of subclinical infection, a duration that is called the incubation period. This is the case, for example, for subclinical sexually transmitted diseases such as AIDS and genital warts. It is thought that individuals with such subclinical infections, and those that never develop overt illness, create a reserve of individuals that can transmit an infectious agent to other individuals. Because such cases of infection do not come to clinical attention, health statistics can often fail to measure the true prevalence of an infection in a population, and this prevents the accurate modeling of its infectious transmission. Types of subclinical infections The following pathogens (together with their symptomatic illnesses) are known to be carried asymptomatically, often in a large percentage of the potential host population: Baylisascaris procyonis Bordetella pertussis (Pertussis or whooping cough) Chlamydia pneumoniae Chlamydia trachomatis (Chlamydia) Clostridium difficile Cyclospora cayetanensis Dengue virus Dientamoeba fragilis Entamoeba histolytica enterotoxigenic Escherichia coli Epstein-Barr virus Group A streptococcal infection Helicobacter pylori Herpes simplex (oral herpes, genital herpes, etc.) HIV-1 (AIDS) Influenza (strains) Legionella pneumophila (Legionnaires' disease) measles viruses Mycobacterium leprae (leprosy) Mycobacterium tuberculosis (tuberculosis) Neisseria gonorrhoeae (gonorrhoea) Neisseria meningitidis (Meningitis) nontyphoidal Salmonella noroviruses Poliovirus (Poliomyelitis) Plasmodium (Malaria) Rabies lyssavirus (Rabies) rhinoviruses (Common cold) Salmonella enterica serovar Typhi (Typhoid fever) SARS-CoV-2 (COVID-19) and other coronaviruses Staphylococcus aureus Streptococcus pneumoniae (Bacterial pneumonia) Treponema pallidum (syphilis) Host tolerance Fever, sickness behavior, and other signs of infection are often taken to be directly caused by the pathogen. However, they are evolved physiological and behavioral responses of the host to clear itself of the infection. Instead of incurring the costs of deploying these evolved responses to infections, the body may opt to tolerate an infection as an alternative to seeking to control or remove the infecting pathogen. Subclinical infections are important since they allow infections to spread from a reserve of carriers. They also can cause clinical problems unrelated to the direct issue of infection. For example, a subclinical urinary tract infection in a woman may cause preterm delivery if she becomes pregnant and the infection is not properly treated.
See also References Further reading Human diseases and disorders Epidemiology Infectious diseases Medical terminology Symptoms |
No, this text is not related with defense topics | Radical evil (German: das radikale Böse) is a phrase used by the German philosopher Immanuel Kant, representing a corresponding Christian term. Kant believed that human beings naturally have a tendency to be evil. He explains radical evil as corruption that entirely takes over a human being and leads to desires acting against the universal moral law. The outcome of one's natural tendency, or innate propensity, towards evil is actions or "deeds" that subordinate the moral law. According to Kant, these actions oppose universal moral maxims and arise from self-love and self-conceit. Many authors see Kant's concept of radical evil as a paradox and as inconsistent with the development of his moral theories. Origin The concept of radical evil was constructed by Immanuel Kant and first explained thoroughly in Kant's Religion within the Bounds of Reason Alone in 1793. The concept has been described as a Kantian adaptation of the Lutheran "simul justus et peccator." Categorical imperatives The categorical imperative (CI) is the foundation of morality which Kant uses to develop the notion of radical evil. Kant characterized morality in terms of categorical imperatives. The CI is described as a set of boundaries that one should not cross regardless of one's natural desires. We are said to have obligations to follow these principles because they derive from reason. When one acts against the CI, one is seen to act irrationally and therefore immorally. Propensity of evil vs. the natural predisposition of good To be morally evil is to possess desires that cause one to act against the good. To be radically evil is to be no longer able to act in accordance with the good, because one determinedly follows maxims of willing that discount the good. According to Kant, a person has the choice between good maxims, rules that respect the moral law, and evil maxims, rules that contradict or oppose the moral law. One who disregards and acts against the moral law is described as corrupted by an innate propensity to evil. Propensity is explained as a natural but non-necessary characteristic of a human being. A propensity is therefore a tendency, or inclination, in one's behavior to act in accordance with or in opposition to the moral law. This propensity to evil is the source of one's immoral actions and therefore entirely corrupts one's natural predisposition to good. Since this corruption affects the person as a whole, the evil is considered to be radical. This is not to say that radical evil is a fixed mindset: the propensity to evil can be revised through what is described as a "revolution of thought", which reforms one's character through moral agents that practice universal ethics. Incentives in humanity Kant states that human willing is either good or evil; it is one or the other. Human willing is considered good if one's action respects the moral law. There are three incentives in humanity with which we align our willing: (1) animality, (2) humanity, and (3) personality. Kant's concept of human freedom is characterized by three predispositions of human beings: The existential drive for "self-preservation", the sexual drive for breeding, the preservation of the child born of this breeding, and finally the "social drive" towards other humans. The propensity "to gain worth in the opinion of others." Through this predisposition, "jealousy and rivalry" are produced among beings, and hence culture is incentivised.
One's likelihood of following the moral law. Inconsistency in ideas The inconsistency of Kant's moral theories has been pointed out and argued by many authors. Kant changes his supporting arguments and claims throughout his work, which some philosophers have found "scandalous", "inconsistent", and "indecisive". From this, Kant's idea of radical evil is seen as deviant and as an undeveloped concept that does not support his overall ethics. Even though his development of the idea is seen as inconsistent, it is argued that his concept of radical evil aligns with his ideas of human freedom, the moral law, and moral responsibility. References Footnotes Bibliography Huang, Hshuan, "Kant's Concept of Radical Evil" Kant, Immanuel, Kant: Religion within the Boundaries of Mere Reason: And Other Writings (Cambridge Texts in the History of Philosophy), Cambridge University Press (January 28, 1999). Stanford Encyclopedia of Philosophy, "Radical Evil" in "Kant's Philosophy of Religion" Ethics Immanuel Kant Philosophy of religion Good and evil
No, this text is not related with defense topics | Double switching, double cutting, or double breaking is the practice of using a multipole switch to close or open both the positive and negative sides of a DC electrical circuit, or both the hot and neutral sides of an AC circuit. This technique is used to prevent shock hazard in electric devices connected with unpolarised AC power plugs and sockets. Double switching is a crucial safety engineering practice in railway signalling, wherein it is used to ensure that a single false feed of current to a relay is unlikely to cause a wrong-side failure. It is an example of using redundancy to increase safety and reduce the likelihood of failure, analogous to double insulation. Double switching increases the cost and complexity of systems in which it is employed, for example by extra relay contacts and extra relays, so the technique is applied selectively where it can provide a cost-effective safety improvement. Examples Landslip and Washaway Detectors A landslip or washaway detector is buried in the earth embankment, and opens a circuit should a landslide occur. It is not possible to guarantee that the wet earth of the embankment will not complete the circuit which is supposed to break. If the circuit is double cut with positive and negative wires, any wet conductive earth is likely to blow a fuse on the one hand, and short the detecting relay on the other hand, either of which is almost certain to apply the correct warning signal. Accidents Clapham The Clapham Junction rail crash of 1988 was caused in part by the lack of double switching (known as "double cutting" in the British railway industry). The signal relay in question was switched only on the hot side, while the return current came back on an unswitched wire. A loose wire bypassed the contacts by which the train detection relays switched the signal, allowing the signal to show green when in fact there was a stationary train ahead. 35 people were killed in the resultant collision. United Flight 811 A similar accident on the United Airlines Flight 811 was caused in part by a single-switched safety circuit for the baggage door mechanism. Failure of the wiring insulation in that circuit allowed the baggage door to be unlocked by a false feed, leading to a catastrophic de-pressurisation, and the deaths of nine passengers. Signalling in NSW A study of railway electrical signalling in New South Wales from the 1900s, shows an ever increasing proportion of double switching compared to single switching. Double switching does of course cost more wires, more relay contacts, and testing. On the other hand double switching is inherently less prone to wrong side failures; it helps overcome short-circuit faults that are hard to test for. Partial double switching might double switch the lever controls, and the track circuits between one signal and the next, while single switching the track circuits in the less critical overlap beyond the next signal. Double switching is facilitated by more modern relays that have more contacts in less space: Pre-1950 Shelf Type Relay - 12 contacts (front (make) and back (break)) - full size Post-1950 Q-type plug in relay - 16 contacts (front (make) and back (break)) - about half size See also Redundancy (engineering) Double insulation Single-wire earth return References Couplers Railway signalling Safety Fault tolerance |
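As a rough illustration of the reasoning above, the sketch below models, under simplifying assumptions, how a single false feed affects a single-cut versus a double-cut relay circuit; the function and scenario are hypothetical and not drawn from any signalling standard or the accident reports cited above.

```python
# Minimal sketch (hypothetical model): compare how single-switched and
# double-switched relay circuits respond to one false feed, e.g. a loose
# wire touching the supply side of the relay coil.

def relay_energised(hot_closed: bool, neutral_closed: bool, false_feed_on_hot: bool) -> bool:
    """The relay pulls in only if it sees supply potential on one side
    AND a complete return path on the other side."""
    hot_present = hot_closed or false_feed_on_hot  # a false feed bypasses the hot-side contact
    return_path = neutral_closed                   # the return side must still be switched through
    return hot_present and return_path

# Single switching: only the hot side is switched; the return is permanently wired.
single_cut = relay_energised(hot_closed=False, neutral_closed=True, false_feed_on_hot=True)

# Double switching: both poles are opened by the controlling contacts.
double_cut = relay_energised(hot_closed=False, neutral_closed=False, false_feed_on_hot=True)

print(f"single-cut circuit wrongly energised by one false feed: {single_cut}")   # True
print(f"double-cut circuit wrongly energised by one false feed:  {double_cut}")  # False
```

In this simplified model a single false feed is enough to energise the single-cut relay, mirroring the loose-wire scenario described above, while the double-cut relay needs two independent faults before it can be wrongly energised.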
No, this text is not related with defense topics | The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) is an initiative that brings together regulatory authorities and pharmaceutical industry to discuss scientific and technical aspects of pharmaceutical product development and registration. The mission of the ICH is to promote public health by achieving greater harmonisation through the development of technical Guidelines and requirements for pharmaceutical product registration. Harmonisation leads to a more rational use of human, animal and other resources, the elimination of unnecessary delay in the global development, and availability of new medicines while maintaining safeguards on quality, safety, efficacy, and regulatory obligations to protect public health. Junod notes in her 2005 treatise on Clinical Drug Trials that "Above all, the ICH has succeeded in aligning clinical trial requirements." History In the 1980s the European Union began harmonising regulatory requirements. In 1989, Europe, Japan, and the United States began creating plans for harmonisation. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) was created in April 1990 at a meeting in Brussels. ICH had the initial objective of coordinating the regulatory activities of the European, Japanese and United States regulatory bodies in consultation with the pharmaceutical trade associations from these regions, to discuss and agree the scientific aspects arising from product registration. Since the new millennium, ICH's attention has been directed towards extending the benefits of harmonisation beyond the founding ICH regions. In 2015, ICH underwent several reforms and changed its name to the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use while becoming a legal entity in Switzerland as a non-profit association. The aim of these reforms was to transform ICH into a truly global initiative supported by a robust and transparent governance structure. The ICH Association established an Assembly as the over-arching governing body with the aim of focusing global pharmaceutical regulatory harmonisation work in one venue that allows pharmaceutical regulatory authorities and concerned industry organisations to be more actively involved in ICH’s harmonisation work. The new Assembly met for the first time on 23 October 2015. Structure The ICH comprises the following bodies: ICH Assembly ICH Management Committee MedDRA Management Committee ICH Secretariat The ICH Assembly brings together all Members and Observers of the ICH Association as the overarching governing body of ICH. It adopts decisions in particular on matters such as on the adoption of ICH Guidelines, admission of new Members and Observers, and the ICH Association’s work plans and budget. Member representatives appointed to the Assembly are supported by ICH Coordinators who represent each Member to the ICH Secretariat on a daily basis. The ICH Management Committee (MC) is the body that oversees operational aspects of ICH on behalf of all Members, including administrative and financial matters and oversight of the Working Groups (WGs). The MedDRA Management Committee (MC) has responsibility for direction of MedDRA, ICH’s standardised medical terminology. The MedDRA MC has the role of managing, supporting, and facilitating the maintenance, development, and dissemination of MedDRA. 
The ICH Secretariat is responsible for day-to-day management of ICH, coordinating ICH activities as well as providing support to the Assembly, the MC and Working Groups. The ICH Secretariat also provides support for the MedDRA MC. The ICH Secretariat is located in Geneva, Switzerland. The ICH WGs are established by the Assembly when a new technical topic is accepted for harmonisation, and are charged with developing a harmonised guideline that meets the objectives outlined in the Concept Paper and Business Plan. Face-to-face meetings of the WG will normally only take place during the biannual ICH meetings. Interim reports are made at each meeting of the Assembly and made publicly available on the ICH website. Process of Harmonisation ICH harmonisation activities fall into 4 categories: Formal ICH Procedure, Q&A Procedure, Revision Procedure and Maintenance Procedure, depending on the activity to be undertaken. The development of a new harmonised guideline and its implementation (the formal ICH procedure) involves 5 steps: Step 1: Consensus building The WG works to prepare a consensus draft of the Technical Document, based on the objectives set out in the Concept Paper. When consensus on the draft is reached within the WG, the technical experts of the WG will sign the Step 1 Experts sign-off sheet. The Step 1 Experts Technical Document is then submitted to the Assembly to request adoption under Step 2 of the ICH process. Step 2a: Confirmation of consensus on the Technical Document Step 2a is reached when the Assembly agrees, based on the report of the WG, that there is sufficient scientific consensus on the technical issues for the Technical Document to proceed to the next stage of regulatory consultation. The Assembly then endorses the Step 2a Technical Document. Step 2b: Endorsement of draft Guideline by Regulatory Members Step 2b is reached when the Regulatory Members of the Assembly further endorse the draft Guideline. Step 3: Regulatory consultation and discussion Step 3 occurs in three distinct stages: regulatory consultation, discussion, and finalisation of the Step 3 Expert Draft Guideline. Stage I - Regional regulatory consultation: The Guideline embodying the scientific consensus leaves the ICH process and becomes the subject of normal wide-ranging regulatory consultation in the ICH regions. Regulatory authorities and industry associations in other regions may also comment on the draft consultation documents by providing their comments to the ICH Secretariat. Stage II - Discussion of regional consultation comments: After obtaining all comments from the consultation process, the EWG works to address the comments received and reach consensus on what is called the Step 3 Experts Draft Guideline. Stage III - Finalisation of Step 3 Experts Draft Guideline: If, after due consideration of the consultation results by the WG, consensus is reached amongst the experts on a revised version of the Step 2b draft Guideline, the Step 3 Expert Draft Guideline is signed by the experts of the ICH Regulatory Members. The Step 3 Expert Draft Guideline with regulatory EWG signatures is submitted to the Regulatory Members of the Assembly to request adoption at Step 4 of the ICH process. Step 4: Adoption of an ICH Harmonised Guideline Step 4 is reached when the Regulatory Members of the Assembly agree that there is sufficient scientific consensus on the draft Guideline and adopt the ICH Harmonised Guideline. 
Step 5: Implementation The ICH Harmonised Guideline moves immediately to the final step of the process that is the regulatory implementation. This step is carried out according to the same national/regional procedures that apply to other regional regulatory guidelines and requirements in the ICH regions. Information on the regulatory action taken and implementation dates are reported back to the Assembly and published by the ICH Secretariat on the ICH website. Work products Guidelines The ICH topics are divided into four categories and ICH topic codes are assigned according to these categories: Q : Quality Guidelines S : Safety Guidelines E : Efficacy Guidelines M : Multidisciplinary Guidelines ICH Guidelines are not mandatory for anybody per se but the strength of the ICH process lies in the commitment for implementation by ICH Regulatory Members using appropriate national/regional tools. MedDRA MedDRA is a rich and highly specific standardised medical terminology developed by ICH to facilitate sharing of regulatory information internationally for medical products used by humans. It is used for registration, documentation and safety monitoring of medical products both before and after a product has been authorised for sale. Products covered by the scope of MedDRA include pharmaceuticals, vaccines and drug-device combination products. See also ANVISA, Brazil Australia New Zealand Therapeutic Products Authority BIO CIOMS Guidelines Clinical study report Clinical trial Common Technical Document Council for International Organizations of Medical Sciences EFPIA FDA, US Good clinical practice (GCP) Health Canada, Canada HSA, Singapore IFPMA – & – International Pharmaceutical Federation JPMA MFDS, Republic of Korea MHLW, Japan National pharmaceuticals policy Pharmaceutical policy Pharmacopoeia PhRMA PMDA, Japan Regulation of therapeutic goods Swissmedic, Switzerland TFDA, Taiwan Uppsala Monitoring Centre Notes External links ICH website Analysis: New ICH M2 Requirements into eCTD NMV (=RPS) ANVISA, Brazil BIO EC, Europe EFPIA FDA, US Health Canada, Canada HSA, Singapore IGBA JPMA MedDRA website MFDS, Republic of Korea MHLW/PMDA, Japan PhRMA Swissmedic, Switzerland TFDA, Chinese Taipei WSMI Clinical research Pharmaceuticals policy Drug safety Life sciences industry International standards |
No, this text is not related with defense topics | A football is a ball inflated with air that is used to play one of the various sports known as football. In these games, with some exceptions, goals or points are scored only when the ball enters one of two designated goal-scoring areas; football games involve the two teams each trying to move the ball in opposite directions along the field of play. The first balls were made of natural materials, such as an inflated pig bladder, later put inside a leather cover, which has given rise to the American slang-term "pigskin". Modern balls are designed by teams of engineers to exacting specifications, with rubber or plastic bladders, and often with plastic covers. Various leagues and games use different balls, though they all have one of the following basic shapes: a sphere: used in association football and Gaelic football a prolate spheroid (elongated sphere) either with rounded ends: used in the rugby codes and Australian football or with more pointed ends: used in gridiron football The precise shape and construction of footballs is typically specified as part of the rules and regulations. The oldest football still in existence, which is thought to have been made circa 1550, was discovered in the roof of Stirling Castle, Scotland, in 1981. The ball is made of leather (possibly from a deer) and a pig's bladder. It has a diameter of between , weighs and is currently on display at the Smith Art Gallery and Museum in Stirling. Association football Law 2 of the game specifies that the ball is an air-filled sphere with a circumference of , a weight of , inflated to a pressure of 0.6 to 1.1 atmospheres () "at sea level", and covered in leather or "other suitable material". The weight specified for a ball is the dry weight, as older balls often became significantly heavier in the course of a match played in wet weather. There are a number of different types of football balls depending on the match and turf including: training footballs, match footballs, professional match footballs, beach footballs, street footballs, indoor footballs, turf balls, futsal footballs and mini/skills footballs. Most modern Association footballs are stitched from 32 panels of waterproofed leather or plastic: 12 regular pentagons and 20 regular hexagons. The 32-panel configuration is the spherical polyhedron corresponding to the truncated icosahedron; it is spherical because the faces bulge from the pressure of the air inside. The first 32-panel ball was marketed by Select in the 1950s in Denmark. This configuration became common throughout Continental Europe in the 1960s, and was publicised worldwide by the Adidas Telstar, the official ball of the 1970 World Cup. This design in often referenced when describing the truncated icosahedron Archimedean solid, carbon buckyballs, or the root structure of geodesic domes. Gridiron football In the United States and Canada, the term football usually refers to a ball made of cow hide leather, which is required in professional and collegiate football. Footballs used in recreation and in organized youth leagues may be made of rubber or plastic materials (the high school football rulebooks still allow the inexpensive all-rubber footballs, though they are less common than leather). Since 1941, Horween Leather Company has been the exclusive supplier of leather for National Football League footballs. The arrangement was established by Arnold Horween, who had played and coached in the NFL. 
Horween Leather Company also supplies leather to Spalding, supplier of balls to the Arena Football League. Leather panels are typically tanned to a natural brown color, which is usually required in professional leagues and collegiate play. At least one manufacturer uses leather that has been tanned to provide a "tacky" grip in dry or wet conditions. Historically, white footballs have been used in games played at night so that the ball can be seen more easily however, improved artificial lighting conditions have made this no longer necessary. At most levels of play (but not, notably, the NFL), white stripes are painted on each end of the ball, halfway around the circumference, to improve nighttime visibility and also to differentiate the college football from the pro football. However, the NFL once explored the usage of white-striped footballs – in Super Bowl VIII. In the CFL the stripes traverse the entire circumference of the ball. The UFL used a ball with lime-green stripes. The XFL of 2001 used a novel color pattern, a black ball with red curved lines in lieu of stripes, for its footballs; this design was redone in a tan and navy color scheme for the Arena Football League in 2003. A ball with red, white and blue panels was introduced in the American Indoor Football League in 2005 and used by its successors, as well as the Ultimate Indoor Football League of the early 2010s and the Can-Am Indoor Football League during its lone season in 2017. The XFL of 2020 uses standard brown but with X markings on each point instead of stripes. Footballs used in gridiron-style games have prominent points on both ends. The shape is generally credited to official Hugh "Shorty" Ray, who introduced the new ball in 1934 as a way to make the forward pass more effective. Australian rules football The football used in Australian football is similar to a rugby ball but generally slightly smaller and more rounded at the ends, but more elongated in overall appearance, being longer by comparison with its width than a rugby ball. A regulation football is in circumference, and transverse circumference, and inflated to a pressure of . In the AFL, the balls are red for day matches and yellow for night matches. The first games of Australian football were played with a round ball, because balls of that shape were more readily available. In 1860, Australian football pioneer Tom Wills argued that the oval rugby ball travelled further in the air and made for a more exciting game. It became customary in Australian football by the 1870s. The Australian football ball was invented by T. W. Sherrin in 1880, after he was given a misshapen rugby ball to fix. Sherrin designed the ball with indented rather than pointy ends to give the ball a better bounce. Australian football ball brands include Burley, Ross Faulkner, and Sherrin (the brand used by the Australian Football League). Gaelic football The game is played with a round leather football made of 18 stitched leather panels, similar in appearance to a traditional volleyball (but larger), with a circumference of , weighing between when dry. It may be kicked or hand passed. A hand pass is not a punch but rather a strike of the ball with the side of the closed fist, using the knuckle of the thumb. Rugby football Until 1870, rugby was played with a near spherical ball with an inner-tube made of a pig's bladder. In 1870 Richard Lindon and Bernardo Solano started making balls for Rugby school out of hand stitched, four-panel, leather casings and pigs' bladders. 
The rugby ball's distinctive shape is supposedly due to the pig's bladder, although early balls were more plum-shape than oval. The balls varied in size in the beginning depending upon how large the pig's bladder was. Because of the pliability of rubber the shape gradually changed from a sphere to an egg. In 1892 the RFU endorsed ovalness as the compulsory shape. The gradual flattening of the ball continued over the years. The introduction of synthetic footballs over the traditional leather balls, in both rugby codes, was originally governed by weather conditions. If the playing surface was wet, the synthetic ball was used, because it wouldn't absorb water and become heavy. Eventually, the leather balls were phased out completely. Rugby league Rugby league is played with a prolate spheroid shaped football which is inflated with air. A referee will stop play immediately if the ball does not meet the requirements of size and shape. Traditionally made of brown leather, modern footballs are synthetic and manufactured in a variety of colours and patterns. Senior competitions should use light-coloured balls to allow spectators to see the ball more easily. The football used in rugby league is known as "international size" or "size 5" and is approximately long and in circumference at its widest point. Smaller-sized balls are used for junior versions of the game, such as "Mini" and "Mod". A full size ball weighs between . Rugby league footballs are slightly more pointed than rugby union footballs and larger than American footballs. The Australasian National Rugby League and Super League use balls made by Steeden. Steeden is also sometimes used in Australia as a noun to describe the ball itself. Rugby union The ball used in rugby union, usually referred to as a rugby ball, is a prolate spheroid essentially elliptical in profile. Traditionally made of brown leather, modern footballs are manufactured in a variety of colours and patterns. A regulation football is long and in circumference at its widest point. It weighs and is inflated to . In 1980, leather-encased balls, which were prone to water-logging, were replaced with balls encased in synthetic waterproof materials. The Gilbert Synergie was the match ball of the 2007 Rugby World Cup. See also List of inflatable manufactured goods Bibliography Angela Royston, 2005. How Is a Soccer Ball Made? Heinemann. . Footnotes External links Official FIFA Football BALL Website Ki-o-Rahi history and rules Paper model truncated icosahedron (association football ball) Popular Mechanics article on American football manufacturing process Association football equipment Rugby league equipment Rugby union equipment Association football terminology Ball Balls Inflatable manufactured goods |
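The 32-panel construction described in the association football entry above (12 regular pentagons and 20 regular hexagons) is the truncated icosahedron, and its seam and corner counts follow directly from Euler's formula V - E + F = 2. A short sketch of that arithmetic, using only the stated face counts as input:

```python
# Faces of the classic 32-panel ball: 12 pentagons and 20 hexagons.
pentagons, hexagons = 12, 20
faces = pentagons + hexagons                 # F = 32 panels
edges = (5 * pentagons + 6 * hexagons) // 2  # each seam is shared by two panels, so E = 90
vertices = 2 - faces + edges                 # Euler's formula V - E + F = 2 gives V = 60
print(faces, edges, vertices)                # 32 90 60
assert vertices - edges + faces == 2
```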
No, this text is not related with defense topics | A setup, in storytelling, is the introduction in a plot of an element that will be useful to the story only later, when the payoff comes. Most of the important elements that are part of the setup are usually introduced during the exposition, with which it is sometimes confused. But there can be a setup within a specific scene late in the story, with a character, object or concept appearing only to be used paragraphs or seconds later. Storytelling |
No, this text is not related with defense topics | A lineworker (lineman (American English), linesman (British English), powerline technician (PLT), or powerline worker) is a tradesperson who constructs and maintains electric power transmission, telecommunications lines (cable, internet and phone) and distribution lines. A lineworker generally does outdoor installation and maintenance jobs. Those who install and maintain electrical wiring inside buildings are electricians. History The occupation had begun with the widespread use of the telegraph in the 1840s. Telegraph lines could be strung on trees, but wooden poles were quickly adopted as the preferred method. The term 'lineman' was used for those who set wooden poles and strung wire. The term continued in use with the invention of the telephone in the 1870s and the beginning of electrification in the 1890s. This new electrical power work was more hazardous than telegraph or telephone work because of the risk of electrocution. Between the 1890s and the 1930s, line work was considered one of the most hazardous jobs. This led to the formation of labor organizations to represent the workers and advocate for their safety. This also led to the establishment of apprenticeship programs and the establishment of more stringent safety standards, starting in the late 1930s. The union movement in the United States was led by lineman Henry Miller, who in 1890 was elected president of the Electrical Wiremen and Linemen's Union, No. 5221 of the American Federation of Labor. United States The rural electrification drive during the New Deal led to a wide expansion in the number of jobs in the electric power industry. Many power linemen during that period traveled around the country following jobs as they became available in tower construction, substation construction, and wire stringing. They often lived in temporary camps set up near the project they were working on, or in boarding houses if the work was in a town or city, and relocating every few weeks or months. The occupation was lucrative at the time, but the hazards and the extensive travel limited its appeal. A brief drive to electrify some railroads on the East Coast of the US-led to the development of specialization of linemen who installed and maintained catenary overhead lines. Growth in this branch of linework declined after most railroads favored diesel over electric engines for replacement of steam engines. The occupation evolved during the 1940s and 1950s with the expansion of residential electrification. This led to an increase in the number of linemen needed to maintain power distribution circuits and provide emergency repairs. Maintenance linemen mostly stayed in one place, although sometimes they were called to travel to assist repairs. During the 1950s, some electric lines began to be installed in tunnels, expanding the scope of the work. Duties Power linemen work on electrically energized (live) and de-energized (dead) power lines. They may perform several tasks associated with power lines, including installation or replacement of distribution equipment such as capacitor banks, distribution transformers on poles, insulators and fuses. These duties include the use of ropes, knots, and lifting equipment. These tasks may have to be performed with primitive manual tools where accessibility is limited. Such conditions are common in rural or mountainous areas that are inaccessible to trucks. High voltage transmission lines can be worked live with proper setups. 
The lineman must be isolated from the ground. The lineman wears special conductive clothing that is connected to the live power line, at which point the line and the lineman are at the same potential, allowing the lineman to handle the wire. The lineman may still be electrocuted if he completes an electrical circuit, for example by handling both ends of a broken conductor. Such work is often done by helicopter by specially trained linemen. Isolated line work is only used for transmission-level voltages and sometimes for the higher distribution voltages. Live wire work is common on low voltage distribution systems within the UK and Australia as all linesmen are trained to work 'live'. Live wire work on high voltage distribution systems within the UK and Australia is carried out by specialist teams. Work on outdoor tower construction or wire installation is not performed exclusively by linemen. A crew of linemen will include several groundmen and apprentices. Groundmen assist with on-the-ground tasks needed to support the linemen, but may work above ground or on electrical circuits. Training Becoming a lineworker usually involves starting as an apprentice and a four-year training program before becoming a "Journey Lineworker". Apprentice linemen are trained in all types of work from operating equipment and climbing to proper techniques and safety standards. Schools throughout the United States offer a pre-apprentice lineman training program such as Southeast Lineman Training Center and Northwest Lineman College. Safety Lineworkers, especially those who deal with live electrical apparatus, use personal protective equipment (PPE) as protection against inadvertent contact. This includes rubber gloves, rubber sleeves, bucket liners, and protective blankets. When working with energized power lines, linemen must use protection to eliminate any contact with the energized line. The requirements for PPEs and associated permissible voltage depends on applicable regulations in the jurisdiction as well as company policy. Voltages higher than those that can be worked using gloves are worked with special sticks known as hot-line tools or hot sticks, with which power lines can be safely handled from a distance. Linemen must also wear special rubber insulating gear when working with live wires to protect against any accidental contact with the wire. The buckets linemen sometimes work from are also insulated with fiberglass. De-energized power lines can be hazardous as they can still be energized from another source such as interconnection or interaction with another circuit even when they appear to be shut off. For example, a higher-voltage distribution level circuit may feed several lower-voltage distribution circuits through transformers. If the higher voltage circuit is de-energized, but if lower-voltage circuits connected remain energized, the higher voltage circuit will remain energized. Another problem can arise when de-energized wires become energized through electrostatic or electromagnetic induction from energized wires nearby. All live line work PPE must be kept clean from contaminants and regularly tested for di-electric integrity. This is done by the use of high voltage electrical testing equipment. Other general items of PPE such as helmets are usually replaced at regular intervals. See also Overhead cable References External links Thomas M. Shoemaker and James E. Mack. (2002) The Lineman's and Cableman's Handbook. Edwin B. Kurtz. . 
"How Linemen Handle Hot Wires And Stay Alive" , July 1949, Popular Science basics explained on lineman safety for the general public Inter-Utility Overhead Trainers Association Construction trades workers Crafts Electric power Skills Technicians |
No, this text is not related with defense topics | The photoplethysmogram (PPG) measurement made at a peripheral site, such as the finger, ear or forehead represents the volume of blood in the vessel at the site of measurement. The PPG signal consists of pulses that reflect the change in vascular blood volume with each cardiac beat. Beat-to-beat fluctuations, known as photoplethysmogram variability (PPGV) are found in the signal baseline and amplitude which reflects various physiological influences such as respiration and regulation of vascular tone by the sympathetic nervous system. Frequency domain (spectral) features The beat-to-beat variation of the PPG in time domain can be represented in the frequency domain by means of signal transformation methods, such as fast Fourier transform or autoregressive model. The spectrum can be divided into two main bands, that is, low frequency (LF) band ranging from 0.04 to 0.15 Hz and high frequency (HF) band between 0.15 and 0.6 Hz. The LF band can be subdivided into a mid-frequency (MF) band from 0.09 Hz to 0.15 Hz. The integration of the power spectral density over the frequency range will give the spectral power. Low-frequency oscillation in the PPG is found to reflect the sympathetic control over the peripheral circulation, whilst the high frequency component is related to the mechanical consequence of respiration on venous return. Applications The PPGV was found to be useful in detecting blood loss by observing the spectral features of the PPGV. LF power, together with other features derived from the PPG waveform, was used to classify patients into different ranges of systemic vascular resistance, which may be used as an indicator of critical illness. It has been proposed that the PPGV can also be used as an indicator of peripheral circulatory abnormalities in sepsis patients. The application of PPGV as an indicator of sepsis has been extended by using spectral analysis of the PPGV to classify patients into different severity of sepsis. See also Heart rate variability References Cardiology |
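The LF and HF powers described in the photoplethysmogram entry are obtained by estimating the power spectral density of the beat-to-beat series and integrating it over each band. A minimal sketch with NumPy and SciPy follows; the band edges (LF 0.04 to 0.15 Hz, HF 0.15 to 0.6 Hz) come from the text, while the sampling rate and the synthetic signal standing in for real PPG data are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 4.0                       # evenly resampled beat-to-beat series, Hz (assumed)
t = np.arange(0, 300, 1 / fs)  # five minutes of data
# Synthetic variability series: a 0.10 Hz vasomotor-like component, a 0.25 Hz
# respiratory-like component and noise, standing in for a real PPG recording.
rng = np.random.default_rng(0)
x = (0.5 * np.sin(2 * np.pi * 0.10 * t)
     + 0.3 * np.sin(2 * np.pi * 0.25 * t)
     + 0.1 * rng.standard_normal(t.size))

f, psd = welch(x, fs=fs, nperseg=256)  # power spectral density estimate

def band_power(freqs, density, lo, hi):
    """Integrate the PSD over [lo, hi) Hz to obtain the band's spectral power."""
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(density[mask], freqs[mask])

lf = band_power(f, psd, 0.04, 0.15)  # low-frequency power (sympathetic influences)
hf = band_power(f, psd, 0.15, 0.60)  # high-frequency power (respiratory influence)
print(f"LF = {lf:.4f}, HF = {hf:.4f}, LF/HF = {lf / hf:.2f}")
```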
No, this text is not related with defense topics | Mathematical chemistry is the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. Mathematical chemistry has also sometimes been called computer chemistry, but should not be confused with computational chemistry. Major areas of research in mathematical chemistry include chemical graph theory, which deals with topology such as the mathematical study of isomerism and the development of topological descriptors or indices which find application in quantitative structure-property relationships; and chemical aspects of group theory, which finds applications in stereochemistry and quantum chemistry. Another important area is molecular knot theory and circuit topology that describe the topology of folded linear molecules such as proteins and Nucleic Acids. The history of the approach may be traced back to the 19th century. Georg Helm published a treatise titled "The Principles of Mathematical Chemistry: The Energetics of Chemical Phenomena" in 1894. Some of the more contemporary periodical publications specializing in the field are MATCH Communications in Mathematical and in Computer Chemistry, first published in 1975, and the Journal of Mathematical Chemistry, first published in 1987. In 1986 a series of annual conferences MATH/CHEM/COMP taking place in Dubrovnik was initiated by the late Ante Graovac. The basic models for mathematical chemistry are molecular graph and topological index. In 2005 the International Academy of Mathematical Chemistry (IAMC) was founded in Dubrovnik (Croatia) by Milan Randić. The Academy has 82 members (2009) from all over the world, including six scientists awarded with a Nobel Prize. See also Bibliography Molecular Descriptors for Chemoinformatics, by R. Todeschini and V. Consonni, Wiley-VCH, Weinheim, 2009. Mathematical Chemistry Series, by D. Bonchev, D. H. Rouvray (Eds.), Gordon and Breach Science Publisher, Amsterdam, 2000. Chemical Graph Theory, by N. Trinajstic, CRC Press, Boca Raton, 1992. Mathematical Concepts in Organic Chemistry, by I. Gutman, O. E. Polansky, Springer-Verlag, Berlin, 1986. Chemical Applications of Topology and Graph Theory, ed. by R. B. King, Elsevier, 1983. Topological approach to the chemistry of conjugated molecules, by A. Graovac, I. Gutman, and N. Trinajstic, Lecture Notes in Chemistry, no.4, Springer-Verlag, Berlin, 1977. Notes References N. Trinajstić, I. Gutman, Mathematical Chemistry, Croatica Chemica Acta, 75(2002), pp. 329–356. A. T. Balaban, Reflections about Mathematical Chemistry, Foundations of Chemistry, 7(2005), pp. 289–306. G. Restrepo, J. L. Villaveces, Mathematical Thinking in Chemistry, HYLE, 18(2012), pp. 3–22. Advances in Mathematical Chemistry and Applications. Volume 2. Basak S. C., Restrepo G., Villaveces J. L. (Bentham Science eBooks, 2015) External links Journal of Mathematical Chemistry MATCH Communications in Mathematical and in Computer Chemistry International Academy of Mathematical Chemistry Chemistry Theoretical chemistry Application-specific graphs Cheminformatics |
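Topological indices of the kind mentioned in the mathematical chemistry entry reduce a molecular graph to a single number; the Wiener index, the sum of shortest-path distances over all atom pairs of the hydrogen-suppressed graph, is the classic example. A small self-contained sketch follows; the choice of n-butane versus isobutane is purely illustrative.

```python
from collections import deque
from itertools import combinations

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs of a
    connected, hydrogen-suppressed molecular graph given as an adjacency dict."""
    def bfs_distances(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist
    return sum(bfs_distances(u)[v] for u, v in combinations(adj, 2))

# Carbon skeletons as adjacency lists (vertices are carbon atoms, edges are C-C bonds).
n_butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # straight C-C-C-C chain
isobutane = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}  # branched isomer

print(wiener_index(n_butane))   # 10
print(wiener_index(isobutane))  # 9, so the index separates the two isomers
```

The two isomers share a molecular formula but receive different index values, which is exactly the kind of discrimination that quantitative structure-property relationships rely on.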
No, this text is not related with defense topics | In Islam, nurses provide healthcare services to patients, families and communities as a manifestation of love for Allah and Muhammad. The nursing profession is not new to Islam. Islamic traditions include sympathy for and responsibility toward those in need. This perspective had emerged during the development of Islam as a religion, culture, and civilization. Ethos of health care service In Islamic traditions, caring is the manifestation of love for Allah and Muhammad. Caring in Islam, however, is more than the act of empathy; instead, it consists of being responsible for, sensitive to, and concerned with those in need, namely the weak, the suffering and the outcasts of society. This act of caring is further divided into three principles: intention, thought, and action. Intention and thought refer to who, what, where, when and why to care, whereas action is related to the knowledge necessary to be able to care. In short, health care is deemed a service to the patients and to Allah, as opposed to other professions that are commercial. This ethos was the fundamental motivating factor for the majority of the doctors and nurses in the history of Islam. Approach to health care service Another aspect of Islamic health care service that distinguishes it from the contemporary Western health care industry is the holistic approach to health and wellbeing taken. This holistic approach consists of both treating the organic basis of the ailments and providing spiritual support for the patient. This spiritual component comes in the form of Tawheed (Oneness of Allah), a dimension lacking in current Western models of nursing and, thus, could pose as a challenge for application of this model of nursing to Muslim patients as it does not meet their holistic needs. First Muslim nurse The first professional nurse in the history of Islam is a woman named Rufaidah bint Sa’ad, also known as Rufaida Al-Aslamia or Rufayda al-Aslamiyyah, who was born in 620 (est.) and lived at the time of Muhammed. She hailed from the Bani Aslam tribe in Medina and was among the first people in Medina to accept Islam. Rufaidah received her training and knowledge in medicine from her father, a physician, whom she assisted regularly. At the time when Muhammed's early followers were engaged in war, she led a group of volunteer nurses to the battlefield to treat and care for the injured and dying. After the Muslim state was established in Medina, she was given permission by Muhammed to set up a tent outside the mosque to treat the ill and to train more Muslim women and girls as nurses. Rufaidah is described as a woman possessing the qualities of an ideal nurse: compassionate, empathetic, a good leader and a great teacher. She is said to have provided health education to the community, helped the disadvantaged (like orphans and the disabled), advocated for preventative care, and even to have drafted the world’s first code of nursing ethics . Nursing in hospitals In hospitals built in the Medieval Muslim society male nurses tended to male patients and female nurses to female patients. The hospital in Al-Qayrawan (Kairouan in English) was especially unique among Muslim hospitals for several reasons. Built in 830 by the order of the Prince Ziyadat Allah I of Ifriqiya (817–838), the Al-Dimnah Hospital, constructed in the Dimnah region close to the great mosque of Al Qayrawan, was quite ahead of its time. 
It had the innovation of having a waiting area for visitors, not to mention that the first official female nurses were hired from Sudan to work in this hospital. Moreover, aside from regular physicians working there, a group of religious imams who also practiced medicine, called Fugaha al-Badan, provided service as well, likely by tending the patients’ spiritual needs. References Nursing Islam and science History of nursing |
No, this text is not related with defense topics | Economic integration is the unification of economic policies between different states, through the partial or full abolition of tariff and non-tariff restrictions on trade. The trade-stimulation effects intended by means of economic integration are part of the contemporary economic Theory of the Second Best: where, in theory, the best option is free trade, with free competition and no trade barriers whatsoever. Free trade is treated as an idealistic option, and although realized within certain developed states, economic integration has been thought of as the "second best" option for global trade where barriers to full free trade exist. Economic integration is meant in turn to lead to lower prices for distributors and consumers with the goal of increasing the level of welfare, while leading to an increase of economic productivity of the states. Objective There are economic as well as political reasons why nations pursue economic integration. The economic rationale for the increase of trade between member states of economic unions rests on the supposed productivity gains from integration. This is one of the reasons for the development of economic integration on a global scale, a phenomenon now realized in continental economic blocs such as ASEAN, NAFTA, SACN, the European Union, AfCFTA and the Eurasian Economic Community; and proposed for intercontinental economic blocks, such as the Comprehensive Economic Partnership for East Asia and the Transatlantic Free Trade Area. Comparative advantage refers to the ability of a person or a country to produce a particular good or service at a lower marginal and opportunity cost over another. Comparative advantage was first described by David Ricardo who explained it in his 1817 book On the Principles of Political Economy and Taxation in an example involving England and Portugal. In Portugal it is possible to produce both wine and cloth with less labour than it would take to produce the same quantities in England. However the relative costs of producing those two goods are different in the two countries. In England it is very hard to produce wine, and only moderately difficult to produce cloth. In Portugal both are easy to produce. Therefore, while it is cheaper to produce cloth in Portugal than England, it is cheaper still for Portugal to produce excess wine, and trade that for English cloth. Conversely England benefits from this trade because its cost for producing cloth has not changed but it can now get wine at a lower price, closer to the cost of cloth. The conclusion drawn is that each country can gain by specializing in the good where it has comparative advantage, and trading that good for the other. Economies of scale refers to the cost advantages that an enterprise obtains due to expansion. There are factors that cause a producer's average cost per unit to fall as the scale of output is increased. Economies of scale is a long run concept and refers to reductions in unit cost as the size of a facility and the usage levels of other inputs increase. Economies of scale is also a justification for economic integration, since some economies of scale may require a larger market than is possible within a particular country — for example, it would not be efficient for Liechtenstein to have its own car maker, if they would only sell to their local market. A lone car maker may be profitable, however, if they export cars to global markets in addition to selling to the local market. 
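Ricardo's England and Portugal illustration above becomes concrete once opportunity costs are computed. The labour figures below (Portugal: 80 hours per unit of wine and 90 per unit of cloth; England: 120 and 100) are the ones traditionally attached to the example, but treat them here as illustrative numbers rather than data supplied by this text.

```python
# Labour hours needed to produce one unit of each good (illustrative figures).
labour = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England": {"wine": 120, "cloth": 100},
}

for country, hours in labour.items():
    # Opportunity cost of one unit of wine, expressed in units of cloth forgone.
    oc_wine = hours["wine"] / hours["cloth"]
    print(f"{country}: 1 wine costs {oc_wine:.2f} cloth")

# Portugal: 1 wine costs 0.89 cloth; England: 1 wine costs 1.20 cloth.
# Portugal gives up less cloth per unit of wine (and England less wine per unit of
# cloth), so each gains by specialising and trading at any ratio between roughly
# 0.89 and 1.20 units of cloth per unit of wine.
```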
Besides these economic reasons, the primary reasons why economic integration has been pursued in practise are largely political. The Zollverein or German Customs Union of 1867 paved the way for partial German unification under Prussian leadership in 1871. "Imperial free trade" was (unsuccessfully) proposed in the late 19th century to strengthen the loosening ties within the British Empire. The European Economic Community was created to integrate France and Germany's economies to the point that they would find it impossible to go to war with each other. Success factors Among the requirements for successful development of economic integration are "permanency" in its evolution (a gradual expansion and over time a higher degree of economic/political unification); "a formula for sharing joint revenues" (customs duties, licensing etc.) between member states (e.g., per capita); "a process for adopting decisions" both economically and politically; and "a will to make concessions" between developed and developing states of the union. A "coherence" policy is a must for the permanent development of economic unions, being also a property of the economic integration process. Historically the success of the European Coal and Steel Community opened a way for the formation of the European Economic Community (EEC) which involved much more than just the two sectors in the ECSC. So a coherence policy was implemented to use a different speed of economic unification (coherence) applied both to economic sectors and economic policies. Implementation of the coherence principle in adjusting economic policies in the member states of economic block causes economic integration effects. Economic theory The framework of the theory of economic integration was laid out by Jacob Viner (1950) who defined the trade creation and trade diversion effects, the terms introduced for the change of interregional flow of goods caused by changes in customs tariffs due to the creation of an economic union. He considered trade flows between two states prior and after their unification, and compared them with the rest of the world. His findings became and still are the foundation of the theory of economic integration. The next attempts to enlarge the static analysis towards three states+world (Lipsey, et al.) were not as successful. The basics of the theory were summarized by the Hungarian economist Béla Balassa in the 1960s. As economic integration increases, the barriers of trade between markets diminish. Balassa believed that supranational common markets, with their free movement of economic factors across national borders, naturally generate demand for further integration, not only economically (via monetary unions) but also politically—and, thus, that economic communities naturally evolve into political unions over time. The dynamic part of international economic integration theory, such as the dynamics of trade creation and trade diversion effects, the Pareto efficiency of factors (labor, capital) and value added, mathematically was introduced by Ravshanbek Dalimov. This provided an interdisciplinary approach to the previously static theory of international economic integration, showing what effects take place due to economic integration, as well as enabling the results of the non-linear sciences to be applied to the dynamics of international economic integration. 
Equations describing: enforced oscillations of a pendulum with friction; predator-prey oscillations; heat and/or gas spatial dynamics (the heat equation and Navier-Stokes equations) were successfully applied towards: the dynamics of GDP; price-output dynamics and the dynamic matrix of the outputs of an economy; regional and inter-regional migration of labor income and value added, and to trade creation and trade diversion effects (inter-regional output flows). The straightforward conclusion from the findings is that one may use the accumulated knowledge of the exact and natural sciences (physics, biodynamics, and chemical kinetics) and apply them towards the analysis and forecasting of economic dynamics. Dynamic analysis has started with a new definition of gross domestic product (GDP), as a difference between aggregate revenues of sectors and investment (a modification of the value added definition of the GDP). It was possible to analytically prove that all the states gain from economic unification, with larger states receiving less growth of GDP and productivity, and vice versa concerning the benefit to lesser states. Although this fact has been empirically known for decades, now it was also shown as being mathematically correct. A qualitative finding of the dynamic method is the similarity of a coherence policy of economic integration and a mixture of previously separate liquids in a retort: they finally get one colour and become one liquid. Economic space (tax, insurance and financial policies, customs tariffs, etc.) all finally become the same along with the stages of economic integration. Another important finding is a direct link between the dynamics of macro- and micro-economic parameters such as the evolution of industrial clusters and the GDP's temporal and spatial dynamics. Specifically, the dynamic approach analytically described the main features of the theory of competition summarized by Michael Porter, stating that industrial clusters evolve from initial entities gradually expanding within their geographic proximity. It was analytically found that the geographic expansion of industrial clusters goes along with raising their productivity and technological innovation. Domestic savings rates of the member states were observed to strive to one magnitude, and the dynamic method of forecasting this phenomenon has also been developed. Overall dynamic picture of economic integration has been found to look quite similar to unification of previously separate basins after opening intraboundary sluices, where instead of water the value added (revenues) of entities of member states interact. Stages The degree of economic integration can be categorized into seven stages: Preferential trading area Free-trade area Customs union Single market Economic union Economic and monetary union Complete economic integration These differ in the degree of unification of economic policies, with the highest one being the completed economic integration of the states, which would most likely involve political integration as well. A "free trade area" (FTA) is formed when at least two states partially or fully abolish custom tariffs on their inner border. To exclude regional exploitation of zero tariffs within the FTA there is a rule of certificate of origin for the goods originating from the territory of a member state of an FTA. A "customs union" introduces unified tariffs on the exterior borders of the union (CET, common external tariffs). A "monetary union" introduces a shared currency. 
A "common market" add to a FTA the free movement of services, capital and labor. An "economic union" combines customs union with a common market. A "fiscal union" introduces a shared fiscal and budgetary policy. In order to be successful the more advanced integration steps are typically accompanied by unification of economic policies (tax, social welfare benefits, etc.), reductions in the rest of the trade barriers, introduction of supranational bodies, and gradual moves towards the final stage, a "political union". [partial] — [substantial] — [none or not applicable] Global economic integration Globalization refers to the increasing global relationships of culture, people, and economic activity. With economics crisis started in 2008 the global economy has started to realize quite a few initiatives on regional level. It is unification between the EU and US, expansion of Eurasian Economic Community (now Eurasia Economic Union) by Armenia and Kyrgyzstan. It is also the creation of BRICS with the bank of its members, and notably high motivation of creating competitive economic structures within Shanghai Organization, also creating the bank with many multi-currency instruments applied. Engine for such fast and dramatic changes was insufficiency of global capital, while one has to mention obvious large political discrepancies witnessed in 2014–2015. Global economy has to overcome this by easing the moves of capital and labor, while this is impossible unless the states will find common point of views in resolving cultural and politic differences which pushed it so far as of now. Etymology In economics the word integration was first employed in industrial organisation to refer to combinations of business firms through economic agreements, cartels, concerns, trusts, and mergers—horizontal integration referring to combinations of competitors, vertical integration to combinations of suppliers with customers. In the current sense of combining separate economies into larger economic regions, the use of the word integration can be traced to the 1930s and 1940s. Fritz Machlup credits Eli Heckscher, Herbert Gaedicke and Gert von Eyern as the first users of the term economic integration in its current sense. According to Machlup, such usage first appears in the 1935 English translation of Hecksher's 1931 book Merkantilismen (Mercantilism in English), and independently in Gaedicke's and von Eyern's 1933 two-volume study Die produktionswirtschaftliche Integration Europas: Eine Untersuchung über die Aussenhandelsverflechtung der europäischen Länder. See also European integration Financial integration List of trade blocs (from PTA to EMU) List of international trade topics Middle East economic integration Regional integration Social integration Trade pact Notes Bibliography Balassa, В. Trade Creation and Trade Diversion in the European Common Market. The Economic Journal, vol. 77, 1967, pp. 1–21. Dalimov R.T. Modelling international economic integration: an oscillation theory approach. Trafford, Victoria 2008, 234 p. Dalimov R.T. Dynamics of international economic integration: non-linear analysis. Lambert Academic Publishing, 2011, 276 p.; , . Johnson, H. An Economic Theory of Protection, Tariff Bargaining and the Formation of Customs Unions. Journal of Political Economy, 1965, vol. 73, pp. 256–283. Johnson, H. Optimal Trade Intervention in the Presence of Domestic Distortions, in Baldwin et al., Trade Growth and the Balance of Payments, Chicago, Rand McNally, 1965, pp. 3–34. Jovanovich, М. 
International Economic Integration. Limits and Prospects. Second edition, 1998, Routledge. Lipsey, R.G. The Theory of Customs Union: Trade Diversion and Welfare. Economica, 1957, vol. 24, рр.40-46. Меаdе, J.E. The Theory of Customs Union.” North Holland Publishing Company, 1956, pp. 29–43. Negishi, T. Customs Unions and the Theory of the Second Best. International Economic Review, 1969, vol. 10, pp. 391–398 Porter M. On Competition. Harvard Business School Press; 1998; 485 pgs. Riezman, R. A Theory of Customs Unions: The Three Country–Two Goods Case. Weltwirtschaftliches Archiv, 1979, vol. 115, pp. 701–715. Ruiz Estrada, M. Global Dimension of Regional Integration Model (GDRI-Model). Faculty of Economics and Administration, University of Malaya. FEA-Working Paper, № 2004-7 Tinbergen, J. International Economic Integration. Amsterdam: Elsevier, 1954. Tovias, A. The Theory of Economic Integration: Past and Future. 2d ECSA-World conference “Federalism, Subsidiarity and Democracy in the European Union”, Brussels, May 5–6, 1994, 10 p. Viner, J. The Customs Union Issue. Carnegie Endowment for International Peace, 1950, pp. 41–55. INTAL; https://web.archive.org/web/20100516012601/http://www.iadb.org/intal/index.asp?idioma=eng External links International trade |
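Viner's trade creation and trade diversion effects, which anchor the theory section of this entry, are easiest to see with a small price-and-tariff comparison. The numbers below are purely illustrative and are not taken from Viner or from the text above.

```python
# Home can source a good from domestic producers, a prospective union partner,
# or the lowest-cost world supplier. Prices are producer prices before tariffs.
DOMESTIC, PARTNER, WORLD = 120, 105, 100

def cheapest_source(tariff_on_partner, tariff_on_world):
    offers = {
        "domestic": DOMESTIC,
        "partner": PARTNER + tariff_on_partner,
        "world": WORLD + tariff_on_world,
    }
    return min(offers, key=offers.get)

print(cheapest_source(30, 30))  # before the union: 'domestic' (135 and 130 both exceed 120)
print(cheapest_source(0, 30))   # inside a customs union with the partner: 'partner'

# Replacing high-cost domestic output (120) with partner imports (105) is trade creation.
# Had the pre-union tariff been only 10, imports would already have come from the world
# supplier at 110; switching them to the partner after the union would be trade diversion,
# because the genuinely cheapest producer (100) loses the market to a dearer one (105).
```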
No, this text is not related with defense topics | The notion of making media mobile can be traced back to the “first time someone thought to write on a tablet that could be lifted and hauled – rather than on a cave wall, a cliff face, a monument that usually was stuck in place, more or less forever”. In his book Cellphone, Paul Levinson refers to mobile media as “the media-in-motion business.” Since their incarnation, mobile phones as a means of communication have been a focus of great fascination as well as debate. In the book, Studying Mobile Media: Cultural Technologies, Mobile Communication, and the iPhone, Gerard Goggin notes how the ability of portable voice communication to provide ceaseless contact complicates the relationship between the public and private spheres of society. Lee Humphrey's explains in her book that now, "more people in the world today have a mobile phone than have an Internet connection". The development of the portable telephone can be traced back to its use by the military in the late nineteenth-century. By the 1930s, police cars in several major U.S. cities were equipped with one-way mobile radios. In 1931, the Galvin Manufacturing Corporation designed a mass market two-way radio. This radio was named Motorola, which also became the new name for the company in 1947. In 1943, Motorola developed the first portable radiotelephone, the Walkie-Talkie, for use by the American forces during World War II. After the war, two-way radio technology was developed for civilian use. In 1946, AT&T and Southwestern Bell made available the first commercial mobile radiotelephone. This service allowed calls to be made from a fixed phone to a mobile one. "Many scholars have noted and praised the mobility of reading brought about the emergence of the book and the advent of early modern print culture". Along with the book, the transistor radio, the Walkman, and the Kodak camera are also bearers of portable information and early examples of mobile media consumption. With the rise of the internet, many forms of media can be considered mobile. Forms of mobile media, such as podcasts and even social networking services, are some of the few that can be downloaded, used or even streamed over the internet. According to Jordan Frith and Didem Ozkul in their book, Mobile Media Beyond Mobile Phones, they believe that mobile media has moved beyond our past knowledge of mobile media. "With this issue, we realized that not only has our understanding of mobile media expanded beyond the mobile phone, but our thinking of the 'mobile in front of media has evolved". From The Mobile Reader, Jason Farman and other authors describe this expansion of mobile media. "The cultural shift that happened in conjunction with the printing press can be mapped onto our uses of mobile media (especially location-aware technologies): the cultural imaginaries of space became simultaneously about experiencing the expansion of space, an increase in speed of transmission, and a transformed view of the local". For a time, mobile phones and PDAs (Personal Digital Assistants) were the primary source of portable media from which we could obtain information and communicate with one another. More recently, the smartphone has rendered the PDA obsolete by combining many features of the cell phone with those of the PDA. In 2011, the growth of new mobile media as a true force in society was marked by smartphone sales outpacing personal computer sales. 
With this non-stop consumption of new and improved smartphones, theorists such as Marsha Berry and Max Schleser explain that these change the way we can do things in life. "With the rise of smartphones in 2007 and proliferation of application through Apple's App Store and Android Market in the following year, how citizen users and creative professionals represent, experience and share the everyday is changing". While mobile phone independent technologies and functions may be new and innovative (in relation to changes and improvements in media capabilities in respect to their function what they can do when and where and what they look like, in regard to their size and shape) the need and desire to access and use media devices regardless of where we are in the world has been around for centuries. Indeed, Paul Levinson remarks, in regard to telephonic communication, that it was “intelligence and inventiveness" applied to our need to communicate regardless of where we may be, led logically and eventually to telephones that we carry in our pockets”. Levinson credits the printing press for disseminating information to a mass audience, the reduction in size and portability of the camera for allowing people to capture what they saw regardless of their location, and the Internet for providing on-demand information. Smartphones have altered the very structure of society. "With this issue, we realized that not only has our understanding of mobile media expanded beyond the mobile phone, but our thinking of the 'mobile' in front of media has evolved". The ability of smartphones to transcend certain boundaries of times and space has revolutionized the nature of communication, allowing it to be both synchronous and asynchronous. These devices and their corresponding media technologies, such as cloud-based technologies, play an increasingly important role in the everyday lives of millions of people worldwide. See also Location-based media Comparison of portable media players Web film Documentary practice Mass media Notes and references Multimedia Media Media |
No, this text is not related with defense topics | Diphenylalanine is a term that has recently been used to describe an unnatural amino acid similar to the two amino acids alanine and phenylalanine. It has been used for the synthesis of pseudopeptide analogues which are capable of inhibiting certain enzymes. A possible synthesis starts from 3,3-diphenylpropionic acid, which is stereoselectively aminated to give diphenylalanine. A historical use of the term "diphenylalanine" refers to the dipeptide of phenylalanine. References Amino acids |
No, this text is not related with defense topics | Locus of control is the degree to which people believe that they, as opposed to external forces (beyond their influence), have control over the outcome of events in their lives. The concept was developed by Julian B. Rotter in 1954, and has since become an aspect of personality psychology. A person's "locus" (plural "loci", Latin for "place" or "location") is conceptualized as internal (a belief that one can control one's own life) or external (a belief that life is controlled by outside factors which the person cannot influence, or that chance or fate controls their lives). Individuals with a strong internal locus of control believe events in their life are primarily a result of their own actions: for example, when receiving exam results, people with an internal locus of control tend to praise or blame themselves and their abilities. People with a strong external locus of control tend to praise or blame external factors such as the teacher or the exam. Locus of control has generated much research in a variety of areas in psychology. The construct is applicable to such fields as educational psychology, health psychology, industrial and organizational psychology, and clinical psychology. Debate continues whether domain-specific or more global measures of locus of control will prove to be more useful in practical application. Careful distinctions should also be made between locus of control (a personality variable linked with generalized expectancies about the future) and attributional style (a concept concerning explanations for past outcomes), or between locus of control and concepts such as self-efficacy. Locus of control is one of the four dimensions of core self-evaluations – one's fundamental appraisal of oneself – along with neuroticism, self-efficacy, and self-esteem. The concept of core self-evaluations was first examined by Judge, Locke, and Durham (1997), and since has proven to have the ability to predict several work outcomes, specifically, job satisfaction and job performance. In a follow-up study, Judge et al. (2002) argued that locus of control, neuroticism, self-efficacy, and self-esteem factors may have a common core. History Locus of control as a theoretical construct derives from Julian B. Rotter's (1954) social learning theory of personality. It is an example of a problem-solving generalized expectancy, a broad strategy for addressing a wide range of situations. In 1966 he published an article in Psychological Monographs which summarized over a decade of research (by Rotter and his students), much of it previously unpublished. In 1976, Herbert M. Lefcourt defined the perceived locus of control: "...a generalised expectancy for internal as opposed to external control of reinforcements". Attempts have been made to trace the genesis of the concept to the work of Alfred Adler, but its immediate background lies in the work of Rotter and his students. Early work on the topic of expectations about control of reinforcement had been performed in the 1950s by James and Phares (prepared for unpublished doctoral dissertations supervised by Rotter at Ohio State University). Another Rotter student, William H. 
James studied two types of "expectancy shifts": Typical expectancy shifts, believing that success (or failure) would be followed by a similar outcome Atypical expectancy shifts, believing that success (or failure) would be followed by a dissimilar outcome Additional research led to the hypothesis that typical expectancy shifts were displayed more often by those who attributed their outcomes to ability, whereas those who displayed atypical expectancy were more likely to attribute their outcomes to chance. This was interpreted that people could be divided into those who attribute to ability (an internal cause) versus those who attribute to luck (an external cause). Bernard Weiner argued that rather than ability-versus-luck, locus may relate to whether attributions are made to stable or unstable causes. Rotter (1975, 1989) has discussed problems and misconceptions in others' use of the internal-versus-external construct. Personality orientation Rotter (1975) cautioned that internality and externality represent two ends of a continuum, not an either/or typology. Internals tend to attribute outcomes of events to their own control. People who have internal locus of control believe that the outcomes of their actions are results of their own abilities. Internals believe that their hard work would lead them to obtain positive outcomes. They also believe that every action has its consequence, which makes them accept the fact that things happen and it depends on them if they want to have control over it or not. Externals attribute outcomes of events to external circumstances. People with an external locus of control tend to believe that the things which happen in their lives are out of their control, and even that their own actions are a result of external factors, such as fate, luck, the influence of powerful others (such as doctors, the police, or government officials) and/or a belief that the world is too complex for one to predict or successfully control its outcomes. Such people tend to blame others rather than themselves for their lives' outcomes. It should not be thought, however, that internality is linked exclusively with attribution to effort and externality with attribution to luck (as Weiner's work – see below – makes clear). This has obvious implications for differences between internals and externals in terms of their achievement motivation, suggesting that internal locus is linked with higher levels of need for achievement. Due to their locating control outside themselves, externals tend to feel they have less control over their fate. People with an external locus of control tend to be more stressed and prone to clinical depression. Internals were believed by Rotter (1966) to exhibit two essential characteristics: high achievement motivation and low outer-directedness. This was the basis of the locus-of-control scale proposed by Rotter in 1966, although it was based on Rotter's belief that locus of control is a single construct. Since 1970, Rotter's assumption of uni-dimensionality has been challenged, with Levenson (for example) arguing that different dimensions of locus of control (such as beliefs that events in one's life are self-determined, or organized by powerful others and are chance-based) must be separated. Weiner's early work in the 1970s suggested that orthogonal to the internality-externality dimension, differences should be considered between those who attribute to stable and those who attribute to unstable causes. 
This new, dimensional theory meant that one could now attribute outcomes to ability (an internal stable cause), effort (an internal unstable cause), task difficulty (an external stable cause) or luck (an external, unstable cause). Although this was how Weiner originally saw these four causes, he has been challenged as to whether people see luck (for example) as an external cause, whether ability is always perceived as stable, and whether effort is always seen as changing. Indeed, in more recent publications (e.g. Weiner, 1980) he uses different terms for these four causes (such as "objective task characteristics" instead of "task difficulty" and "chance" instead of "luck"). Psychologists since Weiner have distinguished between stable and unstable effort, knowing that in some circumstances effort could be seen as a stable cause (especially given the presence of words such as "industrious" in English). Regarding locus of control, there is another type of control that entails a mix among the internal and external types. People that have the combination of the two types of locus of control are often referred to as Bi-locals. People that have Bi-local characteristics are known to handle stress and cope with their diseases more efficiently by having the mixture of internal and external locus of control. People that have this mix of loci of control can take personal responsibility for their actions and the consequences thereof while remaining capable of relying upon and having faith in outside resources; these characteristics correspond to the internal and external loci of control, respectively. Measuring scales The most widely used questionnaire to measure locus of control is the 13-item (plus six filler items), forced-choice scale of Rotter (1966). However, this is not the only questionnaire; Bialer's (1961) 23-item scale for children predates Rotter's work. Also relevant to the locus-of-control scale are the Crandall Intellectual Ascription of Responsibility Scale (Crandall, 1965) and the Nowicki-Strickland Scale . One of the earliest psychometric scales to assess locus of control (using a Likert-type scale, in contrast to the forced-choice alternative measure in Rotter's scale) was that devised by W. H. James for his unpublished doctoral dissertation, supervised by Rotter at Ohio State University; however, this remains unpublished. Many measures of locus of control have appeared since Rotter's scale. These were reviewed by Furnham and Steele (1993) and include those related to health psychology, industrial and organizational psychology and those specifically for children (such as the Stanford Preschool Internal-External Scale for three- to six-year-olds). Furnham and Steele (1993) cite data suggesting that the most reliable, valid questionnaire for adults is the Duttweiler scale. For a review of the health questionnaires cited by these authors, see "Applications" below. The Duttweiler (1984) Internal Control Index (ICI) addresses perceived problems with the Rotter scales, including their forced-choice format, susceptibility to social desirability and heterogeneity (as indicated by factor analysis). She also notes that, while other scales existed in 1984 to measure locus of control, "they appear to be subject to many of the same problems". Unlike the forced-choice format used on Rotter's scale, Duttweiler's 28-item ICI uses a Likert-type scale in which people must state whether they would rarely, occasionally, sometimes, frequently or usually behave as specified in each of 28 statements. 
The ICI assess variables pertinent to internal locus: cognitive processing, autonomy, resistance to social influence, self-confidence and delay of gratification. A small (133 student-subject) validation study indicated that the scale had good internal consistency reliability (a Cronbach's alpha of 0.85). Attributional style Attributional style (or explanatory style) is a concept introduced by Lyn Yvonne Abramson, Martin Seligman and John D. Teasdale. This concept advances a stage further than Weiner, stating that in addition to the concepts of internality-externality and stability a dimension of globality-specificity is also needed. Abramson et al. believed that how people explained successes and failures in their lives related to whether they attributed these to internal or external factors, short-term or long-term factors, and factors that affected all situations. The topic of attribution theory (introduced to psychology by Fritz Heider) has had an influence on locus of control theory, but there are important historical differences between the two models. Attribution theorists have been predominantly social psychologists, concerned with the general processes characterizing how and why people make the attributions they do, whereas locus of control theorists have been concerned with individual differences. Significant to the history of both approaches are the contributions made by Bernard Weiner in the 1970s. Before this time, attribution theorists and locus of control theorists had been largely concerned with divisions into external and internal loci of causality. Weiner added the dimension of stability-instability (and later controllability), indicating how a cause could be perceived as having been internal to a person yet still beyond the person's control. The stability dimension added to the understanding of why people succeed or fail after such outcomes. Although not part of Weiner's model, a further dimension of attribution, that of globality-specificity, was added by Abramson, Seligman and Teasdale. Applications Locus of control's best known application may have been in the area of health psychology, largely due to the work of Kenneth Wallston. Scales to measure locus of control in the health domain were reviewed by Furnham and Steele in 1993. The best-known are the Health Locus of Control Scale and the Multidimensional Health Locus of Control Scale, or MHLC. The latter scale is based on the idea (echoing Levenson's earlier work) that health may be attributed to three sources: internal factors (such as self-determination of a healthy lifestyle), powerful others (such as one's doctor) or luck (which is very dangerous as lifestyle advice will be ignored – these people are very difficult to help). Some of the scales reviewed by Furnham and Steele (1993) relate to health in more specific domains, such as obesity (for example, Saltzer's (1982) Weight Locus of Control Scale or Stotland and Zuroff's (1990) Dieting Beliefs Scale), mental health (such as Wood and Letak's (1982) Mental Health Locus of Control Scale or the Depression Locus of Control Scale of Whiteman, Desmond and Price, 1987) and cancer (the Cancer Locus of Control Scale of Pruyn et al., 1988). In discussing applications of the concept to health psychology Furnham and Steele refer to Claire Bradley's work, linking locus of control to the management of diabetes mellitus. 
Empirical data on health locus of control in a number of fields was reviewed by Norman and Bennett in 1995; they note that data on whether certain health-related behaviors are related to internal health locus of control have been ambiguous. They note that some studies found that internal health locus of control is linked with increased exercise, but cite other studies which found a weak (or no) relationship between exercise behaviors (such as jogging) and internal health locus of control. A similar ambiguity is noted for data on the relationship between internal health locus of control and other health-related behaviors (such as breast self-examination, weight control and preventive-health behavior). Of particular interest are the data cited on the relationship between internal health locus of control and alcohol consumption. Norman and Bennett note that some studies that compared alcoholics with non-alcoholics suggest alcoholism is linked to increased externality for health locus of control; however, other studies have linked alcoholism with increased internality. Similar ambiguity has been found in studies of alcohol consumption in the general, non-alcoholic population. They are more optimistic in reviewing the literature on the relationship between internal health locus of control and smoking cessation, although they also point out that there are grounds for supposing that powerful-others and internal-health loci of control may be linked with this behavior. It is thought that, rather than being caused by one or the other, that alcoholism is directly related to the strength of the locus, regardless of type, internal or external. They argue that a stronger relationship is found when health locus of control is assessed for specific domains than when general measures are taken. Overall, studies using behavior-specific health locus scales have tended to produce more positive results. These scales have been found to be more predictive of general behavior than more general scales, such as the MHLC scale. Norman and Bennett cite several studies that used health-related locus-of-control scales in specific domains (including smoking cessation), diabetes, tablet-treated diabetes, hypertension, arthritis, cancer, and heart and lung disease. They also argue that health locus of control is better at predicting health-related behavior if studied in conjunction with health value (the value people attach to their health), suggesting that health value is an important moderator variable in the health locus of control relationship. For example, Weiss and Larsen (1990) found an increased relationship between internal health locus of control and health when health value was assessed. Despite the importance Norman and Bennett attach to specific measures of locus of control, there are general textbooks on personality which cite studies linking internal locus of control with improved physical health, mental health and quality of life in people with diverse conditions: HIV, migraines, diabetes, kidney disease and epilepsy. During the 1970s and 1980s, Whyte correlated locus of control with the academic success of students enrolled in higher-education courses. Students who were more internally controlled believed that hard work and focus would result in successful academic progress, and they performed better academically. Those students who were identified as more externally controlled (believing that their future depended upon luck or fate) tended to have lower academic-performance levels. Cassandra B. 
Whyte researched how control tendency influenced behavioral outcomes in the academic realm by examining the effects of various modes of counseling on grade improvements and the locus of control of high-risk college students. Rotter also looked at studies on the correlation between gambling and either an internal or external locus of control. Internals tend to gamble in a more reserved way: when betting, they primarily focus on safe and moderate wagers. Externals, however, take more chances and, for example, bet more on a card or number that has not appeared for a certain period, under the notion that this card or number has a higher chance of occurring. Organizational psychology and religion Other fields to which the concept has been applied include industrial and organizational psychology, sports psychology, educational psychology and the psychology of religion. Richard Kahoe has published work in the latter field, suggesting that intrinsic religious orientation correlates positively (and extrinsic religious orientation correlates negatively) with internal locus. Of relevance to both health psychology and the psychology of religion is the work of Holt, Clark, Kreuter and Rubio (2003) on a questionnaire to assess spiritual-health locus of control. The authors distinguished between an active spiritual-health locus of control (in which "God empowers the individual to take healthy actions") and a more passive spiritual-health locus of control (where health is left up to God). In industrial and organizational psychology, it has been found that internals are more likely than externals to take positive action to change their jobs (rather than merely talk about occupational change). Locus of control relates to a wide variety of work variables, with work-specific measures relating more strongly than general measures. In educational settings, some research has shown that students who were intrinsically motivated processed reading material more deeply and had better academic performance than students with extrinsic motivation. Consumer research Locus of control has also been applied to the field of consumer research. For example, Martin, Veer and Pervan (2007) examined how the weight locus of control of women (i.e., beliefs about the control of body weight) influences how they react to female models of different body shapes in advertising. They found that women who believe they can control their weight ("internals") respond most favorably to slim models in advertising, and this favorable response is mediated by self-referencing. In contrast, women who feel powerless about their weight ("externals") self-reference larger-sized models, but only prefer larger-sized models when the advertisement is for a non-fattening product. For fattening products, they exhibit a similar preference for larger-sized models and slim models. The weight locus of control measure was also found to be correlated with measures of weight control beliefs and willpower. Political ideology Locus of control has been linked to political ideology. In the 1972 U.S. Presidential election, research on college students found that those with an internal locus of control were substantially more likely to register as a Republican, while those with an external locus of control were substantially more likely to register as a Democrat. A 2011 study surveying students at Cameron University in Oklahoma found similar results, although these studies were limited in scope. Consistent with these findings, Kaye Sweetser (2014) found that Republicans displayed significantly greater internal locus of control than Democrats and Independents. Those with an internal locus of control are more likely to be of higher socioeconomic status and are more likely to be politically involved (e.g., following political news, joining a political organization). Those with an internal locus of control are also more likely to vote. Familial origins The development of locus of control is associated with family style and resources, cultural stability and experiences with effort leading to reward. Many internals have grown up with families modeling typical internal beliefs; these families emphasized effort, education, responsibility and thinking, and parents typically gave their children the rewards they had promised them. In contrast, externals are typically associated with lower socioeconomic status. Societies experiencing social unrest increase the expectancy of being out of control, so people in such societies become more external. The 1995 research of Schneewind suggests that "children in large single parent families headed by women are more likely to develop an external locus of control". Schultz and Schultz also claim that children in families where parents have been supportive and consistent in discipline develop an internal locus of control. At least one study has found that children whose parents had an external locus of control are more likely to attribute their successes and failures to external causes. Findings from early studies on the familial origins of locus of control were summarized by Lefcourt: "Warmth, supportiveness and parental encouragement seem to be essential for development of an internal locus". However, causal evidence regarding how parental locus of control influences offspring locus of control (whether genetic, or environmentally mediated) is lacking. Locus of control becomes more internal with age. As children grow older, they gain skills which give them more control over their environment. However, whether this or biological development is responsible for changes in locus is unclear. Age Some studies showed that people develop a more internal locus of control with age, but other study results have been ambiguous. Longitudinal data collected by Gatz and Karel imply that internality may increase until middle age, decreasing thereafter. Noting the ambiguity of data in this area, Aldwin and Gilmer (2004) cite Lachman's claim that locus of control is ambiguous. Indeed, there is evidence here that changes in locus of control in later life relate more visibly to increased externality (rather than reduced internality) if the two concepts are taken to be orthogonal. Evidence cited by Schultz and Schultz (2005) suggests that locus of control increases in internality until middle age. The authors also note that attempts to control the environment become more pronounced between ages eight and fourteen. Health locus of control refers to how people relate their health to their behavior, their health status, and how long it may take them to recover from a disease. Locus of control can influence how people think about and react to their health and health decisions. People are exposed every day to potential threats to their health, and how they approach that reality has much to do with their locus of control. Older adults are often expected to experience progressive declines in their health, and for this reason it is believed that their health locus of control will be affected. However, this does not necessarily mean that their locus of control is affected negatively; rather, the decline in health that older adults may experience can be accompanied by lower levels of internal locus of control. Age plays an important role in one's internal and external locus of control. When comparing a young child and an older adult on health-related locus of control, the older person will have more control over their attitude and approach to the situation. As people age, they become aware that events outside of their own control happen and that other individuals can have control over their health outcomes. A study published in the journal Psychosomatic Medicine examined the health effects of childhood locus of control. 7,500 British adults (followed from birth) who had shown an internal locus of control at age 10 were less likely to be overweight at age 30. The children who had an internal locus of control also appeared to have higher levels of self-esteem. Gender-based differences As Schultz and Schultz (2005) point out, significant gender differences in locus of control have not been found for adults in the U.S. population. However, these authors also note that there may be specific sex-based differences for specific categories of items to assess locus of control; for example, they cite evidence that men may have a greater internal locus for questions related to academic achievement. A study by Takaki and colleagues (2006) focused on sex differences in the relationship between internal locus of control, self-efficacy and compliance in hemodialysis patients. It showed that female patients with a high internal locus of control were less compliant with health and medical advice than the male patients in the study. Compliance here is the degree to which a patient's behavior corresponds with medical advice; a compliant patient, for example, correctly follows their doctor's advice. A 2018 study that looked at the relationship between locus of control and optimism among children aged 10–15, however, found that an external locus of control was more prevalent among young girls. The study found no significant differences in internal and unknown locus of control. Cross-cultural and regional issues The question of whether people from different cultures vary in locus of control has long been of interest to social psychologists. Japanese people tend to be more external in locus-of-control orientation than people in the U.S.; however, differences in locus of control between different countries within Europe (and between the U.S. and Europe) tend to be small. As Berry et al. pointed out in 1992, ethnic groups within the United States have been compared on locus of control; African Americans in the U.S. are more external than whites when socioeconomic status is controlled. Berry et al. also pointed out in 1992 how research on other ethnic minorities in the U.S. (such as Hispanics) has been ambiguous. Research in this area indicates that locus of control has been a useful concept for researchers in cross-cultural psychology. 
On a less broad scale, Sims and Baumann explained how regions in the United States cope with natural disasters differently. The example they used was tornados. They "applied Rotter's theory to explain why more people have died in tornado[e]s in Alabama than in Illinois". They explain that after giving surveys to residents of four counties in both Alabama and Illinois, Alabama residents were shown to be more external in their way of thinking about events that occur in their lives. Illinois residents, however, were more internal. Because Alabama residents had a more external way of processing information, they took fewer precautions prior to the appearance of a tornado. Those in Illinois, however, were more prepared, thus leading to fewer casualties. Later studies find that these geographic differences can be explained by differences in relational mobility. Relational mobility is a measure of how much choice individuals have in terms of whom to form relationships with, including friendships, romantic partnerships, and work relations. Relational mobility is low in cultures with a subsistence economy that requires tight cooperation and coordination, such as farming, while it is high in cultures based on nomadic herding and in urban industrial cultures. A cross-cultural study found that the relational mobility is lowest in East Asian countries where rice farming is common, and highest in South American countries. Self-efficacy Self-efficacy refers to an individual's belief in their capacity to execute behaviors necessary to produce specific performance attainments. It is a related concept introduced by Albert Bandura, and has been measured by means of a psychometric scale. It differs from locus of control by relating to competence in circumscribed situations and activities (rather than more general cross-situational beliefs about control). Bandura has also emphasised differences between self-efficacy and self-esteem, using examples where low self-efficacy (for instance, in ballroom dancing) are unlikely to result in low self-esteem because competence in that domain is not very important (see valence) to an individual. Although individuals may have a high internal health locus of control and feel in control of their own health, they may not feel efficacious in performing a specific treatment regimen that is essential to maintaining their own health. Self-efficacy plays an important role in one's health because when people feel that they have self-efficacy over their health conditions, the effects of their health becomes less of a stressor. Smith (1989) has argued that locus of control only weakly measures self-efficacy; "only a subset of items refer directly to the subject's capabilities". Smith noted that training in coping skills led to increases in self-efficacy, but did not affect locus of control as measured by Rotter's 1966 scale. Stress The previous section showed how self-efficacy can be related to a person's locus of control, and stress also has a relationship in these areas. Self-efficacy can be something that people use to deal with the stress that they are faced within their everyday lives. Some findings suggest that higher levels of external locus of control combined with lower levels self-efficacy are related to higher illness-related psychological distress. People who report a more external locus of control also report more concurrent and future stressful experiences and higher levels of psychological and physical problems. 
These people are also more vulnerable to external influences and, as a result, more responsive to stress. Veterans of the military forces who have spinal cord injuries and post-traumatic stress are a useful group to study with regard to locus of control and stress. Age appears to be an important factor related to the severity of the PTSD symptoms experienced by patients following the trauma of war. Research suggests that patients who suffered a spinal cord injury benefit from knowing that they have control over their health problems and their disability, which reflects the characteristics of having an internal locus of control. A study by Chung et al. (2006) focused on how post-traumatic stress responses to spinal cord injury varied depending on age. The researchers tested different age groups, including young adults, the middle-aged, and the elderly; the average ages were 25, 48, and 65 for each group respectively. They concluded that age does not make a difference in how spinal cord injury patients respond to the traumatic events that happened to them. However, they noted that age did play a role in the extent to which an external locus of control was used: the young adult group demonstrated more external locus of control characteristics than the other age groups to which they were compared. |
No, this text is not related with defense topics | The Media Object Server (MOS) protocol allows newsroom computer systems (NCS) to communicate using a standard protocol with video servers, audio servers, still stores, and character generators for broadcast production. The MOS protocol is based on XML. It enables the exchange of the following types of messages: Descriptive Data for Media Objects. The MOS "pushes" descriptive information and pointers to the NCS as objects are created, modified, or deleted in the MOS. This allows the NCS to be "aware" of the contents of the MOS and enables the NCS to perform searches on and manipulate the data the MOS has sent. Playlist Exchange. The NCS can build and transfer playlist information to the MOS. This allows the NCS to control the sequence that media objects are played or presented by the MOS. Status Exchange. The MOS can inform the NCS of the status of specific clips or the MOS system in general. The NCS can notify the MOS of the status of specific playlist items or running orders. MOS was developed to reduce the need for the development of device specific drivers. By allowing developers to embed functionality and handle events, vendors were relieved of the burden of developing device drivers. It was left to the manufacturers to interface newsroom computer systems. This approach affords broadcasters flexibility to purchase equipment from multiple vendors. It also limits the need to have operators in multiple locations throughout the studio as, for example, multiple character generators (CG) can be fired from a single control workstation, without needing an operator at each CG console. MOS enables journalists to see, use, and control media devices inside Associated Press's ENPS system so that individual pieces of newsroom production technology speak a common XML-based language. History of MOS The first meeting of the MOS protocol development group occurred at the Associated Press ENPS developer's conference in Orlando, Florida in 1998. The fundamental concepts of MOS were released to the public domain at that conference. As an open protocol, the MOS Development Group encourages the participation of broadcast equipment vendors and their customers. More than 100 companies are said to work with AP on MOS-related projects. Compatible hardware and software includes video editing, storage and management; automation; machine control; prompters; character generators; audio editing, store and management; web publishing, interactive TV, field transmission and graphics. Current development is happening on two tracks: a socket-based version, and a web service version. The current official versions of the MOS protocol, as of January 2011, are 2.8.4 (sockets) and 3.8.4 (web service). In 2016 proposals began to introduce IP Video support in the MOS protocol. This proposal allows representations of live IP Video sources such as NDI (Network Device Interface) to be included as MOS objects alongside MOS objects representing files to be played off disk There is also a Java based implementation called jmos that is currently compatible with MOS specification 2.8.2. An open source TypeScript (dialect of JavaScript) MOS connector and MOS Gateway is being actively developed by the Norwegian state broadcaster NRK, as part of their open-source Sofie broadcast automation software initiative. An open source Python library and command line tool called mosromgr was developed by the BBC. 
The mosromgr library provides functionality for classifying MOS file types, processing and inspecting MOS message files, as well as merging a batch of MOS files into a complete running order. In 2017 the National Academy of Television Arts and Sciences awarded an Emmy to the MOS Group for "Development and Standardization of Media Object Server (MOS) Protocol." |
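To make the message exchange described above more concrete, the sketch below builds a minimal heartbeat-style MOS message in Python. This is an illustrative sketch only: the mosID and ncsID values are placeholders, the element layout follows the general shape of socket-based MOS messages (a mos envelope carrying device identifiers and a message body), and the exact schema, required fields, wire encoding (commonly UTF-16 for the socket transport) and port assignments (10540 and 10541 are the commonly cited lower and upper MOS ports) should be verified against the MOS Protocol specification for the version in use.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_heartbeat(mos_id: str, ncs_id: str) -> str:
    """Build a minimal heartbeat-style MOS message as an XML string.

    Element names follow the general structure of MOS messages; check the
    MOS Protocol specification for the exact schema of the version in use.
    """
    mos = ET.Element("mos")
    ET.SubElement(mos, "mosID").text = mos_id   # placeholder media object server ID
    ET.SubElement(mos, "ncsID").text = ncs_id   # placeholder newsroom computer system ID
    heartbeat = ET.SubElement(mos, "heartbeat")
    ET.SubElement(heartbeat, "time").text = datetime.now(timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%S"
    )
    return ET.tostring(mos, encoding="unicode")

if __name__ == "__main__":
    # In a real deployment this string would be encoded and written to the
    # appropriate MOS TCP socket; here it is simply printed.
    print(build_heartbeat("mos.example.local", "ncs.example.local"))
```

Libraries such as the mosromgr tool described above handle the complementary task of parsing and merging messages like these once they have been captured as files.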
No, this text is not related with defense topics | Hindsight bias, also known as the knew-it-all-along phenomenon or creeping determinism, is the common tendency for people to perceive past events as having been more predictable than they actually were. People often believe that after an event has occurred, they would have predicted or perhaps even would have known with a high degree of certainty what the outcome of the event would have been before the event occurred. Hindsight bias may cause distortions of memories of what was known or believed before an event occurred, and is a significant source of overconfidence regarding an individual's ability to predict the outcomes of future events. Examples of hindsight bias can be seen in the writings of historians describing outcomes of battles, physicians recalling clinical trials, and in judicial systems as individuals attribute responsibility on the basis of the supposed predictability of accidents. History The hindsight bias, although it was not yet named, was not a new concept when it emerged in psychological research in the 1970s. In fact, it had been indirectly described numerous times by historians, philosophers, and physicians. In 1973, Baruch Fischhoff attended a seminar where Paul E. Meehl stated an observation that clinicians often overestimate their ability to have foreseen the outcome of a particular case, as they claim to have known it all along. Baruch, a psychology graduate student at the time, saw an opportunity in psychological research to explain these observations. In the early seventies, investigation of heuristics and biases was a large area of study in psychology, led by Amos Tversky and Daniel Kahneman. Two heuristics identified by Tversky and Kahneman were of immediate importance in the development of the hindsight bias; these were the availability heuristic and the representativeness heuristic. In an elaboration of these heuristics, Beyth and Fischhoff devised the first experiment directly testing the hindsight bias. They asked participants to judge the likelihood of several outcomes of US president Richard Nixon's upcoming visit to Beijing (then romanized as Peking) and Moscow. Some time after president Nixon's return, participants were asked to recall (or reconstruct) the probabilities they had assigned to each possible outcome, and their perceptions of the likelihood of each outcome was greater or overestimated for events that actually had occurred. This study is frequently referred to in definitions of the hindsight bias, and the title of the paper, "I knew it would happen," may have contributed to the hindsight bias being interchangeable with the phrase, "knew-it-all-along phenomenon." In 1975, Fischhoff developed another method for investigating the hindsight bias, which was, at the time, referred to as the "creeping determinism hypothesis". This method involves giving participants a short story with four possible outcomes, one of which they are told is true, and are then asked to assign the likelihood of each particular outcome. Participants frequently assign a higher likelihood of occurrence to whichever outcome they have been told is true. Remaining relatively unmodified, this method is still used in psychological and behavioural experiments investigating aspects of the hindsight bias. 
Having evolved from the heuristics of Tversky and Kahneman into the creeping determinism hypothesis and finally into the hindsight bias as we now know it, the concept has many practical applications and is still at the forefront of research today. Recent studies involving the hindsight bias have investigated the effect age has on the bias, how hindsight may impact interference and confusion, and how it may affect banking and investment strategies. Factors Outcome valence and intensity Hindsight bias has been found to be more likely occur when the outcome of an event is negative rather than positive. This is a phenomenon consistent with the general tendency for people to pay more attention to negative outcomes of events than positive outcomes. In addition, hindsight bias is affected by the severity of the negative outcome. In malpractice lawsuits, it has been found that the more severe a negative outcome is, the juror's hindsight bias is more dramatic. In a perfectly objective case, the verdict would be based on the physician's standard of care instead of the outcome of the treatment; however, studies show that cases ending in severe negative outcomes (such as death) result in a higher level of hindsight bias. For example, in 1996, LaBine proposed a scenario where a psychiatric patient told a therapist that he was contemplating harming another individual. The therapist did not warn the other individual of the possible danger. Participants were each given one of three possible outcomes; the threatened individual either received no injuries, minor injuries, or serious injuries. Participants were then asked to determine if the doctor should be considered negligent. Participants in the "serious injuries" condition were not only more likely to rate the therapist as negligent but also rated the attack as more foreseeable. Participants in the no injuries and minor injury categories were more likely to see the therapist's actions as reasonable. Surprise The role of surprise can help explain the malleability of hindsight bias. Surprise influences how the mind reconstructs pre-outcome predictions in three ways: 1. Surprise is a direct metacognitive heuristic to estimate the distance between outcome and prediction. 2. Surprise triggers a deliberate sense-making process. 3. Surprise biases this process ( the malleability of hindsight bias) by enhancing the recall of surprise-congruent information and expectancy-based hypothesis testing. Pezzo's sense-making model supports two contradicting ideas of a surprising outcome. The results can show a lesser hindsight bias or possibly a reversed effect, where the individual believes the outcome was not a possibility at all. The outcome can also lead to the hindsight bias being magnified to have a stronger effect. The sense-making process is triggered by an initial surprise. If the sense-making process does not complete and the sensory information is not detected or coded [by the individual], the sensation is experienced as a surprise and the hindsight bias has a gradual reduction. When the sense-making process is lacking, the phenomena of reversed hindsight bias is created. Without the sense-making process being present, there is no remnant of thought about the surprise. This can lead to a sensation of not believing the outcome as a possibility. Personality Along with the emotion of surprise, the personality traits of an individual affect hindsight bias. 
A new C model is an approach to figure out the bias and accuracy in human inferences because of their individual personality traits. This model integrates on accurate personality judgments and hindsight effects as a by-product of knowledge updating. During the study, three processes showed potential to explain the occurrence of hindsight effects in personality judgments: 1. Changes in an individual's cue perceptions, 2. Changes in the use of more valid cues, and 3. Changes in the consistency with which an individual applies cue knowledge. After two studies, it was clear that there were hindsight effects for each of the Big Five personality dimensions. Evidence was found that both the utilization of more valid cues and changes in cue perceptions of the individual, but not changes in the consistency with which cue knowledge is applied, account for the hindsight effects. During both of these studies, participants were presented with target pictures and were asked to judge each target's levels of the Big Five personality traits. Age It is more difficult to test for hindsight bias in children than adults due to the verbal methods used in experiments on adults are too complex for children to understand, let alone measure bias. Some experimental procedures have been created with visual identification to test children about their hindsight bias in a way they can grasp. Methods with visual images start by presenting a blurry image to the child that becomes clearer over time. In some conditions, the subjects know what the final object is and in others they do not. In cases where the subject knows what the object shape will become when the image is clear, they are asked to estimate the amount of time other participants of similar age will take to guess what the object is. Due to hindsight bias, the estimated times are often much lower than the actual times. This is because the participant is using their personal knowledge while making their estimate. These types of studies show that children are also affected by hindsight bias. Adults and children with hindsight bias share the core cognitive constraint of being biased to one's own current knowledge when, at the same time, attempting to recall or reason about a more naïve cognitive state—regardless of whether the more naïve state is one's own earlier naïve state or someone else's. Children have a theory of mind, which is their mental state of reasoning. Hindsight bias is a fundamental problem in cognitive perspective-taking. After reviewing developmental literature on hindsight bias and other limitations [of perception], it was found that some of children's limitation in the theory of mind may stem from the same core component as hindsight bias does. This key factor brings forth underlying mechanisms. A developmental approach to [hindsight bias] is necessary for a comprehensive understanding of the nature of hindsight bias in social cognition. Effects Auditory distractions Another factor that impacts the capabilities of the hindsight bias is the auditory function of humans. To test the effects of auditory distractions on hindsight bias, four experiments were completed. Experiment one included plain words, in which low-pass filters were used to reduce the amplitude for sounds of consonants; thus making the words more degraded. In the naïve-identification task, participants were presented a warning tone before hearing the degraded words. 
In the hindsight estimation task, a warning tone was presented before the clear word followed by the degraded version of the word. Experiment two included words with explicit warnings of the hindsight bias. It followed the same procedure as experiment one. However, the participants were informed and asked not to complete the same error. Experiment three included full sentences of degraded words rather than individual words. Experiment four included less-degraded words in order to make the words easier to understand and identify to the participants. By using these different techniques, this offered a different range of detection and also evaluated the ecological validity of the [experiment's] effect. In each experiment, hindsight estimates exceeded the naïve-identification rates. Therefore, knowing the identities of words caused people to overestimate others' naïve ability to identify moderately to highly degraded spoken versions of those words. People who know the outcome of an event tend to overestimate their own prior knowledge or others' naïve knowledge of the event. As a result, speakers tend to overestimate the clarity of their message while listeners tend to overestimate their understanding of ambiguous messages. This miscommunication stems from hindsight bias which then creates a feeling of inevitability. Overall, this auditory hindsight bias occurs despite people's effort to avoid it. Cognitive models To understand how a person can so easily change the foundation of knowledge and belief for events after receiving new information, three cognitive models of hindsight bias have been reviewed. The three models are: SARA (Selective Activation and Reconstructive Anchoring), RAFT (reconstruction after feedback with take the best), and CMT (causal model theory). SARA and RAFT focus on distortions or changes in a memory process, while CMT focuses on probability judgments of hindsight bias. The SARA model, created by Rüdiger Pohl and associates, explains hindsight bias for descriptive information in memory and hypothetical situations. SARA assumes that people have a set of images to draw their memories from. They suffer from the hindsight bias due to selective activation or biased sampling of that set of images. Basically, people only remember small, select amounts of information—and when asked to recall it later, use that biased image to support their own opinions about the situation. The set of images is originally processed in the brain when first experienced. When remembered, this image reactivates, and the mind can edit and alter the memory, which takes place in hindsight bias when new and correct information is presented, leading one to believe that this new information, when remembered at a later time, is the persons original memory. Due to this reactivation in the brain, a more permanent memory trace can be created. The new information acts as a memory anchor causing retrieval impairment. The RAFT model explains hindsight bias with comparisons of objects. It uses knowledge-based probability then applies interpretations to those probabilities. When given two choices, a person recalls the information on both topics and makes assumptions based on how reasonable they find the information. An example case is someone comparing the size of two cities. If they know one city well (e.g. because it has a popular sporting team or through personal history) and know much less about the other, their mental cues for the more popular city increase. 
They then "take the best" option in their assessment of their own probabilities. For example, they recognize a city due to knowing of its sports team, and thus they assume that that city has the highest population. "Take the best" refers to a cue that is viewed as most valid and becomes support for the person's interpretations. RAFT is a by-product of adaptive learning. Feedback information updates a person's knowledge base. This can lead a person to be unable to retrieve the initial information, because the information cue has been replaced by a cue that they thought was more fitting. The "best" cue has been replaced, and the person only remembers the answer that is most likely and believes they thought this was the best point the whole time. Both SARA and RAFT descriptions include a memory trace impairment or cognitive distortion that is caused by feedback of information and reconstruction of memory. CMT is a non-formal theory based on work by many researchers to create a collaborative process model for hindsight bias that involves event outcomes. People try to make sense of an event that has not turned out how they expected by creating causal reasoning for the starting event conditions. This can give that person the idea that the event outcome was inevitable and there was nothing that could take place to prevent it from happening. CMT can be caused by a discrepancy between a person's expectation of the event and the reality of an outcome. They consciously want to make sense of what has happened and selectively retrieve memory that supports the current outcome. This causal attribution can be motivated by wanting to feel more positive about the outcome and possibly themselves. Memory distortions Hindsight bias has similarities to other memory distortions, such as misinformation effect and false autobiographical memory. Misinformation effect occurs after an event is witnessed; new information received after the fact influences how the person remembers the event, and can be called post-event misinformation. This is an important issue with an eyewitness testimony. False autobiographical memory takes place when suggestions or additional outside information is provided to distort and change memory of events; this can also lead to false memory syndrome. At times this can lead to creation of new memories that are completely false and have not taken place. All three of these memory distortions contain a three-stage procedure. The details of each procedure are different, but all three can result in some form of psychological manipulation and alteration of memory. Stage one is different between the three paradigms, although all involve an event, an event that has taken place (misinformation effect), an event that has not taken place (false autobiographical memory), and a judgment made by a person about an event that must be remembered (hindsight bias). Stage two consists of more information that is received by the person after the event has taken place. The new information given in hindsight bias is correct and presented upfront to the person, while the extra information for the other two memory distortions is wrong and presented in an indirect and possibly manipulative way. The third stage consists of recalling the starting information. The person must recall the original information with hindsight bias and misinformation effect, while a person that has a false autobiographical memory is expected to remember the incorrect information as a true memory. 
Cavillo (2013) tested whether there is a relationship between the amount of time the people performing the experiment gave the participants to respond and the participant's level of bias when recalling their initial judgements. The results showed that there is in fact a relationship; the hindsight bias index was greater among the participants who were asked to respond more rapidly than among the participants who were allowed more time to respond. Distortions of autobiographical memory produced by hindsight bias have also been used as a tool to study changes in students’ beliefs about paranormal phenomena after taking a university level skepticism course. In a study by Kane (2010), students in Kane's skepticism class rated their level of belief in a variety of paranormal phenomena at both the beginning and at the end of the course. At the end of the course they also rated what they remembered their level of belief had been at the beginning of the course. The critical finding was that not only did students reduce their average level of belief in paranormal phenomena by the end of the course, they also falsely remembered the level of belief they held at the beginning of the course, recalling a much lower level of belief than what they had initially rated. It is the latter finding that is a reflection of the operation of hindsight bias. To create a false autobiographical memory, the person must believe a memory that is not real. To seem real, the information must be influenced by their own personal judgments. There is no real episode of an event to remember, so this memory construction must be logical to that person's knowledge base. Hindsight bias and the misinformation effect recall a specific time and event; this is called an episodic memory process. These two memory distortions both use memory-based mechanisms that involve a memory trace that has been changed. Hippocampus activation takes place when an episodic memory is recalled. The memory is then available for alteration by new information. The person believes that the recalled information is the original memory trace, not an altered memory. This new memory is made from accurate information, and therefore the person does not have much motivation to admit that they were wrong originally by remembering the original memory. This can lead to motivated forgetting. Motivated forgetting Following a negative outcome of a situation, people do not want to accept responsibility. Instead of accepting their role in the event, they might either view themselves as caught up in a situation that was unforeseeable with them therefore not being the culprits (this is referred to as defensive processing) or view the situation as inevitable with there therefore being nothing that could have been done to prevent it (this is retroactive pessimism). Defensive processing involves less hindsight bias, as they are playing ignorant of the event. Retroactive pessimism makes use of hindsight bias after a negative, unwanted outcome. Events in life can be hard to control or predict. It is no surprise that people want to view themselves in a more positive light and do not want to take responsibility for situations they could have altered. This leads to hindsight bias in the form of retroactive pessimism to inhibit upward counterfactual thinking, instead interpreting the outcome as succumbing to an inevitable fate. 
This memory inhibition that prevents a person from recalling what really happened may lead to failure to accept mistakes, and therefore may make someone unable to learn and grow to prevent repeating the mistake. Hindsight bias can also lead to overconfidence in decisions without considering other options. Such people see themselves as persons who remember correctly, even though they are just forgetting that they were wrong. Avoiding responsibility is common among the human population. Examples are discussed below to show the regularity and severity of hindsight bias in society. Consequences Hindsight bias has both positive and negative consequences. The bias also plays a role in the process of decision-making within the medical field. Positive Positive consequences of hindsight bias is an increase in one's confidence and performance, as long as the bias distortion is reasonable and does not create overconfidence. Another positive consequence is that one's self-assurance of their knowledge and decision-making, even if it ends up being a poor decision, can be beneficial to others; allowing others to experience new things or to learn from those who made the poor decisions. Negative Hindsight bias decreases one's rational thinking because of when a person experiences strong emotions, which in turn decreases rational thinking. Another negative consequence of hindsight bias is the interference of one's ability to learn from experience, as a person is unable to look back on past decisions and learn from mistakes. A third consequence is a decrease in sensitivity toward a victim by the person who caused the wrongdoing. The person demoralizes the victim and does not allow for a correction of behaviors and actions. Medical decision-making Hindsight bias may lead to overconfidence and malpractice in regards to doctors. Hindsight bias and overconfidence is often attributed to the number of years of experience the doctor has. After a procedure, doctors may have a “knew it the whole time” attitude, when in reality they may not have actually known it. In an effort to avoid hindsight bias, doctors use a computer-based decision support system that help the doctor diagnose and treat their patients correctly and accurately. Visual hindsight bias Hindsight bias has also been found to affect judgments regarding the perception of visual stimuli, an effect referred to as the “I saw it all along” phenomenon. This effect has been demonstrated experimentally by presenting participants with initially very blurry images of celebrities. Participants then viewed the images as the images resolved to full clarity (Phase 1). Following Phase 1, participants predicted the level of blur at which a peer would be able to make an accurate identification of each celebrity. It was found that, now that the identity of the celebrities in each image was known, participants significantly overestimated the ease with which others would be able to identify the celebrities when the images were blurry. The phenomenon of visual hindsight bias has important implications for a form of malpractice litigation that occurs in the field of radiology. Typically, in these cases, a radiologist is charged with having failed to detect the presence of an abnormality that was actually present in a radiology image. During litigation, a different radiologist – who now knows that the image contains an abnormality – is asked to judge how likely it would be for a naive radiologist to have detected the abnormality during the initial reading of the image. 
This kind of judgment directly parallels the judgments made in hindsight bias studies. Consistent with the hindsight bias literature, it has been found that abnormalities are, in fact, more easily detected in hindsight than foresight. In the absence of controls for hindsight bias, testifying radiologists may overestimate the ease with which the abnormality would have been detected in foresight. Attempts to decrease Research suggests that people still exhibit the hindsight bias even when they are aware of it or possess the intention of eradicating it. There is no solution to eliminate hindsight bias in its totality, but only ways to reduce it. Some of these include considering alternative explanations or opening one's mind to different perspectives. The only observable way to decrease hindsight bias in testing is to have the participant think about how alternative hypotheses could be correct. As a result, the participant would doubt the correct hypothesis and report not having chosen it. Given that researchers' attempts to eliminate hindsight bias have failed, some believe there is a possible combination of motivational and automatic processes in cognitive reconstruction. Incentive prompts participants to use more effort to recover even the weak memory traces. This idea supports the causal model theory and the use of sense-making to understand event outcomes. Mental illness Schizophrenia Schizophrenia is an example of a disorder that directly affects the hindsight bias. Individuals with schizophrenia are more strongly affected by the hindsight bias than are individuals from the general public. The hindsight bias effect is a paradigm that demonstrates how recently acquired knowledge influences the recollection of past information. Recently acquired knowledge has a strange but strong influence on schizophrenic individuals in relation to information previously learned. New information combined with rejection of past memories can disconfirm behavior and delusional belief, which is typically found in patients suffering from schizophrenia. This can cause faulty memory, which can lead to hindsight thinking and believing in knowing something they don't. Delusion-prone individuals suffering from schizophrenia can falsely jump to conclusions. Jumping to conclusions can lead to hindsight, which strongly influences the delusional conviction in individuals with schizophrenia. In numerous studies, cognitive functional deficits in schizophrenic individuals impair their ability to represent and uphold contextual processing. Post-traumatic stress disorder Post-traumatic stress disorder (PTSD) is the re-experiencing and avoidance of trauma-related stressors, emotions, and memories from a past event or events that has cognitive dramatizing impact on an individual. PTSD can be attributed to the functional impairment of the prefrontal cortex (PFC) structure. Dysfunctions of cognitive processing of context and abnormalities that PTSD patients suffer from can affect hindsight thinking, such as in combat soldiers perceiving they could have altered outcomes of events in war. The PFC and dopamine systems are parts of the brain that can be responsible for the impairment in cognitive control processing of context information. The PFC is well known for controlling the thought process in hindsight bias that something will happen when it evidently does not. Brain impairment in certain brain regions can also affect the thought process of an individual who may engage in hindsight thinking. 
Cognitive flashbacks and other associated features from a traumatic event can trigger severe stress and negative emotions such as unpardonable guilt. For example, studies were done on trauma-related guilt characteristics of war veterans with chronic PTSD. Although there has been limited research, significant data suggests that hindsight bias has an effect on war veterans' personal perception of wrongdoing, in terms of guilt and responsibility from traumatic events of war. They blame themselves, and, in hindsight, perceive that they could have prevented what happened. Examples Healthcare system Accidents are prone to happen in any human undertaking, but accidents occurring within the healthcare system seem more salient and severe because of their profound effect on the lives of those involved and sometimes result in the death of a patient. In the healthcare system, there are a number of methods in which specific cases of accidents that happened being reviewed by others who already know the outcome of the case. Those methods include morbidity and mortality conferences, autopsies, case analysis, medical malpractice claims analysis, staff interviews, and even patient observation. Hindsight bias has been shown to cause difficulties in measuring errors in these cases. Many of the errors are considered preventable after the fact, which clearly indicates the presence and the importance of a hindsight bias in this field. There are two sides in the debate in how these case reviews should be approached to best evaluate past cases: the error elimination strategy and the safety management strategy. The error elimination strategy aims to find the cause of errors, relying heavily on hindsight (therefore more subject to the hindsight bias). The safety management strategy relies less on hindsight (less subject to hindsight bias) and identifies possible constraints during the decision making process of that case. However, it is not immune to error. Judicial system Hindsight bias results in being held to a higher standard in court. The defense is particularly susceptible to these effects since their actions are the ones being scrutinized by the jury. The hindsight bias causes defendants to be judged as capable of preventing the bad outcome. Although much stronger for the defendants, hindsight bias also affects the plaintiffs. In cases that there is an assumption of risk, hindsight bias may contribute to the jurors perceiving the event as riskier because of the poor outcome. That may lead the jury to feel that the plaintiff should have exercised greater caution in the situation. Both effects can be minimized if attorneys put the jury in a position of foresight, rather than hindsight, through the use of language and timelines. Judges and juries are likely to mistakenly view negative events as being more foreseeable than what it really was in the moment when they look at the situation after the fact in court. Encouraging people to explicitly think about the counterfactuals was an effective means of reducing the hindsight bias. In other words, people became less attached to the actual outcome and were more open to consider alternative lines of reasoning prior to the event. Judges involved in fraudulent transfer litigation cases were subject to the hindsight bias as well and result in an unfair advantage for the plaintiff, showing that jurors are not the only ones sensitive to the effects of the hindsight bias in the courtroom. 
Wikipedia Since hindsight bias leads people to focus on information that is consistent with what happened, while inconsistent information is ignored or regarded as less relevant, hindsight bias is also likely to appear in collectively produced representations of the past, such as Wikipedia articles. In a study of Wikipedia articles, the latest article versions before an event (foresight article versions) were compared to two hindsight article versions: the first online after the event took place and another one eight weeks later. To be able to investigate various types of events, including disasters (such as the nuclear disaster at Fukushima) for which foresight articles do not exist, the authors made use of articles about the structure that suffered damage in those instances (such as the article about the Fukushima nuclear power plant). When analyzing to what extent the articles were suggestive of a particular event, they found only articles about disasters to be much more suggestive of the disaster in hindsight than in foresight, which indicated hindsight bias. For the remaining event categories, however, Wikipedia articles did not show any hindsight bias. In an attempt to compare individuals' and Wikipedia's hindsight bias more directly, another study came to the conclusion that Wikipedia articles are less susceptible to hindsight bias than individuals' representations. See also References Further reading Excerpt from: David G. Myers, Exploring Social Psychology. New York: McGraw-Hill, 1994, pp. 15–19. (More discussion of Paul Lazarsfeld's experimental questions.) Ken Fisher, Forecasting (Macro and Micro) and Future Concepts, on Market Analysis (4/7/06). Iraq War Naysayers May Have Hindsight Bias. Shankar Vedantam. The Washington Post. Why Hindsight Can Damage Foresight. Paul Goodwin. Foresight: The International Journal of Applied Forecasting, Spring 2010. Social Cognition (2007) Vol. 25, Special Issue: The Hindsight Bias. Cognitive biases Memory biases Prospect theory Error
A mental image or mental picture is an experience that, on most occasions, significantly resembles the experience of visually perceiving some object, event, or scene, but occurs when the relevant object, event, or scene is not actually present to the senses. There are sometimes episodes, particularly on falling asleep (hypnagogic imagery) and waking up (hypnopompic), when the mental imagery, being of a rapid, phantasmagoric and involuntary character, defies perception, presenting a kaleidoscopic field, in which no distinct object can be discerned. Mental imagery can sometimes produce the same effects as would be produced by the behavior or experience imagined. The nature of these experiences, what makes them possible, and their function (if any) have long been subjects of research and controversy in philosophy, psychology, cognitive science, and, more recently, neuroscience. As contemporary researchers use the expression, mental images or imagery can comprise information from any source of sensory input; one may experience auditory images, olfactory images, and so forth. However, the majority of philosophical and scientific investigations of the topic focus upon visual mental imagery. It has sometimes been assumed that, like humans, some types of animals are capable of experiencing mental images. Due to the fundamentally introspective nature of the phenomenon, there is little to no evidence either for or against this view. Philosophers such as George Berkeley and David Hume, and early experimental psychologists such as Wilhelm Wundt and William James, understood ideas in general to be mental images. Today it is very widely believed that much imagery functions as mental representations (or mental models), playing an important role in memory and thinking. William Brant (2013, p. 12) traces the scientific use of the phrase "mental images" back to John Tyndall's 1870 speech called the "Scientific Use of the Imagination". Some have gone so far as to suggest that images are best understood to be, by definition, a form of inner, mental or neural representation; in the case of hypnagogic and hypnopompic imagery, however, it is not representational at all. Others reject the view that the image experience may be identical with (or directly caused by) any such representation in the mind or the brain, but do not take account of the non-representational forms of imagery. The mind's eye The notion of a "mind's eye" goes back at least to Cicero's reference to mentis oculi during his discussion of the orator's appropriate use of simile. In this discussion, Cicero observed that allusions to "the Syrtis of his patrimony" and "the Charybdis of his possessions" involved similes that were "too far-fetched"; and he advised the orator to, instead, just speak of "the rock" and "the gulf" (respectively), on the grounds that "the eyes of the mind are more easily directed to those objects which we have seen, than to those which we have only heard". The concept of "the mind's eye" first appeared in English in Chaucer's (c. 1387) Man of Law's Tale in his Canterbury Tales, where he tells us that one of the three men dwelling in a castle was blind, and could only see with "the eyes of his mind"; namely, those eyes "with which all men see after they have become blind". Physical basis The biological foundation of the mind's eye is not fully understood.
Studies using fMRI have shown that the lateral geniculate nucleus and the V1 area of the visual cortex are activated during mental imagery tasks. Ratey writes: The visual pathway is not a one-way street. Higher areas of the brain can also send visual input back to neurons in lower areas of the visual cortex. [...] As humans, we have the ability to see with the mind's eye—to have a perceptual experience in the absence of visual input. For example, PET scans have shown that when subjects, seated in a room, imagine they are at their front door starting to walk either to the left or right, activation begins in the visual association cortex, the parietal cortex, and the prefrontal cortex—all higher cognitive processing centers of the brain. The rudiments of a biological basis for the mind's eye are found in the deeper portions of the brain, below the neocortex, where the center of perception exists. The thalamus has been found to be distinct from other components in that it processes all forms of perceptual data relayed from both lower and higher components of the brain. Damage to this component can produce permanent perceptual deficits; when damage is inflicted upon the cerebral cortex, however, the brain can draw on neuroplasticity to compensate for occlusions in perception. The neocortex can be thought of as a sophisticated memory storage warehouse in which data received as input from sensory systems are compartmentalized via the cerebral cortex. This would essentially allow shapes to be identified, although, given the lack of filtering of internally produced input, one may as a consequence hallucinate, essentially seeing something that is not received as an external input but is generated internally (i.e., an error in the filtering of segmented sensory data from the cerebral cortex may result in one seeing, feeling, hearing or experiencing something that is inconsistent with reality). Not all people have the same internal perceptual ability. For many, when the eyes are closed, the perception of darkness prevails. However, some people are able to perceive colorful, dynamic imagery. The use of hallucinogenic drugs increases the subject's ability to consciously access visual (and auditory, and other sense) percepts. Furthermore, the pineal gland is a hypothetical candidate for producing a mind's eye. Rick Strassman and others have postulated that during near-death experiences (NDEs) and dreaming, the gland might secrete the hallucinogenic chemical N,N-dimethyltryptamine (DMT) to produce internal visuals when external sensory data is occluded. However, this hypothesis has yet to be supported with neurochemical evidence and a plausible mechanism for DMT production. The condition in which a person lacks mental imagery is called aphantasia. The term was first suggested in a 2015 study. Common examples of mental images include daydreaming and the mental visualization that occurs while reading a book. Another is the picture summoned by athletes during training or before a competition, outlining each step they will take to accomplish their goal. When a musician hears a song, they can sometimes "see" the song notes in their head, as well as hear them with all their tonal qualities. This is considered different from an after-effect, such as an afterimage. Calling up an image in our minds can be a voluntary act, so it can be characterized as being under various degrees of conscious control.
According to psychologist and cognitive scientist Steven Pinker, our experiences of the world are represented in our minds as mental images. These mental images can then be associated and compared with others, and can be used to synthesize completely new images. In this view, mental images allow us to form useful theories of how the world works by formulating likely sequences of mental images in our heads without having to directly experience that outcome. Whether other creatures have this capability is debatable. There are several theories as to how mental images are formed in the mind. These include the dual-code theory, the propositional theory, and the functional-equivalency hypothesis. The dual-code theory, created by Allan Paivio in 1971, is the theory that we use two separate codes to represent information in our brains: image codes and verbal codes. Image codes are things like thinking of a picture of a dog when you are thinking of a dog, whereas a verbal code would be to think of the word "dog". Another example is the difference between thinking of abstract words such as justice or love and thinking of concrete words like elephant or chair. When abstract words are thought of, it is easier to think of them in terms of verbal codes, finding words that define or describe them. With concrete words, it is often easier to use image codes and bring up a picture of a human or a chair in your mind rather than words associated with or descriptive of them. The propositional theory involves storing images in the form of a generic propositional code that stores the meaning of the concept, not the image itself. The propositional codes can either be descriptive of the image or symbolic. They are then transferred back into verbal and visual code to form the mental image. The functional-equivalency hypothesis is that mental images are "internal representations" that work in the same way as the actual perception of physical objects. In other words, the picture of a dog brought to mind when the word dog is read is interpreted in the same way as if the person were looking at an actual dog before them. Research has sought to designate a specific neural correlate of imagery; however, studies show a multitude of results. Most studies published before 2001 suggest neural correlates of visual imagery occur in Brodmann area 17. Auditory performance imagery has been observed in the premotor areas, the precuneus, and medial Brodmann area 40. Auditory imagery in general occurs across participants in the temporal voice area (TVA), which allows top-down imagery manipulation, processing, and storage of auditory functions. Olfactory imagery research shows activation in the anterior piriform cortex and the posterior piriform cortex; experts in olfactory imagery show greater gray matter volume in olfactory areas. Tactile imagery is found to occur in the dorsolateral prefrontal area, inferior frontal gyrus, frontal gyrus, insula, precentral gyrus, and the medial frontal gyrus, with basal ganglia activation in the ventral posteromedial nucleus and putamen (hemisphere activation corresponds to the location of the imagined tactile stimulus). Research in gustatory imagery reveals activation in the anterior insular cortex, frontal operculum, and prefrontal cortex. Novices in a specific form of mental imagery show less gray matter than experts in the congruent form of mental imagery.
A meta-analysis of neuroimagery studies revealed significant activation of the bilateral dorsal parietal, interior insula, and left inferior frontal regions of the brain. Imagery has been thought to co-occur with perception; however, participants whose receptors for a given sense modality are damaged can sometimes still perform imagery in that modality. Imagery-based neuroscience methods have been used to communicate with seemingly unconscious individuals through fMRI activation of different neural correlates of imagery, a finding that calls for further study of reduced states of consciousness. A study on one patient with one occipital lobe removed found the horizontal area of their visual mental image was reduced. Neural substrates of visual imagery Visual imagery is the ability to create mental representations of things, people, and places that are absent from an individual's visual field. This ability is crucial to problem-solving tasks, memory, and spatial reasoning. Neuroscientists have found that imagery and perception share many of the same neural substrates, or areas of the brain that function similarly during both imagery and perception, such as the visual cortex and higher visual areas. Kosslyn and colleagues (1999) showed that the early visual cortex, Area 17 and Area 18/19, is activated during visual imagery. They found that inhibition of these areas through repetitive transcranial magnetic stimulation (rTMS) resulted in impaired visual perception and imagery. Furthermore, research conducted with lesioned patients has revealed that visual imagery and visual perception have the same representational organization. This has been concluded from patients in whom impaired perception is accompanied by visual imagery deficits at the same level of mental representation. Behrmann and colleagues (1992) describe a patient, C.K., who provided evidence challenging the view that visual imagery and visual perception rely on the same representational system. C.K. was a 33-year-old man with visual object agnosia acquired after a vehicular accident. This deficit prevented him from being able to recognize objects and copy objects fluidly. Surprisingly, his ability to draw accurate objects from memory indicated his visual imagery was intact and normal. Furthermore, C.K. successfully performed other tasks requiring visual imagery for judgment of size, shape, color, and composition. These findings conflict with previous research as they suggest there is a partial dissociation between visual imagery and visual perception. C.K. exhibited a perceptual deficit that was not associated with a corresponding deficit in visual imagery, indicating that these two processes have systems for mental representations that may not be mediated entirely by the same neural substrates. Schlegel and colleagues (2013) conducted a functional MRI analysis of regions activated during manipulation of visual imagery. They identified 11 bilateral cortical and subcortical regions that exhibited increased activation when manipulating a visual image compared to when the visual image was just maintained. These regions included the occipital lobe and ventral stream areas, two parietal lobe regions (the posterior parietal cortex and the precuneus lobule), and three frontal lobe regions (the frontal eye fields, the dorsolateral prefrontal cortex, and the prefrontal cortex).
Due to their suspected involvement in working memory and attention, the authors propose that these parietal and prefrontal regions, and occipital regions, are part of a network involved in mediating the manipulation of visual imagery. These results suggest a top-down activation of visual areas in visual imagery. Using Dynamic Causal Modeling (DCM) to determine the connectivity of cortical networks, Ishai et al. (2010) demonstrated that activation of the network mediating visual imagery is initiated by prefrontal cortex and posterior parietal cortex activity. Generation of objects from memory resulted in initial activation of the prefrontal and the posterior parietal areas, which then activate earlier visual areas through backward connectivity. Activation of the prefrontal cortex and posterior parietal cortex has also been found to be involved in retrieval of object representations from long-term memory, their maintenance in working memory, and attention during visual imagery. Thus, Ishai et al. suggest that the network mediating visual imagery is composed of attentional mechanisms arising from the posterior parietal cortex and the prefrontal cortex. Vividness of visual imagery is a crucial component of an individual’s ability to perform cognitive tasks requiring imagery. Vividness of visual imagery varies not only between individuals but also within individuals. Dijkstra and colleagues (2017) found that the variation in vividness of visual imagery is dependent on the degree to which the neural substrates of visual imagery overlap with those of visual perception. They found that overlap between imagery and perception in the entire visual cortex, the parietal precuneus lobule, the right parietal cortex, and the medial frontal cortex predicted the vividness of a mental representation. The activated regions beyond the visual areas are believed to drive the imagery-specific processes rather than the visual processes shared with perception. It has been suggested that the precuneus contributes to vividness by selecting important details for imagery. The medial frontal cortex is suspected to be involved in the retrieval and integration of information from the parietal and visual areas during working memory and visual imagery. The right parietal cortex appears to be important in attention, visual inspection, and stabilization of mental representations. Thus, the neural substrates of visual imagery and perception overlap in areas beyond the visual cortex and the degree of this overlap in these areas correlates with the vividness of mental representations during imagery. Philosophical ideas Mental images are an important topic in classical and modern philosophy, as they are central to the study of knowledge. In the Republic, Book VII, Plato has Socrates present the Allegory of the Cave: a prisoner, bound and unable to move, sits with his back to a fire watching the shadows cast on the cave wall in front of him by people carrying objects behind his back. These people and the objects they carry are representations of real things in the world. Unenlightened man is like the prisoner, explains Socrates, a human being making mental images from the sense data that he experiences. The eighteenth-century philosopher Bishop George Berkeley proposed similar ideas in his theory of idealism. Berkeley stated that reality is equivalent to mental images—our mental images are not a copy of another material reality but that reality itself. 
Berkeley, however, sharply distinguished between the images that he considered to constitute the external world, and the images of individual imagination. According to Berkeley, only the latter are considered "mental imagery" in the contemporary sense of the term. The eighteenth-century British writer Dr. Samuel Johnson criticized idealism. When asked what he thought about idealism, he is alleged to have replied "I refute it thus!" as he kicked a large rock and his leg rebounded. His point was that the idea that the rock is just another mental image and has no material existence of its own is a poor explanation of the painful sense data he had just experienced. David Deutsch addresses Johnson's objection to idealism in The Fabric of Reality when he states that, if we judge the value of our mental images of the world by the quality and quantity of the sense data that they can explain, then the most valuable mental image, or theory, that we currently have is that the world has a real independent existence and that humans have successfully evolved by building up and adapting patterns of mental images to explain it. This is an important idea in scientific thought. Critics of scientific realism ask how the inner perception of mental images actually occurs. This is sometimes called the "homunculus problem" (see also the mind's eye). The problem is similar to asking how the images you see on a computer screen exist in the memory of the computer. To scientific materialism, mental images and the perception of them must be brain-states. According to critics, scientific realists cannot explain where the images and their perceiver exist in the brain. To use the analogy of the computer screen, these critics argue that cognitive science and psychology have been unsuccessful in identifying either the component in the brain (i.e., "hardware") or the mental processes that store these images (i.e., "software"). In experimental psychology Cognitive psychologists and (later) cognitive neuroscientists have empirically tested some of the philosophical questions related to whether and how the human brain uses mental imagery in cognition. One theory of the mind that was examined in these experiments was the "brain as serial computer" philosophical metaphor of the 1970s. Psychologist Zenon Pylyshyn theorized that the human mind processes mental images by decomposing them into an underlying mathematical proposition. Roger Shepard and Jacqueline Metzler challenged that view by presenting subjects with 2D line drawings of groups of 3D block "objects" and asking them to determine whether that "object" was the same as a second figure, some of which were rotations of the first "object". Shepard and Metzler proposed that if we decomposed and then mentally re-imaged the objects into basic mathematical propositions, as the then-dominant view of cognition "as a serial digital computer" assumed, then it would be expected that the time it took to determine whether the object is the same or not would be independent of how much the object had been rotated. Shepard and Metzler found the opposite: a linear relationship between the degree of rotation in the mental imagery task and the time it took participants to reach their answer. This mental rotation finding implied that the human mind, and the human brain, maintains and manipulates mental images as topographic and topological wholes, an implication that was quickly put to test by psychologists.
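The shape of Shepard and Metzler's finding can be illustrated with a small numerical sketch. The Python snippet below uses invented response times (not their data) in which time grows linearly with angular disparity plus noise, then fits a least-squares line; a clearly positive, roughly constant slope is the pattern that suggested participants rotate an image-like whole at a steady rate, whereas the proposition-based account would predict a slope near zero. All values and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mental-rotation data: response time grows with angular disparity.
angles_deg = np.repeat(np.arange(0, 181, 20), 10)   # 0-180 degrees, 10 simulated trials each
assumed_rate_ms_per_deg = 18.0                       # illustrative "rotation rate", not a measured value
base_time_ms = 1000.0                                # illustrative encoding/response overhead
rt_ms = (base_time_ms
         + assumed_rate_ms_per_deg * angles_deg
         + rng.normal(0.0, 250.0, angles_deg.size))  # trial-to-trial noise

# Least-squares fit of RT = intercept + slope * angle
slope, intercept = np.polyfit(angles_deg, rt_ms, deg=1)
r = np.corrcoef(angles_deg, rt_ms)[0, 1]

print(f"fitted slope: {slope:.1f} ms per degree")
print(f"fitted intercept: {intercept:.0f} ms")
print(f"angle-RT correlation: r = {r:.2f}")
```

Under these assumptions the fitted slope recovers something close to the simulated rotation rate, which is the linear angle-to-time relationship described above; the numbers themselves carry no empirical weight.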
Stephen Kosslyn and colleagues showed in a series of neuroimaging experiments that mental images of objects like the letter "F" are mapped, maintained and rotated as image-like wholes in areas of the human visual cortex. Moreover, Kosslyn's work showed that there are considerable similarities between the neural mappings for imagined stimuli and perceived stimuli. The authors of these studies concluded that, while the neural processes they studied rely on mathematical and computational underpinnings, the brain also seems optimized to handle the sort of mathematics that constantly computes a series of topologically-based images rather than calculating a mathematical model of an object. Recent studies in neurology and neuropsychology on mental imagery have further questioned the "mind as serial computer" theory, arguing instead that human mental imagery manifests both visually and kinesthetically. For example, several studies have provided evidence that people are slower at rotating line drawings of objects such as hands in directions incompatible with the joints of the human body, and that patients with painful, injured arms are slower at mentally rotating line drawings of the hand from the side of the injured arm. Some psychologists, including Kosslyn, have argued that such results occur because of interference between distinct systems in the brain that process visual and motor mental imagery. Subsequent neuroimaging studies showed that the interference between the motor and visual imagery systems could be induced by having participants physically handle actual 3D blocks glued together to form objects similar to those depicted in the line-drawings. Amorim et al. have shown that, when a cylindrical "head" was added to Shepard and Metzler's line drawings of 3D block figures, participants were quicker and more accurate at solving mental rotation problems. They argue that motoric embodiment is not just "interference" that inhibits visual mental imagery but is capable of facilitating mental imagery. As cognitive neuroscience approaches to mental imagery continued, research expanded beyond questions of serial versus parallel or topographic processing to questions of the relationship between mental images and perceptual representations. Both brain imaging (fMRI and ERP) and studies of neuropsychological patients have been used to test the hypothesis that a mental image is the reactivation, from memory, of brain representations normally activated during the perception of an external stimulus. In other words, if perceiving an apple activates contour and location and shape and color representations in the brain's visual system, then imagining an apple activates some or all of these same representations using information stored in memory. Early evidence for this idea came from neuropsychology. Patients with brain damage that impairs perception in specific ways, for example by damaging shape or color representations, seem generally to have impaired mental imagery in similar ways. Studies of brain function in normal human brains support this same conclusion, showing activity in the brain's visual areas while subjects imagined visual objects and scenes. The previously mentioned and numerous related studies have led to a relative consensus within cognitive science, psychology, neuroscience, and philosophy on the neural status of mental images.
In general, researchers agree that, while there is no homunculus inside the head viewing these mental images, our brains do form and maintain mental images as image-like wholes. The problem of exactly how these images are stored and manipulated within the human brain, in particular within language and communication, remains a fertile area of study. One of the longest-running research topics on the mental image is based on the fact that people report large individual differences in the vividness of their images. Special questionnaires have been developed to assess such differences, including the Vividness of Visual Imagery Questionnaire (VVIQ) developed by David Marks. Laboratory studies have suggested that the subjectively reported variations in imagery vividness are associated with different neural states within the brain and also with different cognitive competences, such as the ability to accurately recall information presented in pictures. Rodway, Gillies and Schepman used a novel long-term change detection task to determine whether participants with low and high vividness scores on the VVIQ2 showed any performance differences. Rodway et al. found that high-vividness participants were significantly more accurate at detecting salient changes to pictures compared to low-vividness participants. This replicated an earlier study. Recent studies have found that individual differences in VVIQ scores can be used to predict changes in a person's brain while visualizing different activities. Functional magnetic resonance imaging (fMRI) was used to study the association between reported image vividness and early visual cortex activity relative to whole-brain activity while participants visualized themselves or another person bench pressing or stair climbing. Reported image vividness correlated significantly with the relative fMRI signal in the visual cortex. Thus, individual differences in the vividness of visual imagery can be measured objectively. Logie, Pernet, Buonocore and Della Sala (2011) used behavioural and fMRI data for mental rotation from individuals reporting vivid and poor imagery on the VVIQ. The groups differed in brain activation patterns, suggesting that they performed the same tasks in different ways. These findings help to explain the lack of association previously reported between VVIQ scores and mental rotation performance. Training and learning styles Some educational theorists have drawn from the idea of mental imagery in their studies of learning styles. Proponents of these theories state that people often have learning processes that emphasize visual, auditory, and kinesthetic systems of experience. According to these theorists, teaching in multiple overlapping sensory systems benefits learning, and they encourage teachers to use content and media that integrate well with the visual, auditory, and kinesthetic systems whenever possible. Educational researchers have examined whether the experience of mental imagery affects the degree of learning. For example, imagining playing a five-finger piano exercise (mental practice) resulted in a significant improvement in performance over no mental practice, though not as significant as that produced by physical practice. The authors of the study stated that "mental practice alone seems to be sufficient to promote the modulation of neural circuits involved in the early stages of motor skill learning".
Visualization and the Himalayan traditions In general, Vajrayana Buddhism, Bön, and Tantra utilize sophisticated visualization or imaginal (in the language of Jean Houston of Transpersonal Psychology) processes in the thoughtform construction of the yidam sadhana, kye-rim, and dzog-rim modes of meditation and in the yantra, thangka, and mandala traditions, where holding the fully realized form in the mind is a prerequisite prior to creating an 'authentic' new art work that will provide a sacred support or foundation for deity. Substitution effects Mental imagery can act as a substitute for the imagined experience: imagining an experience can evoke similar cognitive, physiological, and/or behavioral consequences as having the corresponding experience in reality. At least four classes of such effects have been documented. Imagined experiences are attributed evidentiary value like physical evidence. Mental practice can instantiate the same performance benefits as physical practice and can reduce central neuropathic pain. Imagined consumption of a food can reduce its actual consumption. Imagined goal achievement can reduce motivation for actual goal achievement. See also Aphantasia (condition whereby people can't think with mental images at all) Animal cognition Audiation (imaginary sound) Cognition Creative visualization Fantasy (psychology) Fantasy prone personality Guided imagery Imagination Internal monologue Mental event Mental rotation Mind Motor imagery Spatial ability Tulpa Visual space References Further reading Albert, J.-M. Mental Image and Representation. (French text by Jean-Max Albert and translation by H. Arnold) Paris: Mercier & Associés, 2018. Amorim, Michel-Ange, Brice Isableu and Mohammed Jarraya (2006). Embodied Spatial Transformations: "Body Analogy" for the Mental Rotation. Journal of Experimental Psychology: General. Bennett, M.R. & Hacker, P.M.S. (2003). Philosophical Foundations of Neuroscience. Oxford: Blackwell. Brant, W. (2013). Mental Imagery and Creativity: Cognition, Observation and Realization. Saarbrücken, Germany: Akademikerverlag. 227 pp. Egan, Kieran (1992). Imagination in Teaching and Learning. Chicago: University of Chicago Press. Finke, R.A. (1989). Principles of Mental Imagery. Cambridge, MA: MIT Press. Gardner, Howard (1987). The Mind's New Science: A History of the Cognitive Revolution. New York: Basic Books. Kosslyn, Stephen M. (1983). Ghosts in the Mind's Machine: Creating and Using Images in the Brain. New York: Norton. Kosslyn, Stephen (1994). Image and Brain: The Resolution of the Imagery Debate. Cambridge, MA: MIT Press. McGabhann, R. & Squires, B. (2003). Releasing The Beast Within – A Path to Mental Toughness. Granite Publishing, Australia. McKellar, Peter (1957). Imagination and Thinking. London: Cohen & West. Paivio, Allan (1986). Mental Representations: A Dual Coding Approach. New York: Oxford University Press. Pascual-Leone, Alvaro, Nguyet Dang, Leonardo G. Cohen, Joaquim P. Brasil-Neto, Angel Cammarota, and Mark Hallett (1995). Modulation of Muscle Responses Evoked by Transcranial Magnetic Stimulation During the Acquisition of New Fine Motor Skills. Journal of Neuroscience. Prinz, J.J. (2002). Furnishing the Mind: Concepts and their Perceptual Basis. Boston, MA: MIT Press. Reisberg, Daniel (Ed.) (1992). Auditory Imagery. Hillsdale, NJ: Erlbaum. Richardson, A. (1969). Mental Imagery. London: Routledge & Kegan Paul. Rohrer, T. (2006).
The Body in Space: Embodiment, Experientialism and Linguistic Conceptualization. In Body, Language and Mind, vol. 2. Zlatev, Jordan; Ziemke, Tom; Frank, Roz; Dirven, René (eds.). Berlin: Mouton de Gruyter. Ryle, G. (1949). The Concept of Mind. London: Hutchinson. Sartre, J.-P. (1940). The Psychology of Imagination. (Translated from the French by B. Frechtman, New York: Philosophical Library, 1948.) Skinner, B.F. (1974). About Behaviorism. New York: Knopf. Thomas, N.J.T. (2003). Mental Imagery, Philosophical Issues About. In L. Nadel (Ed.), Encyclopedia of Cognitive Science (Volume 2, pp. 1147–1153). London: Nature Publishing/Macmillan. External links Mental Imagery in the Stanford Encyclopedia of Philosophy Concepts Imagination Memory
Medicine is the science and practice of caring for a patient, managing the diagnosis, prognosis, prevention, treatment, palliation of their injury or disease, and promoting their health. Medicine encompasses a variety of health care practices evolved to maintain and restore health by the prevention and treatment of illness. Contemporary medicine applies biomedical sciences, biomedical research, genetics, and medical technology to diagnose, treat, and prevent injury and disease, typically through pharmaceuticals or surgery, but also through therapies as diverse as psychotherapy, external splints and traction, medical devices, biologics, and ionizing radiation, amongst others. Medicine has been practiced since prehistoric times, during most of which it was an art (an area of skill and knowledge) frequently having connections to the religious and philosophical beliefs of local culture. For example, a medicine man would apply herbs and say prayers for healing, or an ancient philosopher and physician would apply bloodletting according to the theories of humorism. In recent centuries, since the advent of modern science, most medicine has become a combination of art and science (both basic and applied, under the umbrella of medical science). While stitching technique for sutures is an art learned through practice, the knowledge of what happens at the cellular and molecular level in the tissues being stitched arises through science. Prescientific forms of medicine are now known as traditional medicine or folk medicine; they remain commonly used in the absence of scientific medicine and are therefore also described as alternative medicine. Alternative treatments outside of scientific medicine with safety and efficacy concerns are termed quackery. Etymology Medicine is the science and practice of the diagnosis, prognosis, treatment, and prevention of disease. The word "medicine" is derived from Latin medicus, meaning "a physician". Clinical practice Medical availability and clinical practice vary across the world due to regional differences in culture and technology. Modern scientific medicine is highly developed in the Western world, while in developing countries such as parts of Africa or Asia, the population may rely more heavily on traditional medicine with limited evidence and efficacy and no required formal training for practitioners. In the developed world, evidence-based medicine is not universally used in clinical practice; for example, a 2007 survey of literature reviews found that about 49% of the interventions lacked sufficient evidence to support either benefit or harm. In modern clinical practice, physicians and physician assistants personally assess patients in order to diagnose, prognose, treat, and prevent disease using clinical judgment. The doctor-patient relationship typically begins with an examination of the patient's medical history and medical record, followed by a medical interview and a physical examination. Basic diagnostic medical devices (e.g. stethoscope, tongue depressor) are typically used. After examination for signs and interviewing for symptoms, the doctor may order medical tests (e.g. blood tests), take a biopsy, or prescribe pharmaceutical drugs or other therapies. Differential diagnosis methods help to rule out conditions based on the information provided. During the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust.
The medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. Follow-ups may be shorter but follow the same general procedure, and specialists follow a similar process. The diagnosis and treatment may take only a few minutes or a few weeks depending upon the complexity of the issue. The components of the medical interview and encounter are: Chief complaint (CC): the reason for the current medical visit. These are the 'symptoms.' They are in the patient's own words and are recorded along with the duration of each one. Also called 'chief concern' or 'presenting complaint'. History of present illness (HPI): the chronological order of events of symptoms and further clarification of each symptom. Distinguishable from history of previous illness, often called past medical history (PMH). Medical history comprises HPI and PMH. Current activity: occupation, hobbies, what the patient actually does. Medications (Rx): what drugs the patient takes including prescribed, over-the-counter, and home remedies, as well as alternative and herbal medicines or remedies. Allergies are also recorded. Past medical history (PMH/PMHx): concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. Social history (SH): birthplace, residences, marital history, social and economic status, habits (including diet, medications, tobacco, alcohol). Family history (FH): listing of diseases in the family that may impact the patient. A family tree is sometimes used. Review of systems (ROS) or systems inquiry: a set of additional questions to ask, which may be missed on HPI: a general enquiry (have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc.), followed by questions on the body's main organ systems (heart, lungs, digestive tract, urinary tract, etc.). The physical examination is the examination of the patient for medical signs of disease, which are objective and observable, in contrast to symptoms that are volunteered by the patient and not necessarily objectively observable. The healthcare provider uses sight, hearing, touch, and sometimes smell (e.g., in infection, uremia, diabetic ketoacidosis). Four actions are the basis of physical examination: inspection, palpation (feel), percussion (tap to determine resonance characteristics), and auscultation (listen), generally in that order although auscultation occurs prior to percussion and palpation for abdominal assessments. The clinical examination involves the study of: Vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation General appearance of the patient and specific indicators of disease (nutritional status, presence of jaundice, pallor or clubbing) Skin Head, eye, ear, nose, and throat (HEENT) Cardiovascular (heart and blood vessels) Respiratory (large airways and lungs) Abdomen and rectum Genitalia (and pregnancy if the patient is or could be pregnant) Musculoskeletal (including spine and extremities) Neurological (consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves) Psychiatric (orientation, mental state, mood, evidence of abnormal perception or thought). It is to likely focus on areas of interest highlighted in the medical history and may not include everything listed above. 
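To make the structure of the documented history concrete, the sketch below models the components listed above (CC, HPI, medications, allergies, PMH, SH, FH, ROS) as a simple data structure. It is a hypothetical illustration only: the class and field names are invented for this example and do not correspond to any real electronic health record schema or standard.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MedicalHistory:
    """Hypothetical, simplified record of the interview components described above."""
    chief_complaint: str                                             # CC, in the patient's own words
    history_of_present_illness: str                                  # HPI: chronological symptom account
    medications: List[str] = field(default_factory=list)             # Rx, including OTC and herbal remedies
    allergies: List[str] = field(default_factory=list)
    past_medical_history: List[str] = field(default_factory=list)    # PMH: prior illnesses, operations
    social_history: str = ""                                         # SH: occupation, habits, living situation
    family_history: List[str] = field(default_factory=list)          # FH: diseases in the family
    review_of_systems: Dict[str, str] = field(default_factory=dict)  # ROS: organ system -> reported findings

# Example encounter with made-up details
encounter = MedicalHistory(
    chief_complaint="Cough for three days",
    history_of_present_illness="Dry cough starting three days ago, worse at night, no fever reported.",
    medications=["ibuprofen (over-the-counter)"],
    allergies=["penicillin"],
    review_of_systems={"respiratory": "cough, no shortness of breath", "general": "no weight loss"},
)
print(encounter.chief_complaint)
```

Grouping the chief complaint, history elements, and review of systems into one record mirrors how the interview findings feed the medical decision-making described below, but the representation itself is only a sketch.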
The treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. Follow-up may be advised. Depending upon the health insurance plan and the managed care system, various forms of "utilization review", such as prior authorization of tests, may place barriers on accessing expensive services. The medical decision-making (MDM) process involves analysis and synthesis of all the above data to come up with a list of possible diagnoses (the differential diagnoses), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient's problem. On subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, and lab or imaging results or specialist consultations. Institutions Contemporary medicine is in general conducted within health care systems. Legal, credentialing and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. The characteristics of any given health care system have significant impact on the way medical care is provided. From ancient times, Christian emphasis on practical charity gave rise to the development of systematic nursing and hospitals and the Catholic Church today remains the largest non-government provider of medical services in the world. Advanced industrial countries (with the exception of the United States) and many developing countries provide medical services through a system of universal health care that aims to guarantee care for all through a single-payer health care system, or compulsory private or co-operative health insurance. This is intended to ensure that the entire population has access to medical care on the basis of need rather than ability to pay. Delivery may be via private medical practices or by state-owned hospitals and clinics, or by charities, most commonly by a combination of all three. Most tribal societies provide no guarantee of healthcare for the population as a whole. In such societies, healthcare is available to those that can afford to pay for it or have self-insured it (either directly or as part of an employment contract) or who may be covered by care financed by the government or tribe directly. Transparency of information is another factor defining a delivery system. Access to information on conditions, treatments, quality, and pricing greatly affects the choice by patients/consumers and, therefore, the incentives of medical professionals. While the US healthcare system has come under fire for lack of openness, new legislation may encourage greater openness. There is a perceived tension between the need for transparency on the one hand and such issues as patient confidentiality and the possible exploitation of information for commercial gain on the other. The health professionals who provide care in medicine comprise multiple professions such as medics, nurses, physio therapists, and psychologists. These professions will have their own ethical standards, professional education, and bodies. The medical profession have been conceptualized from a sociological perspective. Delivery Provision of medical care is classified into primary, secondary, and tertiary care categories. 
Primary care medical services are provided by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. These occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. About 90% of medical visits can be treated by the primary care provider. These include treatment of acute and chronic illnesses, preventive care and health education for all ages and both sexes. Secondary care medical services are provided by medical specialists in their offices or clinics or at local community hospitals for a patient referred by a primary care provider who first diagnosed or treated the patient. Referrals are made for those patients who required the expertise or procedures performed by specialists. These include both ambulatory care and inpatient services, Emergency departments, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. Some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting. Tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. These include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high-risk pregnancy, radiation oncology, etc. Modern medical care also depends on information – still delivered in many health care settings on paper records, but increasingly nowadays by electronic means. In low-income countries, modern healthcare is often too expensive for the average person. International healthcare policy researchers have advocated that "user fees" be removed in these areas to ensure access, although even after removal, significant costs and barriers remain. Separation of prescribing and dispensing is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries, it is traditional for physicians to also provide drugs. Branches Working together as an interdisciplinary team, many highly trained health professionals besides medical practitioners are involved in the delivery of modern health care. Examples include: nurses, emergency medical technicians and paramedics, laboratory scientists, pharmacists, podiatrists, physiotherapists, respiratory therapists, speech therapists, occupational therapists, radiographers, dietitians, and bioengineers, medical physics, surgeons, surgeon's assistant, surgical technologist. The scope and sciences underpinning human medicine overlap many other fields. Dentistry, while considered by some a separate discipline from medicine, is a medical field. A patient admitted to the hospital is usually under the care of a specific team based on their main presenting problem, e.g., the cardiology team, who then may interact with other specialties, e.g., surgical, radiology, to help diagnose or treat the main problem or any subsequent complications/developments. Physicians have many specializations and subspecializations into certain branches of medicine, which are listed below. 
There are variations from country to country regarding which specialties certain subspecialties are in. The main branches of medicine are: Basic sciences of medicine; this is what every physician is educated in, and some return to in biomedical research Medical specialties Interdisciplinary fields, where different medical specialties are mixed to function in certain occasions. Basic sciences Anatomy is the study of the physical structure of organisms. In contrast to macroscopic or gross anatomy, cytology and histology are concerned with microscopic structures. Biochemistry is the study of the chemistry taking place in living organisms, especially the structure and function of their chemical components. Biomechanics is the study of the structure and function of biological systems by means of the methods of Mechanics. Biostatistics is the application of statistics to biological fields in the broadest sense. A knowledge of biostatistics is essential in the planning, evaluation, and interpretation of medical research. It is also fundamental to epidemiology and evidence-based medicine. Biophysics is an interdisciplinary science that uses the methods of physics and physical chemistry to study biological systems. Cytology is the microscopic study of individual cells. Embryology is the study of the early development of organisms. Endocrinology is the study of hormones and their effect throughout the body of animals. Epidemiology is the study of the demographics of disease processes, and includes, but is not limited to, the study of epidemics. Genetics is the study of genes, and their role in biological inheritance. Histology is the study of the structures of biological tissues by light microscopy, electron microscopy and immunohistochemistry. Immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example. Lifestyle medicine is the study of the chronic conditions, and how to prevent, treat and reverse them. Medical physics is the study of the applications of physics principles in medicine. Microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses. Molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. Neuroscience includes those disciplines of science that are related to the study of the nervous system. A main focus of neuroscience is the biology and physiology of the human brain and spinal cord. Some related clinical specialties include neurology, neurosurgery and psychiatry. Nutrition science (theoretical focus) and dietetics (practical focus) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. Medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases. Pathology as a science is the study of disease—the causes, course, progression and resolution thereof. Pharmacology is the study of drugs and their actions. Gynecology is the study of female reproductive system. Photobiology is the study of the interactions between non-ionizing radiation and living organisms. Physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms. Radiobiology is the study of the interactions between ionizing radiation and living organisms. Toxicology is the study of hazardous effects of drugs and poisons. 
Specialties In the broadest meaning of "medicine", there are many different specialties. In the UK, most specialities have their own body or college, which has its own entrance examination. These are collectively known as the Royal Colleges, although not all currently use the term "Royal". The development of a speciality is often driven by new technology (such as the development of effective anaesthetics) or ways of working (such as emergency departments); the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination. Within medical circles, specialities usually fit into one of two broad categories: "Medicine" and "Surgery". "Medicine" refers to the practice of non-operative medicine, and most of its subspecialties require preliminary training in Internal Medicine. In the UK, this was traditionally evidenced by passing the examination for the Membership of the Royal College of Physicians (MRCP) or the equivalent college in Scotland or Ireland. "Surgery" refers to the practice of operative medicine, and most subspecialties in this area require preliminary training in General Surgery, which in the UK leads to membership of the Royal College of Surgeons of England (MRCS). At present, some specialties of medicine do not fit easily into either of these categories, such as radiology, pathology, or anesthesia. Most of these have branched from one or other of the two camps above; for example anaesthesia developed first as a faculty of the Royal College of Surgeons (for which MRCS/FRCS would have been required) before becoming the Royal College of Anaesthetists and membership of the college is attained by sitting for the examination of the Fellowship of the Royal College of Anesthetists (FRCA). Surgical specialty Surgery is an ancient medical specialty that uses operative manual and instrumental techniques on a patient to investigate or treat a pathological condition such as disease or injury, to help improve bodily function or appearance or to repair unwanted ruptured areas (for example, a perforated ear drum). Surgeons must also manage pre-operative, post-operative, and potential surgical candidates on the hospital wards. Surgery has many sub-specialties, including general surgery, ophthalmic surgery, cardiovascular surgery, colorectal surgery, neurosurgery, oral and maxillofacial surgery, oncologic surgery, orthopedic surgery, otolaryngology, plastic surgery, podiatric surgery, transplant surgery, trauma surgery, urology, vascular surgery, and pediatric surgery. In some centers, anesthesiology is part of the division of surgery (for historical and logistical reasons), although it is not a surgical discipline. Other medical specialties may employ surgical procedures, such as ophthalmology and dermatology, but are not considered surgical sub-specialties per se. Surgical training in the U.S. requires a minimum of five years of residency after medical school. Sub-specialties of surgery often require seven or more years. In addition, fellowships can last an additional one to three years. Because post-residency fellowships can be competitive, many trainees devote two additional years to research. Thus in some cases surgical training will not finish until more than a decade after medical school. Furthermore, surgical training can be very difficult and time-consuming. Internal medicine specialty Internal medicine is the medical specialty dealing with the prevention, diagnosis, and treatment of adult diseases. 
According to some sources, an emphasis on internal structures is implied. In North America, specialists in internal medicine are commonly called "internists". Elsewhere, especially in Commonwealth nations, such specialists are often called physicians. These terms, internist or physician (in the narrow sense, common outside North America), generally exclude practitioners of gynecology and obstetrics, pathology, psychiatry, and especially surgery and its subspecialities. Because their patients are often seriously ill or require complex investigations, internists do much of their work in hospitals. Formerly, many internists were not subspecialized; such general physicians would see any complex nonsurgical problem; this style of practice has become much less common. In modern urban practice, most internists are subspecialists: that is, they generally limit their medical practice to problems of one organ system or to one particular area of medical knowledge. For example, gastroenterologists and nephrologists specialize respectively in diseases of the gut and the kidneys. In the Commonwealth of Nations and some other countries, specialist pediatricians and geriatricians are also described as specialist physicians (or internists) who have subspecialized by age of patient rather than by organ system. Elsewhere, especially in North America, general pediatrics is often a form of primary care. There are many subspecialities (or subdisciplines) of internal medicine: Angiology/Vascular Medicine Bariatrics Cardiology Critical care medicine Endocrinology Gastroenterology Geriatrics Hematology Hepatology Infectious disease Nephrology Neurology Oncology Pediatrics Pulmonology/Pneumology/Respirology/chest medicine Rheumatology Sports Medicine Training in internal medicine (as opposed to surgical training), varies considerably across the world: see the articles on medical education and physician for more details. In North America, it requires at least three years of residency training after medical school, which can then be followed by a one- to three-year fellowship in the subspecialties listed above. In general, resident work hours in medicine are less than those in surgery, averaging about 60 hours per week in the US. This difference does not apply in the UK where all doctors are now required by law to work less than 48 hours per week on average. Diagnostic specialties Clinical laboratory sciences are the clinical diagnostic services that apply laboratory techniques to diagnosis and management of patients. In the United States, these services are supervised by a pathologist. The personnel that work in these medical laboratory departments are technically trained staff who do not hold medical degrees, but who usually hold an undergraduate medical technology degree, who actually perform the tests, assays, and procedures needed for providing the specific services. Subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology and clinical immunology. Pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. As a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence-based medicine. 
Many modern molecular tests such as flow cytometry, polymerase chain reaction (PCR), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization (FISH) fall within the territory of pathology. Diagnostic radiology is concerned with imaging of the body, e.g. by x-rays, x-ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. Interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. Nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances (radiopharmaceuticals) to the body, which can then be imaged outside the body by a gamma camera or a PET scanner. Each radiopharmaceutical consists of two parts: a tracer that is specific for the function under study (e.g., neurotransmitter pathway, metabolic pathway, blood flow, or other), and a radionuclide (usually either a gamma-emitter or a positron emitter). There is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the PET/CT scanner. Clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of the nervous system. These kinds of tests can be divided into recordings of: (1) spontaneous or continuously running electrical activity, or (2) stimulus evoked responses. Subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. Sometimes these tests are performed by techs without a medical degree, but the interpretation of these tests is done by a medical professional. Other major specialties The following are some major medical specialties that do not directly fit into any of the above-mentioned groups: Anesthesiology (also known as anaesthetics): concerned with the perioperative management of the surgical patient. The anesthesiologist's role during surgery is to prevent derangement in the vital organs' (i.e. brain, heart, kidneys) functions and postoperative pain. Outside of the operating room, the anesthesiology physician also serves the same function in the labor and delivery ward, and some are specialized in critical medicine. Dermatology is concerned with the skin and its diseases. In the UK, dermatology is a subspecialty of general medicine. Emergency medicine is concerned with the diagnosis and treatment of acute or life-threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies. Family medicine, family practice, general practice or primary care is, in many countries, the first port-of-call for patients with non-emergency medical problems. Family physicians often provide services across a broad range of settings including office based practices, emergency department coverage, inpatient care, and nursing home care. Obstetrics and gynecology (often abbreviated as OB/GYN (American English) or Obs & Gynae (British English)) are concerned respectively with childbirth and the female reproductive and associated organs. Reproductive medicine and fertility medicine are generally practiced by gynecological specialists. Medical genetics is concerned with the diagnosis and management of hereditary disorders. Neurology is concerned with diseases of the nervous system. In the UK, neurology is a subspecialty of general medicine. Ophthalmology is exclusively concerned with the eye and ocular adnexa, combining conservative and surgical therapy. 
Pediatrics (AE) or paediatrics (BE) is devoted to the care of infants, children, and adolescents. Like internal medicine, there are many pediatric subspecialties for specific age ranges, organ systems, disease classes, and sites of care delivery. Pharmaceutical medicine is the medical scientific discipline concerned with the discovery, development, evaluation, registration, monitoring and medical aspects of marketing of medicines for the benefit of patients and public health. Physical medicine and rehabilitation (or physiatry) is concerned with functional improvement after injury, illness, or congenital disorders. Podiatric medicine is the study of, diagnosis, and medical & surgical treatment of disorders of the foot, ankle, lower limb, hip and lower back. Psychiatry is the branch of medicine concerned with the bio-psycho-social study of the etiology, diagnosis, treatment and prevention of cognitive, perceptual, emotional and behavioral disorders. Related fields include psychotherapy and clinical psychology. Preventive medicine is the branch of medicine concerned with preventing disease. Community health or public health is an aspect of health services concerned with threats to the overall health of a community based on population health analysis. Interdisciplinary fields Some interdisciplinary sub-specialties of medicine include: Aerospace medicine deals with medical problems related to flying and space travel. Addiction medicine deals with the treatment of addiction. Medical ethics deals with ethical and moral principles that apply values and judgments to the practice of medicine. Biomedical Engineering is a field dealing with the application of engineering principles to medical practice. Clinical pharmacology is concerned with how systems of therapeutics interact with patients. Conservation medicine studies the relationship between human and animal health, and environmental conditions. Also known as ecological medicine, environmental medicine, or medical geology. Disaster medicine deals with medical aspects of emergency preparedness, disaster mitigation and management. Diving medicine (or hyperbaric medicine) is the prevention and treatment of diving-related problems. Evolutionary medicine is a perspective on medicine derived through applying evolutionary theory. Forensic medicine deals with medical questions in legal context, such as determination of the time and cause of death, type of weapon used to inflict trauma, reconstruction of the facial features using remains of deceased (skull) thus aiding identification. Gender-based medicine studies the biological and physiological differences between the human sexes and how that affects differences in disease. Hospice and Palliative Medicine is a relatively modern branch of clinical medicine that deals with pain and symptom relief and emotional support in patients with terminal illnesses including cancer and heart failure. Hospital medicine is the general medical care of hospitalized patients. Physicians whose primary professional focus is hospital medicine are called hospitalists in the United States and Canada. The term Most Responsible Physician (MRP) or attending physician is also used interchangeably to describe this role. Laser medicine involves the use of lasers in the diagnostics or treatment of various conditions. 
Medical humanities includes the humanities (literature, philosophy, ethics, history and religion), social science (anthropology, cultural studies, psychology, sociology), and the arts (literature, theater, film, and visual arts) and their application to medical education and practice. Health informatics is a relatively recent field that deal with the application of computers and information technology to medicine. Nosology is the classification of diseases for various purposes. Nosokinetics is the science/subject of measuring and modelling the process of care in health and social care systems. Occupational medicine is the provision of health advice to organizations and individuals to ensure that the highest standards of health and safety at work can be achieved and maintained. Pain management (also called pain medicine, or algiatry) is the medical discipline concerned with the relief of pain. Pharmacogenomics is a form of individualized medicine. Podiatric medicine is the study of, diagnosis, and medical treatment of disorders of the foot, ankle, lower limb, hip and lower back. Sexual medicine is concerned with diagnosing, assessing and treating all disorders related to sexuality. Sports medicine deals with the treatment and prevention and rehabilitation of sports/exercise injuries such as muscle spasms, muscle tears, injuries to ligaments (ligament tears or ruptures) and their repair in athletes, amateur and professional. Therapeutics is the field, more commonly referenced in earlier periods of history, of the various remedies that can be used to treat disease and promote health. Travel medicine or emporiatrics deals with health problems of international travelers or travelers across highly different environments. Tropical medicine deals with the prevention and treatment of tropical diseases. It is studied separately in temperate climates where those diseases are quite unfamiliar to medical practitioners and their local clinical needs. Urgent care focuses on delivery of unscheduled, walk-in care outside of the hospital emergency department for injuries and illnesses that are not severe enough to require care in an emergency department. In some jurisdictions this function is combined with the emergency department. Veterinary medicine; veterinarians apply similar techniques as physicians to the care of animals. Wilderness medicine entails the practice of medicine in the wild, where conventional medical facilities may not be available. Many other health science fields, e.g. dietetics Education and legal controls Medical education and training varies around the world. It typically involves entry level education at a university medical school, followed by a period of supervised practice or internship, or residency. This can be followed by postgraduate vocational training. A variety of teaching methods have been employed in medical education, still itself a focus of active research. In Canada and the United States of America, a Doctor of Medicine degree, often abbreviated M.D., or a Doctor of Osteopathic Medicine degree, often abbreviated as D.O. and unique to the United States, must be completed in and delivered from a recognized university. Since knowledge, techniques, and medical technology continue to evolve at a rapid rate, many regulatory authorities require continuing medical education. Medical practitioners upgrade their knowledge in various ways, including medical journals, seminars, conferences, and online programs. 
A database of objectives covering medical knowledge, as suggested by national societies across the United States, can be searched at http://data.medobjectives.marian.edu/. In most countries, it is a legal requirement for a medical doctor to be licensed or registered. In general, this entails a medical degree from a university and accreditation by a medical board or an equivalent national organization, which may ask the applicant to pass exams. This restricts the considerable legal authority of the medical profession to physicians that are trained and qualified by national standards. It is also intended as an assurance to patients and as a safeguard against charlatans that practice inadequate medicine for personal gain. While the laws generally require medical doctors to be trained in "evidence based", Western, or Hippocratic Medicine, they are not intended to discourage different paradigms of health. In the European Union, the profession of doctor of medicine is regulated. A profession is said to be regulated when access and exercise is subject to the possession of a specific professional qualification. The regulated professions database contains a list of regulated professions for doctor of medicine in the EU member states, EEA countries and Switzerland. This list is covered by the Directive 2005/36/EC. Doctors who are negligent or intentionally harmful in their care of patients can face charges of medical malpractice and be subject to civil, criminal, or professional sanctions. Medical ethics Medical ethics is a system of moral principles that apply values and judgments to the practice of medicine. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology. Six of the values that commonly apply to medical ethics discussions are: autonomy – the patient has the right to refuse or choose their treatment. (Voluntas aegroti suprema lex.) beneficence – a practitioner should act in the best interest of the patient. (Salus aegroti suprema lex.) justice – concerns the distribution of scarce health resources, and the decision of who gets what treatment (fairness and equality). non-maleficence – "first, do no harm" (primum non-nocere). respect for persons – the patient (and the person treating the patient) have the right to be treated with dignity. truthfulness and honesty – the concept of informed consent has increased in importance since the historical events of the Doctors' Trial of the Nuremberg trials, Tuskegee syphilis experiment, and others. Values such as these do not give answers as to how to handle a particular situation, but provide a useful framework for understanding conflicts. When moral values are in conflict, the result may be an ethical dilemma or crisis. Sometimes, no good solution to a dilemma in medical ethics exists, and occasionally, the values of the medical community (i.e., the hospital and its staff) conflict with the values of the individual patient, family, or larger non-medical community. Conflicts can also arise between health care providers, or among family members. For example, some argue that the principles of autonomy and beneficence clash when patients refuse blood transfusions, considering them life-saving; and truth-telling was not emphasized to a large extent before the HIV era. History Ancient world Prehistoric medicine incorporated plants (herbalism), animal parts, and minerals. 
In many cases these materials were used ritually as magical substances by priests, shamans, or medicine men. Well-known spiritual systems include animism (the notion of inanimate objects having spirits), spiritualism (an appeal to gods or communion with ancestor spirits); shamanism (the vesting of an individual with mystic powers); and divination (magically obtaining the truth). The field of medical anthropology examines the ways in which culture and society are organized around or impacted by issues of health, health care and related issues. Early records on medicine have been discovered from ancient Egyptian medicine, Babylonian Medicine, Ayurvedic medicine (in the Indian subcontinent), classical Chinese medicine (predecessor to the modern traditional Chinese medicine), and ancient Greek medicine and Roman medicine. In Egypt, Imhotep (3rd millennium BCE) is the first physician in history known by name. The oldest Egyptian medical text is the Kahun Gynaecological Papyrus from around 2000 BCE, which describes gynaecological diseases. The Edwin Smith Papyrus dating back to 1600 BCE is an early work on surgery, while the Ebers Papyrus dating back to 1500 BCE is akin to a textbook on medicine. In China, archaeological evidence of medicine in Chinese dates back to the Bronze Age Shang Dynasty, based on seeds for herbalism and tools presumed to have been used for surgery. The Huangdi Neijing, the progenitor of Chinese medicine, is a medical text written beginning in the 2nd century BCE and compiled in the 3rd century. In India, the surgeon Sushruta described numerous surgical operations, including the earliest forms of plastic surgery. Earliest records of dedicated hospitals come from Mihintale in Sri Lanka where evidence of dedicated medicinal treatment facilities for patients are found. In Greece, the Greek physician Hippocrates, the "father of modern medicine", laid the foundation for a rational approach to medicine. Hippocrates introduced the Hippocratic Oath for physicians, which is still relevant and in use today, and was the first to categorize illnesses as acute, chronic, endemic and epidemic, and use terms such as, "exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence". The Greek physician Galen was also one of the greatest surgeons of the ancient world and performed many audacious operations, including brain and eye surgeries. After the fall of the Western Roman Empire and the onset of the Early Middle Ages, the Greek tradition of medicine went into decline in Western Europe, although it continued uninterrupted in the Eastern Roman (Byzantine) Empire. Most of our knowledge of ancient Hebrew medicine during the 1st millennium BC comes from the Torah, i.e. the Five Books of Moses, which contain various health related laws and rituals. The Hebrew contribution to the development of modern medicine started in the Byzantine Era, with the physician Asaph the Jew. Middle Ages The concept of hospital as institution to offer medical care and possibility of a cure for the patients due to the ideals of Christian charity, rather than just merely a place to die, appeared in the Byzantine Empire. Although the concept of uroscopy was known to Galen, he did not see the importance of using it to localize the disease. It was under the Byzantines with physicians such of Theophilus Protospatharius that they realized the potential in uroscopy to determine disease in a time when no microscope or stethoscope existed. That practice eventually spread to the rest of Europe. 
After 750 CE, the Muslim world had the works of Hippocrates, Galen and Sushruta translated into Arabic, and Islamic physicians engaged in some significant medical research. Notable Islamic medical pioneers include the Persian polymath, Avicenna, who, along with Imhotep and Hippocrates, has also been called the "father of medicine". He wrote The Canon of Medicine which became a standard medical text at many medieval European universities, considered one of the most famous books in the history of medicine. Others include Abulcasis, Avenzoar, Ibn al-Nafis, and Averroes. Persian physician Rhazes was one of the first to question the Greek theory of humorism, which nevertheless remained influential in both medieval Western and medieval Islamic medicine. Some volumes of Rhazes's work Al-Mansuri, namely "On Surgery" and "A General Book on Therapy", became part of the medical curriculum in European universities. Additionally, he has been described as a doctor's doctor, the father of pediatrics, and a pioneer of ophthalmology. For example, he was the first to recognize the reaction of the eye's pupil to light. The Persian Bimaristan hospitals were an early example of public hospitals. In Europe, Charlemagne decreed that a hospital should be attached to each cathedral and monastery and the historian Geoffrey Blainey likened the activities of the Catholic Church in health care during the Middle Ages to an early version of a welfare state: "It conducted hospitals for the old and orphanages for the young; hospices for the sick of all ages; places for the lepers; and hostels or inns where pilgrims could buy a cheap bed and meal". It supplied food to the population during famine and distributed food to the poor. This welfare system the church funded through collecting taxes on a large scale and possessing large farmlands and estates. The Benedictine order was noted for setting up hospitals and infirmaries in their monasteries, growing medical herbs and becoming the chief medical care givers of their districts, as at the great Abbey of Cluny. The Church also established a network of cathedral schools and universities where medicine was studied. The Schola Medica Salernitana in Salerno, looking to the learning of Greek and Arab physicians, grew to be the finest medical school in Medieval Europe. However, the fourteenth and fifteenth century Black Death devastated both the Middle East and Europe, and it has even been argued that Western Europe was generally more effective in recovering from the pandemic than the Middle East. In the early modern period, important early figures in medicine and anatomy emerged in Europe, including Gabriele Falloppio and William Harvey. The major shift in medical thinking was the gradual rejection, especially during the Black Death in the 14th and 15th centuries, of what may be called the 'traditional authority' approach to science and medicine. This was the notion that because some prominent person in the past said something must be so, then that was the way it was, and anything one observed to the contrary was an anomaly (which was paralleled by a similar shift in European society in general – see Copernicus's rejection of Ptolemy's theories on astronomy). Physicians like Vesalius improved upon or disproved some of the theories from the past. The main tomes used both by medicine students and expert physicians were Materia Medica and Pharmacopoeia. Andreas Vesalius was the author of De humani corporis fabrica, an important book on human anatomy. 
Bacteria and microorganisms were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field microbiology. Independently from Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but this discovery did not reach the public because it was written down for the first time in the "Manuscript of Paris" in 1546, and later published in the theological work for which he paid with his life in 1553. Later this was described by Renaldus Columbus and Andrea Cesalpino. Herman Boerhaave is sometimes referred to as a "father of physiology" due to his exemplary teaching in Leiden and textbook 'Institutiones medicae' (1708). Pierre Fauchard has been called "the father of modern dentistry". Modern Veterinary medicine was, for the first time, truly separated from human medicine in 1761, when the French veterinarian Claude Bourgelat founded the world's first veterinary school in Lyon, France. Before this, medical doctors treated both humans and other animals. Modern scientific biomedical research (where results are testable and reproducible) began to replace early Western traditions based on herbalism, the Greek "four humours" and other such pre-modern notions. The modern era really began with Edward Jenner's discovery of the smallpox vaccine at the end of the 18th century (inspired by the method of inoculation earlier practiced in Asia), Robert Koch's discoveries around 1880 of the transmission of disease by bacteria, and then the discovery of antibiotics around 1900. The post-18th century modernity period brought more groundbreaking researchers from Europe. From Germany and Austria, doctors Rudolf Virchow, Wilhelm Conrad Röntgen, Karl Landsteiner and Otto Loewi made notable contributions. In the United Kingdom, Alexander Fleming, Joseph Lister, Francis Crick and Florence Nightingale are considered important. Spanish doctor Santiago Ramón y Cajal is considered the father of modern neuroscience. From New Zealand and Australia came Maurice Wilkins, Howard Florey, and Frank Macfarlane Burnet. Others that did significant work include William Williams Keen, William Coley, James D. Watson (United States); Salvador Luria (Italy); Alexandre Yersin (Switzerland); Kitasato Shibasaburō (Japan); Jean-Martin Charcot, Claude Bernard, Paul Broca (France); Adolfo Lutz (Brazil); Nikolai Korotkov (Russia); Sir William Osler (Canada); and Harvey Cushing (United States). As science and technology developed, medicine became more reliant upon medications. Throughout history and in Europe right until the late 18th century, not only animal and plant products were used as medicine, but also human body parts and fluids. Pharmacology developed in part from herbalism and some drugs are still derived from plants (atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc.). Vaccines were discovered by Edward Jenner and Louis Pasteur. The first antibiotic was arsphenamine (Salvarsan) discovered by Paul Ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. The first major class of antibiotics was the sulfa drugs, derived by German chemists originally from azo dyes. Pharmacology has become increasingly sophisticated; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side-effects. 
Genomics and knowledge of human genetics and human evolution are having an increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics is influencing medical technology, practice and decision-making. Evidence-based medicine is a contemporary movement to establish the most effective algorithms of practice (ways of doing things) through the use of systematic reviews and meta-analysis. The movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. The Cochrane Collaboration leads this movement. A 2001 review of 160 Cochrane systematic reviews revealed that, according to two readers, 21.3% of the reviews concluded there was insufficient evidence, 20% concluded there was evidence of no effect, and 22.5% concluded there was evidence of a positive effect. Quality, efficiency, and access Evidence-based medicine, prevention of medical error (and other "iatrogenesis"), and avoidance of unnecessary health care are priorities in modern medical systems. These topics generate significant political and public policy attention, particularly in the United States, where healthcare is regarded as excessively costly while population health metrics lag behind those of similar nations. Globally, many developing countries lack access to care and to medicines. As of 2015, most wealthy developed countries provide health care to all citizens, with a few exceptions such as the United States, where lack of health insurance coverage may limit access. See also References |
No, this text is not related with defense topics | Industrialized and developing countries have distinctly different rates of teenage pregnancy. In developed regions, such as United States, Canada, Western Europe, Australia, and New Zealand, teen parents tend to be unmarried, and adolescent pregnancy is seen as a social issue. By contrast, teenage parents in developing regions such as Africa, Asia, Eastern Europe, Latin America, and the Pacific Islands are often married, and their pregnancy may be welcomed by family and society. However, in these societies, early pregnancy may combine with malnutrition and poor health care to cause long-term medical problems for both the mother and child. A report by Save the Children found that, annually, 13 million children are born to women under age 20 worldwide. More than 90% of these births occur to women living in developing countries. Complications of pregnancy and childbirth are the leading cause of mortality among women between the ages of 15 and 19 in such areas, as they are the leading cause of mortality among older women. The age of the mother is determined by the easily verified date when the pregnancy ends, not by the estimated date of conception. Consequently, the statistics do not include women who first became pregnant before their 20th birthdays, if those pregnancies did not end until on or after their 20th birthdays. Rates by continent Africa The highest rate of teenage pregnancy in the world—143 per 1,000 girls aged 15–19 years—is in sub-Saharan Africa. Women in Africa, in general, get married at a much younger age than women elsewhere—leading to earlier pregnancies. In Nigeria, according to the Health and Demographic Survey in 1992, 47% of women aged 20–24 were married before 15, and 87% before 18. Also, 53% of those surveyed had given birth to a child before the age of 18. African countries have the highest rates of teenage birth (2002) According to data from World Bank, as of 2015, the highest incidence of births among 15- to 19-year-old girls was in Niger, Mali, Angola, Guinea, and Mozambique. In Mozambique, in 2015, 46% of girls aged 15 to 19 years were already mothers or pregnant, an increase of 9% between results found on the National Demographic Health Survey in 2011 and National Survey on HIV, Malaria and Reproductive Health (IMASIDA) 2015. With the exception of Maputo, the country capital city, all provinces presented an increase in the percentage of early pregnancies. The rates are particularly higher in the northern provinces, namely, Cabo Delgado, Nampula and Niassa with 64.9%, 61.3% and 60%, respectively. A Save the Children report identified 10 countries where motherhood carried the most risks for young women and their babies. Of these, 9 were in sub-Saharan Africa, and Niger, Liberia, and Mali were the nations where girls were the most at-risk. In the 10 highest-risk nations, more than one in six teenage girls between 15 and 19 years old gave birth annually, and nearly one in seven babies born to these teenagers died before the age of one year. Asia The rate of early marriage is higher in rural regions than it is in urbanized areas. Fertility rates in South Asia range from 71 to 119 births with a trend towards increasing age at marriage for both sexes. In South Korea and Singapore, although the occurrence of sexual intercourse before marriage has risen, rates of adolescent childbearing are low at 4 to 8 per 1000. 
The rate of early marriage and pregnancy has decreased sharply in Indonesia; however, it remains high in comparison to the rest of Asia. Surveys from Thailand have found that a significant minority of unmarried adolescents are sexually active. Although premarital sex is considered normal behavior for males, particularly with prostitutes, it is not always regarded as such for females. Most Thai youth reported that their first sexual experience, whether within or outside of marriage, was without contraception. The adolescent fertility rate in Thailand is relatively high at 60 per 1000. 25% of women admitted to hospitals in Thailand for complications of induced abortion are students. The Thai government has undertaken measures to inform the nation's youth about the prevention of sexually transmitted diseases and unplanned pregnancy. According to the World Health Organization, in several Asian countries including Bangladesh and Indonesia, a large proportion (26–37%) of deaths among female adolescents can be attributed to maternal causes. Australia In 2015, the birth rate among teenage women in Australia was 11.9 births per 1,000 women. The rate has fallen from 55.5 births per 1,000 women in 1971, probably due to ease of access to effective birth control, rather than any decrease in sexual activity. Europe The overall trend in Europe since 1970 has been a decrease in the total fertility rate, an increase in the age at which women experience their first birth, and a decrease in the number of births among teenagers. The rates of teenage pregnancy may vary widely within a country. For instance, in the United Kingdom, the rate of adolescent pregnancy in 2002 was as high as 100.4 per 1000 among young women living in the London Borough of Lambeth, and as low as 20.2 per 1000 among residents in the Midlands local authority area of Rutland. Teenage birth is often associated with economic and social issues: such as alcohol and drug misuse and, across 13 nations in the European Union, women who gave birth as teenagers were twice as likely to be living in poverty, compared with those who first gave birth when they were over 20. Bulgaria and Romania Romania and Bulgaria have some of the highest teenage birth rates in Europe. As of 2015, Bulgaria had a birth rate of 37 per 1,000 women aged 15–19, and Romania of 34. Both countries also have very large Romani populations, who have an occurrence of teenage pregnancies well above the local average. In recent years, the number of teenage mothers is declining in Bulgaria. Netherlands The Netherlands has a low rate of births and abortions among teenagers (5 births per 1,000 women aged 15–19 in 2002). Compared with countries with higher teenage birth rates, the Dutch have a higher average age at first intercourse and increased levels of contraceptive use (including the "double Dutch" method of using both a hormonal contraception method and a condom). Nordic countries Nordic countries, such as Denmark and Sweden, also have low rates of teenage birth (both have 7 births per 1,000 women aged 15–19 in 2002). However, Norway's birth rate is slightly higher (11 births per 1,000 women aged 15–19 in 2002) and Iceland has a birth rate of 19 per 1,000 women aged 15–19 (nearly the same as the UK). These countries have higher abortion rates than the Netherlands. Greece, Italy, Spain and Portugal In some countries, such as Italy and Spain, the rate of adolescent pregnancy is low (6 births per 1,000 women aged 15–19 in 2002 in both countries). 
These two countries also have low abortion rates (lower than Sweden and the other Nordic countries) and their teenage pregnancy rates are among the lowest in Europe. However, Greece (10 births per 1,000 women aged 15–19 in 2002) and Portugal (17 births per 1,000 women aged 15–19 in 2002) have higher rates of teenage pregnancy. United Kingdom In 2018, conception rates for under 18-year-olds in England and Wales declined by 6.1% to 16.8 conceptions per 1,000 women aged 15 to 17 years. Since 1999, conception rates for women aged under 18 years have decreased by 62.7%. The Americas Canada The Canadian teenage birth rate in 2002 was 16 per 1000 and the teenage pregnancy rate was 33.9. According to data from Statistics Canada, the Canadian teenage pregnancy rate has trended towards a steady decline for both younger (15–17) and older (18–19) teens in the period between 1992 and 2002. Canada's highest teen pregnancy rates occur in small towns located in rural parts of peninsular Ontario. Alberta and Quebec have high teen pregnancy rates as well. Colombia In 2016, the Minister of Health and Social Protection of Colombia, Alejandro Gaviria Uribe announced that "teenage pregnancy decreased by two percentage points breaking the growing tendency that had been seen since the nineties". United States In 2013, the teenage birth rate in the United States reached a historic low: 26.6 births per 1,000 women aged 15–19. More than three-quarters of these births are to adult women aged 18 or 19. In 2005 in the U.S., the majority (57%) of teen pregnancies resulted in a live birth, 27% ended in an induced abortion, and 16% in a fetal loss. The U.S. teen birth rate was 53 births per 1,000 women aged 15–19 in 2002, the highest in the developed world. If all pregnancies, including those that end in abortion or miscarriage, are taken into account, the total rate in 2000 was 75.4 pregnancies per 1,000 girls. Nevada and the District of Columbia have the highest teen pregnancy rates in the U.S., while North Dakota has the lowest. Over 80% of teenage pregnancies in the U.S. are unintended; approximately one third end in abortion, one third end in spontaneous miscarriage, and one third will continue their pregnancy and keep their baby. However, the trend is decreasing: in 1990, the birth rate was 61.8, and the pregnancy rate 116.9 per thousand. This decline has manifested across all races, although teenagers of African-American and Latino descent retain a higher rate, in comparison to that of European-Americans and Asian-Americans. The Guttmacher Institute attributed about 25% of the decline to abstinence and 75% to the effective use of contraceptives. Within the United States teen pregnancy is often brought up in political discourse. The goal to limit teen pregnancy is shared by Republicans and Democrats, though avenues of reduction are usually different. Many Democrats cite teen pregnancy as proof of the continuing need for access to birth control and sexual education, while Republicans often cite a need for returning to conservative values, often including abstinence. An inverse correlation has been noted between teen pregnancy rates and the quality of education in a state. A positive correlation, albeit weak, appears between a city's teen pregnancy rate and its average summer night temperature, especially in the Southern U.S. (Savageau, compiler, 1993–1995). Statistics World Development Indicator The birth rate for women aged 15–19 is one of the World Bank's World Development Indicators. 
The data for most countries and a variety of groupings (e.g. Sub-Saharan Africa or OECD members) are published regularly, and can be viewed or downloaded from a United Nations website. [Tables omitted: UN Statistics Division live births, 2009, per 1,000 women aged 15–19; UN Statistics Division estimates, 1995–2010, per 1,000 women aged 15–19; birth and abortion rates, 1996, per 1,000 women aged 15–19, with the percentage of teenage pregnancies ending in abortion.] See also Adolescent sexuality in the United States Teenage pregnancy in the United Kingdom References Teenage pregnancy |
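As a practical aside, the indicator discussed above can also be retrieved programmatically. The Python sketch below is illustrative only: the World Bank v2 endpoint and the indicator code SP.ADO.TFRT (adolescent fertility rate, births per 1,000 women aged 15-19) are assumptions about the public API rather than details given in the text, and the `requests` package must be installed.

```python
# Hedged sketch: assumes the World Bank v2 API and the indicator code
# "SP.ADO.TFRT" (births per 1,000 women aged 15-19). Requires `requests`.
import requests

def adolescent_fertility(country: str = "all", year: str = "2015"):
    """Return (country name, rate) pairs for the requested year."""
    url = f"https://api.worldbank.org/v2/country/{country}/indicator/SP.ADO.TFRT"
    resp = requests.get(url, params={"format": "json", "date": year, "per_page": 400})
    resp.raise_for_status()
    payload = resp.json()                       # [page metadata, list of records]
    records = payload[1] if len(payload) > 1 and payload[1] else []
    return [(r["country"]["value"], r["value"]) for r in records if r["value"] is not None]

if __name__ == "__main__":
    top = sorted(adolescent_fertility(), key=lambda pair: pair[1], reverse=True)[:5]
    for name, rate in top:
        print(f"{name}: {rate:.1f} births per 1,000 women aged 15-19")
```

Note that querying "all" returns regional aggregates alongside individual countries, so results may need filtering depending on the analysis.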
No, this text is not related with defense topics | The Handheld Device Markup Language (HDML) is a markup language intended for display on handheld computers, information appliances, smartphones, and similar devices. It is similar to HTML, but is designed for wireless and handheld devices with small displays, such as PDAs and mobile phones. It was originally developed in about 1996 by Unwired Planet, the company that became Phone.com and then Openwave. HDML was submitted to the W3C for standardization, but was not turned into a standard. Instead it became an important influence on the development and standardization of WML, which then replaced HDML in practice. Unlike WML, HDML has no support for scripts. See also Wireless Application Protocol List of document markup languages Comparison of document markup languages References Markup languages Computer-related introductions in 1996 Mobile web |
No, this text is not related with defense topics | The Storie index is a method of soil rating based on soil characteristics that govern the land's potential utilization and productive capacity. Developed by R. Earl Storie at the University of California, Berkeley in the 1930s as a method of land valuation, it is independent of other physical or economic factors that might determine the desirability of growing certain plants in a given location. The evaluation is easy to carry out, which is one advantage of the method: a wide range of soil characteristics is condensed into a few rating categories. Four or five parameters are evaluated: A: soil depth and texture; B: soil permeability; C: soil chemical characteristics; D: drainage and surface runoff; E: climate (included only if the climate is not homogeneous across the area; if it is homogeneous, it is omitted from the formula). The index is calculated by multiplying these parameters, that is: Sindex = A x B x C x D x E. One disadvantage of this method is that if any category is rated zero, the product is zero and the rating is unusable. Another disadvantage is that the ratings are subjective. The methodology was updated again in 2008 by O'Geen et al. to correct for some of this subjectivity. References O'Geen, Anthony Toby, Susan B. Southard and Randal J. Southard. "A Revised Storie Index for Use with Digital Soil Information." Publication 8335, University of California, Division of Agriculture and Natural Resources, September 2008. http://anrcatalog.ucanr.edu/pdf/8335.pdf Storie, R. Earl. "An Index for Rating the Agricultural Value of Soils," Bulletin, California Agricultural Experiment Station, University of California, 1937. Pedology Agriculture |
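To make the multiplicative rating above concrete, here is a small Python sketch. The 0 to 1 scaling of each factor and the percentage output are assumptions made for illustration; the text above specifies only that the four or five parameter ratings are multiplied together.

```python
# Minimal sketch of the multiplicative Storie rating, assuming each factor
# is expressed as a fraction between 0 and 1 and the result is reported
# as a percentage. These scaling choices are illustrative assumptions.
def storie_index(a, b, c, d, e=None):
    """a: depth/texture, b: permeability, c: chemistry, d: drainage/runoff,
    e: climate (pass None to omit it when the climate is homogeneous)."""
    factors = [a, b, c, d] + ([e] if e is not None else [])
    index = 1.0
    for factor in factors:
        if not 0.0 <= factor <= 1.0:
            raise ValueError("each factor rating must lie between 0 and 1")
        index *= factor          # any factor of 0 drives the whole index to 0
    return index * 100.0

# Example: deep, permeable soil with modest chemical and drainage limitations.
print(f"Storie index: {storie_index(0.95, 0.90, 0.80, 0.85):.1f}%")
```

Because the factors multiply, a single zero rating forces the index to zero, which is exactly the limitation noted in the text.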
No, this text is not related with defense topics | Preoperative care refers to health care provided before a surgical operation. The aim of preoperative care is to do whatever is right to increase the success of the surgery. At some point before the operation the health care provider will assess the fitness of the person to have surgery. This assessment should include whatever tests are indicated, but not include screening for conditions without an indication. Immediately before surgery the person's body is prepared, perhaps by washing with an antiseptic, and whenever possible their anxiety is addressed to make them comfortable. Technique At some point before surgery a health care provider conducts a preoperative assessment to verify that a person is fit and ready for the surgery. For surgeries in which a person receives either general or local anesthesia, this assessment may be done either by a doctor or a nurse trained to do the assessment. The available research does not give insight about any differences in outcomes depending on whether a doctor or nurse conducts this assessment. Addressing anxiety Playing calming music to patients immediately before surgery has a beneficial effect in addressing anxiety about the surgery. Surgical site preparation Hair removal at the location where the surgical incision is made is often done before the surgery. Sufficient evidence does not exist to say that removing hair is a useful way to prevent infections. When it is done immediately before surgery, the use of hair clippers might be preferable to shaving. Bathing with an antiseptic like chlorhexidine does not seem to affect incidence of complications after surgery. However, washing the surgical site with chlorhexidine after surgery does seem helpful for preventing surgical site infection. Risks Screening is a test to see whether a person has a disease, and screenings are often done before surgery. Screenings should happen when they are indicated and not otherwise as a matter of routine. Screenings which are done without indication carry the risks of having unnecessary health care. Commonly overused screenings include the following: Electrocardiograms (ECGs) are sometimes given before any kind of surgery as a matter of routine, but are unnecessary if a person does not have new and worrisome symptoms and if the surgery is minor. Eye surgery, for example, would not usually require an ECG. Cardiac imaging and cardiac stress tests are usually unnecessary for people who do not have a serious heart condition and who are having surgery unrelated to the heart. People in the United States using government healthcare services are especially likely to have this procedure without indication. Chest x-rays are usually unnecessary for people under age 70 who are not having chest surgery and who do not have worrisome symptoms. Breathing tests are usually unnecessary for people who do not smoke, do not have respiratory disease, and who do not have symptoms. Carotid ultrasonography is usually unnecessary for people who have not had a stroke or mini-stroke. Special populations Children Among children who are at normal risk of pulmonary aspiration or vomiting during anaesthesia, there is no evidence showing that denying them oral liquids before surgery improves outcomes but there is evidence showing that giving liquids prevents anxiety. 
Recreational substance users Sometimes before a surgery a health care provider will recommend some health intervention to modify some risky behavior which is associated with complications from surgery. Smoking cessation before surgery is likely to reduce the risk of complications from surgery. In circumstances in which a person's doctor advises them to avoid drinking alcohol before and after the surgery, but in which the person seems likely to drink anyway, intense interventions which direct a person to quit using alcohol have been proven to be helpful in reducing complications from surgery. References External links Preoperative care in the Surgery Encyclopedia Surgery |
No, this text is not related with defense topics | Spike-timing-dependent plasticity (STDP) is a biological process that adjusts the strength of connections between neurons in the brain. The process adjusts the connection strengths based on the relative timing of a particular neuron's output and input action potentials (or spikes). The STDP process partially explains the activity-dependent development of nervous systems, especially with regard to long-term potentiation and long-term depression. Process Under the STDP process, if an input spike to a neuron tends, on average, to occur immediately before that neuron's output spike, then that particular input is made somewhat stronger. If an input spike tends, on average, to occur immediately after an output spike, then that particular input is made somewhat weaker; hence the name "spike-timing-dependent plasticity". Thus, inputs that might be the cause of the post-synaptic neuron's excitation are made even more likely to contribute in the future, whereas inputs that are not the cause of the post-synaptic spike are made less likely to contribute in the future. The process continues until a subset of the initial set of connections remains, while the influence of all others is reduced to zero. Since a neuron produces an output spike when many of its inputs occur within a brief period, the subset of inputs that remains consists of those that tended to be correlated in time. In addition, since the inputs that occur before the output are strengthened, the inputs that provide the earliest indication of correlation will eventually become the final input to the neuron. History In 1973, M. M. Taylor suggested that if synapses were strengthened when a presynaptic spike occurred just before a postsynaptic spike more often than the reverse (Hebbian learning), and weakened with the opposite timing or in the absence of a closely timed presynaptic spike (anti-Hebbian learning), the result would be an informationally efficient recoding of input patterns. This proposal apparently passed unnoticed in the neuroscientific community, and subsequent experimentation was conceived independently of these early suggestions. Early experiments on associative plasticity were carried out by W. B. Levy and O. Steward in 1983 and examined the effect of the relative timing of pre- and postsynaptic action potentials at the millisecond level on plasticity. Bruce McNaughton also contributed much to this area. Studies on neuromuscular synapses carried out by Y. Dan and Mu-ming Poo in 1992, and on the hippocampus by D. Debanne, B. Gähwiler, and S. Thompson in 1994, showed that asynchronous pairing of postsynaptic and synaptic activity induced long-term synaptic depression. However, STDP was demonstrated more definitively by Henry Markram during his postdoctoral work up to 1993 in Bert Sakmann's lab (SFN and Phys Soc abstracts in 1994–1995), although the work was not published in full until 1997. C. Bell and co-workers also found a form of STDP in the cerebellum. Henry Markram used dual patch-clamping techniques to repetitively activate pre-synaptic neurons 10 milliseconds before activating the post-synaptic target neurons, and found that the strength of the synapse increased. When the activation order was reversed so that the pre-synaptic neuron was activated 10 milliseconds after its post-synaptic target neuron, the strength of the pre-to-post synaptic connection decreased.
Further work, by Guoqiang Bi, Li Zhang, and Huizhong Tao in Mu-Ming Poo's lab in 1998, continued the mapping of the entire time course relating pre- and post-synaptic activity and synaptic change, to show that in their preparation synapses that are activated within 5-20 ms before a postsynaptic spike are strengthened, and those that are activated within a similar time window after the spike are weakened. This phenomenon has been observed in various other preparations, with some variation in the time-window relevant for plasticity. Several reasons for timing-dependent plasticity have been suggested. For example, STDP might provide a substrate for Hebbian learning during development, or, as suggested by Taylor in 1973, the associated Hebbian and anti-Hebbian learning rules might create informationally efficient coding in bundles of related neurons. Works from Y. Dan's lab advanced to study STDP in in vivo systems. Mechanisms Postsynaptic NMDA receptors are highly sensitive to the membrane potential (see coincidence detection in neurobiology). Due to their high permeability for calcium, they generate a local chemical signal that is largest when the back-propagating action potential in the dendrite arrives shortly after the synapse was active (pre-post spiking). Large postsynaptic calcium transients are known to trigger synaptic potentiation (Long-term potentiation). The mechanism for spike-timing-dependent depression is less well understood, but often involves either postsynaptic voltage-dependent calcium entry/mGluR activation, or retrograde endocannabinoids and presynaptic NMDARs. From Hebbian rule to STDP According to the Hebbian rule, synapses increase their efficiency if the synapse persistently takes part in firing the postsynaptic target neuron. Similarly, the efficiency of synapses decreases when the firing of their presynaptic targets is persistently independent of firing their postsynaptic ones. These principles are often simplified in the mnemonics: those who fire together, wire together; and those who fire out of sync, lose their link. However, if two neurons fire exactly at the same time, then one cannot have caused, or taken part in firing the other. Instead, to take part in firing the postsynaptic neuron, the presynaptic neuron needs to fire just before the postsynaptic neuron. Experiments that stimulated two connected neurons with varying interstimulus asynchrony confirmed the importance of temporal relation implicit in Hebb's principle: for the synapse to be potentiated or depressed, the presynaptic neuron has to fire just before or just after the postsynaptic neuron, respectively. In addition, it has become evident that the presynaptic neural firing needs to consistently predict the postsynaptic firing for synaptic plasticity to occur robustly, mirroring at a synaptic level what is known about the importance of contingency in classical conditioning, where zero contingency procedures prevent the association between two stimuli. Role in hippocampal learning For the most efficient STDP, the presynaptic and postsynaptic signal has to be separated by approximately a dozen of milliseconds. However, events happening within a couple of minutes can typically be linked together by the hippocampus as episodic memories. To resolve this contradiction, a mechanism relying on the theta waves and the phase precession has been proposed: Representations of different memory entities (such as a place, face, person etc.) 
are repeated on each theta cycle at a given theta phase during the episode to be remembered. Expected, ongoing, and completed entities have early, intermediate, and late theta phases, respectively. In the CA3 region of the hippocampus, the recurrent network turns entities with neighboring theta phases into coincident ones, thereby allowing STDP to link them together. Experimentally detectable memory sequences are created this way by reinforcing the connections between subsequent (neighboring) representations. Uses in artificial neural networks STDP has proved to be an effective learning rule for forward-connected artificial neural networks in pattern recognition. Recognising traffic, sound or movement using Dynamic Vision Sensor (DVS) cameras has been an area of research, and correct classification with a high degree of accuracy and only minimal learning time has been demonstrated. It has also been shown that a spiking neuron trained with STDP learns a linear model of a dynamic system with minimal least-squares error. A general approach, modelled on the core biological principles, is to apply a window function (Δw) to each synapse in a network. The window function increases the weight (and therefore the strength of the connection) of a synapse when the parent neuron fires just before the child neuron, and decreases it otherwise. Several variations of the window function have been proposed to allow for a range of learning speeds and classification accuracies. See also Synaptic plasticity Didactic organisation References Further reading External links Spike-timing dependent plasticity - Scholarpedia Neuroplasticity Memory |
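As a concrete sketch of the window-function approach described above, the following Python fragment implements a common pair-based form of the rule: the synapse is strengthened when the presynaptic (parent) spike precedes the postsynaptic (child) spike and weakened otherwise. The exponential shape of the window and the particular amplitudes and time constants are illustrative assumptions, not values taken from the text.

```python
# Pair-based STDP sketch. The exponential window and the constants below are
# assumptions chosen for illustration; only the sign convention
# (pre-before-post strengthens, post-before-pre weakens) comes from the text.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants in milliseconds

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:       # presynaptic spike first: potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:       # postsynaptic spike first: depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0       # exactly simultaneous spikes: no change

# Apply the rule to one synapse over a few spike pairs, clipping the weight to [0, 1].
w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 72.0)]:
    w = min(1.0, max(0.0, w + stdp_delta_w(t_pre, t_post)))
print(f"final weight: {w:.4f}")
```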
No, this text is not related with defense topics | Fellowship in Dental Surgery of the Royal College of Surgeons of England (FDSRCS) is a postgraduate professional qualification in dentistry. It is bestowed by the Faculty of Dental Surgery of the Royal College of Surgeons of England. Similar degrees The Royal College of Surgeons in Ireland, the Royal College of Surgeons of Edinburgh and the Royal College of Physicians and Surgeons of Glasgow each has its equivalent Fellowship degree. Other degrees The Faculty can also grant other qualifications, such as the Membership of the Faculty of Dental Surgery of the Royal College of Surgeons of England (MFDSRCS), the Diploma in Dental Public Health, the Diploma in Special Care Dentistry, the Membership in Restorative Dentistry and the Membership in Surgical Dentistry. Current regulations The FDSRCS was formerly granted mainly after passing examinations; currently it can still be granted by the Faculty after consideration of an applicant's career and achievements, through an election process conducted by the Faculty's council. See also Royal College of Surgeons of England Faculty of Dental Surgery External links Faculty website References Surgery Dentistry in England Oral surgery Educational qualifications in England |
No, this text is not related with defense topics | Numerous theoretical accounts of memory have differentiated memory for facts from memory for context. Psychologist Endel Tulving (1972; 1983) further divided these two conceptions of declarative (explicit) memory, in which information is consciously registered and recalled, into semantic memory, wherein general world knowledge not tied to specific events is stored, and episodic memory, involving the storage of context-specific information about personal experiences (i.e. the time, location, and surroundings of the personal knowledge). Conversely, implicit (non-declarative) memory involves registration that may be unconscious (lack of awareness during encoding) and recollection that is definitely unconscious. Skills and habits, priming, and classical conditioning all rely on implicit memory. An essential aspect of episodic memory is the encoding of when the event occurred in the subject's past. For such processing, the details surrounding the memory (where, when, and with whom the experience took place) must be preserved, as they are necessary for an episodic memory to form; otherwise the memory would be semantic. For instance, one may possess an episodic memory of John F. Kennedy's assassination, including the memory that one was watching Walter Cronkite announce that Kennedy had been murdered. However, if the contextual details of this event were lost, what would remain is a semantic memory that John F. Kennedy was assassinated. The ability to recall episodic information concerning a memory has been termed source monitoring, and it is subject to distortion that may lead to source amnesia. References Johnson, Hashtroudi, & Lindsay (1993). Source monitoring. Psychological Bulletin, 114(1), 3-28. Memory |
No, this text is not related with defense topics | N-Propyl-L-arginine, more properly NG-propyl-L-arginine (NPA), also known as 2-amino-5-[(N-propylcarbamimidoyl)amino]pentanoic acid, is a selective inhibitor of neuronal nitric oxide synthase (nNOS). Amino acids Guanidines |
No, this text is not related with defense topics | The next-in-line effect is the phenomenon of people being unable to recall information concerning events immediately preceding their turn to perform. The effect was first studied experimentally by Malcolm Brenner in 1973. In his experiment, participants took turns reading a word aloud from an index card and, after 25 words, were asked to recall as many of the words that had been read as possible. The results showed that words read aloud within approximately nine seconds before the subject's own turn were recalled more poorly than other words. The reason for the next-in-line effect appears to be a deficit in encoding the information perceived just before a performance. That is, the information is never stored in long-term memory and thus cannot be retrieved after the performance. One finding supporting this theory is that asking subjects beforehand to pay more attention to events preceding their turn to perform can prevent the memory deficit and even result in overcompensation, making people remember the events before their turn better than other events. In addition, the appearance of the next-in-line effect does not seem to be connected to the level of fear of negative evaluation: people with both lower and higher anxiety levels are subject to the memory deficit. References Memory tests Memory |
No, this text is not related with defense topics | A biological rule or biological law is a generalized law, principle, or rule of thumb formulated to describe patterns observed in living organisms. Biological rules and laws are often developed as succinct, broadly applicable ways to explain complex phenomena or salient observations about the ecology and biogeographical distributions of plant and animal species around the world, though they have been proposed for or extended to all types of organisms. Many of these regularities of ecology and biogeography are named after the biologists who first described them. From the birth of their science, biologists have sought to explain apparent regularities in observational data. In his biology, Aristotle inferred rules governing differences between live-bearing tetrapods (in modern terms, terrestrial placental mammals). Among his rules were that brood size decreases with adult body mass, while lifespan increases with gestation period and with body mass, and fecundity decreases with lifespan. Thus, for example, elephants have smaller and fewer broods than mice, but longer lifespan and gestation. Rules like these concisely organized the sum of knowledge obtained by early scientific measurements of the natural world, and could be used as models to predict future observations. Among the earliest biological rules in modern times are those of Karl Ernst von Baer (from 1828 onwards) on embryonic development, and of Constantin Wilhelm Lambert Gloger on animal pigmentation, in 1833. There is some scepticism among biogeographers about the usefulness of general rules. For example, J.C. Briggs, in his 1987 book Biogeography and Plate Tectonics, comments that while Willi Hennig's rules on cladistics "have generally been helpful", his progression rule is "suspect". List of biological rules Allen's rule states that the body shapes and proportions of endotherms vary by climatic temperature by either minimizing exposed surface area to minimize heat loss in cold climates or maximizing exposed surface area to maximize heat loss in hot climates. It is named after Joel Asaph Allen who described it in 1877. Bateson's rule states that extra legs are mirror-symmetric with their neighbours, such as when an extra leg appears in an insect's leg socket. It is named after the pioneering geneticist William Bateson who observed it in 1894. It appears to be caused by the leaking of positional signals across the limb-limb interface, so that the extra limb's polarity is reversed. Bergmann's rule states that within a broadly distributed taxonomic clade, populations and species of larger size are found in colder environments, and species of smaller size are found in warmer regions. It applies with exceptions to many mammals and birds. It was named after Carl Bergmann who described it in 1847. Cope's rule states that animal population lineages tend to increase in body size over evolutionary time. The rule is named for the palaeontologist Edward Drinker Cope. Deep-sea gigantism, noted in 1880 by Henry Nottidge Moseley, states that deep-sea animals are larger than their shallow-water counterparts. In the case of marine crustaceans, it has been proposed that the increase in size with depth occurs for the same reason as the increase in size with latitude (Bergmann's rule): both trends involve increasing size with decreasing temperature. 
Dollo's law of irreversibility, proposed in 1893 by the French-born Belgian paleontologist Louis Dollo, states that "an organism never returns exactly to a former state, even if it finds itself placed in conditions of existence identical to those in which it has previously lived ... it always keeps some trace of the intermediate stages through which it has passed." Eichler's rule states that the taxonomic diversity of parasites co-varies with the diversity of their hosts. It was observed in 1942 by Wolfdietrich Eichler, and is named for him. Emery's rule, noticed by Carlo Emery, states that insect social parasites are often closely related to their hosts, such as being in the same genus. Foster's rule, the island rule, or the island effect states that members of a species get smaller or bigger depending on the resources available in the environment. The rule was first stated by J. Bristol Foster in 1964 in the journal Nature, in an article titled "The evolution of mammals on islands". Gause's law or the competitive exclusion principle, named for Georgy Gause, states that two species competing for the same resource cannot coexist at constant population values. The competition leads either to the extinction of the weaker competitor or to an evolutionary or behavioral shift toward a different ecological niche. Gloger's rule states that within a species of endotherms, more heavily pigmented forms tend to be found in more humid environments, e.g. near the equator. It was named after the zoologist Constantin Wilhelm Lambert Gloger, who described it in 1833. Haldane's rule states that if in a species hybrid only one sex is sterile, that sex is usually the heterogametic sex. The heterogametic sex is the one with two different sex chromosomes; in mammals, this is the male, with XY chromosomes. It is named after J.B.S. Haldane. Hamilton's rule states that genes should increase in frequency when the relatedness of a recipient to an actor, multiplied by the benefit to the recipient, exceeds the reproductive cost to the actor. This is a prediction from the theory of kin selection formulated by W. D. Hamilton; the rule is expressed symbolically in the sketch after this list. Harrison's rule states that parasite body sizes co-vary with those of their hosts. Harrison proposed the rule for lice, but later authors have shown that it works equally well for many other groups of parasites, including barnacles, nematodes, fleas, flies, mites, and ticks, and for the analogous case of small herbivores on large plants. Hennig's progression rule states that when considering a group of species in cladistics, the species with the most primitive characters are found within the earliest part of the area, which will be the center of origin of that group. It is named for Willi Hennig, who devised the rule. Jordan's rule states that there is an inverse relationship between water temperature and meristic characteristics such as the number of fin rays, vertebrae, or scale numbers, which are seen to increase with decreasing temperature. It is named after the father of American ichthyology, David Starr Jordan. Lack's principle, proposed by David Lack, states that "the clutch size of each species of bird has been adapted by natural selection to correspond with the largest number of young for which the parents can, on average, provide enough food". Rapoport's rule states that the latitudinal ranges of plants and animals are generally smaller at lower latitudes than at higher latitudes. It was named after Eduardo H. Rapoport by G. C. Stevens in 1989.
Rensch's rule states that, across animal species within a lineage, sexual size dimorphism increases with body size when the male is the larger sex, and decreases as body size increases when the female is the larger sex. The rule applies in primates, pinnipeds (seals), and even-toed ungulates (such as cattle and deer). It is named after Bernhard Rensch, who proposed it in 1950. Schmalhausen's law, named after Ivan Schmalhausen, states that a population at the extreme limit of its tolerance in any one aspect is more vulnerable to small differences in any other aspect. Therefore, the variance of data is not simply noise interfering with the detection of so-called "main effects", but also an indicator of stressful conditions leading to greater vulnerability. Thorson's rule states that benthic marine invertebrates at low latitudes tend to produce large numbers of eggs developing to pelagic (often planktotrophic [plankton-feeding]) and widely dispersing larvae, whereas at high latitudes such organisms tend to produce fewer and larger lecithotrophic (yolk-feeding) eggs and larger offspring, often by viviparity or ovoviviparity, which are often brooded. It was named after Gunnar Thorson by S. A. Mileikovsky in 1971. Van Valen's law states that the probability of extinction for species and higher taxa (such as families and orders) is constant for each group over time; groups grow neither more resistant nor more vulnerable to extinction, however old their lineage is. It is named for the evolutionary biologist Leigh Van Valen. von Baer's laws, discovered by Karl Ernst von Baer, state that embryos start from a common form and develop into increasingly specialised forms, so that the diversification of embryonic form mirrors the taxonomic and phylogenetic tree. Therefore, all animals in a phylum share a similar early embryo; animals in smaller taxa (classes, orders, families, genera, species) share later and later embryonic stages. This was in sharp contrast to the recapitulation theory of Johann Friedrich Meckel (and later of Ernst Haeckel), which claimed that embryos went through stages resembling adult organisms from successive stages of the scala naturae from supposedly lowest to highest levels of organisation. Williston's law, first noticed by Samuel Wendell Williston, states that parts in an organism tend to become reduced in number and greatly specialized in function. He had studied the dentition of vertebrates, and noted that where ancient animals had mouths with differing kinds of teeth, modern carnivores had incisors and canines specialized for tearing and cutting flesh, while modern herbivores had large molars specialized for grinding tough plant materials. See also Aristotle's biology References Biogeography Ecology Biology |
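Hamilton's rule in the list above is usually stated as a compact inequality. The following is a minimal sketch of the standard formulation; the symbols r, B and C are the conventional kin-selection notation and are not taken from the source text:

```latex
% Standard statement of Hamilton's rule (conventional notation, assumed here):
%   r = coefficient of relatedness between actor and recipient
%   B = reproductive benefit conferred on the recipient
%   C = reproductive cost incurred by the actor
% A gene for altruistic behaviour is favoured by selection when
\[
  r B > C
\]
```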
No, this text is not related with defense topics | In common usage and in philosophy, ideas are the results of thought. In philosophy, ideas can also be mental representational images of some object. Many philosophers have considered ideas to be a fundamental ontological category of being. The capacity to create and understand the meaning of ideas is considered to be an essential and defining feature of human beings. In a popular sense, an idea arises in a reflexive, spontaneous manner, even without thinking or serious reflection, for example, when we talk about the idea of a person or a place. A new or an original idea can often lead to innovation. Etymology The word idea comes from Greek ἰδέα idea "form, pattern," from the root of ἰδεῖν idein, "to see." Innate and adventitious ideas One view on the nature of ideas is that there exist some ideas (called innate ideas) which are so general and abstract that they could not have arisen as a representation of an object of our perception but rather were in some sense always present. These are distinguished from adventitious ideas, which are images or concepts accompanied by the judgment that they are caused or occasioned by an external object. Another view holds that we only discover ideas in the same way that we discover the real world, from personal experiences. The view that humans acquire all or almost all their behavioral traits from nurture (life experiences) is known as tabula rasa ("blank slate"). Much of the confusion about the way ideas arise is at least in part due to the use of the term "idea" to cover both the representational percept and the object of conceptual thought. This can always be illustrated in terms of the scientific doctrines of innate ideas, "concrete ideas versus abstract ideas", as well as "simple ideas versus complex ideas". Philosophy Plato Plato in Ancient Greece was one of the earliest philosophers to provide a detailed discussion of ideas and of the thinking process (in Plato's Greek the word idea carries a rather different sense from our modern English term). Plato argued in dialogues such as the Phaedo, Symposium, Republic, and Timaeus that there is a realm of ideas or forms (eidei), which exist independently of anyone who may have thoughts on these ideas, and it is the ideas which distinguish mere opinion from knowledge, for unlike material things, which are transient and liable to contrary properties, ideas are unchanging and nothing but just what they are. Consequently, Plato seems to assert forcefully that material things can only be the objects of opinion; real knowledge can only be had of unchanging ideas. Furthermore, ideas for Plato appear to serve as universals, a view illustrated in a well-known passage from the Republic. René Descartes Descartes often wrote of the meaning of idea as an image or representation, often but not necessarily "in the mind", which was well known in the vernacular. Although Descartes is usually credited with the invention of the non-Platonic use of the term, he at first followed this vernacular use.b In his Meditations on First Philosophy he says, "Some of my thoughts are like images of things, and it is to these alone that the name 'idea' properly belongs." He sometimes maintained that ideas were innate, and his uses of the term idea diverge from the original primary scholastic use. He provides multiple non-equivalent definitions of the term, uses it to refer to as many as six distinct kinds of entities, and divides ideas inconsistently into various genetic categories.
For him, knowledge took the form of ideas, and philosophical investigation was the deep consideration of these entities. John Locke In striking contrast to Plato's use of idea is that of John Locke. In his Introduction to An Essay Concerning Human Understanding, Locke defines idea as "that term which, I think, serves best to stand for whatsoever is the object of the understanding when a man thinks, I have used it to express whatever is meant by phantasm, notion, species, or whatever it is which the mind can be employed about in thinking; and I could not avoid frequently using it." He said he regarded the book as necessary for examining our own abilities and seeing what objects our understandings were, or were not, fitted to deal with. In philosophy, other outstanding figures followed in his footsteps — Hume and Kant in the 18th century, Arthur Schopenhauer in the 19th century, and Bertrand Russell, Ludwig Wittgenstein, and Karl Popper in the 20th century. Locke always believed in good sense — not pushing things to extremes and taking fully into account the plain facts of the matter. He considered his common-sense ideas "good-tempered, moderate, and down-to-earth." As Locke studied human understanding in his work "An Essay Concerning Human Understanding", he continually referenced Descartes as he asked this fundamental question: "When we are concerned with something about which we have no certain knowledge, what rules or standards should guide how confident we allow ourselves to be that our opinions are right?" A simpler way of putting it is: how do humans come to know ideas, and what are the different types of ideas? An idea to Locke "can simply mean some sort of brute experience." He shows that there are "No innate principles in the mind." Thus, he concludes that "our ideas are all experiential in nature." An experience can either be a sensation or a reflection: "consider whether there are any innate ideas in the mind before any are brought in by the impression from sensation or reflection." Therefore, an idea was an experience in which the human mind apprehended something. In a Lockean view, there are really two types of ideas: complex and simple. Simple ideas are the building blocks for much more complex ideas, and "While the mind is wholly passive in the reception of simple ideas, it is very active in the building of complex ideas…" Complex ideas, therefore, can either be modes, substances, or relations. Modes are combinations of ideas that convey new information. For instance, David Banach gives the example of beauty as a mode. He says that it is the combination of color and form. Substances, however, are different: they are particular objects, such as dogs, cats, or tables. Relations represent the relationships between two or more ideas. In this way, Locke did, in fact, answer his own questions about ideas and humans. David Hume Hume differs from Locke by limiting idea to the more or less vague mental reconstructions of perceptions, the perceptual process being described as an "impression." Hume shared with Locke the basic empiricist premise that it is only from life experiences (whether their own or others') that humans' knowledge of the existence of anything outside of themselves can be ultimately derived. Humans, on this view, carry on doing what they are prompted to do by their emotional drives of varying kinds.
In choosing the means to those ends, they shall follow their accustomed associations of ideas.d Hume has contended and defended the notion that "reason alone is merely the 'slave of the passions'." Immanuel Kant Immanuel Kant defines an idea as opposed to a concept. "Regulative ideas" are ideals that one must tend towards, but by definition may not be completely realized. Liberty, according to Kant, is an idea. The autonomy of the rational and universal subject is opposed to the determinism of the empirical subject. Kant felt that it is precisely in knowing its limits that philosophy exists. The business of philosophy, he thought, was not to give rules, but to analyze the private judgement of good common sense.e Rudolf Steiner Whereas Kant declares limits to knowledge ("we can never know the thing in itself"), in his epistemological work, Rudolf Steiner sees ideas as "objects of experience" which the mind apprehends, much as the eye apprehends light. In Goethean Science (1883), he declares, "Thinking ... is no more and no less an organ of perception than the eye or ear. Just as the eye perceives colors and the ear sounds, so thinking perceives ideas." He holds this to be the premise upon which Goethe made his natural-scientific observations. Wilhelm Wundt Wundt widens the term from Kant's usage to include conscious representation of some object or process of the external world. In so doing, he includes not only ideas of memory and imagination, but also perceptual processes, whereas other psychologists confine the term to the first two groups. One of Wundt's main concerns was to investigate conscious processes in their own context by experiment and introspection. He regarded both of these as exact methods, interrelated in that experimentation created optimal conditions for introspection. Where the experimental method failed, he turned to other objectively valuable aids, specifically to those products of cultural communal life which lead one to infer particular mental motives. Outstanding among these are speech, myth, and social custom. Wundt designated the basic mental activity as apperception — a unifying function which should be understood as an activity of the will. Many aspects of his empirical physiological psychology are used today. One is his principles of mutually enhanced contrasts and of assimilation and dissimilation (i.e. in color and form perception) and his advocacy of objective methods of expression and of recording results, especially in language. Another is the principle of heterogony of ends — that multiply motivated acts lead to unintended side effects which in turn become motives for new actions. Charles Sanders Peirce C. S. Peirce published the first full statement of pragmatism in his important works "How to Make Our Ideas Clear" (1878) and "The Fixation of Belief" (1877). In "How to Make Our Ideas Clear" he proposed that a clear idea (in his study he uses concept and idea as synonyms) is defined as one which, when it is apprehended, will be recognized wherever it is met, and no other will be mistaken for it. If it fails of this clearness, it is said to be obscure. He argued that to understand an idea clearly we should ask ourselves what difference its application would make to our evaluation of a proposed solution to the problem at hand. Pragmatism (a term he appropriated for use in this context), he maintained, was a method for ascertaining the meaning of terms (as a theory of meaning).
The originality of his ideas lies in their rejection of what had been accepted as the view and understanding of knowledge by scientists for some 250 years, namely, as he pointed out, the view that knowledge is an impersonal fact. Peirce contended that we acquire knowledge as participants, not as spectators. He felt that "the real" is what, sooner or later, information acquired through ideas and knowledge, with the application of logical reasoning, would finally result in. He also published many papers on logic in relation to ideas. G. F. Stout and J. M. Baldwin G. F. Stout and J. M. Baldwin, in the Dictionary of Philosophy and Psychology, define idea as "the reproduction with a more or less adequate image, of an object not actually present to the senses." They point out that an idea and a perception are by various authorities contrasted in various ways. "Difference in degree of intensity", "comparative absence of bodily movement on the part of the subject", "comparative dependence on mental activity", are suggested by psychologists as characteristic of an idea as compared with a perception. It should be observed that an idea, in the narrower and generally accepted sense of a mental reproduction, is frequently composite. That is, as in the example of the idea of a chair, a great many objects, differing materially in detail, all call up a single idea. When a man, for example, has obtained an idea of chairs in general by comparison with which he can say "This is a chair, that is a stool", he has what is known as an "abstract idea" distinct from the reproduction in his mind of any particular chair (see abstraction). Furthermore, a complex idea may not have any corresponding physical object, though its particular constituent elements may severally be the reproductions of actual perceptions. Thus the idea of a centaur is a complex mental picture composed of the ideas of man and horse, that of a mermaid of a woman and a fish. In anthropology and the social sciences Diffusion studies explore the spread of ideas from culture to culture. Some anthropological theories hold that all cultures imitate ideas from one or a few original cultures, the Adam of the Bible, or several cultural circles that overlap. Evolutionary diffusion theory holds that cultures are influenced by one another but that similar ideas can be developed in isolation. In the mid-20th century, social scientists began to study how and why ideas spread from one person or culture to another. Everett Rogers pioneered diffusion of innovations studies, using research to identify factors in adoption and profiles of adopters of ideas. In 1976, in his book The Selfish Gene, Richard Dawkins suggested applying biological evolutionary theories to the spread of ideas. He coined the term meme to describe an abstract unit of selection, equivalent to the gene in evolutionary biology. Semantics Samuel Johnson James Boswell recorded Samuel Johnson's opinion about ideas. Johnson claimed that they are mental images or internal visual pictures. As such, they have no relation to words or the concepts which are designated by verbal names. Relationship of ideas to modern legal time- and scope-limited monopolies Relationship between ideas and patents On susceptibility to exclusive property Patent law regulates various aspects related to the functional manifestation of inventions based on new ideas or incremental improvements to existing ones. Thus, patents have a direct relationship to ideas.
Relationship between ideas and copyrights In some cases, authors can be granted limited legal monopolies on the manner in which certain works are expressed. This is known colloquially as copyright, although the term intellectual property is used mistakenly in place of copyright. Copyright law regulating the aforementioned monopolies generally does not cover the actual ideas. The law does not bestow the legal status of property upon ideas per se. Instead, laws purport to regulate events related to the usage, copying, production, sale and other forms of exploitation of the fundamental expression of a work, that may or may not carry ideas. Copyright law is fundamentally different from patent law in this respect: patents do grant monopolies on ideas (more on this below). A copyright is meant to regulate some aspects of the usage of expressions of a work, not an idea. Thus, copyrights have a negative relationship to ideas. Work means a tangible medium of expression. It may be an original or derivative work of art, be it literary, dramatic, musical recitation, artistic, related to sound recording, etc. In (at least) countries adhering to the Berne Convention, copyright automatically starts covering the work upon the original creation and fixation thereof, without any extra steps. While creation usually involves an idea, the idea in itself does not suffice for the purposes of claiming copyright. Relationship of ideas to confidentiality agreements Confidentiality and nondisclosure agreements are legal instruments that assist corporations and individuals in keeping ideas from escaping to the general public. Generally, these instruments are covered by contract law. See also Idealism Brainstorming Creativity techniques Diffusion of innovations Form Ideology List of perception-related topics Notion (philosophy) Object of the mind Think tank Thought experiment History of ideas Intellectual history Concept Philosophical analysis Notes References The Encyclopedia of Philosophy, Macmillan Publishing Company, New York, 1973 Dictionary of the History of Ideas Charles Scribner's Sons, New York 1973–74, - Nous ¹ Volume IV 1a, 3a ² Volume IV 4a, 5a ³ Volume IV 32 - 37 Ideas Ideology Authority Education Liberalism Idea of God Pragmatism Chain of Being The Story of Thought, DK Publishing, Bryan Magee, London, 1998, a.k.a. The Story of Philosophy, Dorling Kindersley Publishing, 2001, (subtitled on cover: The Essential Guide to the History of Western Philosophy) a Plato, pages 11 - 17, 24 - 31, 42, 50, 59, 77, 142, 144, 150 b Descartes, pages 78, 84 - 89, 91, 95, 102, 136 - 137, 190, 191 c Locke, pages 59 - 61, 102 - 109, 122 - 124, 142, 185 d Hume, pages 61, 103, 112 - 117, 142 - 143, 155, 185 e Kant, pages 9, 38, 57, 87, 103, 119, 131 - 137, 149, 182 f Peirce, pages 61, How to Make Our Ideas Clear 186 - 187 and 189 g Saint Augustine, pages 30, 144; City of God 51, 52, 53 and The Confessions 50, 51, 52 - additional in the Dictionary of the History of Ideas for Saint Augustine and Neo-Platonism h Stoics, pages 22, 40, 44; The governing philosophy of the Roman Empire on pages 46 - 47. - additional in Dictionary of the History of Ideas for Stoics, also here , and here , and here . The Reader's Encyclopedia, 2nd Edition 1965, Thomas Y. 
Crowell Company, An Encyclopedia of World Literature: ¹a page 774 Plato (427–348 BC); ²a page 779 Francesco Petrarca; ³a page 770 Charles Sanders Peirce; ¹b page 849 the Renaissance. This article incorporates text from the Schaff-Herzog Encyclopedia of Religious Knowledge, a publication now in the public domain. Further reading A. G. Balz, Idea and Essence in the Philosophy of Hobbes and Spinoza (New York 1918) Gregory T. Doolan, Aquinas on the Divine Ideas as Exemplar Causes (Washington, D.C.: Catholic University of America Press, 2008) Patricia A. Easton (ed.), Logic and the Workings of the Mind: The Logic of Ideas and Faculty Psychology in Early Modern Philosophy (Atascadero, Calif.: Ridgeview, 1997) Pierre Garin, La Théorie de l'idée suivant l'école thomiste (Paris 1932) Marc A. High, Idea and Ontology: An Essay in Early Modern Metaphysics of Ideas (Pennsylvania State University Press, 2008) Lawrence Lessig, The Future of Ideas (New York 2001) Paul Natorp, Platons Ideenlehre (Leipzig 1930) W. D. Ross, Plato's Theory of Ideas (Oxford 1951) Peter Watson, Ideas: A History from Fire to Freud, Weidenfeld & Nicolson (London 2005) J. W. Yolton, John Locke and the Way of Ideas (Oxford 1956) A priori Abstraction Cognition Creativity Concepts in epistemology Free will Idealism Innovation Mental content Mental processes Concepts in metaphilosophy Metaphysics of mind Observation Ontology Perception Platonism Qualia Rationalism Reasoning Sources of knowledge Subjective experience Thought |
No, this text is not related with defense topics | Performance art is an artwork or art exhibition created through actions executed by the artist or other participants. It may be witnessed live or through documentation, spontaneously developed or written, and is traditionally presented to a public in a fine art context in an interdisciplinary mode. Also known as artistic action, it has been developed through the years as a genre of its own in which art is presented live. It had an important and fundamental role in 20th century avant-garde art. It involves four basic elements: time, space, body, and presence of the artist, and the relation between the creator and the public. The actions, generally developed in art galleries and museums, can take place in the street, any kind of setting or space and during any time period. Its goal is to generate a reaction, sometimes with the support of improvisation and a sense of aesthetics. The themes are commonly linked to life experiences of the artist themselves, or the need of denunciation or social criticism and with a spirit of transformation. The term "performance art" and "performance" became widely used in the 1970s, even though the history of performance in visual arts dates back to futurist productions and cabarets from the 1910s. The main pioneers of performance art include Carolee Schneemann, Marina Abramović, Ana Mendieta, Chris Burden, Hermann Nitsch, Joseph Beuys, Nam June Paik, Yves Klein and Vito Acconci. Some of the main exponents more recently are Tania Bruguera, Abel Azcona, Regina José Galindo, Tehching Hsieh, Marta Minujín and Petr Pavlensky. The discipline is linked to happening, the Fluxus movement, body art and conceptual art. Definition The definition and historical and pedagogical contextualization of performance art is controversial. One of the handicaps comes from the term itself, which is polysemic, and one of its meanings relates to the scenic arts. This meaning of performance in the scenic arts context is opposite to the meaning of performance art, since performance art emerged with a critical and antagonistic position towards scenic arts. Performance art only adjoins the scenic arts in certain aspects such as the audience and the present body, and still not every performance art piece contains these elements. The meaning of the term in the narrower sense is related to postmodernist traditions in Western culture. From about the mid-1960s into the 1970s, often derived from concepts of visual art, with respect to Antonin Artaud, Dada, the Situationists, Fluxus, installation art, and conceptual art, performance art tended to be defined as an antithesis to theatre, challenging orthodox art forms and cultural norms. The ideal had been an ephemeral and authentic experience for performer and audience in an event that could not be repeated, captured or purchased. The widely discussed difference, how concepts of visual arts and concepts of performing arts are used, can determine the meanings of a performance art presentation. Performance art is a term usually reserved to refer to a conceptual art which conveys a content-based meaning in a more drama-related sense, rather than being simple performance for its own sake for entertainment purposes. It largely refers to a performance presented to an audience, but which does not seek to present a conventional theatrical play or a formal linear narrative, or which alternately does not seek to depict a set of fictitious characters in formal scripted interactions. 
It therefore can include action or spoken word as a communication between the artist and audience, or even ignore expectations of an audience, rather than following a script written beforehand. Some types of performance art nevertheless can be close to performing arts. Such performance may use a script or create a fictitious dramatic setting, but still constitute performance art in that it does not seek to follow the usual dramatic norm of creating a fictitious setting with a linear script which follows conventional real-world dynamics; rather, it would intentionally seek to satirize or to transcend the usual real-world dynamics which are used in conventional theatrical plays. Performance artists often challenge the audience to think in new and unconventional ways, break conventions of traditional arts, and break down conventional ideas about "what art is". As long as the performer does not become a player who repeats a role, performance art can include satirical elements; use robots and machines as performers, as in pieces of the Survival Research Laboratories; involve ritualised elements (e.g. Shaun Caton); or borrow elements of any performing arts such as dance, music, and circus. Some artists, e.g. the Viennese Actionists and neo-Dadaists, prefer to use the terms "live art", "action art", "actions", "intervention" (see art intervention) or "manoeuvre" to describe their performing activities. Genres of performance art include body art, fluxus performance, happening, action poetry, and intermedia. Origins Performance art is a form of expression that was born as an alternative artistic manifestation. The discipline emerged in 1916 parallel to dadaism, under the umbrella of conceptual art. The movement was led by Tristan Tzara, one of the pioneers of Dada. Western culture theorists have set the origins of performance art in the beginnings of the 20th century, along with constructivism, Futurism and Dadaism. Dada was an important inspiration because of its poetry actions, which drifted away from conventions, and futurist artists, especially some members of Russian futurism, can also be identified as part of the starting process of performance art. Cabaret Voltaire The Cabaret Voltaire was founded in Zurich (Switzerland) by the couple Hugo Ball and Emmy Hennings for artistic and political purposes and was a place where new tendencies were explored. Located on the upper floor of a theater, whose productions they mocked in their shows, the works performed in the cabaret were avant-garde and experimental. It is thought that the Dada movement was founded in the ten square meter locale. Moreover, Surrealists, whose movement descended directly from Dadaism, used to meet in the Cabaret. During its brief existence—barely six months, closing in the summer of 1916—the Dadaist Manifesto was read there, and the cabaret held the first Dada actions, performances, and hybrid presentations of poetry, plastic art, music and repetitive action. Founders such as Richard Huelsenbeck, Marcel Janco, Tristan Tzara, Sophie Taeuber-Arp and Jean Arp participated in provocative and scandalous events that were fundamental to the foundation of the anarchist movement called Dada. Dadaism was born with the intention of destroying any system or established norm in the art world. It was an anti-art, anti-literature and anti-poetry movement that questioned the existence of art, literature and poetry itself. Not only was it a way of creating, but of living; it created a whole new ideology.
It was against eternal beauty, the eternity of principles, the laws of logic, the immobility of thought and clearly against anything universal. It promoted change, spontaneity, immediacy, contradiction and randomness, and the defense of chaos against order and of imperfection against perfection, ideas similar to those of performance art. They stood for provocation, anti-art protest and scandal, through forms of expression that were often satirical and ironic. Absurdity, worthlessness and chaos were the protagonists of their actions, which broke with traditional artistic form. Futurism Futurism was an artistic avant garde movement that appeared in 1909. It first started as a literary movement, even though most of the participants were painters. In the beginning it also included sculpture, photography, music and cinema. The First World War put an end to the movement, even though in Italy it went on until the 1930s. One of the countries where it had the most impact was Russia. In 1912 manifestos such as the Manifesto of Futurist Sculpture and the Manifesto of Futurist Architecture appeared, followed in 1913 by the Manifesto of Futurist Lust by Valentine de Saint-Point, a French dancer, writer and artist. The futurists spread their theories through encounters, meetings and lectures in public spaces that came close to political rallies, combined with poetry and music hall, which anticipated performance art. Bauhaus The Bauhaus, founded in Weimar in 1919, included an experimental performing arts workshop with the goal of exploring the relationship between the body, space, sound and light. The Black Mountain College, founded in the United States by instructors of the original Bauhaus who were exiled by the Nazi Party, continued incorporating experimental performing arts into its scenic arts training twenty years before the events related to the history of performance in the 1960s. The name Bauhaus derives from the German words Bau, construction, and Haus, house; ironically, despite its name and the fact that its founder was an architect, the Bauhaus did not have an architecture department during the first years of its existence. Action painting In the 1940s and 1950s, the action painting technique or movement gave artists the possibility of interpreting the canvas as an area to act in, rendering the paintings as traces of the artist's performance in the studio. According to art critic Harold Rosenberg, it was one of the initiating processes of performance art, along with abstract expressionism. Jackson Pollock is the action painter par excellence, who carried out many of his actions live. Also worth highlighting are Willem de Kooning and Franz Kline, whose work includes abstract and action painting. Nouveau réalisme Nouveau réalisme is another one of the artistic movements cited in the beginnings of performance art. It was a painting movement founded in 1960 by art critic Pierre Restany and painter Yves Klein, during the first collective exhibition in the Apollinaire Gallery in Milan. Nouveau réalisme was, along with Fluxus and other groups, one of the many avant garde tendencies of the 1960s. Pierre Restany created various performance art assemblies in the Tate Modern, amongst other spaces. Yves Klein is one of the main exponents of the movement. He was a clear pioneer of performance art, with his conceptual pieces like Zone de Sensibilité Picturale Immatérielle (1959–62), Anthropométries (1960), and the photomontage Saut dans le vide.
All his works have a connection with performance art, as they were created as live actions, like his best-known works: paintings created with the bodies of women. The members of the group saw the world as an image, from which they took parts and incorporated them into their work; they sought to bring life and art closer together. Gutai One of the other movements that anticipated performance art was the Japanese movement Gutai, which made action art and happenings. It emerged in 1955 in the region of Kansai (Kyōto, Ōsaka, Kōbe). The main participants were Jirō Yoshihara, Sadamasa Motonaga, Shozo Shimamoto, Saburō Murakami, Katsuō Shiraga, Seichi Sato, Akira Ganayama and Atsuko Tanaka. The Gutai group arose after World War II. They rejected capitalist consumerism, carrying out ironic actions with latent aggressiveness (object breaking, actions with smoke). They influenced groups such as Fluxus and artists like Joseph Beuys and Wolf Vostell. Land art and performance In the late 1960s, various land art artists such as Robert Smithson and Dennis Oppenheim created environmental pieces that preceded performance art in the 1970s. Works by conceptual artists from the early 1980s, such as Sol LeWitt, who made wall drawing into a performance act, were influenced by Yves Klein and other land art artists. Land art is a contemporary art movement in which the landscape and the artwork are deeply bound. It uses nature as a material (wood, soil, rocks, sand, wind, fire, water, etc.) with which to intervene on itself. The artwork is generated with the place itself as a starting point. The result is sometimes a junction between sculpture and architecture, and sometimes a junction between sculpture and landscaping that is increasingly taking a more determinant role in contemporary public spaces. When it incorporates the artist's body in the creative process, it acquires similarities with the beginnings of performance art. 1960s In the 1960s, with the purpose of evolving the generalized idea of art and with principles similar to those originating from the Cabaret Voltaire or Futurism, a variety of new works and concepts and a growing number of artists led to new kinds of performance art. Clearly differentiated movements emerged, such as Viennese Actionism, avant-garde performance art in New York City, process art, and the evolution of The Living Theatre and the happening, but above all this period saw the consolidation of the pioneers of performance art. Viennese actionism The term Viennese Actionism (Wiener Aktionismus) refers to a brief and controversial 20th-century art movement, which is remembered for the violent, grotesque and highly visual nature of its artworks. It belongs to the Austrian avant-garde of the 1960s, had the goal of bringing art onto the ground of performance art, and is linked to Fluxus and body art. Amongst its main exponents are Günter Brus, Otto Muehl and Hermann Nitsch, who developed most of their actionist activities between 1960 and 1971. Hermann Nitsch presented in 1962 his Theatre of Orgies and Mysteries (Orgien und Mysterien Theater), a pioneering work of performance art close to the scenic arts. New York and avant-garde performance In the early 1960s, New York City harbored many movements, events and interests regarding performance art. Amongst others, Andy Warhol began creating films and videos, and in mid-decade he sponsored The Velvet Underground and staged events and performative actions in New York, such as the Exploding Plastic Inevitable (1966), which included live rock music, explosive lights and films.
The Living Theatre Indirectly influential for art-world performance, particularly in the United States, were new forms of theatre, embodied by the San Francisco Mime Troupe and the Living Theatre and showcased in Off-Off Broadway theaters in SoHO and at La MaMa in New York City. The Living Theatre is a theater company created in 1947 in New York. It is the oldest experimental theatre in the United States. Throughout its history it has been led by its founders: actress Judith Malina, who had studies theatre with Erwin Piscator, with whom she studied Bertolt Brecht's and Meyerhold's theory; and painter and poet Julian Beck. After Beck's death in 1985, the company member Hanon Reznikov became co-director along with Malina. Because it is one of the oldest random theatre or live theatre groups nowadays, it is looked upon by the rest. They understood theatre as a way of life, and the actors lived in a community under libertary principles. It was a theatre campaign dedicated to transformation of the power organization of an authoritarian society and hierarchical structure. The Living Theatre chiefly toured in Europe between 1963 and 1968, and in the U.S. in 1968. A work of this period, Paradise Now, was notorious for its audience participation and a scene in which actors recited a list of social taboos that included nudity, while disrobing. Fluxus Fluxus, a Latin word that means flow, is a visual arts movement related to music, literature, and dance. Its most active moment was in the 1960s and 1970s. They proclaimed themselves against the traditional artistic object as a commodity and declared themselves a sociological art movement. Fluxus was informally organized in 1962 by George Maciunas (1931–1978). This movement had representation in Europe, the United States and Japan. The Fluxus movement, mostly developed in North America and Europe under the stimulus of John Cage, did not see the avant-garde as a linguistic renovation, but it sought to make a different use of the main art channels that separate themselves from specific language; it tries to be interdisciplinary and to adopt mediums and materials from different fields. Language is not the goal, but the mean for a renovation of art, seen as a global art. As well as Dada, Fluxus escaped any attempt for a definition or categorization. As one of the movement's founders, Dick Higgins, stated: Fluxus started with the work, and then came together, applying the name Fluxus to work which already existed. It was as if it started in the middle of the situation, rather than at the beginning.Amongst the earliest pieces that would later be published by Fluxus were Brecht's event scores, the earliest of which dated from around 1958/9, and works such as Valoche, which had originally been exhibited in Brecht's solo show 'Toward's Events' at 1959. Robert Filliou places Fluxus opposite to conceptual art for its direct, immediate and urgent reference to everyday life, and turns around Duchamp's proposal, who starting from Ready-made, introduced the daily into art, whereas Fluxus dissolved art into the daily, many times with small actions or performances. John Cage was an American composer, music theorist, artist, and philosopher. A pioneer of indeterminacy in music, electroacoustic music, and non-standard use of musical instruments, Cage was one of the leading figures of the post-war avant-garde. Critics have lauded him as one of the most influential composers of the 20th century. 
He was also instrumental in the development of modern dance, mostly through his association with choreographer Merce Cunningham, who was also Cage's romantic partner for most of their lives. Process art Process art is an artistic movement where the end product of art and craft, the objet d’art (work of art/found object), is not the principal focus; the process of its making is one of the most relevant aspects if not the most important one: the gathering, sorting, collating, associating, patterning, and moreover the initiation of actions and proceedings. Process artists saw art as pure human expression. Process art defends the idea that the process of creating the work of art can be an art piece itself. Artist Robert Morris predicated "anti-form", process and time over an objectual finished product. Happening Wardrip-Fruin and Montfort in The New Media Reader, "The term 'Happening' has been used to describe many performances and events, organized by Allan Kaprow and others during the 1950s and 1960s, including a number of theatrical productions that were traditionally scripted and invited only limited audience interaction." A happening allows the artis to experiment with the movement of the body, recorded sounds, written and talked texts, and even smells. One of Kaprow's first works was Happenings in the New York Scene, written in 1961. Allan Kaprow's happenings turned the public into interpreters. Often the spectators became an active part of the act without realizing it. Other actors who created happenings were Jim Dine, Claes Oldenburg, Robert Whitman and Wolf Vostell: Theater is in the Street (Paris, 1958). Main artists The works by performance artists after 1968 showed many times influences from the political and cultural situation that year. Barbara T. Smith with Ritual Meal (1969) was at the vanguard of body and scenic feminist art in the seventies, which included, amongst others, Carolee Schneemann and Joan Jonas. These, along with Yoko Ono, Joseph Beuys, Nam June Paik, Wolf Vostell, Allan Kaprow, Vito Acconci, Chris Burden and Dennis Oppenheim were pioneers in the relationship between body art and performance art, as well as the Zaj collective in Spain with Esther Ferrer and Juan Hidalgo. Barbara Smith is an artist and United States activist. She is one of the main African-American exponents of feminism and LGBT activism in the United States. In the beginning of the 1970s she worked as a teacher, writer and defender of the black feminism current. She has taught at numerous colleges and universities in the last five years. Smith's essays, reviews, articles, short stories and literary criticism have appeared in a range of publications, including The New York Times, The Guardian, The Village Voice and The Nation. Carolee Schneemann was an American visual experimental artist, known for her multi-media works on the body, narrative, sexuality and gender. She created pieces such as Meat Joy (1964) and Interior Scroll (1975). Schneemann considered her body a surface for work. She described herself as a "painter who has left the canvas to activate the real space and the lived time." Joan Jonas (born July 13, 1936) is an American visual artist and a pioneer of video and performance art, who is one of the most important female artists to emerge in the late 1960s and early 1970s. Jonas' projects and experiments provided the foundation on which much video performance art would be based. Her influences also extended to conceptual art, theatre, performance art and other visual media. 
She lives and works in New York and Nova Scotia, Canada. Immersed in New York's downtown art scene of the 1960s, Jonas studied with the choreographer Trisha Brown for two years. Jonas also worked with choreographers Yvonne Rainer and Steve Paxton. Yoko Ono was part of the avant-garde movement of the 1960s. She was part of the Fluxus movement. She is known for her performance art pieces in the late 1960s, works such as Cut Piece, where spectators could cut away her clothing until she was left naked. One of her best known pieces is Wall piece for orchestra (1962). Joseph Beuys was a German Fluxus, happening and performance artist, as well as a painter, sculptor, medallist and installation artist. In 1962 he began his actions alongside the neo-Dadaist Fluxus movement, a group in which he ended up becoming the most important member. His most relevant achievement was his socialization of art, making it more accessible for every kind of public. In How to Explain Pictures to a Dead Hare (1965) he covered his face with honey and gold leaf and explained his work to a dead hare that lay in his arms. In this work he linked spatial and sculptural, linguistic and sonorous factors to the artist's figure, to his bodily gesture, to the consciousness of a communicator whose receiver is an animal. Beuys acted as a shaman with healing and saving powers toward the society that he considered dead. In 1974 he carried out the performance I Like America and America Likes Me, where Beuys, a coyote and materials such as paper, felt and thatch constituted the vehicle for its creation. He lived with the coyote for three days. He piled up United States newspapers, a symbol of capitalism. With time, the tolerance between Beuys and the coyote grew and he ended up hugging the animal. Beuys repeated many elements across his works: objects that differ from Duchamp's ready-mades not because of their poverty and ephemerality, but because they are part of Beuys's own life, placed by the artist after living with them and leaving his mark on them. Many have an autobiographical meaning, like the honey or the fat used by the Tatars who saved him in World War Two. In 1970 he made his Felt Suit. Also in 1970, Beuys taught sculpture at the Kunstakademie Düsseldorf. In 1979, the Solomon R. Guggenheim Museum of New York City exhibited a retrospective of his work from the 1940s to 1970. Nam June Paik was a South Korean performance artist, composer and video artist from the second half of the 20th century. He studied music and art history at the University of Tokyo. Later, in 1956, he traveled to Germany, where he studied music theory in Munich, then continued in Cologne and at the Freiburg conservatory. While studying in Germany, Paik met the composers Karlheinz Stockhausen and John Cage and the conceptual artists Sharon Grace as well as George Maciunas, Joseph Beuys and Wolf Vostell, and was, from 1962 on, a member of the experimental art movement Fluxus. Nam June Paik then began participating in the Neo-Dada art movement, known as Fluxus, which was inspired by the composer John Cage and his use of everyday sounds and noises in his music. He was a friend of Yoko Ono's as a fellow member of Fluxus. Wolf Vostell was a German artist, one of the most representative of the second half of the 20th century, who worked with various mediums and techniques such as painting, sculpture, installation, decollage, videoart, happening and fluxus.
Vito Acconci was an influential American performance, video and installation artist, whose diverse practice eventually included sculpture, architectural design, and landscape design. His foundational performance and video art was characterized by "existential unease," exhibitionism, discomfort, transgression and provocation, as well as wit and audacity, and often involved crossing boundaries such as public–private, consensual–nonconsensual, and real world–art world. His work is considered to have influenced artists including Laurie Anderson, Karen Finley, Bruce Nauman, and Tracey Emin, among others. Acconci was initially interested in radical poetry, but by the late 1960s, he began creating Situationist-influenced performances in the street or for small audiences that explored the body and public space. Two of his most famous pieces were Following Piece (1969), in which he selected random passersby on New York City streets and followed them for as long as he was able, and Seedbed (1972), in which he claimed that he masturbated while under a temporary floor at the Sonnabend Gallery, as visitors walked above and heard him speaking. Chris Burden was an American artist working in performance, sculpture and installation art. Burden became known in the 1970s for his performance art works, including Shoot (1971), in which he arranged for a friend to shoot him in the arm with a small-caliber rifle. A prolific artist, Burden created many well-known installations, public artworks and sculptures before his death in 2015. Burden began to work in performance art in the early 1970s. He made a series of controversial performances in which the idea of personal danger as artistic expression was central. His first significant performance work, Five Day Locker Piece (1971), was created for his master's thesis at the University of California, Irvine, and involved his being locked in a locker for five days. Dennis Oppenheim was an American conceptual artist, performance artist, earth artist, sculptor and photographer. Dennis Oppenheim's early artistic practice is an epistemological questioning about the nature of art, the making of art and the definition of art: a meta-art which arose when strategies of the Minimalists were expanded to focus on site and context. As well as an aesthetic agenda, the work progressed from perceptions of the physical properties of the gallery to the social and political context, largely taking the form of permanent public sculpture in the last two decades of a highly prolific career, whose diversity could exasperate his critics. Yayoi Kusama is a Japanese artist who, throughout her career, has worked with a great variety of media including:sculpture, installation, painting, performance, film, fashion, poetry, fiction, and other arts; the majority of them exhibited her interest in psychedelia, repetition and patterns. Kusama is a pioneer of the pop art, minimalism and feminist art movements and influenced her coetaneous, Andy Warhol and Claes Oldenburg. She has been acknowledged as one of the most important living artists to come out of Japan and a very relevant voice in avant garde art. 1970s In the 1970s, artists that had derived to works related to performance art evolved and consolidated themselves as artists with performance art as their main discipline, deriving into installations created through performance, video performance, or collective actions, or in the context of a socio-historical and political context. 
Video performance In the early 1970s the use of video format by performance artists was consolidated. Some exhibitions by Joan Jonas and Vito Acconci were made entirely of video, activated by previous performative processes. In this decade, various books that talked about the use of the means of communication, video and cinema by performance artists, like Expanded Cinema, by Gene Youngblood, were published. One of the main artists who used video and performance, with notorious audiovisual installations, is the South Korean artist Nam June Paik, who in the early 1960s had already been in the Fluxus movement until becoming a media artist and evolving into the audiovisual installations he is known for. Carolee Schneemann's and Robert Whitman's 1960s work regarding their video-performances must be taken into consideration as well. Both were pioneers of performance art, turning it into an independent art form in the early seventies. Joan Jonas started to include video in her experimental performances in 1972, while Bruce Nauman scenified his acts to be directly recorded on video. Nauman is an American multimedia artist, whose sculptures, videos, graphic work and performances have helped diversify and develop culture from the 1960s on. His unsettling artworks emphasized the conceptual nature of art and the creation process. His priority is the idea and the creative process over the result. His art uses an incredible array of materials and especially his own body. Gilbert and George are Italian artist Gilbert Proesch and English artist George Passmore, who have developed their work inside conceptual art, performance and body art. They were best known for their live-sculpture acts. One of their first makings was The Singing Sculpture, where the artists sang and danced "Underneath the Arches", a song from the 1930s. Since then they have forged a solid reputation as live-sculptures, making themselves works of art, exhibited in front of spectators through diverse time intervals. They usually appear dressed in suits and ties, adopting diverse postures that they maintain without moving, though sometimes they also move and read a text, and occasionally they appear in assemblies or artistic installations. Apart from their sculptures, Gilbert and George have also made pictorial works, collages and photomontages, where they pictured themselves next to diverse objects from their immediate surroundings, with references to urban culture and a strong content; they addressed topics such as sex, race, death and HIV, religion or politics, critiquing many times the British government and the established power. The group's most prolific and ambitious work was Jack Freak Pictures, where they had a constant presence of the colors red, white and blue in the Union Jack. Gilbert and George have exhibited their work in museums and galleries around the world, like the Stedelijk van Abbemuseum of Eindhoven (1980), the Hayward Gallery in London (1987), and the Tate Modern (2007). They have participated in the Venice Biennale. In 1986 they won the Turner Prize. Endurance art Endurance performance art deepens the themes of trance, pain, solitude, deprivation of freedom, isolation or exhaustion. Some of the works, based on the passing of long periods of time are also known as long-durational performances. One of the pioneering artists was Chris Burden in California since the 1970s. 
In one of his best known works, Five Day Locker Piece (1971), he stayed inside a school locker for five days; in Shoot (1971) he was shot with a firearm; and in Bed Piece (1972) he inhabited a bed inside an art gallery for twenty-two days. Another example of an endurance artist is Tehching Hsieh. In a performance created in 1980–1981 (Time Clock Piece), he spent a whole year repeating the same action around the clock. Hsieh is also known for his performances about deprivation of freedom; he spent an entire year confined. In The House With the Ocean View (2003), Marina Abramović lived silently for twelve days without food. The Nine Confinements or The Deprivation of Liberty is a conceptual endurance artwork of critical content carried out between 2013 and 2016; its actions all have in common the illegitimate deprivation of freedom. Performance in a political context In the mid-1970s, behind the Iron Curtain, in major Eastern European cities such as Budapest, Kraków, Belgrade, Zagreb and Novi Sad, performing arts of a more experimental character flourished. In the face of political and social control, artists who made performance with political content arose. Among them was Orshi Drozdik, with her performance series Individual Mythology 1975–77 and NudeModel 1976–77. All her actions were critical of the patriarchal discourse in art and of the forced emancipation programme constructed by the equally patriarchal state. Drozdik took a pioneering feminist point of view on both, becoming one of the precursors of this type of critical art in Eastern Europe. In the 1970s performance art, due to its ephemerality, had a solid presence in the Eastern European avant-garde, especially in Poland and Yugoslavia, where dozens of artists who explored the body conceptually and critically emerged. The Other In 1976, Ulay and Marina Abramović founded the collective The Other in Amsterdam. When Abramović and Ulay started their collaboration, the main concepts they explored were the ego and artistic identity, and this was the start of a decade of collaborative work. Both artists were interested in the traditions of their cultural heritage and the individual's desire for ritual, and in consequence they formed a collective named The Other. They dressed and behaved as one, and created a relationship of absolute trust. They created a series of works in which their bodies created additional spaces for the audience's interaction. In Relation in Space they ran around the room, two bodies like two planets, meshing masculine and feminine energies into a third component they called "that self". Relation in Movement (1976) had the couple driving their car inside the museum, doing 365 laps; a black liquid dripped out of the car, forming a sculpture, and each lap represented a year. After this, they created Death Self, in which the two of them joined their lips and inhaled the air exhaled by the other until they had used up all the oxygen. Exactly 17 minutes after the start of the performance, both of them fell unconscious, due to their lungs filling with carbon dioxide. This piece explored the idea of a person's ability to absorb the life out of another, changing and destroying them. In 1988, after some years of a tense relationship, Abramović and Ulay decided to make a spiritual journey that would put an end to the collective. They walked along the Great Wall of China, starting at opposite ends and finding each other halfway. 
Abramović conceived this walk in a dream, and it gave her what she saw as an appropriate and romantic ending to a relationship full of mysticism, energy and attraction. Ulay started at the Gobi Desert and Abramović at the Yellow Sea. Each of them walked 2,500 kilometres, met in the middle and said goodbye. Main artists In 1973, Laurie Anderson performed Duets on Ice in the streets of New York. Marina Abramović, in the performance Rhythm 10, conceptually included the violation of the body. Thirty years later, the topics of rape, shame and sexual exploitation would be reimagined in the works of contemporary artists such as Clifford Owens, Gillian Walsh, Pat Oleszko and Rebecca Patek, amongst others. New artists with radical actions consolidated themselves as leading exponents of performance, like Chris Burden, with the 1971 work Shoot, in which an assistant shot him in the arm from a five-meter distance, and Vito Acconci the same year with Seedbed. The work Eye Body (1963) by Carolee Schneemann had already been considered a prototype of performance art. In 1975, Schneemann turned again to innovative solo acts such as Interior Scroll, which showed the female body as an artistic medium. One of the main artists was Gina Pane, a French artist of Italian origin. She studied at the École nationale supérieure des Beaux-Arts in Paris from 1960 until 1965 and was a member of the performance art movement in France in the 1970s known as "Art Corporel". In parallel to her art, Pane taught at the École des Beaux-Arts in Le Mans from 1975 until 1990 and directed an atelier dedicated to performance art at the Pompidou Centre from 1978 to 1979. One of her best known works is The Conditioning (1973), in which she lay on a metal bed frame over an area of lit candles. The Conditioning was later re-created by Marina Abramović, as an homage to Pane, as part of her Seven Easy Pieces (2005) at the Solomon R. Guggenheim Museum in New York City. A great part of Pane's work features self-inflicted pain, separating her from most other women artists in the 1970s. Through the violence of cutting her skin with razors or extinguishing fires with her bare hands and feet, Pane intended to incite a real experience in the visitor, who would be moved by its discomfort. The impactful nature of these early performance art pieces, or actions as she preferred to call them, often eclipsed her prolific photographic and sculptural work. Nonetheless, the body was the main concern in Pane's work, either literally or conceptually. 1980s The technique of performance art Until the 1980s, performance art had demystified virtuosity, this being one of its key characteristics; from the 1980s on, however, it began to adopt a certain technical brilliance. Referring to the work Presence and Resistance by Philip Auslander, the dance critic Sally Banes writes, "... by the end of the 1980s, performance art had become so widely known that it no longer needed to be defined; mass culture, especially television, had come to supply both structure and subject matter for much performance art; and several performance artists, including Laurie Anderson, Spalding Gray, Eric Bogosian, Willem Dafoe, and Ann Magnuson, had indeed become crossover artists in mainstream entertainment." In this decade the parameters and technical refinements intended to polish and perfect performance art were defined. 
Critique and investigation of performance art Despite the fact that many performances are held within the circle of a small art-world group, Roselee Goldberg notes in Performance Art: From Futurism to the Present that "performance has been a way of appealing directly to a large public, as well as shocking audiences into reassessing their own notions of art and its relation to culture. Conversely, public interest in the medium, especially in the 1980s, stems from an apparent desire of that public to gain access to the art world, to be a spectator of its ritual and its distinct community, and to be surprised by the unexpected, always unorthodox presentations that the artists devise." In this decade, publications and compilations about performance art and its best known artists emerged. Performance art in a political context In the 1980s, the political context played an important role in artistic development, and especially in performance, as almost all of the works created with a critical and political discourse belonged to this discipline. Until the decline of the European Eastern Bloc during the late 1980s, performance art had actively been rejected by most communist governments. With the exception of Poland and Yugoslavia, performance art was more or less banned in countries where any independent public event was feared. In the GDR, Czechoslovakia, Hungary and Latvia it happened in apartments, at seemingly spontaneous gatherings in artist studios, in church-controlled settings, or was covered as another activity, like a photo-shoot. Isolated from the Western conceptual context, in different settings it could take the form of a playful protest or a bitter comment, using subversive metaphors to express dissent with the political situation. Amongst the most remarkable politically charged performance works of this time was Tehching Hsieh's Art/Life: One Year Performance (Rope Piece), carried out between July 1983 and July 1984. Performance poetry In 1982 the terms "poetry" and "performance" were first used together. Performance poetry appeared in order to distinguish text-based vocal performances from performance art, especially the work of theatrical and musical performance artists, such as Laurie Anderson, who worked with music at that time. Performance poets relied more on the rhetorical and philosophical expression in their poetics than performance artists, who arose from the visual art genres of painting and sculpture. Many artists since John Cage have fused performance with a poetical base. Feminist performance art From 1973 the Feminist Studio Workshop in the Woman's Building in Los Angeles had an impact on the wave of feminist actions, but feminism and performance art did not fully fuse until 1980. The conjunction between feminism and performance art progressed over the following decade. In the first two decades of performance art's development, works that had not been conceived as feminist are seen as such now; still, not until 1980 did artists define themselves as feminists. Artist groups stood out in which women were influenced by the 1968 student movement as well as by the feminist movement, a connection that has been treated in contemporary art history research. Some of the women whose innovative contributions to performances and shows were most relevant were Pina Bausch and the Guerrilla Girls, an anonymous feminist and anti-racist art collective that emerged in New York City in 1985. 
They chose that name because they used guerrilla tactics in their activism to denounce discrimination against women in art through political and performance art. Their first actions consisted of placing posters and making public appearances in museums and galleries in New York, to critique the fact that some groups of people were discriminated against for their gender or race. All of this was done anonymously; in all of these appearances they covered their faces with gorilla masks (a play on the similar pronunciation of the words "gorilla" and "guerrilla"). As nicknames they used the names of female artists who had died. From the 1970s to the 1980s, among the works that challenged the system and its usual strategies of representation, the main ones featured women's bodies, such as Ana Mendieta's works in New York City, in which her body is outraged and abused, or the representations by Louise Bourgeois, with a rather minimalist discourse, which emerged in the late seventies and eighties. Special mention should be made of works created around feminine and feminist corporeality, such as Lynda Benglis and her phallic performative actions, which reconstructed the feminine image to turn it into more than a fetish. Through feminist performance art the body becomes a space for developing these new discourses and meanings. The artist Eleanor Antin, active in the 1970s and 1980s, worked on the topics of gender, race and class. Cindy Sherman, in her first works in the seventies and already in her artistic maturity in the eighties, continued her critical line of overturning the imposed self, through her use of the body as an object of privilege. Cindy Sherman is an American photographer and artist. She is one of the most representative post-war artists and has exhibited more than three decades of her work at MoMA. Even though she appears in most of her performative photographs, she does not consider them self-portraits. Sherman uses herself as a vehicle to represent a great array of topics of the contemporary world, such as the role women play in our society and the way they are represented in the media, as well as the nature of art creation. In 2020 she was awarded the Wolf Prize in Arts. Judy Chicago is an artist and a pioneer of feminist art and performance art in the United States. Chicago is known for her large collaborative art installation pieces on images of birth and creation, which examine women's role in history and culture. In the 1970s, Chicago founded the first feminist art programme in the United States. Chicago's work incorporates a variety of artistic skills such as sewing, in contrast with labour-intensive skills such as welding and pyrotechnics. Chicago's best known work is The Dinner Party, which is permanently installed in the Elizabeth A. Sackler Center for Feminist Art at the Brooklyn Museum. The Dinner Party celebrated the achievements of women throughout history and is widely considered the first epic feminist artwork. Other remarkable projects include International Honor Quilt, The Birth Project, Powerplay, and The Holocaust Project. Expansion to Latin America In this decade performance art spread to Latin America through the workshops and programmes offered by universities and academic institutions. It developed mainly in Mexico, Colombia (with artists such as Maria Teresa Hincapié), Brazil and Argentina. Ana Mendieta was a conceptual and performance artist born in Cuba and raised in the United States. 
She is mostly known for her artworks and performance art pieces in land art. Mendieta's work was known mostly among feminist art critics. Years after her death, especially since the Whitney Museum of American Art retrospective in 2004 and the Hayward Gallery retrospective in London in 2013, she has been considered a pioneer of performance art and other practices related to body art and land art, sculpture and photography. She described her own work as earth-body art. Tania Bruguera is a Cuban artist specializing in performance art and political art. Her work mainly consists of her interpretation of political and social topics. She has developed concepts such as "conduct art" to define her artistic practices, with a focus on the limits of language and of the body confronted with the reactions and behavior of the spectators. She also came up with "useful art", which is meant to transform certain political and legal aspects of society. Bruguera's work revolves around topics of power and control, and a great portion of it questions the current state of her home country, Cuba. In 2002 she created the Cátedra Arte de Conducta in Havana. Regina José Galindo is a Guatemalan artist specializing in performance art. Her work is characterized by its explicit political and critical content, using her own body as a tool of confrontation and social transformation. Her artistic career has been marked by the Guatemalan Civil War, which took place from 1960 to 1996 and involved a genocide in which more than 200,000 people died, many of them indigenous people, farmers, women and children. With her work, Galindo denounces violence and sexism (one of her main topics is femicide), as well as Western beauty standards, state repression and the abuse of power, especially in the context of her country, although her language transcends borders. In her beginnings she used only her own body as a medium, occasionally taking it to extreme situations (as in Himenoplasty (2004), in which she underwent a hymen reconstruction, a work that won the Golden Lion at the Venice Biennale); later she had volunteers or hired people interact with her, so that she lost control over the action. 1990s The 1990s were a period of relative absence for classic European performance, and performance artists kept a low profile. Nevertheless, Eastern Europe experienced a peak. Latin American performance, meanwhile, continued to boom, as did feminist performance art. There was also a peak of the discipline in Asian countries, where its impetus had emerged from Butō dance in the 1950s; in this period it became professionalized, and new Chinese artists arose, earning great recognition. There was also a general professionalization, visible in the increase of exhibitions dedicated to performance art and in the opening of the Venice Biennale to performance art, where various artists of this discipline have won the Leone d'Oro, including Anne Imhof, Regina José Galindo and Santiago Sierra. Performance with political context While the Soviet Bloc dissolved, some forbidden performance art pieces began to spread. Young artists from the former Eastern Bloc, including Russia, devoted themselves to performance art. Performing arts emerged around the same time in Cuba, the Caribbean and China. "In these contexts, performance art became a new critical voice with a social strength similar to that of Western Europe, the United States and South America in the sixties and early seventies. 
It must be emphasized that the rise of performance art in the 1990s in Eastern Europe, China, South Africa, Cuba and other places must not be considered secondary or an imitation of the West". Professionalization of performance art In the Western world in the 1990s, performance art joined mainstream culture. Diverse performance artworks, whether live, photographed or documented, started to become part of galleries and museums, which began to understand performance art as an art discipline. Nevertheless, it was not until the next decade that a major institutionalization took place, when leading museums started to incorporate performance art pieces into their collections and to dedicate major exhibitions and retrospectives to them, museums such as Tate Modern in London, MoMA in New York City and the Pompidou Centre in Paris. From the 1990s on, many more performance artists were invited to important biennials like the Venice Biennale, the São Paulo Biennial and the Lyon Biennial. Performance in China In the late 1990s, Chinese contemporary art and performance art received great international recognition, as 19 Chinese artists were invited to the Venice Biennale. Performance art in China had been growing since the 1970s, owing to the interplay between art, process and tradition in Chinese culture, but it gained recognition from the 1990s on. In China, performance art is part of the fine arts education programme and is becoming more and more popular. In the early 1990s, Chinese performance art was already acclaimed on the international art scene. Since the 2000s New-media performance In the late 1990s and into the 2000s, a number of artists incorporated technologies such as the World Wide Web, digital video, webcams, and streaming media into performance artworks. Artists such as Coco Fusco, Shu Lea Cheang, and Prema Murthy produced performance art that drew attention to the role of gender, race, colonialism, and the body in relation to the Internet. Other artists, such as Critical Art Ensemble, Electronic Disturbance Theater, and the Yes Men, used digital technologies associated with hacktivism and interventionism to raise political issues concerning new forms of capitalism and consumerism. In the second half of the decade, computer-aided forms of performance art began to emerge. Many of these works led to the development of algorithmic art, generative art, and robotic art, in which the computer itself, or a computer-controlled robot, becomes the performer. Coco Fusco is an interdisciplinary Cuban-American artist, writer and curator who lives and works in the United States. Her artistic career began in 1988. In her work, she explores topics such as identity, race, power and gender through performance. She also makes videos and interactive installations and produces critical writing. Radical performance During the first and second decades of the 2000s, various artists were prosecuted, tried, detained or imprisoned for works of political content. Artists such as Pussy Riot, Tania Bruguera, and Petr Pavlensky have been put on trial for artistic actions created with the intention of denouncing abuses and making them visible. On February 21, 2012, as part of their protest against the re-election of Vladimir Putin, several women of the artistic collective Pussy Riot entered the Russian Orthodox Cathedral of Christ the Saviour in Moscow. 
They made the sign of the cross, bowed before the shrine, and began to perform a piece consisting of a song and a dance under the motto "Virgin Mary, Put Putin Away". On March 3, 2012, Maria Alyokhina and Nadezhda Tolokonnikova, members of Pussy Riot, were arrested by the Russian authorities and accused of vandalism. At first they both denied being members of the group, and they started a hunger strike in protest at being incarcerated and separated from their children until the trials began in April. On March 16 another woman, Yekaterina Samutsevich, who had previously been interrogated as a witness, was arrested and charged as well. On July 5, formal charges against the group and a 2,800-page indictment were filed. That same day they were notified that they had until July 9 to prepare their defense. In reply, they announced a hunger strike, arguing that two days was an inadequate time frame to prepare their defense. On July 21, the court extended their pre-trial detention by six more months. The three detained members were recognized as political prisoners by the Union of Solidarity with Political Prisoners. Amnesty International considers them prisoners of conscience, citing "the severity of the response of the Russian authorities". Since 2012, the artist Abel Azcona has been prosecuted for some of his works. The lawsuit that gained the most attention was the one brought by the Archbishopric of Pamplona and Tudela, on behalf of the Catholic Church. The Church sued Azcona for the crimes of desecration and blasphemy, hate crime, and offence against religious freedom and feelings for his work Amen or The Pederasty; in the most recent lawsuits, the petitioners added the crime of obstruction of justice. In 2016, Azcona was reported for glorifying terrorism over his exhibition Natura Morta, in which the artist recreated situations of violence, historical memory, terrorism and war conflicts through performance and hyperrealistic sculptures and installations. In 2018, he was denounced by the Francisco Franco Foundation for exhibiting an installation consisting of twelve documents, signed by an architect, that formed a technical study for the detonation of the Monument of the Valle de los Caídos. He has also been criticized by the State of Israel for his work The Shame, in which he installed fragments of the Berlin Wall along the West Bank Wall as a critical performative installation. That same year he represented Spain at the Asian Art Biennial in Dhaka, Bangladesh. Azcona installed chairs in the pavilion, with destitute children from the streets of Dhaka sitting on them. His performance was cancelled because of protests against the picture of the Biennial and the country that the pavilion portrayed. In December 2014 Tania Bruguera was detained in Havana to prevent her from carrying out new protest works. Her performance art pieces have earned her harsh criticism, and she has been accused of promoting resistance and public disturbances. In December 2015 and January 2016, Bruguera was detained for organizing a public performance in the Plaza de la Revolución in Havana. She was detained along with other Cuban artists, activists and reporters who took part in the campaign Yo También Exijo, which was created after the declarations of Raúl Castro and Barack Obama in favor of restoring diplomatic relations between their countries. 
During the performance El Susurro de Tatlin #6 she set up microphones and loudspeakers in the Plaza de la Revolución so that Cubans could express their feelings about the new political climate. The event received wide coverage in international media, including a presentation of El Susurro de Tatlin #6 in Times Square, and an action in which various artists and intellectuals called for Bruguera's release by sending an open letter to Raúl Castro, signed by thousands of people around the world, asking for the return of her passport and denouncing the charges as unjust, since she had only given a microphone to the people so they could give their opinion. In November 2015 and October 2017 Petr Pavlensky was arrested for carrying out radical performance pieces in which he set fire to the entrance of the Lubyanka Building, headquarters of the Federal Security Service of Russia, and to a branch office of the Bank of France. On both occasions he sprayed the main entrance with gasoline; in the second performance he sprayed the inside as well, and ignited it with a lighter. The doors of the buildings were partially burnt. Both times Pavlensky was arrested without resistance and accused of debauchery. A few hours after the actions, several political and artistic protest videos appeared on the internet. Institutionalization of performance art Since the 2000s, big museums, institutions and collections have supported performance art. Since January 2003, Tate Modern in London has had a curated programme of live art and performance, with exhibitions by artists such as Tania Bruguera and Anne Imhof. In 2012 The Tanks at Tate Modern were opened: the first dedicated spaces for performance, film and installation in a major modern and contemporary art museum. The Museum of Modern Art held a major retrospective and performance recreation of Marina Abramović's work, the biggest exhibition of performance art in MoMA's history, from March 14 to May 31, 2010. The exhibition consisted of more than twenty pieces by the artist, most of them from the years 1960–1980, many of which were re-activated by other young artists of multiple nationalities selected for the show. In parallel to the exhibition, Abramović performed The Artist is Present, a 736-hour-and-30-minute static, silent piece in which she sat immobile in the museum's atrium while spectators were invited to take turns sitting opposite her. The work was an updated version of one of the pieces from the 1970s shown in the exhibition, in which Abramović sat for entire days next to Ulay, who was then her partner. The performance attracted celebrities such as Björk, Orlando Bloom and James Franco, who participated and drew media coverage. Collective protest performance art In 2014 the performance art piece Carry That Weight, also known as "the mattress performance", was created. The artist behind the piece is Emma Sulkowicz, who developed it as her senior thesis in visual arts at Columbia University in New York City. Sulkowicz's piece began in September 2014, when she started carrying her own mattress around the Columbia University campus. The artist created the work to denounce her rape on that same mattress years earlier, in her own dormitory, a report that went unheeded by the university and the courts; she therefore decided to carry the mattress with her for the entire semester, without leaving it at any moment, until her graduation ceremony in May 2015. 
The piece generated great controversy but was supported by a number of her fellow students and activists, who joined Sulkowicz several times in carrying the mattress, turning the work into an international protest. The art critic Jerry Saltz considered the artwork to be one of the most important of 2014. In November 2018, with a lecture and live performance by the artist Abel Azcona at the Bogotá Contemporary Art Museum, the work Spain Asks for Forgiveness (España os Pide Perdón), a piece of critical and anticolonialist content, began. In the first action, Azcona read a text in which "Spain asks for forgiveness" was repeated continuously. Two months later, at the Mexico City Museum, he installed a sailcloth with the same sentence on it. Just a few days later, the president of Mexico, Andrés Manuel López Obrador, publicly demanded an apology from Spain during a press conference. From then until mid-2020, the work grew into a collective movement in cities such as Havana, Lima, Caracas, Panama City, Tegucigalpa and Quito, through diverse media. In 2019 the collective performance art piece A Rapist in Your Path was created by Lastesis, a feminist group from Valparaíso, Chile; it consisted of a demonstration against violations of women's rights in the context of the 2019–2020 Chilean protests. It was first performed in front of the Second Police Station of the Carabineros de Chile in Valparaíso on November 18, 2019. A second performance, by 2,000 Chilean women on November 25, 2019, as part of the International Day for the Elimination of Violence against Women, was filmed and went viral on social media. Its reach became global after feminist movements in dozens of countries adopted and translated the performance for their own protests and demands for the cessation and punishment of femicide and sexual violence, amongst others. See also ART/MEDIA Classificatory disputes about art Conceptual art COUM Transmissions Danger music Digital Live Art Endurance art Experimental theatre Flash mob Fluxus Graphic arts Guerrilla theatre Happening List of performance artists Living statue New media art Noise music Poetry slam Radio drama Survival Research Laboratories References Bibliography Bäckström, Per. "Performing the Poem. The Cross-Aesthetic Art of the Nordic Neo-Avant-Garde", The Angel of History. Literature, History and Culture, Vesa Haapala, Hannamari Helander, Anna Hollsten, Pirjo Lyytikäinen & Rita Paqvalen (eds.), Helsinki: The Department of Finnish Language and Literature, University of Helsinki, 2009. Bäckström, Per. "Kisses Sweeter than Wine. Öyvind Fahlström and Billy Klüver: The Swedish Neo-Avant-Garde in New York", Artl@s Bulletin, vol. 6, 2017: 2 Migrations, Transfers, and Resemantization. Bäckström, Per. "The Intermedial Cluster. Åke Hodell's Lågsniff", Acta Universitatis Sapientiae, Series Film & Media Studies, de Gruyter, no. 10, 2015. Bäckström, Per. "'The Trumpet in the Bottom'. Öyvind Fahlström and the Uncanny", Edda 2017: 2. Beisswanger, Lisa: Performance on Display. Zur Geschichte lebendiger Kunst im Museum. Deutscher Kunstverlag, Berlin 2021, ISBN 978-3-422-98448-6 (in German) Beuys Brock Vostell. Aktion Demonstration Partizipation 1949–1983. ZKM – Zentrum für Kunst und Medientechnologie, Hatje Cantz, Karlsruhe, 2014. Battcock, Gregory; Nickas, Robert (1984). The Art of Performance: A Critical Anthology. New York: E.P. Dutton. Carlson, Marvin (1996). Performance: A Critical Introduction. London and New York: Routledge. Carr, C. (1993). 
On Edge: Performance at the End of the Twentieth Century. Wesleyan University Press. Dempsey, Amy, Art in the Modern Era: A Guide to Styles, Schools, & Movements, Publisher: Harry N. Abrams (basic definition and basic overview provided). Dreher, Thomas: Performance Art nach 1945. Aktionstheater und Intermedia. München: Wilhelm Fink 2001. (in German) Fischer-Lichte, Erika: Ästhetik des Performativen. Frankfurt: edition suhrkamp 2004. (in German) Goldberg, Roselee (1998). Performance: Live Art Since 1960. Harry N. Abrams, New York. Goldberg, Roselee (2001). Performance Art: From Futurism to the Present (World of Art). Thames & Hudson. Gómez-Peña, Guillermo (2005). Ethno-techno: Writings on performance, activism and pedagogy. Routledge, London. Jones, Amelia and Heathfield, Adrian (eds.) (2012). Perform, Repeat, Record. Live Art in History. Intellect, Bristol. Phelan, Peggy: Unmarked. The Politics of Performance. Routledge, London 1993, ISBN 9780415068222. Rockwell, John (2004). "Preserve Performance Art?" New York Times, April 30. Schimmel, Paul (ed.) (1998). Out of Actions: Between Performance and the Object, 1949–1979. Thames and Hudson, Los Angeles. Library of Congress NX456.5.P38 S35 1998. Smith, Roberta (2005). "Performance Art Gets Its Biennial". New York Times, November 2. Best, Susan, "The Serial Spaces of Ana Mendieta", Art History, April 2007, https://doi.org/10.1111/j.1467-8365.2007.00532.x Best, Susan, "Ana Mendieta: Affect Miniaturization, Emotional Ties and the Silueta Series," Visualizing Feeling: Affect and the Feminine Avant-Garde (London: I B Tauris, 2011) 92–115. Del Valle, Alejandro. "Ana Mendieta: Performance in the way of the primitive". Arte, Individuo y Sociedad, 26 (1) 508–523. "Ana Mendieta: Earth Body, Sculpture and Performance 1972–1985." Hirshhorn Museum and Sculpture Garden. Traditional Fine Arts Organization, Inc. Ana Mendieta: New Museum archive External links Live Art Archives at the University of Bristol Theatre Collection Thomas Dreher: Intermedia Art: Performance Art (most articles in German) Contemporary art Performing arts Theatre Culture jamming techniques Art movements Postmodern theatre |
No, this text is not related with defense topics | Lysozyme PEGylation is the covalent attachment of Polyethylene glycol (PEG) to Lysozyme, which is one of the most widely investigated PEGylated proteins. The PEGylation of proteins has become a common practice of modern therapeutic drugs, as the process is capable of enhancing solubility, thermal stability, enzymatic degradation resistance, and serum half-life of the proteins of interest. Lysozyme, as a natural bactericidal enzyme, lyses the cell wall of various gram-positive bacteria and offers protection against microbial infections. Lysozyme has six lysine residues which are accessible for PEGylation reactions. Thus, the PEGylation of lysozyme, or lysozyme PEGylation, can be a good model system for the PEGylation of other proteins with enzymatic activities by showing the enhancement of its physical and thermal stability while retaining its activity. Previous works on lysozyme PEGylation showed various chromatographic schemes in order to purify PEGylated lysozyme, which included ion exchange chromatography, hydrophobic interaction chromatography, and size-exclusion chromatography (fast protein liquid chromatography), and proved its stable conformation via circular dichroism and improved thermal stability by enzymatic activity assays, SDS-PAGE, and size-exclusion chromatography (high-performance liquid chromatography). Methodology PEGylation The chemical modification of lysozyme by PEGylation involves the addition of methoxy-PEG-aldehyde (mPEG-aldehyde) with varying molecular sizes, ranging from 2 kDa to 40 kDa, to the protein. The protein and mPEG-aldehyde are dissolved using a sodium phosphate buffer with sodium cyanoborohydride, which acts as a reducing agent and conditions the aldehyde group of mPEG-aldehyde to have a strong affinity towards the lysine residue on the N-terminal of lysozyme. The commonly used molar ratio of lysozyme and mPEG-aldehyde is 1:6 or 1:6.67. When sufficient PEGylation is reached, the reaction can be terminated by addition of lysine to the solution or boiling of the solution. Various profiles can result in the PEGylation of the protein, which includes intact mono-PEGylated, di-PEGylated, tri-PEGylated, and also possibly their isoforms. Purification Ion exchange chromatography Ion exchange chromatography is often employed in the first step, or capturing step, for the separation of PEGylated proteins as PEGylation may affect the charges of target proteins by neutralizing electrostatic interaction, changing the isoelectric point (pI), and increasing the pKa value. Due to the high pI of lysozyme (pI = 10.7), cation exchange chromatography is used. As the increased degree of PEGylation decreases the ion strength of the protein, the poly-PEGylated proteins tend to bind to the cation resin weaker than the mono-PEGylated protein or the intact form does. Thus, the poly-PEGylated proteins elute faster and the intact protein eludes last in the cation exchange chromatography. As mono-PEGylated is widely investigated and described as a protection of target proteins, the target eluate in the cation exchange chromatography is usually the mono-PEGylated proteins. Hydrophobic interaction chromatography Despite the capability of the cation exchange chromatography in purification process, hydrophobic interaction chromatography is also employed, usually at the second step as a polishing step. 
By using a cation resin with a relatively small bead size, cation exchange chromatography can identify and separate isoforms by their apparent charges under the given conditions, whereas hydrophobic interaction chromatography is capable of identifying and separating the isoforms by their hydrophobicity. Size-exclusion chromatography (FPLC) Due to the apparent size differences caused by the degree of PEGylation of the protein, size-exclusion chromatography (fast protein liquid chromatography or FPLC) can be used. There is a negative correlation between molecular weight and the retention time of the PEGylated protein in the chromatogram; the larger, more PEGylated protein elutes first, and the smaller, intact protein elutes last. Characterization Identification The most common analyses for identifying intact and PEGylated lysozyme are size-exclusion chromatography (high-performance liquid chromatography or HPLC), SDS-PAGE and matrix-assisted laser desorption/ionization (MALDI). Conformation The secondary structure of intact and PEGylated lysozyme can be characterized by circular dichroism (CD) spectroscopy. CD spectra recorded from 189 to 260 nm with a pitch of 0.1 nm showed no significant change in the secondary structure of the intact and PEGylated lysozyme. Enzymatic activity assay Glycol chitosan Enzymatic activity of intact and PEGylated lysozyme can be evaluated using glycol chitosan by reacting 1 mL of 0.05% (w/v) glycol chitosan in 100 mM acetate buffer (pH 5.5) with 100 μL of the intact or PEGylated protein at 40 °C for 30 min and subsequently adding 2 mL of 0.5 M sodium carbonate with 1 μg of potassium ferricyanide. The mixture is immediately heated, boiled for 15 minutes, and cooled for spectral analysis at 420 nm. As the enzymatic activity toward hydrolyzing the β-1,4-N-acetylglucosamine linkage was retained after PEGylation, there was no decay in enzymatic activity with an increasing degree of PEGylation. Micrococcus lysodeikticus Enzymatic activity can also be evaluated by measuring the decrease in turbidity of M. lysodeikticus incubated with lysozyme. 7.5 μL of 0.1–1 mg/mL protein is added to 200 μL of M. lysodeikticus suspension at an optical density (OD) of 1.7 AU, and the mixture is measured at 450 nm periodically for reaction rate calculation. In contrast to the result from the glycol chitosan assay, an increasing degree of PEGylation decreased the enzymatic activity. This difference in trend can be attributed to PEGylation of free lysines causing steric hindrance and thereby preventing the formation of the enzyme–substrate complex when the substrate is a macromolecule such as M. lysodeikticus. References Biotechnology |
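The reaction-rate step of the turbidity assay described above usually amounts to taking the initial slope of the OD450 decrease over time. The sketch below is only an illustration of that calculation: the time points, OD readings and protein amounts are hypothetical placeholders (not data from the studies summarized here), and the "0.001 ΔA450 per minute = 1 unit" convention is one common definition of lysozyme activity rather than necessarily the one used in those studies.

```python
import numpy as np

# Hypothetical OD450 readings of an M. lysodeikticus suspension after adding lysozyme
time_min = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
od450    = np.array([1.70, 1.64, 1.58, 1.53, 1.47, 1.42, 1.37])

# Initial reaction rate = magnitude of the slope of the linear OD450 decrease (AU/min)
slope, intercept = np.polyfit(time_min, od450, 1)
rate_au_per_min = -slope

# Common convention: 1 unit of activity = 0.001 AU decrease per minute at 450 nm,
# normalized here to a hypothetical 7.5 uL addition of 0.5 mg/mL protein
protein_mg = 7.5e-3 * 0.5
units_per_mg = (rate_au_per_min / 0.001) / protein_mg
print(f"rate = {rate_au_per_min:.3f} AU/min, specific activity ~ {units_per_mg:.0f} U/mg")
```

Comparing this specific activity for intact versus PEGylated lysozyme is what reveals the activity decrease with increasing PEGylation reported above.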
No, this text is not related with defense topics | Hydroflight sports are a category of sport in which water jet propulsion is used to create sustained flight where lift and movement are controlled by a person riding on a propulsion device. It is a fast-paced sport that is growing in popularity at a fine rate of speed. Competitions for this sport started around 2012. There are many training centres throughout the world where beginners to go to learn and practice skills so they can fly these devices by themselves. Types of hydroflight equipment There are many different types of Hydroflight products that are used for flying. The three most common types are the Jetboards, Jetpacks and Jetbikes, but there are others such as the Hoverboard and Freedom Flyer. Different varieties of jetboards are manufactured by different companies such as Jetblade, Flyboard, Defy X and Wataboard. This gives buyers a choice on which style board will suite their style of use. Jetboard The jetboard is a device that has two jets either side of the deck, on top of the deck is where the boots/bindings (generally wakeboarding boots) are bolted in and this is where the pilot will strap themselves into. The direction and control of the jetboard comes down to the amount of propulsion being applied, the angle of the feet are pointing and the distribution of bodyweight. Jetpack The jetpack is a device that is attached to the back with the two jets situated next to the shoulders. People are held to the device by a five-point safety/racing harness (same as the ones used in race car seats). The direction of flight is controlled using two handles that are attached to the jet nozzles. Jetbike The Jetbike is a device that has a motorcycle style seat and allows its pilot to fly in a position that replicates a motorcyclist's form. There is nothing to hold a person in but two small straps located on the foot pad. The bike has one main jet underneath the seat and two smaller jets located at the front of the bike which have handles attached to them to control the flight path. Freedom flyer The Freedom Flyer is a device which can actually be used by those who may suffer from a disability to the lower half of their body. The device is in the shape of a chair which has one main jet under the seat and two jets situated on either side of it which has handles attached to either side so the pilot can alter their flight path. Hoverboard The Hoverboard is a snowboard style device which only has one main jet situated underneath it. It is ridden with a side-on stance and is directed by the distribution of the pilot's own body-weight. Events/Competitions HydroJam & Expo Open Competitions: Hydro Fest 2016 / Session One / FlyCup / Hydroflight World Championship / Louisiana Hydroflight Invitational Flyboard competitions, closed to only users of the Flyboard by Zapata Racing, since 2018 no longer held: Flyboard European Championship 2016 / XDubai Flyboard World Cup 2015 / XDubai Hoverboard Race 2015 / North American Flyboard Championship 2015 / XDubai Flyboard World Cup 2014 / Flyboard World Cup 2013 / Flyboard World Cup 2012 / Japan Flyboard World Cup Jetpack events have become a new segment of the hydroflight industry with an Australian business Jetpack Events incorporating fireworks and led lighting into the Hydroflight industry. Backpack type firework units are attached to flyers and triggered, whilst in flight for event entertainent. 
These pilots are typically dressed in an outer garment holding multiple LED lighting strips to enable viewing of the pilot in these night events. The only country where hydroflight is officially recognized as a sport is Russia. Since 2019 it has been classified there as one of the water-motor sports, and in 2020 the first national championship and the Cup of Russia were held. https://iz.ru/1052172/2020-08-24/v-rossii-vpervye-v-mire-proshel-ofitcialnyi-natcionalnyi-chempionat-po-gidroflaitu Injuries The risk of injury exists in hydroflight sports, especially when the pilot starts to perform advanced movements. Injuries can occur at any time during the flight or in the water environment. Hitting the pressurized hose at speed can cause concussion, bruising and broken bones. Moving water, river flow and strong tidal actions have posed the biggest danger to participants. When the jet ski and the floating rider move with the water, the 20+ meter long hose can hang up on underwater obstructions. This can pull the rider, and potentially the jet ski, under water as the hose becomes taut. Deaths have occurred in this manner, and hydroflight in moving water is strongly recommended against. Diving into the water presents the danger of breaking the neck, concussion, and dislocation of the shoulders, back or neck; if the pilot attempts to scoop the trajectory of the dive underwater against his current rotation, he will be at serious risk of a back injury. Hitting the water flat (in a non-diving position) from a height of 10 meters will bring the pilot to a stop in a short distance, which can cause serious bruising to the body and internal organs. It also strains the connective tissue securing the organs, and minor haemorrhaging of the lungs and other tissue is possible. According to the website Hyper Physics (impact force of falling object generator), a fully kitted-up pilot weighing 75 kg who fell from a full height of 16 meters (without the help of forward momentum or the force of the jets pushing them) would hit the water at approximately 63 km/h. Most professionals weigh between 75 and 90 kg once fully kitted up. (Weight does not affect the speed of a fall.) While performing manoeuvres such as back flips or front flips the pilot can experience up to 4 Gs of force. World Hydroflight Association The World Hydroflight Association (WHFA) is a worldwide organization representing hydroflight participants, manufacturers, companies and fans. It exists to promote the sport of hydroflight via the safe, responsible operation of hydroflight devices. The WHFA was an early effort, however, and has not existed since 2017. As of 2021, there is no governing or standards body for hydroflight. In Australia the representative body is the Aerial Watersports Association, which exists to promote the activities within Australia and to assist in the development of new government requirements for the industry. References External links http://worldhydroflightassociation.org/ Water sports Air sports Sports |
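The roughly 63 km/h figure quoted in the injuries section follows from the standard free-fall relation v = sqrt(2gh), which ignores air resistance and jet thrust and in which the rider's mass cancels out (hence the note that weight does not affect fall speed). The short Python sketch below is an illustrative check of that arithmetic, not output from the Hyper Physics tool itself.

```python
import math

def impact_speed_kmh(height_m: float, g: float = 9.81) -> float:
    """Free-fall impact speed from rest, ignoring air resistance and jet thrust."""
    v_ms = math.sqrt(2 * g * height_m)   # v = sqrt(2 g h); the rider's mass cancels out
    return v_ms * 3.6                    # convert m/s to km/h

print(round(impact_speed_kmh(16.0), 1))  # ~63.8 km/h for a 16 m drop, for any rider mass
```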
No, this text is not related with defense topics | The acronyms ELSI (in the United States) and ELSA (in Europe) refer to research activities that anticipate and address ethical, legal and social implications (ELSI) or aspects (ELSA) of emerging sciences, notably genomics and nanotechnology. ELSI was conceived in 1988 when James Watson, at the press conference announcing his appointment as director of the Human Genome Project (HGP), suddenly and somewhat unexpectedly declared that the ethical and social implications of genomics warranted a special effort and should be directly funded by the National Institutes of Health. Spread Various ELSI or ELSA programs have been developed, in Canada, Europe and the Far East. Overview: U.S.A.: Ethical, ....Legal and Social Implications (ELSI) (funding agency: NIH, 1990) Canada: Genomics-related Ethical, Environmental, Economic, Legal and Social Aspects (GE3LS) (funding agency: Genome Canada, 2000) South-Korea: Ethical, Legal and Social Implications (ELSI) (funding: Government of South-Korea, 2001) United Kingdom: ESRC Genomics Network (EGN), including: Cesagen, Innogen, Egenis, Genomics Forum (funding agency: ESRC 2002) Netherlands: Centre for Society and the Life Sciences (CSG) (funding agency: Netherlands Genomics Initiative, 2002) Norway: ELSA Program (funding agency: Research Council of Norway, 2002) Germany, Austria, Finland: ELSAGEN Transnational Research Programme (funding agencies: GEN-AU, FFG, DFG, Academy of Finland, 2008) Features At least four features seem typical for an ELSA approach, namely: proximity (closeness to or embedding in large-scale scientific programs); early anticipation (of societal issues and potential controversies); interactivity (encouraging stakeholders and publics to assume an active role in co-designing research agendas); interdisciplinarity (bridging boundaries between research communities such as for instance bioethics and STS). Reception The ELSA approach has been widely endorsed by academics studying the societal impact of science and technology, but also criticized. Michael Yesley, responsible for the US Department of Energy (DOE) part of the ELSI programme, claims that the ELSI Program was in fact a discourse of justification, selecting topics of ethics research that will facilitate rather than challenge the advance of genetic technology. In other words, ELSA genomics as the handmaiden of genomics research. In Europe, in the context of the Horizon 2020 program, ELSA-style research is now usually framed as Responsible Research and Innovation. Examples of academic journals open to publishing ELSA research results are New Genetics and Society (Taylor and Francis) and Life Sciences, Society and Policy (SpringerOpen). References Social responsibility Bioethics Futures studies Science in society Ethics |
No, this text is not related with defense topics | A maternal effect is a situation where the phenotype of an organism is determined not only by the environment it experiences and its genotype, but also by the environment and genotype of its mother. In genetics, maternal effects occur when an organism shows the phenotype expected from the genotype of the mother, irrespective of its own genotype, often due to the mother supplying messenger RNA or proteins to the egg. Maternal effects can also be caused by the maternal environment independent of genotype, sometimes controlling the size, sex, or behaviour of the offspring. These adaptive maternal effects lead to phenotypes of offspring that increase their fitness. Further, it introduces the concept of phenotypic plasticity, an important evolutionary concept. It has been proposed that maternal effects are important for the evolution of adaptive responses to environmental heterogeneity. In genetics In genetics, a maternal effect occurs when the phenotype of an organism is determined by the genotype of its mother. For example, if a mutation is maternal effect recessive, then a female homozygous for the mutation may appear phenotypically normal, however her offspring will show the mutant phenotype, even if they are heterozygous for the mutation. Maternal effects often occur because the mother supplies a particular mRNA or protein to the oocyte, hence the maternal genome determines whether the molecule is functional. Maternal supply of mRNAs to the early embryo is important, as in many organisms the embryo is initially transcriptionally inactive. Because of the inheritance pattern of maternal effect mutations, special genetic screens are required to identify them. These typically involve examining the phenotype of the organisms one generation later than in a conventional (zygotic) screen, as their mothers will be potentially homozygous for maternal effect mutations that arise. In Drosophila early embryogenesis A Drosophila melanogaster oocyte develops in an egg chamber in close association with a set of cells called nurse cells. Both the oocyte and the nurse cells are descended from a single germline stem cell, however cytokinesis is incomplete in these cell divisions, and the cytoplasm of the nurse cells and the oocyte is connected by structures known as ring canals. Only the oocyte undergoes meiosis and contributes DNA to the next generation. Many maternal effect Drosophila mutants have been found that affect the early steps in embryogenesis such as axis determination, including bicoid, dorsal, gurken and oskar. For example, embryos from homozygous bicoid mothers fail to produce head and thorax structures. Once the gene that is disrupted in the bicoid mutant was identified, it was shown that bicoid mRNA is transcribed in the nurse cells and then relocalized to the oocyte. Other maternal effect mutants either affect products that are similarly produced in the nurse cells and act in the oocyte, or parts of the transportation machinery that are required for this relocalization. Since these genes are expressed in the (maternal) nurse cells and not in the oocyte or fertilised embryo, the maternal genotype determines whether they can function. Maternal effect genes are expresses during oogenesis by the mother (expressed prior to fertilization) and develop the anterior-posterior and dorsal ventral polarity of the egg. The anterior end of the egg becomes the head; posterior end becomes the tail. 
the dorsal side is on top; the ventral side is underneath. The products of maternal effect genes, called maternal mRNAs, are produced by nurse cells and follicle cells and deposited in the egg cells (oocytes). At the start of the development process, mRNA gradients are formed in oocytes along the anterior-posterior and dorsal-ventral axes. About thirty maternal genes involved in pattern formation have been identified. In particular, the products of four maternal effect genes are critical to the formation of the anterior-posterior axis. The products of two maternal effect genes, bicoid and hunchback, regulate the formation of anterior structures, while another pair, nanos and caudal, specifies proteins that regulate the formation of the posterior part of the embryo. The transcripts of all four genes (bicoid, hunchback, caudal and nanos) are synthesized by nurse and follicle cells and transported into the oocytes. In birds In birds, mothers may pass down hormones in their eggs that affect an offspring's growth and behavior. Experiments in domestic canaries have shown that eggs that contain more yolk androgens develop into chicks that display more social dominance. Similar variation in yolk androgen levels has been seen in bird species like the American coot, though the mechanism of effect has yet to be established. In humans In 2015, obesity theorist Edward Archer published "The Childhood Obesity Epidemic as a Result of Nongenetic Evolution: The Maternal Resources Hypothesis" and a series of works on maternal effects in human obesity and health. In this body of work, Archer argued that accumulative maternal effects via the non-genetic evolution of matrilineal nutrient metabolism are responsible for the increased global prevalence of obesity and diabetes mellitus type 2. Archer posited that decrements in maternal metabolic control altered fetal pancreatic beta cell, adipocyte (fat cell) and myocyte (muscle cell) development, thereby inducing an enduring competitive advantage of adipocytes in the acquisition and sequestering of nutrient energy. In Plants Environmental cues such as light, temperature, soil moisture and nutrients encountered by the mother plant can cause variations in seed quality even within the same genotype. Thus, the mother plant greatly influences seed traits such as seed size, germination rate, and viability. Environmental maternal effects The environment or condition of the mother can also in some situations influence the phenotype of her offspring, independent of the offspring's genotype. Paternal effect genes In contrast, a paternal effect is when a phenotype results from the genotype of the father, rather than the genotype of the individual. The genes responsible for these effects are components of sperm that are involved in fertilization and early development. An example of a paternal-effect gene is ms(3)sneaky in Drosophila. Males with a mutant allele of this gene produce sperm that are able to fertilize an egg, but the sneaky-inseminated eggs do not develop normally. However, females with this mutation produce eggs that undergo normal development when fertilized. Adaptive maternal effects Adaptive maternal effects induce phenotypic changes in offspring that result in an increase in fitness. These changes arise from mothers sensing environmental cues that work to reduce offspring fitness, and responding to them in a way that "prepares" offspring for their future environments. A key characteristic of "adaptive maternal effects" phenotypes is their plasticity. 
Phenotypic plasticity gives organisms the ability to respond to different environments by altering their phenotype. With these "altered" phenotypes increasing fitness, it becomes important to look at the likelihood that adaptive maternal effects will evolve and become a significant phenotypic adaptation to an environment. Defining adaptive maternal effects When traits are influenced by either the maternal environment or the maternal phenotype, they are said to be influenced by maternal effects. Maternal effects work to alter the phenotypes of the offspring through pathways other than DNA. Adaptive maternal effects are when these maternal influences lead to a phenotypic change that increases the fitness of the offspring. In general, adaptive maternal effects are a mechanism to cope with factors that work to reduce offspring fitness; they are also environment specific. It can sometimes be difficult to differentiate between maternal and adaptive maternal effects. Consider the following: gypsy moths reared on foliage of black oak, rather than chestnut oak, had offspring that developed faster. This is a maternal, not an adaptive maternal, effect. In order to be an adaptive maternal effect, the mother's environment would have to have led to a change in the eating habits or behavior of the offspring. The key difference between the two, therefore, is that adaptive maternal effects are environment specific. The phenotypes that arise are in response to the mother sensing an environment that would reduce the fitness of her offspring. By accounting for this environment she is then able to alter the phenotypes to actually increase the offspring's fitness. Maternal effects, in contrast, are not a response to an environmental cue; they may have the potential to increase offspring fitness, but they do not necessarily do so. When looking at the likelihood of these "altered" phenotypes evolving, there are many factors and cues involved. Adaptive maternal effects evolve only when offspring can face many potential environments; when a mother can "predict" the environment into which her offspring will be born; and when a mother can influence her offspring's phenotype, thereby increasing their fitness. The summation of all of these factors can then lead to these "altered" traits becoming favorable for evolution. The phenotypic changes that arise from adaptive maternal effects are a result of the mother sensing that a certain aspect of the environment may decrease the survival of her offspring. When sensing a cue, the mother "relays" information to the developing offspring and thereby induces adaptive maternal effects. This tends to give the offspring higher fitness because they are "prepared" for the environment they are likely to experience. These cues can include responses to predators, habitat, high population density, and food availability. The increase in size of North American red squirrels is a good example of an adaptive maternal effect producing a phenotype that resulted in increased fitness. The adaptive maternal effect was induced by the mothers sensing the high population density and correlating it with low food availability per individual. Their offspring were on average larger than other squirrels of the same species; they also grew faster. Ultimately, the squirrels born during this period of high population density showed an increased survival rate (and therefore fitness) during their first winter. 
Phenotypic plasticity When analyzing the types of changes that can occur to a phenotype, we can see changes that are behavioral, morphological, or physiological. A characteristic of the phenotype that arises through adaptive maternal effects is the plasticity of this phenotype. Phenotypic plasticity allows organisms to adjust their phenotype to various environments, thereby enhancing their fitness under changing environmental conditions. Ultimately it is a key attribute of an organism's, and a population's, ability to adapt to short term environmental change. Phenotypic plasticity can be seen in many organisms; one species that exemplifies this concept is the seed beetle Stator limbatus. This seed beetle reproduces on different host plants, two of the more common ones being Cercidium floridum and Acacia greggii. When C. floridum is the host plant, there is selection for a large egg size; when A. greggii is the host plant, there is selection for a smaller egg size. In an experiment it was seen that when a beetle that usually laid eggs on A. greggii was put onto C. floridum, the survivorship of the laid eggs was lower compared to those eggs produced by a beetle that was conditioned and remained on the C. floridum host plant. Ultimately these experiments showed the plasticity of egg size production in the beetle, as well as the influence of the maternal environment on the survivorship of the offspring. Further examples of adaptive maternal effects In many insects: Cues such as rapidly cooling temperatures or decreasing daylight can result in offspring that enter into a dormant state. They therefore will better survive the cooling temperatures and preserve energy. When parents are forced to lay eggs in environments with low nutrients, offspring will be provided with more resources, such as higher nutrients, through an increased egg size. Cues such as poor habitat or crowding can lead to offspring with wings. The wings allow the offspring to move away from poor environments to ones that will provide better resources. Maternal diet and environment influence epigenetic effects Related to adaptive maternal effects are epigenetic effects. Epigenetics is the study of long lasting changes in gene expression that are produced by modifications to chromatin instead of changes in DNA sequence, as is seen in DNA mutation. This "change" refers to DNA methylation, histone acetylation, or the interaction of non-coding RNAs with DNA. DNA methylation is the addition of methyl groups to the DNA. When DNA is methylated in mammals, the transcription of the gene at that location is turned down or turned off entirely. The induction of DNA methylation is highly influenced by the maternal environment. Some maternal environments can lead to a higher methylation of an offspring's DNA, while others lower methylation.[22] The fact that methylation can be influenced by the maternal environment makes it similar to adaptive maternal effects. Further similarities are seen in the fact that methylation can often increase the fitness of the offspring. Additionally, epigenetics can refer to histone modifications or non-coding RNAs that create a sort of cellular memory. Cellular memory refers to a cell's ability to pass nongenetic information to its daughter cell during replication. For example, after differentiation, a liver cell performs different functions than a brain cell; cellular memory allows these cells to "remember" what functions they are supposed to perform after replication. 
Some of these epigenetic changes can be passed down to future generations, while others are reversible within a particular individual's lifetime. This can explain why individuals with identical DNA can differ in their susceptibility to certain chronic diseases. Currently, researchers are examining the correlations between maternal diet during pregnancy and its effect on the offspring's susceptibility for chronic diseases later in life. The fetal programming hypothesis highlights the idea that environmental stimuli during critical periods of fetal development can have lifelong effects on body structure and health and in a sense they prepare offspring for the environment they will be born into. Many of these variations are thought to be due to epigenetic mechanisms brought on by maternal environment such as stress, diet, gestational diabetes, and exposure to tobacco and alcohol. These factors are thought to be contributing factors to obesity and cardiovascular disease, neural tube defects, cancer, diabetes, etc. Studies to determine these epigenetic mechanisms are usually performed through laboratory studies of rodents and epidemiological studies of humans. Importance for the general population Knowledge of maternal diet induced epigenetic changes is important not only for scientists, but for the general public. Perhaps the most obvious place of importance for maternal dietary effects is within the medical field. In the United States and worldwide, many non-communicable diseases, such as cancer, obesity, and heart disease, have reached epidemic proportions. The medical field is working on methods to detect these diseases, some of which have been discovered to be heavily driven by epigenetic alterations due to maternal dietary effects. Once the genomic markers for these diseases are identified, research can begin to be implemented to identify the early onset of these diseases and possibly reverse the epigenetic effects of maternal diet in later life stages. The reversal of epigenetic effects will utilize the pharmaceutical field in an attempt to create drugs which target the specific genes and genomic alterations. The creation of drugs to cure these non-communicable diseases could be used to treat individuals who already have these illnesses. General knowledge of the mechanisms behind maternal dietary epigenetic effects is also beneficial in terms of awareness. The general public can be aware of the risks of certain dietary behaviors during pregnancy in an attempt to curb the negative consequences which may arise in offspring later in their lives. Epigenetic knowledge can lead to an overall healthier lifestyle for the billions of people worldwide. The effect of maternal diet in species other than humans is also relevant. Many of the long term effects of global climate change are unknown. Knowledge of epigenetic mechanisms can help scientists better predict the impacts of changing community structures on species which are ecologically, economically, and/or culturally important around the world. Since many ecosystems will see changes in species structures, the nutrient availability will also be altered, ultimately affecting the available food choices for reproducing females. Maternal dietary effects may also be used to improve agricultural and aquaculture practices. Breeders may be able to utilize scientific data to create more sustainable practices, saving money for themselves, as well as the consumers. 
Maternal diet and environment epigenetically influences susceptibility for adult diseases Hyperglycemia during gestation correlated with obesity and heart disease in adulthood Hyperglycemia during pregnancy is thought to cause epigenetic changes in the leptin gene of newborns leading to a potential increased risk for obesity and heart disease. Leptin is sometimes known as the “satiety hormone” because it is released by fat cells to inhibit hunger. By studying both animal models and human observational studies, it has been suggested that a leptin surge in the perinatal period plays a critical role in contributing to long-term risk of obesity. The perinatal period begins at 22 weeks gestation and ends a week after birth.[34] DNA methylation near the leptin locus has been examined to determine if there was a correlation between maternal glycemia and neonatal leptin levels. Results showed that glycemia was inversely associated with the methylation states of LEP gene, which controls the production of the leptin hormone. Therefore, higher glycemic levels in mothers corresponded to lower methylation states in LEP gene in their children. With this lower methylation state, the LEP gene is transcribed more often, thereby inducing higher blood leptin levels. These higher blood leptin levels during the perinatal period were linked to obesity in adulthood, perhaps due to the fact that a higher “normal” level of leptin was set during gestation. Because obesity is a large contributor to heart disease, this leptin surge is not only correlated with obesity but also heart disease. High fat diets during gestation correlated with metabolic syndrome High fat diets in utero are believed to cause metabolic syndrome. Metabolic syndrome is a set of symptoms including obesity and insulin resistance that appear to be related. This syndrome is often associated with type II diabetes as well as hypertension and atherosclerosis. Using mice models, researchers have shown that high fat diets in utero cause modifications to the adiponectin and leptin genes that alter gene expression; these changes contribute to metabolic syndrome. The adiponectin genes regulate glucose metabolism as well as fatty acid breakdown; however, the exact mechanisms are not entirely understood. In both human and mice models, adiponectin has been shown to add insulin-sensitizing and anti-inflammatory properties to different types of tissue, specifically muscle and liver tissue. Adiponectin has also been shown to increase the rate of fatty acid transport and oxidation in mice, which causes an increase in fatty acid metabolism. With a high fat diet during gestation, there was an increase in methylation in the promoter of the adiponectin gene accompanied by a decrease in acetylation. These changes likely inhibit the transcription of the adiponectin genes because increases in methylation and decreases in acetylation usually repress transcription. Additionally, there was an increase in methylation of the leptin promoter, which turns down the production of the leptin gene. Therefore, there was less adiponectin to help cells take up glucose and break down fat, as well as less leptin to cause a feeling of satiety. The decrease in these hormones caused fat mass gain, glucose intolerance, hypertriglyceridemia, abnormal adiponectin and leptin levels, and hypertension throughout the animal's lifetime. However, the effect was abolished after three subsequent generations with normal diets. 
This study highlights the fact that these epigenetic marks can be altered in as little as one generation and can even be completely eliminated over time. This study highlighted the connection between high fat diets and the adiponectin and leptin genes in mice. In contrast, few studies have been done in humans to show the specific effects of high fat diets in utero. However, it has been shown that decreased adiponectin levels are associated with obesity, insulin resistance, type II diabetes, and coronary artery disease in humans. It is postulated that a similar mechanism as the one described in mice may also contribute to metabolic syndrome in humans. High fat diets during gestation correlated with chronic inflammation In addition, high fat diets cause chronic low-grade inflammation in the placenta, adipose, liver, brain, and vascular system. Inflammation is an important aspect of the body's natural defense system after injury, trauma, or disease. During an inflammatory response, a series of physiological reactions, such as increased blood flow, increased cellular metabolism, and vasodilation, occur in order to help treat the wounded or infected area. However, chronic low-grade inflammation has been linked to long-term consequences such as cardiovascular disease, renal failure, aging, diabetes, etc. This chronic low-grade inflammation is commonly seen in obese individuals on high fat diets. In a mouse model, excessive cytokines were detected in mice fed on a high fat diet. Cytokines aid in cell signaling during immune responses, specifically sending cells towards sites of inflammation, infection, or trauma. The mRNA of proinflammatory cytokines was induced in the placenta of mothers on high fat diets. The high fat diets also caused changes in microbiotic composition, which led to hyperinflammatory colonic responses in offspring. This hyperinflammatory response can lead to inflammatory bowel diseases such as Crohn's disease or ulcerative colitis.[35] As previously mentioned, high fat diets in utero contribute to obesity; however, some proinflammatory factors, like IL-6 and MCP-1, are also linked to body fat deposition. It has been suggested that histone acetylation is closely associated with inflammation because the addition of histone deacetylase inhibitors has been shown to reduce the expression of proinflammatory mediators in glial cells. This reduction in inflammation resulted in improved neural cell function and survival. This inflammation is also often associated with obesity, cardiovascular disease, fatty liver, brain damage, as well as preeclampsia and preterm birth. Although it has been shown that high fat diets induce inflammation, which contributes to all of these chronic diseases, it is unclear how this inflammation acts as a mediator between diet and chronic disease. Undernutrition during gestation correlated with cardiovascular disease A study done after the Dutch Hunger Winter of 1944-1945 showed that undernutrition during the early stages of pregnancy is associated with hypomethylation of the insulin-like growth factor II (IGF2) gene even after six decades. These individuals had significantly lower methylation rates as compared to their same-sex siblings who had not been conceived during the famine. A comparison was also done with children conceived just prior to the famine, whose mothers were nutrient deprived only during the later stages of gestation; these children had normal methylation patterns. 
The IGF2 stands for insulin-like growth factor II; this gene is a key contributor in human growth and development. IGF2 gene is also maternally imprinted meaning that the mother's gene is silenced. The mother's gene is typically methylated at the differentially methylated region (DMR); however, when hypomethylated, the gene is bi-allelically expressed. Thus, individuals with lower methylation states likely lost some of the imprinting effect. Similar results have been demonstrated in the Nr3c1 and Ppara genes of the offspring of rats fed on an isocaloric protein-deficient diet before starting pregnancy. This further implies that the undernutrition was the cause of the epigenetic changes. Surprisingly, there was not a correlation between methylation states and birth weight. This displayed that birth weight may not be an adequate way to determine nutritional status during gestation. This study stressed that epigenetic effects vary depending on the timing of exposure and that early stages of mammalian development are crucial periods for establishing epigenetic marks. Those exposed earlier in gestation had decreased methylation while those who were exposed at the end of gestation had relatively normal methylation levels. The offspring and descendants of mothers with hypomethylation were more likely to develop cardiovascular disease. Epigenetic alterations that occur during embryogenesis and early fetal development have greater physiologic and metabolic effects because they are transmitted over more mitotic divisions. In other words, the epigenetic changes that occur earlier are more likely to persist in more cells. Nutrient restriction during gestation correlated with diabetes mellitus type 2 In another study, researchers discovered that perinatal nutrient restriction resulting in intrauterine growth restriction (IUGR) contributes to diabetes mellitus type 2 (DM2). IUGR refers to the poor growth of the baby in utero. In the pancreas, IUGR caused a reduction in the expression of the promoter of the gene encoding a critical transcription factor for beta cell function and development. Pancreatic beta cells are responsible for making insulin; decreased beta cell activity is associated with DM2 in adulthood. In skeletal muscle, IUGR caused a decrease in expression of the Glut-4 gene. The Glut-4 gene controls the production of the Glut-4 transporter; this transporter is specifically sensitive to insulin. Thus, when insulin levels rise, more glut-4 transporters are brought to the cell membrane to increase the uptake of glucose into the cell. This change is caused by histone modifications in the cells of skeletal muscle that decrease the effectiveness of the glucose transport system into the muscle. Because the main glucose transporters are not operating at optimal capacity, these individuals are more likely to develop insulin resistance with energy rich diets later in life, contributing to DM2. High protein diet during gestation correlated with higher blood pressure and adiposity Further studies have examined the epigenetic changes resulting from a high protein/low carbohydrate diet during pregnancy. This diet caused epigenetic changes that were associated with higher blood pressure, higher cortisol levels, and a heightened Hypothalamic-pituitary-adrenal (HPA) axis response to stress. Increased methylation in the 11β-hydroxysteroid dehydrogenase type 2 (HSD2), glucocorticoid receptor (GR), and H19 ICR were positively correlated with adiposity and blood pressure in adulthood. 
Glucocorticoids play a vital role in tissue development and maturation as well as having effects on metabolism. Glucocorticoids' access to GR is regulated by HSD1 and HSD2. H19 is an imprinted gene for a long non-coding RNA (lncRNA), which has limiting effects on body weight and cell proliferation. Therefore, higher methylation rates in the H19 ICR repress transcription and prevent the lncRNA from regulating body weight. Mothers who reported higher meat/fish and vegetable intake and lower bread/potato intake in late pregnancy had a higher average methylation in GR and HSD2. However, one common challenge of these types of studies is that many epigenetic modifications have tissue- and cell-type-specific DNA methylation patterns. Thus, epigenetic modification patterns of accessible tissues, like peripheral blood, may not represent the epigenetic patterns of the tissue involved in a particular disease. Neonatal estrogen exposure correlated with prostate cancer Strong evidence in rats supports the conclusion that neonatal estrogen exposure plays a role in the development of prostate cancer. Using a human fetal prostate xenograft model, researchers studied the effects of early exposure to estrogen with and without secondary estrogen and testosterone treatment. A xenograft model is a graft of tissue transplanted between organisms of different species. In this case, human tissue was transplanted into rats; therefore, there was no need to extrapolate from rodents to humans. Histopathological lesions, proliferation, and serum hormone levels were measured at various time-points after xenografting. At day 200, the xenograft that had been exposed to two treatments of estrogen showed the most severe changes. Additionally, researchers looked at key genes involved in prostatic glandular and stromal growth, cell-cycle progression, apoptosis, hormone receptors, and tumor suppressors using a custom PCR array. Analysis of DNA methylation showed methylation differences in CpG sites of the stromal compartment after estrogen treatment. These variations in methylation are likely a contributing cause of changes in the cellular events of the KEGG prostate cancer pathway that inhibit apoptosis and increase cell-cycle progression, thereby contributing to the development of cancer. Supplementation may reverse epigenetic changes In utero or neonatal exposure to bisphenol A (BPA), a chemical used in manufacturing polycarbonate plastic, is correlated with higher body weight, breast cancer, prostate cancer, and altered reproductive function. In a mouse model, mice fed a BPA diet were more likely to have a yellow coat, corresponding to a lower methylation state in the promoter regions of the retrotransposon upstream of the Agouti gene. The Agouti gene is responsible for determining whether an animal's coat will be banded (agouti) or solid (non-agouti). However, supplementation with methyl donors like folic acid or phytoestrogen abolished the hypomethylating effect. This demonstrates that the epigenetic changes can be reversed through diet and supplementation. Maternal diet effects and ecology Maternal dietary effects are not just seen in humans, but throughout many taxa in the animal kingdom. These maternal dietary effects can result in ecological changes on a larger scale throughout populations and from generation to generation. The plasticity involved in these epigenetic changes due to maternal diet reflects the environment into which the offspring will be born. 
Many times, epigenetic effects on offspring from the maternal diet during development will prepare the offspring to be better adapted to the environment they will first encounter. The epigenetic effects of maternal diet can be seen in many species, utilizing different ecological cues and epigenetic mechanisms to provide an adaptive advantage to future generations. Within the field of ecology, there are many examples of maternal dietary effects. Unfortunately, the epigenetic mechanisms underlying these phenotypic changes are rarely investigated. In the future, it would be beneficial for ecological scientists as well as epigenetic and genomic scientists to work together to fill the holes within the ecology field to produce a complete picture of environmental cues and epigenetic alterations producing phenotypic diversity. Parental diet affects offspring immunity A pyralid moth species, Plodia interpunctella, commonly found in food storage areas, exhibits maternal dietary effects, as well as paternal dietary effects, on its offspring. Epigenetic changes in moth offspring affect the production of phenoloxidase, an enzyme involved in melanization and correlated with resistance to certain pathogens in many invertebrate species. In this study, parent moths were housed in food rich or food poor environments during their reproductive period. Moths that were housed in food poor environments produced offspring with less phenoloxidase, and thus weaker immune systems, than moths that reproduced in food rich environments. This is believed to be adaptive because the offspring develop while receiving cues of scarce nutritional opportunities. These cues allow the moth to allocate energy differentially, decreasing energy allocated for the immune system and devoting more energy towards growth and reproduction to increase fitness and ensure future generations. One explanation for this effect may be imprinting, the expression of only one parental gene over the other, but further research has yet to be done. Parentally mediated dietary epigenetic effects on immunity have a broader significance for wild organisms. Changes in immunity throughout an entire population may make the population more susceptible to an environmental disturbance, such as the introduction of a pathogen. Therefore, these transgenerational epigenetic effects can influence population dynamics by decreasing the stability of populations that inhabit environments different from the parental environment that offspring are epigenetically modified for. Maternal diet affects offspring growth rate Food availability also influences the epigenetic mechanisms driving growth rate in the mouthbrooding cichlid, Simochromis pleurospilus. When nutrient availability is high, reproducing females will produce many small eggs, versus fewer, larger eggs in nutrient poor environments. Egg size often correlates with fish larvae body size at hatching: smaller larvae hatch from smaller eggs. In the case of the cichlid, small larvae grow at a faster rate than their larger-egg counterparts. This is due to the increased expression of GHR, the growth hormone receptor. Increased transcription levels of GHR genes increase the receptors available to bind with growth hormone, GH, leading to an increased growth rate in smaller fish. Fish of larger size are less likely to be eaten by predators, so it is advantageous to grow quickly in early life stages to ensure survival. 
The mechanism by which GHR transcription is regulated is unknown, but it may be due to hormones within the yolk produced by the mother, or just by the yolk quantity itself. This may lead to DNA methylation or histone modifications which control gene transcription levels. Ecologically, this is an example of the mother utilizing her environment and determining the best method to maximize offspring survival, without actually making a conscious effort to do so. Ecology is generally driven by the ability of an organism to compete to obtain nutrients and successfully reproduce. If a mother is able to gather a plentiful amount of resources, she will have a higher fecundity and produce offspring that are able to grow quickly to avoid predation. Mothers who are unable to obtain as many nutrients will produce fewer offspring, but the offspring will be larger, in hopes that their large size will help ensure survival to sexual maturity. Unlike the moth example, the maternal effects provided to the cichlid offspring do not prepare the cichlids for the environment that they will be born into; this is because mouthbrooding cichlids provide parental care to their offspring, providing a stable environment for the offspring to develop in. Offspring that have a greater growth rate can become independent more quickly than slow-growing counterparts, therefore decreasing the amount of energy spent by the parents during the parental care period. A similar phenomenon occurs in the sea urchin Strongylocentrotus droebachiensis. Urchin mothers in nutrient rich environments produce a large number of small eggs. Offspring from these small eggs grow at a faster rate than their large-egg counterparts from nutrient poor mothers. Again, it is beneficial for sea urchin larvae, known as plutei, to grow quickly to decrease the duration of their larval phase and metamorphose into a juvenile to decrease predation risks. Sea urchin larvae have the ability to develop into one of two phenotypes, based on their maternal and larval nutrition. Larvae that grow at a fast rate from high nutrition are able to devote more of their energy towards development into the juvenile phenotype. Larvae that grow at a slower rate with low nutrition devote more energy towards growing spine-like appendages to protect themselves from predators in an attempt to increase survival into the juvenile phase. The determination of these phenotypes is based on both the maternal and the juvenile nutrition. The epigenetic mechanisms behind these phenotypic changes are unknown, but it is believed that there may be a nutritional threshold that triggers epigenetic changes affecting development and, ultimately, the larval phenotype. See also Maternal effect dominant embryonic arrest Xenia (plants) Extranuclear inheritance References Developmental biology Ecology Evolutionary biology Genetics |
No, this text is not related with defense topics | Communication software is used to provide remote access to systems and exchange files and messages in text, audio and/or video formats between different computers or users. This includes terminal emulators, file transfer programs, chat and instant messaging programs, as well as similar functionality integrated within MUDs. The term is also applied to software operating a bulletin board system, but seldom to that operating a computer network or Stored Program Control exchange. History E-mail was introduced in the early 1960s as a way for multiple users of a time-sharing mainframe computer to communicate. Basic text chat functionality has existed on multi-user computer systems and bulletin board systems since the early 1970s. In the 1980s, a terminal emulator was a piece of software necessary to log into mainframes and thus access e-mail. Prior to the rise of the Internet, computer files were exchanged over dialup lines, requiring ways to send binary files over communication systems that were primarily intended for plain text; programs implementing special transfer modes were built around various de facto standards, most notably Kermit. Chat In 1985 the first decentralized chat system, Bitnet Relay, was created, while Minitel probably provided the largest chat system of the time. In August 1988 Internet Relay Chat followed. CU-SeeMe was the first chat system to be equipped with a video camera. Instant messaging featuring a buddy list and the notion of online presence was introduced by ICQ in 1996. In the days of the Internet boom, web chats were very popular, too. Chatting is a real-time conversation or message exchange that takes place in public or in private groupings called chat rooms. Some chat rooms have moderators who can trace and block inappropriate comments and other kinds of communications. Based on visual representation, chats are divided into text-based chat rooms, such as IRC and Bitnet Relay Chat; 2D chats, which support graphic smilies; and 3D chats, in which the conversation takes place in a graphical environment. References Internet New media Multimedia |
No, this text is not related with defense topics | Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology. Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid 20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms. The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289), gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. Numerical analysis continues this long tradition: rather than giving exact symbolic answers translated into digits and applicable only to real-world measurements, approximate solutions within specified error bounds are used. General introduction The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to hard problems, the variety of which is suggested by the following: Advanced numerical methods are essential in making numerical weather prediction feasible. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically. Hedge funds (private investment funds) use tools from all fields of numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. History The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. 
Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done. Direct and iterative methods Consider the problem of solving 3x³ + 4 = 28 for the unknown quantity x. For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57. After four bisection steps, each keeping the half of the interval in which f changes sign, it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2. (A short program carrying out this iteration is given at the end of this passage.) Discretization and numerical integration In a two-hour race, the speed of the car is measured at three instants, at 0:20, 1:00 and 1:40. A discretization would be to say that the speed of the car was constant from 0:00 to 0:40, then from 0:40 to 1:20 and finally from 1:20 to 2:00. For instance, the distance traveled in the first 40 minutes is approximately the speed measured at 0:20 multiplied by two-thirds of an hour. Adding up the corresponding products for all three intervals gives an estimate of the total distance traveled, which is an example of numerical integration (see below) using a Riemann sum, because displacement is the integral of velocity. Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem. Well-conditioned problem: By contrast, evaluating the same function near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x). Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability). In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems. 
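Returning to the bisection example above: the iteration can be written out in a few lines. The sketch below (in Python, chosen here purely for illustration) repeatedly halves the bracketing interval for f(x) = 3x³ − 24 starting from a = 0 and b = 3; after four steps it reaches the interval [1.875, 2.0625] quoted above, whose width is below the 0.2 error bound. The function and step count come from the example; everything else is an illustrative choice.

def bisect(f, a, b, steps):
    # Repeatedly halve [a, b], keeping the half in which f changes sign.
    for _ in range(steps):
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid          # the root lies in the left half
        else:
            a = mid          # the root lies in the right half
    return a, b

f = lambda x: 3 * x**3 - 24   # a root of f solves 3x^3 + 4 = 28
print(bisect(f, 0.0, 3.0, 4)) # (1.875, 2.0625)

Running more steps shrinks the bracketing interval further, which is exactly the sense in which the method "converges only in the limit".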
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method. Discretization Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum. Generation and propagation of errors The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced into the solution of the problem. Round-off Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are). Truncation and discretization error Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the bisection example above for solving 3x³ + 4 = 28, after ten iterations the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01. Once an error is generated, it propagates through the calculation. For example, adding two numbers in floating-point arithmetic on a computer is, in general, inexact. A calculation that chains many such operations is even more inexact. A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence the approximation of the exact solution. Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen. Numerical stability and well-posed problems Numerical stability is a notion in numerical analysis. An algorithm is called 'numerically stable' if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is 'well-conditioned', meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error. Both the original problem and the algorithm used to solve that problem can be 'well-conditioned' or 'ill-conditioned', and any combination is possible. So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial approximation x_0 to √2, for instance x_0 = 1.4, and then computing improved guesses x_1, x_2, etc. One such method is the famous Babylonian method, which is given by x_(k+1) = x_k/2 + 1/x_k. 
Another method, called 'Method X', is given by x_(k+1) = (x_k² − 2)² + x_k. Carrying out a few iterations of each scheme with initial guesses x_0 = 1.4 and x_0 = 1.42 shows that the Babylonian method converges quickly regardless of the initial guess, whereas Method X converges extremely slowly with initial guess x_0 = 1.4 and diverges for initial guess x_0 = 1.42. Hence, the Babylonian method is numerically stable, while Method X is numerically unstable. (A short computation illustrating this comparison is given at the end of this passage.) Numerical stability is affected by the number of the significant digits the machine keeps. If a machine is used that keeps only the four most significant decimal digits, a good example of loss of significance is given by the two algebraically equivalent functions f(x) = x(√(x + 1) − √x) and g(x) = x/(√(x + 1) + √x). On such a machine, evaluating f(500) requires subtracting the four-digit approximations of the nearby numbers √501 ≈ 22.38 and √500 ≈ 22.36, and the resulting catastrophic cancellation (despite the subtraction itself being computed exactly) has a huge effect on the result: f(500) evaluates to 500 × (22.38 − 22.36) = 10.00, while g(500) = 500/(22.38 + 22.36) ≈ 11.18 stays close to the true value, even though both functions are equivalent. The desired value, computed using infinite precision, is 11.174755... The example is a modification of one taken from Mathews, Numerical Methods Using MATLAB, 3rd ed. Areas of study The field of numerical analysis includes many sub-disciplines. Some of the major ones are: Computing values of functions One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating point arithmetic. Interpolation, extrapolation, and regression Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points? Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found. Regression is also similar, but it takes into account that the data is imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be found. The least-squares method is one way to achieve this. Solving equations and systems of equations Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, an equation such as 2x + 5 = 3 is linear, while one such as 2x² + 5 = 3 is not. Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting. Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). 
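The stability comparison described at the start of this passage can be reproduced directly. The following sketch is an illustrative computation added here, not part of the original text; the iteration counts are arbitrary choices made to show the contrast.

import math

def babylonian(x0, n):
    # x_(k+1) = x_k/2 + 1/x_k converges rapidly to sqrt(2)
    x = x0
    for _ in range(n):
        x = x / 2 + 1 / x
    return x

def method_x(x0, n):
    # x_(k+1) = (x_k^2 - 2)^2 + x_k, the unstable scheme described above
    x = x0
    for _ in range(n):
        t = x * x - 2
        x = t * t + x
    return x

print(math.sqrt(2))          # 1.4142135623730951
print(babylonian(1.4, 5))    # correct to machine precision after a few steps
print(babylonian(1.42, 5))   # likewise, regardless of the starting guess
print(method_x(1.4, 50))     # still noticeably short of sqrt(2) after 50 steps
print(method_x(1.42, 50))    # diverges; by now the value has overflowed to inf

The contrast matches the description above: the Babylonian recurrence is numerically stable, while Method X is not. (The Babylonian recurrence is in fact Newton's method applied to x² − 2 = 0, which connects it to the root-finding methods discussed next.)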
If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations. Solving eigenvalue or singular value problems Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis. Optimization Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints. The field of optimization is further split in several subfields, depending on the form of the objective function and the constraint. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method. The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems. Evaluating integrals Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids. Differential equations Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations. Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation. Software Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library. Over the years the Royal Statistical Society published numerous algorithms in its Applied Statistics (code for these "AS" functions is here); ACM similarly, in its Transactions on Mathematical Software ("TOMS" code is here). The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines (code here). There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open source alternatives such as FreeMat, Scilab, GNU Octave (similar to Matlab), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. 
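Since Python and its numerical libraries were just mentioned, a minimal sketch of Newton's method (introduced at the start of this passage for differentiable functions) shows how compactly such root-finding iterations can be expressed. The cubic from the earlier bisection example is reused; the starting point and iteration count are illustrative choices, and in practice a library routine such as scipy.optimize.newton would normally be used instead of a hand-rolled loop.

def newton(f, fprime, x0, iterations=6):
    # Apply x_(k+1) = x_k - f(x_k)/f'(x_k)
    x = x0
    for _ in range(iterations):
        x = x - f(x) / fprime(x)
    return x

f = lambda x: 3 * x**3 - 24
fprime = lambda x: 9 * x**2
print(newton(f, fprime, 3.0))   # approximately 2.0, the root of 3x^3 + 4 = 28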
Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude. Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic which can provide more accurate results. Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis. Excel, for example, has hundreds of available functions, including for matrices, which may be used in conjunction with its built in "solver". See also Analysis of algorithms Computational science Interval arithmetic List of numerical analysis topics Local linearization method Numerical differentiation Numerical Recipes Probabilistic numerics Symbolic-numeric computation Validated numerics Notes References Citations Sources (examples of the importance of accurate arithmetic). Trefethen, Lloyd N. (2006). "Numerical analysis", 20 pages. In: Timothy Gowers and June Barrow-Green (editors), Princeton Companion of Mathematics, Princeton University Press. External links Journals gdz.sub.uni-goettingen, Numerische Mathematik, volumes 1-66, Springer, 1959-1994 (searchable; pages are images). Numerische Mathematik, volumes 1–112, Springer, 1959–2009 Journal on Numerical Analysis, volumes 1-47, SIAM, 1964–2009 Online texts Numerical Recipes, William H. Press (free, downloadable previous editions) First Steps in Numerical Analysis (archived), R.J.Hosking, S.Joe, D.C.Joyce, and J.C.Turner CSEP (Computational Science Education Project), U.S. Department of Energy (archived 2017-08-01) Numerical Methods, ch 3. in the Digital Library of Mathematical Functions Numerical Interpolation, Differentiation and Integration, ch 25. in the Handbook of Mathematical Functions (Abramowitz and Stegun) Online course material Numerical Methods (), Stuart Dalziel University of Cambridge Lectures on Numerical Analysis, Dennis Deturck and Herbert S. Wilf University of Pennsylvania Numerical methods, John D. Fenton University of Karlsruhe Numerical Methods for Physicists, Anthony O’Hare Oxford University Lectures in Numerical Analysis (archived), R. Radok Mahidol University Introduction to Numerical Analysis for Engineering, Henrik Schmidt Massachusetts Institute of Technology Numerical Analysis for Engineering, D. W. Harder University of Waterloo Introduction to Numerical Analysis, Doron Levy University of Maryland Numerical Analysis - Numerical Methods (archived), John H. Mathews California State University Fullerton Mathematical physics Computational science |
No, this text is not related with defense topics | Nursing Personnel Convention, 1977 is an International Labour Organization Convention. It was established in 1977, with the preamble stating: Having decided upon the adoption of certain proposals with regard to employment and conditions of work and life of nursing personnel,... Ratifications As of 2013, the convention had been ratified by 41 states. External links Text. Ratifications. International Labour Organization conventions Nursing Treaties concluded in 1977 Treaties entered into force in 1979 Treaties of Azerbaijan Treaties of Bangladesh Treaties of the Byelorussian Soviet Socialist Republic Treaties of Belgium Treaties of the Republic of the Congo Treaties of Denmark Treaties of Ecuador Treaties of Egypt Treaties of El Salvador Treaties of Fiji Treaties of Finland Treaties of France Treaties of Ghana Treaties of Greece Treaties of Guatemala Treaties of Guinea Treaties of Guyana Treaties of Ba'athist Iraq Treaties of Italy Treaties of Jamaica Treaties of Kenya Treaties of Kyrgyzstan Treaties of Latvia Treaties of Lithuania Treaties of Luxembourg Treaties of Malawi Treaties of Malta Treaties of Norway Treaties of the Philippines Treaties of the Polish People's Republic Treaties of the Soviet Union Treaties of Portugal Treaties of Seychelles Treaties of Slovenia Treaties of Sweden Treaties of Tajikistan Treaties of Tanzania Treaties of the Ukrainian Soviet Socialist Republic Treaties of Uruguay Treaties of Venezuela Treaties of Zambia Health treaties 1977 in labor relations |
No, this text is not related with defense topics | A vertical ecosystem is an architectural gardening system developed by Ignacio Solano from the mur vegetal created by Patrick Blanc. This new approach enhances the previous archetype of the mur vegetal and considers the relationship that exists between a set of living organisms, the biocenosis, inhabiting a physical component, the biotope. The system is based on the automated control of nutrients and plant parameters of the original wall, adding strains of bacteria, mycorrhizal fungi and interspecific symbiosis in plant selection, creating an artificial ecosystem from inert substrates. The system was created in 2007 and patented in 2010. Amongst the abiotic factors that influence vertical ecosystems, namely the substrate and its environmental conditions, the physico-chemical characteristics of the growing medium are decisive (Clara Gerhardt and Brenda Vale (2010), Comparison of resource use and environmental performance of green walls with façade greenings and extensive green roofs, Victoria University of Wellington: School of Architecture). The texture, porosity and depth of the substrate, the properties that in a natural ecosystem would be edaphic factors, have been tested to the point of finding substrate materials with suitable levels of absorption and humidity for the development of more than forty living families of plants represented by around 120 species. Moreover, the substrate used in the system developed by Ignacio Solano provides the ecosystem with the necessary resistance to serve as a high-durability biotope. Environmental factors such as light, temperature and humidity are controlled by automated systems in interior vertical ecosystems. Ecosystems that are situated outside require study and analysis of the natural variables of their particular area, combined with a study of the behaviour of the numerous plant species in the biotope in each location. The selection and combination of species is one of the key factors for correct development; this is known as positive allelopathy. Hydrological factors, such as pH levels, the conductivity of the water, dissolved gases and salinity, are balanced with precision so that the hydroponic system functions at its maximum capacity. The objective of this system is to supply, in a constant manner, all the nutrients and micronutrients necessary for the health of the plants and therefore of the whole ecosystem. To control these variables, the vertical ecosystem implements a system of sensors and warnings that report any anomalous measurement in real time, allowing remote monitoring and control of the system. Vertical ecosystems enable plant species, fungi, and bacteria to live in an environment of almost unlimited resources, generating interactions favourable to the system. In this way, they encourage healthy and exponential growth in their developmental stages, until the populations adjust to their maximum value, known as the carrying capacity: the optimum number of living organisms interacting without stress in a limited space, in search of mutualisms and intraspecific associations that benefit all the species involved. The success of a vertical ecosystem depends on the control of the abiotic and biotic factors that limit the growth of plant populations, that is, the control of environmental resistance. 
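The real-time monitoring described above can be pictured with a short, purely hypothetical sketch. The parameter names, acceptable ranges and the check_readings function below are invented for illustration only and do not describe the patented system's actual implementation.

RANGES = {
    "ph": (5.5, 6.5),              # illustrative values only
    "conductivity_mS": (1.2, 2.4),
    "humidity_pct": (60.0, 85.0),
}

def check_readings(readings):
    # Return a warning for every measurement outside its configured range.
    warnings = []
    for name, value in readings.items():
        low, high = RANGES[name]
        if not (low <= value <= high):
            warnings.append(f"{name} out of range: {value} (expected {low}-{high})")
    return warnings

print(check_readings({"ph": 7.1, "conductivity_mS": 1.8, "humidity_pct": 55.0}))

In a real installation such checks would feed a remote alerting channel, which is the role the sensor-and-warning system plays in the description above.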
Vertical ecosystems aim to prolong the life of planted species and bring the benefits of a traditional vertical garden, including: absorption of CO2, heavy metals and dust, natural thermal insulation, and reduction of noise pollution. Some of the works 2007 Vertical garden in Restaurante Els Vents, Alicante (Spain) 2007 Cube/Vertical garden, Getafe (Spain) 2008 Penthouse vertical garden, Murcia (Spain) 2008 Vertical garden, Paterna, Valencia (Spain) 2009 Indoor vertical garden, Elche (Spain) 2010 Luxury villa vertical garden, Ibiza (Spain) 2011 Cheese bar vertical garden, Madrid (Spain) 2011 Hollbox office vertical garden, Alicante (Spain) 2012 Hotel B3-Gaia vertical garden, Bogotá (Colombia) 2012 Scala shopping centre vertical garden, Quito (Ecuador) 2012 Unicentro Armenia shopping centre vertical garden, Colombia (advising and use of patent) 2012 Biomax vertical garden, Colombia (advising and use of patent) 2012 Librería Panamericana vertical garden, Colombia (advising and use of patent) 2012 Hotel Cosmos vertical garden, Colombia (advising and use of patent) 2013 Confederación de Empresarios Privados de Cochabamba vertical garden, Bolivia (advising and use of patent) 2013 Índalo banquet suite vertical garden, Alicante (Spain) 2013 Vertical garden, Murcia (Spain) 2013 Hotel Son Claret vertical garden, Mallorca (Spain) 2014 Vertical gardens in the Celebra Building of Montevideo (Uruguay) - (advising and use of patent) 2014 Vertical garden and green roof in the center of Elche (Spain) 2014 Vertical gardens in the Smart Building of Alhaurín de la Torre, Málaga (Spain) 2015 Vertical garden in Desguaces Otoniel, Alicante (Spain) References External links Plant roots Soil biology Symbiosis Oligotrophs Fungus ecology Botany Ecology |
No, this text is not related with defense topics | β-Methylamino-L-alanine, or BMAA, is a non-proteinogenic amino acid produced by cyanobacteria. BMAA is a neurotoxin and its potential role in various neurodegenerative disorders is the subject of scientific research. Structure and properties BMAA is a derivative of the amino acid alanine with a methylamino group on the side chain. This non-proteinogenic amino acid is classified as a polar base. Sources and detection BMAA is produced by cyanobacteria in marine, freshwater, and terrestrial environments. In cultured non-nitrogen-fixing cyanobacteria, BMAA production increases in a nitrogen-depleted medium. BMAA has been found in aquatic organisms and in plants with cyanobacterial symbionts such as certain lichens, the floating fern Azolla, the leaf petioles of the tropical flowering plant Gunnera, and cycads, as well as in animals that eat the fleshy covering of cycad seeds, including flying foxes. High concentrations of BMAA are present in shark fins. Because BMAA is a neurotoxin, consumption of shark fin soup and cartilage pills may therefore pose a health risk. The toxin can be detected via several laboratory methods, including liquid chromatography, high-performance liquid chromatography, mass spectrometry, amino acid analysis, capillary electrophoresis, and NMR spectroscopy. Neurotoxicity BMAA can cross the blood–brain barrier in rats. It takes longer to get into the brain than into other organs, but once there, it is trapped in proteins, forming a reservoir for slow release over time. Mechanisms Although the mechanisms by which BMAA causes motor neuron dysfunction and death are not entirely understood, current research suggests that there are multiple mechanisms of action. Acutely, BMAA can act as an excitotoxin on glutamate receptors, such as NMDA, calcium-dependent AMPA, and kainate receptors. The activation of the metabotropic glutamate receptor 5 is believed to induce oxidative stress in the neuron by depletion of glutathione. BMAA can be misincorporated into nascent proteins in place of L-serine, possibly causing protein misfolding and aggregation, both hallmarks of tangle diseases, including Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis (ALS), progressive supranuclear palsy (PSP), and Lewy body disease. In vitro research has shown that protein association of BMAA may be inhibited in the presence of excess L-serine. Effects A study performed in 2015 with vervet monkeys (Chlorocebus sabaeus) in St. Kitts, which are homozygous for the apoE4 gene (a condition which in humans is a risk factor for Alzheimer's disease), found that vervets that were administered BMAA orally developed hallmark histopathology features of Alzheimer's disease, including amyloid beta plaques and neurofibrillary tangle accumulation. Vervets in the trial fed smaller doses of BMAA were found to have correlative decreases in these pathology features. Additionally, vervets that were co-administered BMAA with serine were found to have 70% fewer beta-amyloid plaques and neurofibrillary tangles than those administered BMAA alone, suggesting that serine may be protective against the neurotoxic effects of BMAA. This experiment represents the first in-vivo model of Alzheimer's disease that features both beta-amyloid plaques and hyperphosphorylated tau protein. This study also demonstrates that BMAA, an environmental toxin, can trigger neurodegenerative disease as a result of a gene-environment interaction. 
Degenerative locomotor diseases have been described in animals grazing on cycad species, fueling interest in a possible link between the plant and the etiology of ALS/PDC. Subsequent laboratory investigations discovered the presence of BMAA. BMAA induced severe neurotoxicity in rhesus macaques, including: limb muscle atrophy; nonreactive degeneration of anterior horn cells; degeneration and partial loss of pyramidal neurons of the motor cortex; behavioral dysfunction; conduction deficits in the central motor pathway; and neuropathological changes of motor cortex Betz cells. There are reports that low BMAA concentrations can selectively kill cultured motor neurons from mouse spinal cords and produce reactive oxygen species. Scientists have also found that newborn rats treated with BMAA show a progressive neurodegeneration in the hippocampus, including intracellular fibrillar inclusions, and impaired learning and memory as adults. BMAA has been reported to be excreted into rodent breast milk, and subsequently transferred to the suckling offspring, suggesting mothers' and cows' milk might be other possible exposure routes. Human cases Chronic dietary exposure to BMAA is now considered to be a cause of the amyotrophic lateral sclerosis/parkinsonism–dementia complex (ALS/PDC) that had an extremely high rate of incidence among the Chamorro people of Guam. The Chamorro call the condition lytico-bodig. In the 1950s, ALS/PDC prevalence ratios and death rates for Chamorro residents of Guam and Rota were 50–100 times those of developed countries, including the United States. No demonstrable heritable or viral factors were found for the disease, and a subsequent decline of ALS/PDC after 1963 on Guam led to the search for responsible environmental agents. The use of flour made from cycad seed (Cycas micronesica) in traditional food items decreased as that plant became rarer and the Chamorro population became more Americanized following World War II. Cycads harbor symbiotic cyanobacteria of the genus Nostoc in specialized roots which push up through the leaf litter into the light; these cyanobacteria produce BMAA. In addition to eating traditional food items from cycad flour directly, BMAA may be ingested by humans through biomagnification. Flying foxes, a Chamorro delicacy, forage on the fleshy seed covering of cycad seeds and concentrate the toxin in their bodies. Twenty-four specimens of flying foxes from museum collections were tested for BMAA, which was found in large concentrations in the flying foxes from Guam. As of 2021, studies continued examining BMAA biomagnification in marine and estuarine systems and its possible impact on human health outside of Guam. Studies on human brain tissue of ALS/PDC, ALS, Alzheimer's disease, Parkinson's disease, Huntington's disease, and neurological controls indicated that BMAA is present in non-genetic progressive neurodegenerative disease, but not in controls or genetic-based Huntington's disease. Research into the role of BMAA as an environmental factor in neurodegenerative disease has continued. Clinical trials Safe and effective ways of treating ALS patients with L-serine, which has been found to protect non-human primates from BMAA-induced neurodegeneration, have been goals of clinical trials conducted by the Phoenix Neurological Associates and the Forbes/Norris ALS/MND clinic and sponsored by the Institute for Ethnomedicine. See also Oxalyldiaminopropionic acid, a related toxin References Amino acids Neurotoxins Cyanotoxins Toxic amino acids |
No, this text is not related with defense topics | Libido (colloquial: sex drive) is a person's overall sexual drive or desire for sexual activity. Libido is influenced by biological, psychological, and social factors. Biologically, the sex hormones and associated neurotransmitters that act upon the nucleus accumbens (primarily testosterone and dopamine, respectively) regulate libido in humans. Social factors, such as work and family, and internal psychological factors, such as personality and stress, can affect libido. Libido can also be affected by medical conditions, medications, lifestyle and relationship issues, and age (e.g., puberty). A person who has extremely frequent sexual urges, or a suddenly increased sex drive, may be experiencing hypersexuality, while the opposite condition is hyposexuality. In psychoanalytic theory, libido is psychic drive or energy, particularly associated with sexual instinct, but also present in other instinctive desires and drives. A person may have a desire for sex, but not have the opportunity to act on that desire, or may for personal, moral or religious reasons refrain from acting on the urge. Psychologically, a person's urge can be repressed or sublimated. Conversely, a person can engage in sexual activity without an actual desire for it. Multiple factors affect human sex drive, including stress, illness, pregnancy, and others. A 2001 review found that, on average, men have a higher desire for sex than women. Sexual desires are often an important factor in the formation and maintenance of intimate relationships in humans. A lack or loss of sexual desire can adversely affect relationships. Changes in the sexual desires of any partner in a sexual relationship, if sustained and unresolved, may cause problems in the relationship. The infidelity of a partner may be an indication that a partner's changing sexual desires can no longer be satisfied within the current relationship. Problems can arise from disparity of sexual desires between partners, or poor communication between partners of sexual needs and preferences. There is no widely accepted measure of what is a healthy level of sexual desire. Some people want to have sex every day, or more than once a day; others once a year or not at all. However, a person who lacks a desire for sexual activity for some period of time may be experiencing a hypoactive sexual desire disorder or may be asexual. Psychological perspectives Psychoanalysis Sigmund Freud, who is considered the originator of the modern use of the term, defined libido as "the energy, regarded as a quantitative magnitude... of those instincts which have to do with all that may be comprised under the word 'love'." It is the instinctual energy or force, contained in what Freud called the id, the strictly unconscious structure of the psyche. He also explained that it is analogous to hunger, the will to power, and so on, insisting that it is a fundamental instinct that is innate in all humans. Freud developed the idea of a series of developmental phases in which the libido fixates on different erogenous zones: first in the oral stage (exemplified by an infant's pleasure in nursing), then in the anal stage (exemplified by a toddler's pleasure in controlling his or her bowels), then in the phallic stage, through a latency stage in which the libido is dormant, to its reemergence at puberty in the genital stage. (Karl Abraham would later add subdivisions in both oral and anal stages.) 
Freud pointed out that these libidinal drives can conflict with the conventions of civilised behavior, represented in the psyche by the superego. It is this need to conform to society and control the libido that leads to tension and disturbance in the individual, prompting the use of ego defenses to dissipate the psychic energy of these unmet and mostly unconscious needs into other forms. Excessive use of ego defenses results in neurosis. A primary goal of psychoanalysis is to bring the drives of the id into consciousness, allowing them to be met directly and thus reducing the patient's reliance on ego defenses. Freud viewed libido as passing through a series of developmental stages within the individual. Failure to adequately adapt to the demands of these different stages could result in libidinal energy becoming 'dammed up' or fixated in these stages, producing certain pathological character traits in adulthood. Thus the psychopathologized individual for Freud was an immature individual, and the goal of psychoanalysis was to bring these fixations to conscious awareness so that the libido energy would be freed up and available for conscious use in some sort of constructive sublimation. Analytical psychology According to Swiss psychiatrist Carl Gustav Jung, the libido is identified as the totality of psychic energy, not limited to sexual desire. As Jung states in "The Concept of Libido," "[libido] denotes a desire or impulse which is unchecked by any kind of authority, moral or otherwise. Libido is appetite in its natural state. From the genetic point of view it is bodily needs like hunger, thirst, sleep, and sex, and emotional states or affects, which constitute the essence of libido." The Duality (opposition) creates the energy (or libido) of the psyche, which Jung asserts expresses itself only through symbols: "It is the energy that manifests itself in the life process and is perceived subjectively as striving and desire." (Ellenberger, 697) These symbols may manifest as "fantasy-images" in the process of psychoanalysis which embody the contents of the libido, otherwise lacking in any definite form. Desire, conceived generally as a psychic longing, movement, displacement and structuring, manifests itself in definable forms which are apprehended through analysis. Defined more narrowly, libido also refers to an individual's urge to engage in sexual activity, and its antonym is the force of destruction termed mortido or destrudo. Factors that affect libido Endogenous compounds Libido is governed primarily by activity in the mesolimbic dopamine pathway (ventral tegmental area and nucleus accumbens). Consequently, dopamine and related trace amines (primarily phenethylamine) that modulate dopamine neurotransmission play a critical role in regulating libido. Other neurotransmitters, neuropeptides, and sex hormones that affect sex drive by modulating activity in or acting upon this pathway include: Testosterone (directly correlated) – and other androgens Estrogen (directly correlated) – and related female sex hormones Progesterone (inversely correlated) Oxytocin (directly correlated) Serotonin (inversely correlated) Norepinephrine (directly correlated) Acetylcholine Sex hormone levels and the menstrual cycle A woman's desire for sex is correlated to her menstrual cycle, with many women experiencing a heightened sexual desire in the several days immediately before ovulation, which is her peak fertility period, which normally occurs two days before and until two days after the ovulation. 
This cycle has been associated with changes in a woman's testosterone levels during the menstrual cycle. According to Gabrielle Lichterman, testosterone levels have a direct impact on a woman's interest in sex. According to her, testosterone levels rise gradually from about the 24th day of a woman's menstrual cycle until ovulation on about the 14th day of the next cycle, and during this period the woman's desire for sex increases consistently. The 13th day is generally the day with the highest testosterone levels. In the week following ovulation, the testosterone level is the lowest and as a result women will experience less interest in sex. Also, during the week following ovulation, progesterone levels increase, resulting in a woman experiencing difficulty achieving orgasm. Although the last days of the menstrual cycle are marked by a constant testosterone level, women's libido may get a boost as a result of the thickening of the uterine lining which stimulates nerve endings and makes a woman feel aroused. Also, during these days, estrogen levels decline, resulting in a decrease of natural lubrication. Although some specialists disagree with this theory, menopause is still considered by the majority a factor that can cause decreased sex desire in women. The levels of estrogen decrease at menopause and this usually causes a lower interest in sex and vaginal dryness which makes intercourse painful. However, the levels of testosterone increase at menopause and this may be why some women may experience a contrary effect of an increased libido. Psychological and social factors Certain psychological or social factors can reduce the desire for sex. These factors can include lack of privacy or intimacy, stress or fatigue, distraction, or depression. Environmental stress, such as prolonged exposure to elevated sound levels or bright light, can also affect libido. Other causes include experience of sexual abuse, assault, trauma, or neglect, body image issues, and anxiety about engaging in sexual activity. Individuals with PTSD may find themselves with reduced sexual desire. Struggling to find pleasure, as well as having trust issues, many with PTSD experience feelings of vulnerability, rage and anger, and emotional shutdowns, which have been shown to inhibit sexual desire in those with PTSD. Reduced sex drive may also be present in trauma victims due to issues arising in sexual function. For women, it has been found that treatment can improve sexual function, thus helping restore sexual desire. Depression and libido decline often coincide, with reduced sex drive being one of the symptoms of depression. Those suffering from depression often report the decline in libido to be far reaching and more noticeable than other symptoms. In addition, those with depression often are reluctant to report their reduced sex drive, often normalizing it with cultural/social values, or by the failure of the physician to inquire about it. Physical factors Physical factors that can affect libido include endocrine issues such as hypothyroidism, the effect of certain prescription medications (for example flutamide), and the attractiveness and biological fitness of one's partner, among various other lifestyle factors. In males, the frequency of ejaculations affects the levels of serum testosterone, a hormone which promotes libido. A study of 28 males aged 21–45 found that all but one of them had a peak (145.7% of baseline [117.8%–197.3%]) in serum testosterone on the 7th day of abstinence from ejaculation. 
Anemia is a cause of lack of libido in women due to the loss of iron during the period. Smoking, alcohol abuse, and the use of certain drugs can also lead to a decreased libido. Moreover, specialists suggest that several lifestyle changes such as exercising, quitting smoking, lowering consumption of alcohol or using prescription drugs may help increase one's sexual desire. Medications Some people purposefully attempt to decrease their libido through the usage of anaphrodisiacs. Aphrodisiacs, such as dopaminergic psychostimulants, are a class of drugs which can increase libido. On the other hand, a reduced libido is also often iatrogenic and can be caused by many medications, such as hormonal contraception, SSRIs and other antidepressants, antipsychotics, opioids, beta blockers and Isotretinoin. Isotretinoin and many SSRIs can cause a long-term decrease in libido and other sexual functions, even after users of those drugs have shown improvement in their depression and have stopped usage. Multiple studies have shown that with the exception of bupropion (Wellbutrin), trazodone (Desyrel) and nefazodone (Serzone), antidepressants generally will lead to lowered libido. SSRIs that typically lead to decreased libido are fluoxetine (Prozac), paroxetine (Paxil), fluvoxamine (Luvox), citalopram (Celexa) and sertraline (Zoloft). There are several ways to try to reap the benefits of the antidepressants while maintaining high enough sex drive levels. Some antidepressant users have tried decreasing their dosage in the hopes of maintaining an adequate sex drive. Results of this are often positive, with both drug effectiveness not reduced and libido preserved. Other users try enrolling in psychotherapy to solve depression-related issues of libido. However, the effectiveness of this therapy is mixed, with many reporting that it had no or little effect on sexual drive. Testosterone is one of the hormones controlling libido in human beings. Emerging research is showing that hormonal contraception methods like oral contraceptive pills (which rely on estrogen and progesterone together) are causing low libido in females by elevating levels of sex hormone-binding globulin (SHBG). SHBG binds to sex hormones, including testosterone, rendering them unavailable. Research is showing that even after ending a hormonal contraceptive method, SHBG levels remain elevated and no reliable data exists to predict when this phenomenon will diminish. Oral contraceptives lower androgen levels in users, and lowered androgen levels generally lead to a decrease in sexual desire. However, usage of oral contraceptives has shown to typically not have a connection with lowered libido in women. Multiple studies have shown that usage of oral contraceptives is associated with either a small increase or decrease in libido, with most users reporting a stable sex drive. Effects of age Males reach the peak of their sex drive in their teenage years, while females reach it in their thirties. The surge in testosterone hits the male at puberty resulting in a sudden and extreme sex drive which reaches its peak at age 15–16, then drops slowly over his lifetime. In contrast, a female's libido increases slowly during adolescence and peaks in her mid-thirties. Actual testosterone and estrogen levels that affect a person's sex drive vary considerably. Some boys and girls will start expressing romantic or sexual interest by age 10–12. The romantic feelings are not necessarily sexual, but are more associated with attraction and desire for another. 
For boys and girls in their preteen years (ages 11–12), at least 25% report "thinking a lot about sex". By the early teenage years (ages 13–14), however, boys are much more likely to have sexual fantasies than girls. In addition, boys are much more likely to report an interest in sexual intercourse at this age than girls. Masturbation among youth is common, with prevalence among the population generally increasing until the late 20s and early 30s. Boys generally start masturbating earlier, with less than 10% boys masturbating around age 10, around half participating by age 11–12, and over a substantial majority by age 13–14. This is in sharp contrast to girls where virtually none are engaging in masturbation before age 13, and only around 20% by age 13–14. People in their 60s and early 70s generally retain a healthy sex drive, but this may start to decline in the early to mid-70s. Older adults generally develop a reduced libido due to declining health and environmental or social factors. In contrast to common belief, postmenopausal women often report an increase in sexual desire and an increased willingness to satisfy their partner. Women often report family responsibilities, health, relationship problems, and well-being as inhibitors to their sexual desires. Aging adults often have more positive attitudes towards sex in older age due to being more relaxed about it, freedom from other responsibilities, and increased self-confidence. Those exhibiting negative attitudes generally cite health as one of the main reasons. Stereotypes about aging adults and sexuality often regard seniors as asexual beings, doing them no favors when they try to talk about sexual interest with caregivers and medical professionals. Non-western cultures often follow a narrative of older women having a much lower libido, thus not encouraging any sort of sexual behavior for women. Residence in retirement homes has affects on residents' libidos. In these homes, sex occurs, but it is not encouraged by the staff or other residents. Lack of privacy and resident gender imbalance are the main factors lowering desire. Generally, for older adults, being excited about sex, good health, sexual self-esteem and having a sexually talented partner can be factors. Sexual desire disorders A sexual desire disorder is more common in women than in men, and women tend to exhibit less frequent and less intense sexual desires than men. Erectile dysfunction may happen to the penis because of lack of sexual desire, but these two should not be confused. For example, large recreational doses of amphetamine or methamphetamine can simultaneously cause erectile dysfunction and significantly increase libido. However, men can also experience a decrease in their libido as they age. The American Medical Association has estimated that several million US women suffer from a female sexual arousal disorder, though arousal is not at all synonymous with desire, so this finding is of limited relevance to the discussion of libido. Some specialists claim that women may experience low libido due to some hormonal abnormalities such as lack of luteinising hormone or androgenic hormones, although these theories are still controversial. See also References Further reading Ellenberger, Henri (1970). The Discovery of the Unconscious: The History and Evolution of Dynamic Psychiatry. New York: Basic Books. Hardcover , softcover . Froböse, Gabriele, and Froböse, Rolf. Lust and Love: Is It More than Chemistry? Michael Gross (trans. and ed.). 
Royal Society of Chemistry, (2006) Giles, James, The Nature of Sexual Desire, Lanham, Maryland: University Press of America, 2008. Carl Jung Energy and instincts Estrogens Freudian psychology Motivation Philosophy of sexuality Psychoanalytic terminology Psychodynamics Testosterone |
No, this text is not related with defense topics | Salience (also called saliency) is that property by which some thing stands out. Salient events are an attentional mechanism by which organisms learn and survive; those organisms can focus their limited perceptual and cognitive resources on the pertinent (that is, salient) subset of the sensory data available to them. Saliency typically arises from contrasts between items and their neighborhood. They might be represented, for example, by a red dot surrounded by white dots, or by a flickering message indicator of an answering machine, or a loud noise in an otherwise quiet environment. Saliency detection is often studied in the context of the visual system, but similar mechanisms operate in other sensory systems. Just what is salient can be influenced by training: for example, for human subjects particular letters can become salient by training. There can be a sequence of necessary events, each of which has to be salient, in turn, in order for successful training in the sequence; the alternative is a failure, as in an illustrated sequence when tying a bowline; in the list of illustrations, even the first illustration is a salient: the rope in the list must cross over, and not under the bitter end of the rope (which can remain fixed, and not free to move); failure to notice that the first salient has not been satisfied means the knot will fail to hold, even when the remaining salient events have been satisfied. When attention deployment is driven by salient stimuli, it is considered to be bottom-up, memory-free, and reactive. Conversely, attention can also be guided by top-down, memory-dependent, or anticipatory mechanisms, such as when looking ahead of moving objects or sideways before crossing streets. Humans and other animals have difficulty paying attention to more than one item simultaneously, so they are faced with the challenge of continuously integrating and prioritizing different bottom-up and top-down influences. Neuroanatomy The brain component named the hippocampus helps with the assessment of salience and context by using past memories to filter new incoming stimuli, and placing those that are most important into long term memory. The entorhinal cortex is the pathway into and out of the hippocampus, and is an important part of the brain's memory network; research shows that it is a brain region that suffers damage early on in Alzheimer's disease, one of the effects of which is altered (diminished) salience. The pulvinar nuclei (in the thalamus) modulate physical/perceptual salience in attentional selection. One group of neurons (i.e., D1-type medium spiny neurons) within the nucleus accumbens shell (NAcc shell) assigns appetitive motivational salience ("want" and "desire", which includes a motivational component), aka incentive salience, to rewarding stimuli, while another group of neurons (i.e., D2-type medium spiny neurons) within the NAcc shell assigns aversive motivational salience to aversive stimuli. The primary visual cortex (V1) generates a bottom-up saliency map from visual inputs to guide reflexive attentional shifts or gaze shifts. According to V1 Saliency Hypothesis, the saliency of a location is higher when V1 neurons give higher responses to that location relative to V1 neurons' responses to other visual locations. For example, a unique red item among green items, or a unique vertical bar among horizontal bars, is salient since it evokes higher V1 responses and attracts attention or gaze. 
The V1 neural responses are sent to the superior colliculus to guide gaze shifts to the salient locations. A fingerprint of the saliency map in V1 is that attention or gaze can be captured by the location of an eye-of-origin singleton in visual inputs, e.g., a bar uniquely shown to the left eye in a background of many other bars shown to the right eye, even when observers cannot tell the difference between the singleton and the background bars. In psychology The term is widely used in the study of perception and cognition to refer to any aspect of a stimulus that, for any of many reasons, stands out from the rest. Salience may be the result of emotional, motivational or cognitive factors and is not necessarily associated with physical factors such as intensity, clarity or size. Although salience is thought to determine attentional selection, salience associated with physical factors does not necessarily influence selection of a stimulus. Salience bias Salience bias (also known as perceptual salience) is the cognitive bias that predisposes individuals to focus on items that are more prominent or emotionally striking and ignore those that are unremarkable, even though this difference is often irrelevant by objective standards. Salience bias is closely related to the concept of availability in behavioral economics. In interaction design Salience in design draws from the cognitive aspects of attention and applies it to the making of 2D and 3D objects. When designing computer and screen interfaces, salience helps draw attention to certain objects like buttons and signify affordance, so designers can utilize this aspect of perception to guide users. There are several variables used to direct attention: Color. Hue, saturation, and value can all be used to call attention to areas or objects within an interface, and to de-emphasize others. Size. Object size and proportion relative to surrounding elements create visual hierarchy, both in interactive elements like buttons and in informative elements like text. Position. An object's orientation or spatial arrangement in relation to the surrounding objects creates differentiation to invite action. Accessibility A consideration for salience in interaction design is accessibility. Many interfaces used today rely on visual salience for guiding user interaction, and people with disabilities such as color-blindness may have trouble interacting with interfaces that use color or contrast to create salience. Aberrant salience hypothesis of schizophrenia Kapur (2003) proposed that a hyperdopaminergic state, at a "brain" level of description, leads to an aberrant assignment of salience to the elements of one's experience, at a "mind" level. These aberrant salience attributions have been associated with altered activities in the mesolimbic system, including the striatum, the amygdala, the hippocampus, the parahippocampal gyrus, the anterior cingulate cortex and the insula. Dopamine mediates the conversion of the neural representation of an external stimulus from a neutral bit of information into an attractive or aversive entity, i.e. a salient event. Symptoms of schizophrenia may arise out of 'the aberrant assignment of salience to external objects and internal representations', and antipsychotic medications reduce positive symptoms by attenuating aberrant motivational salience via blockade of the dopamine D2 receptors (Kapur, 2003). Alternative areas of investigation include supplementary motor areas, frontal eye fields and parietal eye fields. 
These areas of the brain are involved with calculating predictions and visual salience. Changing expectations about where to look restructures these areas of the brain. This cognitive repatterning can result in some of the symptoms found in such disorders. Visual saliency modeling In the domain of psychology, efforts have been made in modeling the mechanism of human attention, including learning to prioritize the different bottom-up and top-down influences. In the domain of computer vision, efforts have been made in modeling the mechanism of human attention, especially the bottom-up attentional mechanism, including both spatial and temporal attention. Such a process is also called visual saliency detection. Generally speaking, there are two kinds of models to mimic the bottom-up saliency mechanism. One way is based on spatial contrast analysis: for example, a center-surround mechanism is used to define saliency across scales, which is inspired by the putative neural mechanism. The other way is based on frequency-domain analysis. While some of these models use the amplitude spectrum to assign saliency to rarely occurring magnitudes, Guo et al. use the phase spectrum instead. Recently, Li et al. introduced a system that uses both the amplitude and the phase information. A minimal code sketch of the frequency-domain approach is given at the end of this article. A key limitation in many such approaches is their computational complexity, leading to less than real-time performance, even on modern computer hardware. Some recent work attempts to overcome these issues at the expense of saliency detection quality under some conditions. Other work suggests that saliency and associated speed-accuracy phenomena may be a fundamental mechanism determined during recognition through gradient descent, and need not be spatial in nature. See also Availability heuristic Dopamine hypothesis of schizophrenia Latent inhibition Schizophrenia Schizotypy Spatial attention Temporal attention References External links iLab at the University of Southern California Scholarpedia article on visual saliency by Prof. Laurent Itti Saliency map at Scholarpedia Cognitive neuroscience Neuropsychology Attention Computer vision |
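The following is a minimal sketch, in Python, of the frequency-domain family of saliency models described above: it keeps an image's phase spectrum, suppresses the smooth part of its log-amplitude spectrum, and smooths the squared reconstruction into a saliency map. It is an illustrative toy, assuming only that NumPy and SciPy are available; it is not the implementation of any specific model cited above, and the function name spectral_saliency and its parameters are chosen here purely for illustration.

# Illustrative sketch of a frequency-domain bottom-up saliency map.
# Assumptions: grayscale float image, NumPy and SciPy installed.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def spectral_saliency(image: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Return a saliency map scaled to [0, 1] for a 2-D grayscale array."""
    spectrum = np.fft.fft2(image)
    log_amplitude = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    # Keep only the part of the log-amplitude spectrum that deviates from its
    # local average; statistically unusual image content survives this step.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)
    # Reconstruct with the original phase, square the magnitude, and smooth.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=sigma)
    saliency -= saliency.min()
    return saliency / (saliency.max() + 1e-12)

if __name__ == "__main__":
    # Toy input: an "odd one out" bright patch on a noisy background.
    rng = np.random.default_rng(0)
    img = rng.normal(0.5, 0.05, size=(128, 128))
    img[60:70, 60:70] += 0.5
    sal = spectral_saliency(img)
    print("peak saliency at:", np.unravel_index(sal.argmax(), sal.shape))

On this toy input the map should peak near the odd-one-out patch. Spatial-contrast (center-surround) models reach a comparable map by a different route, combining difference-of-Gaussians responses across several scales instead of working in the frequency domain.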
No, this text is not related with defense topics | Regulatory focus theory (RFT) is a theory of goal pursuit formulated by Columbia University psychology professor and researcher E. Tory Higgins regarding people's perceptions in the decision making process. RFT examines the relationship between the motivation of a person and the way in which they go about achieving their goal. RFT posits two separate and independent self-regulatory orientations: prevention and promotion (Higgins, 1997). This psychological theory, like many others, is applied in communication, specifically in the subfields of nonverbal communication and persuasion. Chronic regulatory focus is measured using the Regulatory Focus Questionnaire (Higgins et al., 2001) or the Regulatory Strength measure. Momentary regulatory focus can be primed or induced. Background Regulatory fit theory To understand RFT, it is important to understand another of E. Tory Higgins' theories: regulatory fit theory. When a person believes that there is "fit", they will involve themselves more in what they are doing and "feel right" about it. Regulatory fit should not directly affect the hedonic occurrence of a thing or occasion, but should influence a person's assurance in their reaction to the object or event. Regulatory fit theory suggests that a match between orientation to a goal and the means used to approach that goal produces a state of regulatory fit that both creates a feeling of rightness about the goal pursuit and increases task engagement (Higgins, 2001, 2005). Regulatory fit intensifies responses, such as the value of a chosen object, persuasion, and job satisfaction. Regulatory fit does not increase the assessment of a decision; instead when someone feels "right" about their decision, the experience of "correctness and importance" is transferred to the ensuing assessment of the chosen object, increasing its superficial worth. Research suggests that the "feeling right" experience can then sway retrospective or prospective evaluations. Regulatory fit can be manipulated incidentally (outside the context of interest) or integrally (within the context of interest). Definition RFT refers to when a person pursues a goal in a way that maintains the person's own personal values and beliefs, also known as regulatory orientation. This theory operates on the basic principle that people embrace pleasure but avoid pain, and they then maintain their regulatory fit based on this standard. The regulatory focus is basically the way in which someone approaches pleasure but avoids pain. An individual's regulatory focus concentrates on desired end-states, and the approach motivation used to go from the current state to the desired end-state. This theory differentiates between a promotion-focus on hopes and accomplishments, also known as gains. This focus is more concerned with higher level gains such as advancement and accomplishment. Another focus is the prevention-focus based on safety and responsibilities, also known as non-losses. This focus emphasizes security and safety by following the guidelines and the rules. These two regulatory focuses regulate the influences that a person would be exposed to in the decision-making process, and determine the different ways they achieve their goal, as discussed by RFT. An individual's regulatory orientation is not necessarily fixed. While individuals have chronic tendencies towards either promotion or prevention, these preferences may not hold for all situations. 
Furthermore, a specific regulatory focus can be induced. The value taken from interaction and goal attainment can be either positive or negative. The decision has positive value when people attempt to attain their goal in a way that fits their regulatory orientation and it will have negative value when people attempt to attain their goal in a way that does not fit their regulatory orientation. Regulatory fit allows value to be created by intensifying the commitment, based on one of the regulatory focus orientations. Making choices and fulfilling objectives are considered as activities, and with any activity, people can be more or less involved. When this involvement is strong, it can intensify the feelings and values about this activity, and the approach to the activity determines whether they are or are not satisfied with the outcome and method of achieving the outcome. This theory has noteworthy implications for increasing the value of life. For example, in interpersonal conflict, if each person experiences "fit", each one will be satisfied with and committed to the outcome. In the broad sense, for people to appreciate their own lives, they need to be satisfied and "feel right" about what they are doing, and the way they are doing it. If it is not satisfying, it is known as "non-fit", and they will not reach their desired goal. Goal attainment and motivation Regulatory focus theory, according to Higgins, views motivation in a way that allows an understanding of the foundational ways we approach a task or a goal. Different factors can motivate people during goal pursuit, and we self-regulate our methods and processes during our goal pursuit. RFT proposes that motivational strength is enhanced when the manner in which people work toward a goal sustains their regulatory orientation. Achieving a goal in a way that is consistent to a person's regulatory orientation leads to an individual sense of importance to the event. The impact of motivation is considered calculated and this creates a greater sense of commitment to the goal. The more strongly an individual is engaged (i.e., involved, occupied, fully engrossed) in an activity, the more intense the motivational force experienced. Engagement is of great importance to attain and motivate in order to reach a goal. Engagement serves as intensifier of the directional component of the value experience. An individual who is strongly engaged in a goal pursuits will experience a positive target more positively and a negative target more negatively. Individuals can pursue different goals with diverse regulatory orientations and in unlike ways. There are two different kinds of regulatory orientations that people use to obtain their goals: promotion-focus orientation and prevention-focus orientation. These terms are derived from E. Tory Higgins's Theory of Regulatory Focus. In which, he adds to the notion that people regulate their goal-oriented behavior in two very distinct ways, coined promotion-focus orientation and prevention-focus orientation E. Tory Higgins uses this example: there is Student A and Student B, and they both have the shared goal to make an A in a class they are both taking in college. Student A uses a promotion-focus orientation which slants them towards achieving their goal and towards advancement, growth and life accomplishment. This would cause Student A to view the goal as an ideal that satisfies their need for accomplishment. 
Student B uses a prevention-focus orientation where the goal is something that should be realized because it fulfills their need for security, protection and prevention of negative outcomes. Student A uses an eager approach where they read extra materials to obtain their goal of an A. Student B uses a vigilant approach where they become more detail-oriented and pay careful attention to completing all of the course requirements. Both forms of regulatory orientation can work to fulfill goals, but the choice of orientation is based on individual preferences and style. When a person pursues their goal in the focus that fits their regulatory orientation, they are more likely to pursue their goal more eagerly and aggressively than if they were using the other focus. In this case, each student has a different style, and each feels more comfortable pursuing their goal in their own way. The outcome in this experiment would have been different if the students were given an undesirable choice. When people make decisions, they often envision the possible "pleasure or pain" of the possible outcomes that the focus orientation will produce. A person imagining making a pleasing choice is more likely to engage in promotion-focus orientation because envisioning the possible outcome of success maintains eagerness about the outcome but does not place importance on vigilance. A person imagining the possible pain of making an undesirable choice maintains more vigilance but less eagerness. A person with promotion-focus orientation is more likely to remember the occasions where the goal is pursued by using eagerness approaches and less likely to remember occasions where the goal is pursued by vigilance approaches. A person with prevention-focus orientation is more likely to remember events where the goal is pursued by means of vigilance than if it was pursued using eagerness approaches. Application Regulatory focus theory and persuasion When relating regulatory focus theory to persuasion, it is important to remember that RFT is a goal-attainment theory, and that RFT can spawn feelings of rightness/wrongness which in turn may produce formulations for judgments. Feelings of rightness give an individual more commitment to the incoming information and help them avoid endangering their regulatory fit, which in turn supports their regulatory focus and their acceptance of a probable motive to change. If a person experiences feelings of wrongness, they will suffer negative emotions and deem the experience and information a threat to their regulatory fit, and therefore a threat to their regulatory focus and their goal. Studies have been done where fit and focus have been applied to show their applicability to consumer purchasing, health advisories, and social policy issues. To be persuaded is to change your prior feelings, actions, and/or beliefs on a matter to where you agree with the persuader. The "fit" involved in RFT plays a large role in such issues and stories because it can be a device to help an individual receive and review the experience during a particular message delivery. Positive reinforcement and feelings of rightness while decoding the message create a stronger engagement and relationship with processing the message, while negative reinforcement and feelings of wrongness lessen the engagement and attachment. 
Researchers found that targeting the two different regulatory focus orientations, and their coinciding types of fit, works as an effective process to aid in persuasive charm or pull when they introduced a manner of persuasion where the framing of the message was everything and the content was irrelevant to uphold or interrupt a person's regulatory fit and follow the pattern of logic used in regulatory orientation. Lee and Aaker (2004) conducted an experiment that involved whether or not to give their information in a prevention-focus- or promotion-focus-concerning way. The study involved an advertisement for a grape juice drink, which they split into two to create prevention-focus concerns (disease-preventing) and then promotion-focus concerns (energy enhancement). In doing so, they demonstrated that rather than trying to know each individual recipient's qualities, one needs only to start by nailing the focus (prevention/promotion) and then framing the message so that it creates that "rightness". Some may confuse RFT with regulatory fit, regulatory relevance, message matching, and source attractiveness in such an example. The extent of similarities between closely related theories of RFT, such as ones stated above, make it hard to clarify when this theory is applicable or apparent in respect to the persuasion process. Regulatory focus theory and nonverbal communication RFT can be a useful outline for a better understanding of the effects of nonverbal cues in persuasion and impression formation. Regulatory Fit Theory suggests that the effect of a cue cannot be understood without remembering what the cue means given a recipient's focus orientation. Nonverbal cues can be used by the message source to vary delivery style, more specifically to convey eagerness or vigilance, of a given message in a way that will produce regulatory fit in message recipients of different focus orientations. Advancement implies eager movement forward, so eagerness is conveyed by gestures that involve animated, broad opening movements such as hand movements projecting outward, forward leaning body positions, fast body movement, and fast speech rate. Caution implies vigilant carefulness, so vigilance should be conveyed by gestures that show precision like slightly backward-leaning body positions, slower body movement, and slower speech rate. An eager nonverbal delivery style will result in greater message effectiveness for promotion-focus recipients than for prevention-focus recipients, while the opposite is true for a vigilant nonverbal style. There are various aspects, which may contribute to whether or not a message's persuasive element is successful. One aspect is the effect of nonverbal cues and their association with persuasive appeals based on the message recipient's motivational regulatory orientation. This determines the recipient's impression of the source during impression formation. Research has found that nonverbal cues are an essential element of most persuasive appeals. RFT creates the background that allows a prediction for when and for whom a nonverbal cue can have an effect on persuasion. When nonverbal cues and signals are used appropriately, they increase the effectiveness of persuasion. References Sources Avnet, T., & Higgins, E. (2003, September). Locomotion, assessment, and regulatory fit: Value transfer from "how" to "what". Journal of Experimental Social Psychology, 39(5), 525. Retrieved April 10, 2009, Cesario, J., Higgins, E. 
(2008 May) Making Message Recipients "Feel Right": How Nonverbal Cues Can Increase Persuasion. Psychological Science, 19(5), 415–420, Cesario, J., Higgins, E., & Scholer, A. (2008, January). Regulatory fit and persuasion: Basic principles and remaining questions. Social and Personality Psychology Compass, 2(1), 444–463. Retrieved April 10, 2009, from PsycINFO database Higgins, E. (1997, December). Beyond pleasure and pain. American Psychologist, 52(12), 1280–1300. Retrieved April 10, 2009, Higgins, E. (2000, November). Making a good decision: Value from fit. American Psychologist, 55(11), 1217–1230. Retrieved April 10, 2009, Higgins, E. "Value From Regulatory Fit." American Psychological Society 14 (2005): 209–13. Higgins, E., Friedman, R., Harlow, R., Idson, L., Ayduk, O., & Taylor, A. (2001, January). Achievement orientations from subjective histories of success: Promotion pride versus prevention pride. European Journal of Social Psychology, 31(1), 3–23. Retrieved April 10, 2009, from Academic Search Elite database. Higgins, E., Idson, L., Freitas, A., Spiegel, S., & Molden, D. (2003, June). Transfer of value from fit. Journal of Personality and Social Psychology, 84(6), 1140–1153. Retrieved April 10, 2009, Spiegel, S., Grant-Pillow, H., & Higgins, E.(2004, January). How regulatory fit enhances motivational strength during goal pursuit. European Journal of Social Psychology, 34(1), 39–54. Retrieved April 10, 2009, . Vaughn, Leigh Ann, Sarah J. Hesse, Zhivka Petkova, and Lindsay Trudeau. ""This story is right on": The impact of regulatory fit on narrative engagement and persuasion." 4 Oct. 2008. Wiley InterScience. Journals. University of Oklahoma Library, Norman. 9 Apr. 2009 External links and further reading HigginsLab: An overview, with additional reading suggestions, for E. Tory Higgins' theories Understanding Regulatory Fit: A look from a marketing perspective Counterfactual thinking and regulatory focus and fit: Application of Regulatory Focus and Regulatory Fit to Counterfactual Thinking Recasting Goal Setting in Negotiation: A Regulatory Focus Perspective Communication Nonverbal communication Motivation Motivational theories Psychological theories Attitude change |
No, this text is not related with defense topics | Supposition theory was a branch of medieval logic that was probably aimed at giving accounts of issues similar to modern accounts of reference, plurality, tense, and modality, within an Aristotelian context. Philosophers such as John Buridan, William of Ockham, William of Sherwood, Walter Burley, Albert of Saxony, and Peter of Spain were its principal developers. By the 14th century it seems to have drifted into at least two fairly distinct theories, the theory of "supposition proper", which included an "ampliation" and is much like a theory of reference, and the theory of "modes of supposition" whose intended function is not clear. Supposition proper Supposition was a semantic relation between a term and what that term was being used to talk about. So, for example, in the suggestion Drink another cup, the term cup is suppositing for the wine contained in the cup. The logical suppositum of a term was the object the term referred to. (In grammar, suppositum was used in a different way). However, supposition was a different semantic relationship from signification. Signification was a conventional relationship between utterances and objects mediated by the particularities of a language. Poculum signifies in Latin what cup signifies in English. Signification is the imposition of a meaning on an utterance, but supposition is taking a meaningful term as standing in for something. According to Peter of Spain "Hence signification is prior to supposition. Neither do they belong to the same thing. For to signify belongs to an utterance, but to supposit belongs to a term already, as it were, put together out of an utterance and a signification." An easy way to see the difference is in our drink another cup example. Here cup as an utterance signifies a cup as an object, but cup as a term of the language English is being used to supposit for the wine contained in the cup. Medieval logicians divided supposition into many different kinds; the jargons for the different kinds, their relations and what they all mean get complex, and differ greatly from logician to logician. Paul Spade's webpage has a series of helpful diagrams here. The most important division is probably between material, simple, personal, and improper supposition. A term supposits materially when it is used to stand in for an utterance or inscription, rather than for what it signifies. When I say Cup is a monosyllabic word, I am using the word cup to supposit materially for the utterance cup rather than for a piece of pottery. Material supposition is a medieval way of doing the work we would do today by using quotation marks. According to Ockham (Summa of Logic I64, 8) "Simple supposition occurs when a term supposits for an intention of the soul, but is not take significatively." The idea is that simple supposition happens when the term is standing in for a human concept rather than for the object itself. If I say Cups are an important type of pottery the term cups is not standing in for any particular cup, but for the idea of a cup in the human mind (according to Ockham, and many medieval logicians, but not according to John Buridan). Personal supposition in contrast is when the term supposits for what it signifies. If I say Pass me the cup the term cup is standing in for the object that is called a cup in English, so it is in personal supposition. 
A term is in improper supposition if it is suppositing for an object, but a different object than it signifies, as in my example Drink another cup. Modes of supposition Personal supposition was further divided in types such as discrete, determinate, merely confused, and confused and distributive. In 1966 T.K. Scott proposed giving a separate name for Medieval discussions of the subvarieties of personal supposition, because he thought it was a fairly distinct issue from the other varieties of supposition. He proposed calling the subvarieties of personal supposition a theory of "modes of supposition." The Medieval logicians give elaborate sets of syntactical rules for determining when a term supposits discretely, determinately, confusedly, or confusedly and distributively. So for example the subject of a negative claim, or indefinite one supposits determinately, but the subject of a singular claim supposits discretely, while the subject of an affirmative claim supposits confusedly and determinately. Albert of Saxony gives 15 rules for determining which type of personal supposition a term is using. Further the medieval logicians did not seem to dispute about the details of the syntactic rules for determining type of personal supposition. These rules seem to be important because they were linked to theories of descent to particulars and ascent from particulars. When I say I want to buy a cup I've made an indefinite affirmative claim, with cup as the predicate term. Further cup is a common term, including many particular cups within it. So if I "descend to particulars" I can re-phrase my claim as I want to buy this cup or I want to buy that cup, or I want to buy that other cup - and so on for all cups. If I had an infinite disjunction of all particular cups, it could stand in for the term cup, in its simple supposition in I want to buy a cup. This is called determinate supposition. That is when I say I want to buy a cup I mean some determinate cup, but I don't necessarily know which one yet. Likewise if I say Some cup isn't a table, I could substitute This cup isn't a table, or that cup isn't a table or ... On the other hand, if I say No cup is a table, I don't mean This cup isn't a table or that one isn't a table or ... I mean This cup isn't a table, AND that cup isn't a table, AND that other cup isn't a table, AND .... Here I am referring not to a determinate particular cup, but to all cups "fused" together, that is all cups "confusedly." This is called confused and distributive supposition. If I say This cup is made of gold I cannot descend to a disjunction of particulars, or to a conjunction of particulars, but only because this cup is already a particular. This kind of personal supposition is called discrete supposition. However, the predicate of a universal affirmative claim won't really fit any of these models. All coffee cups are cups does not imply All coffee cups are this cup, or all coffee cups are that cup, or ..., but still less does it imply All coffee cups are this cup, and all coffee cups are that cup, and .... On the other hand, if it happened to be the case that there was only one coffee cup left in the world, it would be true that All coffee cups are that cup, so I can validly infer from All coffee cups are that cup, to All coffee cups are cups. Here descent to disjunction fails, and descent to conjunction fails, but "ascent from particulars" is valid. This is called "merely confused supposition." 
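The descent and ascent patterns just described can be glossed, somewhat anachronistically, in modern logical notation. The sketch below is an interpretive illustration rather than the medieval formulation itself; it assumes a fixed, finite stock of particular cups c_1, ..., c_n and writes T(x) for "x is a table".

% Interpretive modern gloss of three modes of personal supposition,
% assuming a fixed finite domain of cups c_1, ..., c_n.
\begin{align*}
&\textbf{Determinate (descent to a disjunction is valid):}\\
&\quad \text{Some cup is not a table} \;\Longrightarrow\; \neg T(c_1) \vee \neg T(c_2) \vee \dots \vee \neg T(c_n)\\
&\textbf{Confused and distributive (descent to a conjunction is valid):}\\
&\quad \text{No cup is a table} \;\Longrightarrow\; \neg T(c_1) \wedge \neg T(c_2) \wedge \dots \wedge \neg T(c_n)\\
&\textbf{Merely confused (only ascent from a particular is valid):}\\
&\quad \text{All coffee cups are } c_i \;\Longrightarrow\; \text{All coffee cups are cups}
\end{align*}

In the merely confused case neither the disjunctive nor the conjunctive descent on the predicate term holds, which is exactly the situation described above for the predicate of a universal affirmative.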
That is basically how the theory works, a much thornier problem is exactly what the theory is for. Some commentators, like Michael Loux, have suggested that the theory of ascent and descent to particulars is intended to provide truth conditions for the quantifiers. T. K. Scott has suggested that the theory of supposition proper was designed to answer the question What kind of thing are you talking about? but the theory of personal supposition was aimed at answering the question How many of them are you talking about? Paul Spade has suggested that by the 14th century the theory of modes of personal supposition wasn't aimed at anything at all anymore. Ampliation When I say No cups are made of lead, cups supposits for all the cups that exist. But if I say Some cups were made of lead in Roman times, cups cannot just be suppositing for all the cups that exist, but for cups in the past as well. Here I am expanding the normal supposition of the terms I use. Peter of Spain says "Ampliation is the extension of a common term from a lesser supposition to a greater one." In practice, if I speak of the past, or the future, or make a modal claim, the terms I use get ampliated to supposit for past things, future things, or possible things, rather than their usual supposition for present actual things. Thus, ampliation becomes the medieval theory for explaining modal and tense logics within the theory of supposition. References Bos, E.P. (ed. 2013), Medieval Supposition Theory Revisited. Studies in Memory of L. M. de Rijk, Brill: Leiden. De Rijk, Lambertus M. (1967). Logica Modernorum. Assen: Van Gorcum. Dutilh Novaes, C. (2007), Formalizing Medieval Logical Theories. Suppositio, Consequentiae and Obligationes. New York: Springer. Dutilh Novaes, C. (2011), Supposition Theory in H. Lagerlund (ed.) Encyclopedia of Medieval Philosophy, Dordrecht: Springer, 2011, pp. 1229-1236. Kneale, William & Martha Kneale (1962). Development of Logic. Oxford: Clarendon Press. Kretzmann, Norman, Anthony Kenny & Jan Pinborg (1982). Cambridge History of Later Medieval Philosophy Cambridge: Cambridge University Press. McGrade, A.S. (editor), (2003). The Cambridge Companion to Medieval Philosophy, Cambridge University Press. . Terence Parsons (2014). Articulating medieval Logic, New York: oxford University Press. External links Paul Vincent Spade. Mediaeval Logic and Philosophy Paul Vincent Spade. Thoughts, Words, and Things. An Introduction to Late Medieval Logic and Semantic Theory (PDF) Raul Corazzon. Annotated Bibliography on the Medieval Theories of Supposition and Mental Language Theories of language Medieval philosophy History of logic |
No, this text is not related with defense topics | This is a list of free-trade zones by country: Africa Morocco Tanger Free Zone Atlantic Free Zone Kenitra Free Zones at Tanger Med Ksar el Majaz Mellousa 1 and 2 Free Zone in Dakhla and Laayoune: Free Storage Zone of hydrocarbons: Kebdana and Nador Egypt Egypt has nine free-trade zones: Alexandria Public free Zone Damietta Public Free Zone Ismailia Public Free Zone Keft Public Free Zone Media Production City Free Zone Nasr City Public Free Zone Port Said Public Free Zone Shebin El Kom Public Free Zone Suez Public Free Zone Djibouti Djibouti Free Zone Eritrea Massawa Free Trade Zone. Gabon Zone économique spéciale de Nkok, at 30 km of Libreville Ghana Tema Export Processing Zone Shama Land Bank Sekondi Industrial Park Ashanti Technology Park Kenya There are about 40 Export Processing Zones with "close to 40,000 workers employed and contribution of 10.7 % of national exports. Over 70% of EPZ output is exported to the USA under AGOA". Libya Misrata Free Trade Zone Namibia Walvis Bay Export Processing Zone Oshikango (namibia-Angola)Border Export Processing Zone Nigeria Aluminium Smelter Company Free Trade Zone Border Free Trade Calabar Free Trade Zone Centenary Economic City Enugu Industrial Park (Free Zone Status), also known as, ENPOWER Kano Free Trade Zone Ibom Science & Technology Park Free Zone Lekki Free Trade Zone Maigatari Border Free Trade Zone Nigeria International Commerce city Onne Oil and Gas Free Trade Zone Ogun-Guangdong Free Trade Zone Illela International Border Market LADOL Free Zone Lagos Free Trade Zone Snake Island Free Trade Zone Tinapa Resort & Leisure Free Trade Zone NAHCO Free Trade Zone Tanzania Benjamin William Mkapa Special Economic Zone Togo Port of Lome Free Trade Zone/Export processing zone Tunisia Bizerte Zarzis Seychelles International Trade Zone Asia ASEAN UNIDO Viet Nam (United Nations Industrial Development Organization) has compiled in 2015 a list of Special Economic Zones in the ASEAN Economic Community in a report titled "Economic Zones in the ASEAN" written by Arnault Morisson. Bahrain Bahrain Logistics Zone Bangladesh Bangladesh Export Processing Zone Authority Chittagong Export Processing Zone Karnaphuli Export Processing Zone Dhaka Export Processing Zone Comilla Export Processing Zone Adamjee Export Processing Zone Mongla Export Processing Zone Ishwardi Export Processing Zone Uttara Export Processing Zone China Tianjin Free-Trade Zone Shanghai Free-Trade Zone Fujian Free-Trade Zone Guangdong Free-Trade Zone Liaoning Free Trade Zone Zhejiang Zone Henan Free Trade Zone Hubei Free Trade Zone Sichuan Free Trade Zone Shaanxi Free Trade Zone Chongqing Free Trade Zone India Kandla Special Economic Zone, India. India was one of the first in Asia to recognize the effectiveness of the Export Processing Zone (EPZ) model in promoting exports, with Asia's first EPZ set up in Kandla in 1965. With a view to overcome the shortcomings experienced on account of the multiplicity of controls and clearances; absence of world-class infrastructure, and an unstable fiscal regime and with a view to attract larger foreign investments in India, the Special Economic Zones (SEZs) Policy was announced in April 2000. SuRSEZ is the First Operating Zone in the private sector in India. The track record of SuRSEZ in the last 5 years speaks for itself. From a level of about Rs.62 crores in 2000–01, exports from SuRSEZ rose to Rs. 2400 crores in the year 2005–06. 
AMRL SEZ and FTWZ in Nanguneri Taluk of Tirunelvelli District is spread over 2518 Acres of development. Out of which 1618 acres are dedicated for multiproduct industrial space; 800 acres are planned for lifestyle zone and 100 acres of Free Trade and Warehousing Zone in south Tamil Nadu is being developed. The FTWZ has a grade A warehouse 100,000 sq ft out of which approximately 20,000 sq ft is already occupied. Inspira Pharma and Renewable Energy Park, Aurangabad, Maharashtra, India Sricity Multi product SEZ, part of Sricity which is a developing satellite city in Andhra Pradesh, India Arshiya International Ltd, India's first Free Trade and Warehousing Zone The largest multi-product free-trade and warehousing infrastructure in India. Arshiya's first 165-acre FTWZ is operational in Panvel, Mumbai, and is to be followed by one in Khurja near Delhi. Arshiya's Mega Logistics Hub at Khurja to have 135 acre FTWZ, 130 acre Industrial and Distribution Hub (Distripark) & 50 acre Rail siding. Arshiya International will be developing three more Free Trade and Warehousing zones in Central, South and East of India. Cochin Special Economic Zone is a Special Economic Zone in Cochin, in the State of Kerala in southwest India, set up for export- oriented ventures. The Special Economic Zone is a foreign territory within India with special rules for facilitating foreign direct investment. The Zone is run directly by the Government of India. Cochin SEZ is a multi-product Zone. Cochin is strategically located. It is in southwest India, just 11 nautical miles off the international sea route from Europe to the Pacific Rim. Cochin is being developed by the Dubai Ports International as a container transhipment terminal with direct sailings to important markets of the world, which could position it as Hub for South Asia. Hardware Park, Hyderabad Madras Export Processing Zone Indonesia Batam Free Trade Zone Bintan Free Trade Zone Karimun Free Trade Zone Sabang Free Trade Zone Tanjung Pinang Free Trade Zone Iran Anzali Free Zone, Gilan province Aras Free Zone, East Azerbaijan province Arvand Free Zone, Khouzestan province Chabahar Free Trade-Industrial Zone Kish Island, Hormozgan Province Maku Free Zone, West Azarbaijan province Qeshm Island, Hormozgan province Imam Khomeini Airport city Free Zone, Tehran Province Farzazan Pars Company, Tehran Province Israel Eilat Free Trade Zone Japan Okinawa FTZ Naha, Okinawa, Japan and Nakagusuku Free Trade Zone Jordan Diamonds Private Free zone Aqaba Special Economic Zone Authority Jordan Media City Korea, North Rason Special Economic Zone Malaysia Bayan Lepas Free Industrial Zone, Penang Hulu Klang Free Trade Zone (Statchippac, Texas Instrument) Kulim Hi-Tech Park, Kedah Melaka Batu Berendam Free Trade Zone (Texas Instrument, Dominant Semiconductor, Panasonic) Pasir Gudang Free Trade Zone, Johor Port Klang Free Zone, Klang, Selangor Sungai Way Free Trade Zone (Western Digital, Free Scale, etc.) Teluk Panglima Garang Free Trade Zone (Toshiba, etc.) 
Port of Tanjung Pelepas Free Zone, Johor Oman Al-Mazyunah Free Zone Special Economic Zone at Duqm Salalah Free Zone, Salalah (www.sfzco.com) Sohar Free Zone, Sohar (www.soharportandfreezone.com) Pakistan Gawadar port free trade zone Karachi Export Processing Zone Philippines Saudi Arabia Jazan Economic City King Abdullah Economic City Prince Abdulaziz Bin Mousaed Economic City Tajikistan Panj Free Economic Zone Sughd Free Economic Zone United Arab Emirates Abu Dhabi Khalifa Port Free Trade Zone Dubai Dubai Multi Commodities Centre Dubai Airport Freezone Dubai Internet City Dubai Knowledge Village Dubai Media City Dubai Silicon Oasis International Media Production Zone Jebel Ali Free Zone Ajman Ajman Free Zone Fujairah Creative City Umm Al Quwain Umm Al Quwain Free Trade Zone (UAQFTZ) Ras Al Khaimah Ras Al Khaimah Economic Zone Yemen Aden Europe Belarus Brest FEZ China-Belarus Industrial Park Grodno FEZ Mogilev Free Enterprise Zone FEZ Gomel-Raton Croatia Land free zone: Krapina–Zagorje Free Zone (in liquidation) Danube Free Zone of Vukovar Free Zone of Kukuljanovo (inactive) Free Zone of Port of Rijeka – Škrljevo Free Zone of Split–Dalmatia (in liquidation) Free Zone of Zagreb Port free zone: Free Zone of Port of Ploče Free Zone of Port of Pula Free Zone of Port of Rijeka Free Zone of Port of Split Ireland Shannon Free Zone Italy Porto Franco di Trieste Porto Franco di Venezia (Venice) Livigno Campione d'Italia (Until 1 January 2020) Latvia Liepāja Special Economic Zone Lithuania Akmenė Free Economic Zone Kaunas Free Economic Zone Klaipėda Free Economic Zone Kėdainiai Free Economic Zone Marijampolė Free Economic Zone Panevėžys Free Economic Zone Šiauliai Free Economic Zone Moldova Moldova has seven Free Trade Zones, called in the national legislation Free Economic Areas. FEA “Expo-Business-Chişinău” FEA “Bălţi” FEA PP “Valkaneş” FEA “Ungheni-Business” FEA “Tvardiţa” FEA PP “Otaci-Business” FEA PP “Taraclia” Poland Special Economic Zone EURO-PARK MIELEC Wałbrzych Special Economic Zone "INVEST-PARK" Romania Constanta South Free Zone Basarabi Free Zone Giurgiu Free Zone Arad-Curtici Free Zone Sulina Free Zone Galati Free Zone Braila Free Zone Georgia Kutaisi Free Industrial Zone Poti Free Industrial Zone North America Bahamas Freeport, Bahamas Canada CentrePort Canada - Winnipeg, Manitoba Calgary Region Inland Port FTZ - Calgary, Alberta Port Alberta - Edmonton, Alberta Halifax, Nova Scotia - Halifax Gateway - Halifax, Nova Scotia Global Transportation Hub Authority - Regina, Saskatchewan Regional Municipality of Niagara - Niagara Trade Zone - Thorold, Ontario Cape Breton Regional Municipality - CBRM Foreign Trade Zone (Sydney), Nova Scotia Windsor-Essex Foreign Trade Zone - Windsor, Ontario Saint John, New Brunswick - Foreign Trade Zone - Saint John, New Brunswick Dominican Republic Zona Franca Industrial La Palma LTD - Santiago Nigua Free Zone - Santo Domingo El Salvador Zona Franca Santa Ana Guatemala Zolic Haiti Lafito Industrial Free Zone Jamaica Jamaican Free Zones Mexico Maquiladoras Panama Colon Free Trade Zone United States South America Argentina General Pico Tierra del Fuego Province Brazil Bataguassu Free Economic Zone of Manaus Free Economic Zone of Ceara Chile Zona Franca of Iquique Colombia Zona Franca del Pacifico - Cali-Palmira, Colombia. 
Zona Franca Bogota - Bogota-Cundinamarca, Colombia Zona Franca de Cucuta Zona Franca Metropolitana S.a.s Zona Franca de Occidente Zona Franca Santander Zona Franca de Tocancipa S.a Zona Franca de Barranquilla Zona Franca Brisa S.a Zona Franca La Cayena Zona Franca Las Americas Paraguay Ciudad del Este Peru Zona Franca of Tacna - ZOFRATACNA CETICOS Matarani CETICOS Ilo CETICOS Paita Uruguay Aguada Park (Itsen S.A.)-Uruguay Parque de las Ciencias (Parque de las Ciencias S.A.)-Uruguay WTC Free Zone (WTC Free Zone S.A.)-Uruguay Zona Franca de Colonia (Grupo Continental S.A.)-Uruguay Zona Franca Colonia Suiza (Colonia Suiza S.A.)-Uruguay Zona Franca Floridasur (Florida S.A.)-Uruguay Zona Franca Libertad (Lideral S.A.)-Uruguay Zona Franca Nueva Palmira (Nueva Palmira)-Uruguay Zona Franca Río Negro (Río Negro S.A.)-Uruguay Zona Franca Rivera (Rivera)-Uruguay Zona Franca UPM (UPM Fray Bentos S.A.)-Uruguay Zonamerica Business & Technology Park - Uruguay See also List of free economic zones List of special economic zones References Tax avoidance Free trade International trade-related lists |
No, this text is not related with defense topics | In the field of pharmacy, compounding (performed in compounding pharmacies) is preparation of a custom formulation of a medication to fit a unique need of a patient that cannot be met with commercially available products. This may be done for medical reasons, such as administration in a different format (ex: tablet to liquid), to avoid a non-active ingredient the patient is allergic to, or to provide an exact dose that isn't commercially available. Medically necessary compounding is referred to as "traditional" compounding. It may also be done for medically optional reasons, such as preference of flavor or texture, or dietary restrictions. Hospital pharmacies typically engage in compounding medications for intravenous administration, whereas outpatient or community pharmacies typically engage in compounding medications for oral or topical administration. Due to the rising cost of compounding and drug shortages, some hospitals outsource their compounding needs to large-scale compounding pharmacies, particularly of sterile-injectable medications. Compounding preparations of a given formulation, as opposed to preparation for a specific patient, is known as "non-traditional" compounding. Jurisdictions have varying regulations that apply to drug manufacturers and pharmacies that do bulk compounding. History The earliest chemists were familiar with various natural substances and their uses. They compounded a variety of preparations such as medications, dyes, incense, perfumes, ceremonial compounds, preservatives and cosmetics. In the medieval Islamic world in particular, Muslim pharmacists and chemists developed advanced methods of compounding drugs. The first drugstores were opened by Muslim pharmacists in Baghdad in 754. The modern age of pharmacy compounding began in the 19th century with the isolation of various compounds from coal tar for the purpose of producing synthetic dyes. From this came the earliest antibacterial sulfa drugs, phenolic compounds made famous by Joseph Lister, and plastics. During the 1800s, pharmacists specialized in the raising, preparation and compounding of crude drugs. Crude drugs, like opium, are from natural sources and usually contain several chemical compounds. The pharmacist extracted these drugs using solvents such as water or alcohol to form extracts, concoctions and decoctions. They eventually began isolating and identifying the active ingredients in these drug concoctions. Using fractionation or recrystallization, they separated an active ingredient from the crude preparation, and compounded a medication using this active ingredient. With the isolation of medications from the raw materials or crude drugs came the birth of the modern pharmaceutical company. Pharmacists were trained to compound the preparations made by the drug companies, but they could not do it efficiently on a small scale. So economies of scale, not lack of skill or knowledge, produced the modern pharmaceutical industry. With the turn of the 20th century came greater government regulation of the practice of medicine. These new regulations forced the drug companies to prove that any new medication they brought to market was safe. With the discovery of penicillin, modern marketing techniques and brand promotion, the drug manufacturing industry came of age. Pharmacists continued to compound most prescriptions until the early 1950s when the majority of dispensed drugs came directly from the large pharmaceutical companies. 
Roles A physician may choose to prescribe a compounded medication for a patient with an unusual health need that cannot be met with commercially manufactured products. The physician may choose to prescribe a compounded medication for reasons such as Patients requiring an individualized compounded formulation to be developed by the pharmacist Patients who cannot take commercially prepared prescriptions of a drug Patients requiring limited dosage strengths, such as a very small dose for infants Patients requiring a different formulation, such as turning a pill into a liquid or transdermal gel for people who cannot swallow pills due to disability Patients requiring an allergen-free medication, such as one without gluten or colored dyes Patients who absorb or excrete medications abnormally Patients who need drugs that have been discontinued by pharmaceutical manufacturers because of low profitability Patients facing a supply shortage of their normal drug Children who want flavored additives in liquid drugs, usually so that the medication tastes like candy or fruit Veterinary medicine, for a change in dose, change to a more easily administered form (such as from a pill to a liquid or transdermal gel), or to add a flavor more palatable to the animal. In the United States, compounded veterinary medicine must meet the standards set forth in the Animal Medicinal Drug Use Clarification Act (AMDUCA) Many types of bioidentical hormone replacement therapy Patients who require multiple medications combined in various doses IV compounding in hospitals In hospitals, pharmacists and pharmacy technicians often make compounded sterile preparations (CSPs) using manual methods. The error rate for manually compounded sterile IV products is high. The Institute for Safe Medication Practices (ISMP) has expressed concern with manual methods, particularly the error-prone nature of the syringe pull-back method of verifying sterile preparations. To increase accuracy, some U.S. hospitals have adopted IV workflow management systems and robotic compounding systems. These technologies use barcode scanning to identify each ingredient and gravimetric weight measurement to confirm the proper dose amount. The workflow management systems incorporate software to guide pharmacy technicians through the process of preparing IV medications. The robotic systems prepare IV syringes and bags in an ISO Class 5 environment, and support sterility and dose accuracy by removing human error and contamination from the process. Regulation in Australia In Australia the Pharmacy Board of Australia is responsible for registration of pharmacists and professional practice including compounding. Although almost all pharmacies are able to prepare at least simple compounded medicines, some pharmacy staff undertake further training and education to be able to prepare more complex products. Although pharmacists who have undertaken further training to do complex compounding are not yet easily identified, the Board has been working to put a credentialing system in place. In 2011 the Pharmacy Board convened a Compounding Working Party to advise on revised compounding standards. Draft compounding guidelines for comment were released in April 2014. Pharmacists must comply with current guidelines or may be sanctioned by the Board. Both sterile and non-sterile compounding are legal provided the compounding is done for therapeutic use in a particular patient, and the compounded product is supplied on or from the compounding pharmacy. 
There are additional requirements for sterile compounding. Not only must a laminar flow cabinet [laminar flow hood] be used, but the environment in which the hood is located must be strictly controlled for microbial and particulate contamination and all procedures, equipment and personnel must be validated to ensure the safe preparation of sterile products. In non-sterile compounding, a powder containment hood is required when any hazardous material (e.g. hormones) are prepared or when there is a risk of cross-contamination of the compounded product. Pharmacists preparing compounded products must comply with these requirements and others published in the Australian Pharmaceutical Formulary & Handbook. Regulation in the United States In the United States, compounding pharmacies are licensed and regulated by states. National standards have been created by Pharmacy Compounding Accreditation Board (PCAB), however, obtaining accreditation is not mandatory and inspections for compliance occur only every three years. The Food and Drug Administration (FDA) has authority to regulate "manufacturing" of pharmaceutical products–which applies when drug products are not made or modified as to be tailored in some way to the individual patient–regardless of whether this is done at a factory or at a pharmacy. In the Drug Quality and Security Act (DQSA) of 2013 (H.R. 3204), Congress amended the Federal Food, Drug, and Cosmetic Act (FFDCA) to clarify limits of FDA jurisdiction over patient-specific compounding, and to provide an optional pathway for "non-traditional" or bulk compounders to operate. The law established that pharmacies compounding only "patient-specific" preparations made in response to a prescription (503A pharmacies) cannot be required to obtain FDA approval for such products, as they will remain exclusively under state-level pharmacy regulation. At the same time, section 503B of the law regulates "outsourcing facilities" which conduct bulk compounding or are used as outsourcing for compounding by other pharmacies. These outsourcing facilities can be explicitly authorized by the Food and Drug Administration under specified circumstances, while being exempted from certain requirements otherwise imposed on mass-producers. In any pharmacy, compounding is not permitted for a drug product that is "essentially a copy" of a mass-produced drug product, however outsourcing pharmacies are subject to a broader definition of "essentially a copy". For traditional/patient-specific compounding, 503A's definition of "copy" retains its original focus on drug products or ultimate dosage forms rather than drug substances or active ingredients, and in any event it explicitly excludes from its definition any compounded drug product that a given patient's prescribing practitioner determines makes a "significant difference" for the patient. 
The FDA weighs the following factors in deciding whether it has authority to "exercise its discretion" to require approval for a custom-compounded drug product: Compounding in anticipation of receiving prescriptions Compounding drugs removed from the market for safety reasons Compounding from bulk ingredients not approved by FDA Receiving, storing, or using drugs not made in an FDA-registered facility Receiving, storing, or using drugs' components not determined to meet compendia requirements Using commercial-scale manufacturing or testing equipment Compounding for third parties for resale Compounding drugs that are essentially the same as commercially available products Failing to operate in conformance with applicable state law Outsourcing facilities The DQSA amended the FFDCA to create a new class of FDA-regulated entities known as "outsourcing facilities" whose compounding activities "may or may not" be patient-specific based on individualized prescriptions. Registered outsourcing facilities, unlike traditional compounding facilities, are subject to the FDA's oversight. In addition to being subjected to Food and Drug Administration inspections, registration, fees, and specified reporting requirements, other requirements of outsourcing facilities include: Drugs are compounded by or under the direct supervision of a licensed pharmacist The facility does not compound using "bulk drug substances" (unless certain exceptions apply) and its drugs are manufactured by an FDA-registered establishment Other ingredients used in compounding the drug must comply with the standards of the applicable United States Pharmacopeia or National Formulary monograph, if a monograph exists The drug does not appear on a list published by FDA of unsafe or ineffective drugs The drug is not "essentially a copy" of one or more marketed drugs (as defined uniquely in section 503B, notably more broadly and with narrower exclusions than for "traditional" compounding) The drug does not appear on the FDA list of drugs or categories of drugs that present "demonstrable difficulties" for compounding The compounding pharmacist demonstrates that he or she will use controls comparable to the controls applicable under any applicable risk evaluation and mitigation strategy (REMS) The drug will not be sold or transferred by an entity other than the outsourcing facility The label of the drug states that it is a compounded drug, as well as the name of the outsourcing facility, the lot or batch number of the drug, dosage form and strength, and other key information Drug testing and reporting of incidents Poor practices on the part of drug compounders can result in contamination of products, or products that do not meet their stated strength, purity, or quality. Unless a complaint is filed or a patient is harmed, drugs made by compounders are seldom tested. In Texas, one of only two states that does random testing, significant problems have been found. Random tests by the state's pharmacy board over the last several years have found that as many as one in four compounded drugs was either too weak or too strong. In Missouri, the only other state that does testing, potency varied by as much as 300 percent. 
In 2002, the Food and Drug Administration, concerned about the rising number of accidents related to compounded medications, identified "red flag" factors and issued a guide devoted to human pharmacy compounding, These factors include instances where pharmacists are: Compounding drug products that have been pulled from the market because they were found to be unsafe or ineffective Compounding drugs that are essentially copies of a commercially available drug product Compounding drugs in advance of receiving prescriptions, except in very limited quantities relating to the amounts of drugs previously compounded based on valid prescriptions Compounding finished drugs from bulk active ingredients that aren't components of FDA-approved drugs, without an FDA-sanctioned, investigational new-drug application Receiving, storing, or using drug substances without first obtaining written assurance from the supplier that each lot of the drug substance has been made in an FDA-registered facility Failing to conform to applicable state law regulating the practice of pharmacy New England Compounding Center incident In October 2012 news reports surfaced of an outbreak of fungal meningitis tied to the New England Compounding Center, a pharmacy which engaged in bulk compounding. At that time it was also disclosed that the United States and Massachusetts state health regulators were aware in 2002 that steroid treatments from the New England Compounding Center could cause adverse patient reactions. It was further disclosed that in 2001–02, four people died, more than a dozen were injured and hundreds exposed after they received back-pain injections tainted with a common fungus dispensed by two compounding pharmacies in California and South Carolina. In August 2013 further reports tied to the New England compounding center said that about 750 people were sickened, including 63 deaths, and that infections were linked to more than 17,600 doses of methylprednisolone acetate steroid injections used to treat back and joint pain that were shipped to 23 states. At that time, another incident was reported after at least 15 people at two Texas hospitals developed bacterial infections. All lots of medications dispensed since May 9, 2013, made by Specialty Compounding, LLC of Cedar Park, Texas were recalled. The hospitals reported affected were Corpus Christi Medical Center Bay Area and Corpus Christi Medical Center Doctors Regional. The patients had received intravenous infusions of calcium gluconate, a drug used to treat calcium deficiencies and too much potassium in the blood. Implicated in these cases is the Rhodococcus bacteria, which can cause symptoms such as fever and pain. Misuse prompting regulatory changes The FDA, among others, claims that larger compounding pharmacies act like drug manufacturers and yet circumvent FDA regulations under the banner of compounding. Drugs from compounding pharmacies can be cheaper or alleviate shortages, but can pose greater risk of contamination due in part to the lack of oversight. "Non-traditional" compounders behave like drug manufacturers in some cases by having sales teams that market non-personalized drug products or production capability to doctors, by making drugs that are essentially the same as commercially available mass-produced drug products, or by preparing large batches of a given drug product in anticipation of additional prescriptions before actually receiving them. 
An FDA spokesperson stated, "The methods of these companies seem far more consistent with those of drug manufacturers than with those of retail pharmacies. Some firms make large amounts of compounded drugs that are copies or near copies of FDA-approved, commercially available drugs. Other firms sell to physicians and patients with whom they have only a remote professional relationship." The head of the FDA has recently requested the following authority from Congress: Various ideas have been proposed to expand federal US regulation in this area, including laws making it easier to identify misuse or misnomered-use and/or stricter enforcement of the longstanding distinction between compounding versus manufacturing. Some US states have also taken initiatives to strengthen oversight of compounding pharmacies. A major source of opposition to new Food and Drug Administration regulation on compounding is makers of dietary supplements. See also Apothecary - the ancestral practitioner of compounding, and his shop Bioidentical hormone replacement therapy - Compounding is involved in the surrounding controversy New England Compounding Center meningitis outbreak Professional Compounding Centers of America References External links International Academy of Compounding Pharmacists International Journal of Pharmaceutical Compounding Drug Compounding: FDA Authority and Possible Issues for Congress from the Congressional Research Service and Federation of American Scientists Pharmacy |
No, this text is not related with defense topics | The bullroarer, rhombus, or turndun, is an ancient ritual musical instrument and a device historically used for communicating over great distances. It dates to the Paleolithic period; examples found in Ukraine date from 18,000 BC. Anthropologist Michael Boyd, a bullroarer expert, documents a number found in Europe, Asia, Africa, the Americas, and Australia. In ancient Greece it was a sacred instrument used in the Dionysian Mysteries and is still used in rituals worldwide. It was a prominent musical technology among the Australian Aboriginal people, used in ceremonies and to communicate with different people groups across the continent. Many different cultures believe that the sounds they make ward off evil influences. Design, use, and sound A bullroarer consists of a weighted airfoil (a thin rectangular slat of wood about 6 to 24 in (15 to 60 cm) long and about 0.5 to 2 in (1.3 to 5 cm) wide) attached to a long cord. Typically, the wood slat is trimmed down to a sharp edge around the edges, and serrations along the length of the wooden slat may or may not be used, depending on the cultural traditions of the region in question. The cord is given a slight initial twist, and the roarer is then swung in a large circle in a horizontal plane, or in a smaller circle in a vertical plane. The aerodynamics of the roarer will keep it spinning about its axis even after the initial twist has unwound. The cord winds fully first in one direction and then the other, alternating. It makes a characteristic roaring vibrato sound, with notable sound modulations occurring from the rotation of the roarer along its longitudinal axis and from the choice of whether a shorter or longer length of cord is used to spin the bullroarer. By modifying the expansiveness of its circuit and the speed given it, and by changing the plane in which the bullroarer is whirled from horizontal to vertical or vice versa, the modulation of the sound produced can be controlled, making the coding of information possible. Audio/visual demonstration: sound modulation by changing orbital plane. The low-frequency component of the sound travels extremely long distances, clearly audible over many miles on a quiet night. Various cultures have used bullroarers as musical, ritual, and religious instruments and long-range communication devices for at least 19,000 years. In culture North American Indian bullroarers include the Navajo tsin ndi'ni' ("groaning stick"; Young, R. & Morgan, W., An Analytical Lexicon of Navajo, University of New Mexico Press, 1992, p. 461), the Apache tzi-ditindi ("sounding wood"), and the Gros Ventre nakaantan ("making cold"). This instrument has been used by numerous early and traditional cultures in both the northern and southern hemispheres, but in the popular consciousness it is perhaps best known for its use by Australian Aborigines (it is from one of their languages that the name turndun comes). Henry Cowell wrote a composition for two violins, viola, two celli, and two bullroarers. A bullroarer featured in the Kate Bush Before The Dawn concerts in London in 2014. Australian Aboriginal culture Bullroarers have been used in initiation ceremonies and in burials to ward off evil spirits, bad tidings, and especially women and children. Bullroarers are considered secret men's business by all or almost all Aboriginal tribal groups, and hence forbidden for women, children, non-initiated men, or outsiders to even hear.
Fison and Howitt documented this in "Kamilaroi and Kurnai" (page 198). Anyone caught breaching the imposed secrecy was to be punished by death. They are used in men's initiation ceremonies, and the sound they produce is considered in some indigenous cultures to represent the sound of the Rainbow Serpent. In the cultures of southeastern Australia, the sound of the bullroarer is the voice of Daramulan, and a successful bullroarer can only be made if it has been cut from a tree containing his spirit. The bullroarer can also be used as a tool in Aboriginal art. Bullroarers have sometimes been referred to as "wife-callers" by Australian Aborigines. A bullroarer is used by Paul Hogan in the 1988 film Crocodile Dundee II. John Antill included one in the orchestration of his ballet Corroboree (1946). See: Corroboree. An Australian band Midnight Oil included a recording of an imitation bullroarer on their album Diesel and Dust (1987) at the beginning of the song "Bullroarer". In an interview, band's drummer Rob Hirst stated "it's a sacred instrument... only initiated men are supposed to hear those sounds. So we didn't use a real bullroarer as that would have been cultural imperialism. Instead we used an imitation bullroarer that school kids in Australia use. It is a ruler with a piece of rope wrapped around it." Ancient Greece In Ancient Greece, bullroarers were especially used in the ceremonies of the cult of Cybele. A bullroarer was known as a rhombos (literally meaning "whirling" or "rumbling"), both to describe its sonic character and its typical shape, the rhombus. (Rhombos also sometimes referred to the rhoptron, a buzzing drum). Britain and Ireland In Britain and Ireland, the bullroarer—under a number of different names and styles—is used chiefly for amusement, although formerly it may have been used for ceremonial purposes. In parts of Scotland it was known as a "thunder-spell" and was thought to protect against being struck by lightning. In the Elizabeth Goudge novel Gentian Hill (1949), set in Devon in the early 19th century, a bullroarer figures as a toy cherished by Sol, an elderly farm labourer, who being mute, uses it occasionally to express strong emotion; however, the sound it makes is perceived as being both eerie and unlucky by two other characters, who have an uneasy sense that ominous spirits of the air ("Them") are being invoked by its whirring whistle. Scandinavia Scandinavian Stone Age cultures used the bullroarer. In 1991, the archeologists Hein B. Bjerck and Martinius Hauglid found a 6.4 cm-long piece of slate that turned out to be a 5000-year-old bullroarer (called a brummer in Scandinavia). It was found in Tuv in northern Norway, a place that was inhabited in the Stone Age. Mali The Dogon use bullroarers to announce the beginning of ceremonies conducted during the Sigui festival held every sixty years over a seven-year period. The sound has been identified as the voice of an ancestor from whom all Dogon are descended. Māori culture (New Zealand) The pūrerehua is a traditional Māori bullroarer. Its name comes from the Māori word for moth. Made from wood, stone or bone and attached to a long string, the instruments were traditionally used for healing or making rain. Native North American Almost all the native tribes in North America used bullroarers in religious and healing ceremonies and as toys. There are many styles. North Alaskan Inupiat bullroarers are known as imigluktaaq or imigluktaun and described as toy noise-maker of bone or wood and braided sinew (wolf-scare). 
Banks Island Eskimos were still using bullroarers in 1963, when a 59-year-old woman named Susie scared off four polar bears armed only with three seal hooks whirled as bullroarers, accompanied by her vocals. Aleut, Eskimo and Inuit used bullroarers occasionally as children's toys or musical instruments, but preferred drums and rattles. Pomo The inland Pomo tribes of California used bullroarers as a central part of the xalimatoto or Thunder ceremony. Four male tribe members, accompanied by a drummer, would spin bullroarers made from cottonwood, imitating the sound of a thunderstorm. Native South American Shamans of the Amazon basin, for example in Tupi, Kamayurá and Bororo cultures, used bullroarers as musical instruments for rituals. In Tupian languages, the bullroarer is known as hori hori. See also Buzzer (whirligig) References Other sources Franciscan Fathers. An Ethnologic Dictionary of the Navaho Language. Saint Michaels, Arizona: Navajo Indian Mission (1910). Haddon, Alfred C. The Study of Man. New York: G.P. Putnam's Sons (1898). Lang, A. "Bull-roarer", in J. Hastings, Encyclopedia of Religion and Ethics II, pp. 889–890 (1908–1927). Kroeber, A.L. "Ethnology of the Gros Ventre", Anthropological Papers of the American Museum of Natural History, pp. 145–283. New York: Published by Order of the Trustees (1908). Powell, J.W. (Director). Ninth Annual Report of the Bureau of Ethnology to the Secretary of the Smithsonian Institution 1887–'88. Washington, D.C.: Government Printing Office (1892). Hart, Mickey. Planet Drum: A Celebration of Percussion and Rhythm, pp. 154–155. New York: HarperCollins (1991). Battaglia, R., Sopravvivenze del rombo nelle Province Venete (con 7 illustrazioni), Studi e Materiali di Storia delle Religioni 1 (1925), pp. 190–217. Rotating and whirling aerophones Australian Aboriginal bushcraft History of telecommunications Australian Aboriginal music Australian musical instruments Sacred musical instruments Anthropology of religion Magic (supernatural) Folklore Religious objects Objects believed to protect from evil Amulets Talismans
No, this text is not related with defense topics | Historical classification groups the various history topics into different categories according to subject matter as shown below. Meta-history Philosophy of history By geographic region World Africa Americas Asia Europe Oceania Antarctica By geographic subregion North America South America Latin America Central America Pre-Columbian Mesoamerica Caribbean Eurasia History of Europe Prehistoric Europe Classical antiquity Late Antiquity Middle Ages Early modern period Modern Europe Central Asia South Asia East Asia Southeast Asia Middle East Ancient Near East Australasia (Australia, New Guinea, Micronesia, Melanesia, Polynesia) Pacific Islands By date Centuries Decades Periodization List of named time periods List of timelines By time period Prehistory Ancient history Modern world See also Periodization. By religion History of religion History of Christianity History of Islam Jewish history History of Buddhism Hinduism History of Hinduism By nation History of extinct nations and states By field Cultural movements Diaspora studies Family history Environmental history Local history Maritime history Microhistory Confederation Social History Urban History Mathematics and the hard sciences History of mathematics History of science and technology History of astronomy History of physics History of chemistry History of geology History of biology History of medicine History of mental illness Social sciences History of art History of astrology History of cinema History of economic thought/Economic history History of ideas History of literature History of music History of philosophy History of sexuality History of theatre Intellectual history Legal history Microhistory Military history By ideological classification (historiography) Although there is arguably some intrinsic bias in history studies (with national bias perhaps being the most significant), history can also be studied from ideological perspectives, which practitioners feel are often ignored, such as: Marxist historiography Feminist history (also called herstory). A form of historical speculation known commonly as counterfactual history has also been adopted by some historians as a means of assessing and exploring the possible outcomes if certain events had not occurred or had occurred in a different way. This is somewhat similar to the alternate history genre in fiction. Lists of false or dubious historical resources and historical myths that were once popular and widespread, or have become so, have also been prepared. classifications History-related lists History |
No, this text is not related with defense topics | The numerical response in ecology is the change in predator density as a function of change in prey density. The term numerical response was coined by M. E. Solomon in 1949. It is associated with the functional response, which is the change in a predator's rate of prey consumption with change in prey density. As Holling notes, total predation can be expressed as a combination of functional and numerical response. The numerical response has two mechanisms: the demographic response and the aggregational response. The numerical response is not necessarily proportional to the change in prey density, usually resulting in a time lag between prey and predator populations. For example, there is often a scarcity of predators when the prey population is increasing. Demographic response The demographic response consists of changes in the rates of predator reproduction or survival due to changes in prey density. The increase in prey availability translates into higher energy intake and reduced energy output. This is different from an increase in energy intake due to increased foraging efficiency, which is considered a functional response. This concept can be articulated with the predator equation of the Lotka-Volterra predator-prey model, dP/dt = acVP - mP, where: a = conversion efficiency: the fraction of prey energy assimilated by the predator and turned into new predators; P = predator density; V = prey density; m = predator mortality; c = capture rate. Demographic response consists of a change in dP/dt due to a change in V and/or m. For example, if V increases, then predator growth rate (dP/dt) will increase. Likewise, if energy intake increases (due to greater food availability) and energy output decreases (from less foraging), then predator mortality (m) will decrease and predator growth rate (dP/dt) will increase. In contrast, the functional response consists of a change in conversion efficiency (a) or capture rate (c). The relationship between available energy and reproductive effort can be explained with life history theory in the trade-off between fecundity and growth/survival. If an organism has more net energy, then the organism will sacrifice less energy dedicated to survival per reproductive effort and will therefore increase its reproduction rate. In parasitism, functional response is measured by the rate of infection or laying of eggs in the host, rather than the rate of prey consumption as it is measured in predation. Numerical response in parasitism is still measured by the change in the number of adult parasites relative to the change in host density. Parasites can demonstrate a more pronounced numerical response to changes in host density, since there is often a more direct connection (less time lag) between food and reproduction, in that both needs are immediately satisfied by the parasite's interaction with the host. Aggregational response The aggregational response, as defined by Readshaw in 1973, is a change in predator population due to immigration into an area with an increased prey population. In an experiment conducted by Turnbull in 1964, he observed the consistent migration of spiders from boxes without prey to boxes with prey. He proved that hunger impacts predator movement. Riechert and Jaeger studied how predator competition interferes with the direct correlation between prey density and predator immigration. One way this can occur is through exploitation competition: the differential efficiency in use of available resources, for example, an increase in spiders' web size (functional response).
The other possibility is interference competition, where site owners actively prevent other foragers from coming into the vicinity. Ecological relevance The concept of numerical response becomes practically important when trying to create a strategy for pest control. The study of spiders as a biological mechanism for pest control has driven much of the research on aggregational response. Antisocial predator populations that display territoriality, such as spiders defending their web area, may not display the expected aggregational response to increased prey density. A credible, simple alternative to the Lotka-Volterra predator-prey model and its common prey-dependent generalizations is the ratio-dependent or Arditi-Ginzburg model. The two are the extremes of the spectrum of predator interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka-Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio-dependent extreme, so if a simple model is needed one can use the Arditi-Ginzburg model as the first approximation. References Ecology
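As a quick numerical illustration of the demographic numerical response discussed above, here is a minimal Python sketch that integrates the Lotka-Volterra predator-prey equations with simple Euler steps. The prey growth rate r and all parameter values are illustrative assumptions, not values taken from the article; the predator line implements dP/dt = acVP - mP with the symbols defined above.

# Minimal Euler-step sketch of the Lotka-Volterra predator-prey model.
# All parameter values, including the prey growth rate r, are assumed for illustration.
r = 1.0    # prey intrinsic growth rate (assumed)
c = 0.1    # capture rate
a = 0.5    # conversion efficiency
m = 0.4    # predator mortality

V, P = 10.0, 2.0   # initial prey and predator densities
dt = 0.001         # time step
n_steps = 60_000   # simulate 60 time units
print_every = 5_000

for step in range(n_steps + 1):
    if step % print_every == 0:
        print(f"t={step * dt:5.1f}  prey={V:8.2f}  predators={P:7.2f}")
    dV = (r * V - c * V * P) * dt        # prey: growth minus predation losses
    dP = (a * c * V * P - m * P) * dt    # predator: demographic numerical response
    V, P = V + dV, P + dP

Running the sketch shows predator density rising only after prey density has already climbed, which is the time lag between prey and predator populations that the article describes.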
No, this text is not related with defense topics | Real world data (RWD) in medicine is data derived from a number of sources that are associated with outcomes in a heterogeneous patient population in real-world settings, including but not limited to electronic health records, health insurance claims and patient surveys. While no universal definition of real world data exists, researchers typically understand RWD as distinct from data sourced from randomized clinical trials. Real world data (RWD) in healthcare Real-world data refer to observational data as opposed to data gathered in an experimental setting such as a randomized controlled trial (RCT). They are derived from electronic health records (EHRs), claims and billing activities, product and disease registries, etc. A systematic scoping review of the literature suggests data quality dimensions and methods with RWD is not consistent in the literature, and as a result quality assessments are challenging due to the complex and heterogeneous nature of these data. The sources of RWD are only rarely interoperable, as each hospital-maintained EHR system is, by design, secured for patient privacy. Healthcare providers responsible for entering patient data into their EHR may agree to pooling that data with others, once it has been de-identified in accordance with privacy regulations such as HIPAA or GDPR. The result is a larger, more heterogenous population for research, where trends and statistical associations may be more apparent. Results from analysis on aggregated RWD can inform the design of clinical study protocols or advance post-approval research. Real world evidence (RWE) When working with RWD, the goal is often to generate evidence. The term real world evidence (RWE) is highly related to RWD. RWE is defined by FDA as "clinical evidence regarding the usage and potential benefits or risks of a medical product derived from analysis of RWD". An example of a study utilizing RWE is "Clinical Features and Outcomes of Coronavirus Disease 2019 Among People Who Have HIV in the United States: A Multi-center Study From a Large Global Health Research Network (TriNetX)" In this study, Covid-19 outcomes were compared between people with HIV and HIV-negative controls from a database of de-identified health records. The TriNetX platform allowed the researchers to consider the HIV and HIV-negative subjects in incidence of hospitalizations, ICU admissions, ventilation and severe disease, to understand the impact Covid-19 infection has on those with HIV. Regional context US context In December 2018, the FDA published a framework for Real World Evidence program. EU context In 2018, the EMA published a discussion paper on the use of patient disease registries for regulatory purposes (methodological and operational considerations). See also 21st Century Cures Act (US) Correlation does not imply causation Qualitative research Quantitative research Sentinel Initiative References Citations Sources Real-World Evidence—What Is It and What Can It Tell Us? The New England Journal of Medicine, Dec. 6, 2016 Mahajan, Rajiv. “Real World Data: Additional Source for Making Clinical Decisions.” International Journal of Applied and Basic Medical Research 5.2 (2015): 82. PMC. Web. 5 May 2018. External links "Real World Evidence" at FDA Real world data at TriNetX, LLC Evidence Health informatics Clinical research |
No, this text is not related with defense topics | In art criticism of the 1960s and 1970s, flatness described the smoothness and absence of curvature or surface detail of a two-dimensional work of art. Views Critic Clement Greenberg believed that flatness, or two-dimensional, was an essential and desirable quality in painting, a criterion which implies rejection of painterliness and impasto. The valorization of flatness led to a number of art movements, including minimalism and post-painterly abstractionism. Modernism of the arts happened during the second half of the 19th century and extended into most of the 20th. This period of art is identified by art forms consisting of an image on a flat two-dimensional surface. This art evolution began in the 1860s and culminated 50 years later. By this time almost all three-dimensional works had been eliminated. This new approach to painting was to create a visual appearance of realism. Looking at a surface with only two-dimensions our perception of depth is an illusion. The reduction of depth in painting was the consequence of investigation. This new essence of self-analysis attempted to establish an experience or effect from the viewer of the painting. Terminology and history The term flatness can be used to describe much of the popular American art work of the 1950s and 1960s. The art of this period had a basic yet colorful design that held a degree of two dimensional form. Thus the term flatness is used to describe this medium. The groundwork idea for Minimalism began in Russia in 1913 when Kazimir Malevich placed a black square on a white background claiming that: “Art no longer cares to serve the state and religion it no longer wishes to illustrate the history of manners, it wants to have nothing further to do with the object as such, and believes that it can exist in and for itself without things.” One of the first Minimalism artworks was created in 1964 by Dan Flavin. He produced a neon sculpture titled Monument for V. Tatlin. This work was a simplistic assembly of neon tubes that were not carved or constructed in any way. The idea was that they were not supposed to symbolize anything but to just merely exist. The Minimalist approach to art was to conceive by the mind before execution. Traditional modes of art composition were rejected in favor of improvisation, spontaneity and automatism. This new expressionist style consisted of improvised pattern making where every stroke of the brush was viewed as expression and subjective freedom. Pop Art This concept inspired a whole new art form called Pop Art. It retained the color scheme and simplicity of Minimalism, but it borrowed images from pop culture to become relatable. The works now in question held a meaning for the viewer with familiar imagery but it still retained the avant-garde approach of Minimalism. Pop Art is a well-recognized movement in 1960s culture. This type of art was very free-form fashionable and rebellious. It was wild and colorful but many works retained the idea of two dimensional flatness. Op Art Pop Art fell out of fashion and a new movement came into being. Op Art or Optic Art was now the latest trend in home décor and fashion. This form of modern art shares a strong relationship with the culture thought and design of the 1960s. This new art form focused on non-objective painting that focused on design, color, form, and line. These paintings were hand-drawn or created with a mechanical aid. 
They featured a flat-looking two-dimensional design that could appear to pop out in an almost three-dimensional form. Some pieces look as if they are moving due to shape and line placement creating a trick of the eye. This form of art was created to test the limits of the conscious perception of the viewer. Bright colors were no longer favored, as much of Op Art is black and white with little use of color. The designs presented migrated back to the Minimalist idea of art simply existing and not representing an ideal. A well-noted artist of this style is Bridget Riley, who shaped the contemporary art scene from the early 50s through the 70s. Her works are designed to pull the eye in such a way as to stretch and disorder the perceptual sense. She is considered a ground-breaking artist in the realm of modern art. Her work mainly consists of detailed line and circle patterns that create an optical challenge for the viewer. Riley composes her art with the thought in mind that we all have a narrow view of how we see things and that our vision is rarely stretched to new abilities. Her work confronts the observer with new imaginative sensations, and the artist's overall purpose disappears and is replaced with what the viewer conceives. This form of art has no clearly outlined theme; it therefore allows total freedom for the viewer's imagination. Riley's work ignores the object and instead focuses on visual movement to create a seemingly endless pattern. Riley's art appeared in the fashion of that era. Similar patterns still remain popular in clothing today. References Painting
No, this text is not related with defense topics | The eavesdrop or eavesdrip is the width of ground around a house or building which receives the rain water dropping from the eaves. By an ancient Anglo-Saxon law, a landowner was forbidden to erect any building at less than two feet from the boundary of his land, and was thus prevented from injuring his neighbour's house or property by the dripping of water from his eaves. The law of Eavesdrip had its equivalent in the Roman stillicidium, which prohibited building up to the very edge of an estate. See also Eaves-drip burial References Architecture |
No, this text is not related with defense topics | The Penman equation describes evaporation (E) from an open water surface, and was developed by Howard Penman in 1948. Penman's equation requires daily mean temperature, wind speed, air pressure, and solar radiation to predict E. Simpler hydrometeorological equations continue to be used where obtaining such data is impractical, to give comparable results within specific contexts, e.g. humid vs arid climates. Details Numerous variations of the Penman equation are used to estimate evaporation from water and land. Specifically, the Penman–Monteith equation refines weather-based potential evapotranspiration (PET) estimates of vegetated land areas. It is widely regarded as one of the most accurate estimation models. The original equation was developed by Howard Penman at the Rothamsted Experimental Station, Harpenden, UK. The equation for evaporation given by Penman is: Emass = (m·Rn + ρa·cp·δe·ga) / (λv·(m + γ)) where: m = slope of the saturation vapor pressure curve (Pa K−1), Rn = net irradiance (W m−2), ρa = density of air (kg m−3), cp = heat capacity of air (J kg−1 K−1), δe = vapor pressure deficit (Pa), ga = momentum surface aerodynamic conductance (m s−1), λv = latent heat of vaporization (J kg−1), γ = psychrometric constant (Pa K−1), which (if the SI units in parentheses are used) will give the evaporation Emass in units of kg/(m2·s), kilograms of water evaporated every second for each square meter of area. Remove λv to see that this is fundamentally an energy balance. Replace λv with Lv to get familiar precipitation units ETvol, where Lv = λv·ρwater. This has units of m/s, or more commonly mm/day, because it is flux m3/s per m2 = m/s. This equation assumes a daily time step, so that net heat exchange with the ground is insignificant, and a unit area surrounded by similar open water or vegetation, so that net heat and vapor exchange with the surrounding area cancels out. Sometimes people replace Rn with A, the total net available energy, when the situation warrants accounting for additional heat fluxes. Temperature, wind speed, and relative humidity impact the values of m, ga, cp, ρa, and δe. Shuttleworth (1993) In 1993, W. Jim Shuttleworth modified and adapted the Penman equation to use SI units, which made calculating evaporation simpler. The resultant equation is: Emass = (m·Rn + γ·6.43·(1 + 0.536·U2)·δe) / (λv·(m + γ)) where: Emass = evaporation rate (mm day−1), m = slope of the saturation vapor pressure curve (kPa K−1), Rn = net irradiance (MJ m−2 day−1), γ = psychrometric constant ≈ 0.0016286·P/λv (kPa K−1), with P the atmospheric pressure in kPa, U2 = wind speed (m s−1), δe = vapor pressure deficit (kPa), λv = latent heat of vaporization (MJ kg−1). Note: this formula implicitly includes the division of the numerator by the density of water (1000 kg m−3) to obtain evaporation in units of mm d−1. Some useful relationships δe = (es - ea) = (1 – relative humidity) es; es = saturated vapor pressure of air, as is found inside plant stomata; ea = vapor pressure of free-flowing air; es, mmHg = exp(21.07 - 5336/Ta), an approximation by Merva, 1975; therefore m = des/dTa = (5336/Ta^2)·exp(21.07 - 5336/Ta), mmHg/K; Ta = air temperature in kelvins. See also Pan evaporation Evapotranspiration Thornthwaite model Blaney–Criddle equation Penman–Monteith equation References Jarvis, P.G. (1976) The interpretation of the variations in leaf water potential and stomatal conductance found in canopies in the field. Phil. Trans. R. Soc. Lond. B. 273, 593–610. Neitsch, S.L.; J.G. Arnold; J.R. Kiniry; J.R. Williams. 2005. Soil and Water Assessment Tool Theoretical Document; Version 2005. Grassland, Soil and Water Research Laboratory; Agricultural Research Service.
and Blackland Research Center; Texas Agricultural Experiment Station. Temple, Texas. https://web.archive.org/web/20090116193356/http://www.brc.tamus.edu/swat/downloads/doc/swat2005/SWAT%202005%20theory%20final.pdf Penman, H.L. (1948): Natural evaporation from open water, bare soil and grass. Proc. Roy. Soc. London A(194), pp. 120–145. Agronomy Equations Hydrology
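To show how the Shuttleworth (1993) form given above can be applied, here is a small Python sketch. The helper fits for saturation vapour pressure, its slope, and the latent heat of vaporization are standard textbook approximations (not the Merva relation quoted in the article), the default pressure of 101.3 kPa and the example inputs are arbitrary assumptions, and the sketch is meant as an illustration rather than a reference implementation.

import math

def latent_heat_vaporization(temp_c):
    """Latent heat of vaporization, MJ/kg (standard linear fit)."""
    return 2.501 - 0.002361 * temp_c

def saturation_vapor_pressure(temp_c):
    """Saturation vapour pressure es, kPa (Tetens-type fit)."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def slope_svp_curve(temp_c):
    """Slope m of the saturation vapour pressure curve, kPa/K."""
    return 4098.0 * saturation_vapor_pressure(temp_c) / (temp_c + 237.3) ** 2

def shuttleworth_evaporation(temp_c, rn, u2, rel_humidity, pressure_kpa=101.3):
    """Open-water evaporation in mm/day after Shuttleworth (1993).
    temp_c: mean air temperature (degC); rn: net irradiance (MJ m-2 day-1);
    u2: wind speed (m/s); rel_humidity: relative humidity as a fraction (0-1)."""
    lam = latent_heat_vaporization(temp_c)
    m = slope_svp_curve(temp_c)
    gamma = 0.0016286 * pressure_kpa / lam                              # psychrometric constant, kPa/K
    delta_e = (1.0 - rel_humidity) * saturation_vapor_pressure(temp_c)  # vapour pressure deficit, kPa
    return (m * rn + gamma * 6.43 * (1.0 + 0.536 * u2) * delta_e) / (lam * (m + gamma))

# Arbitrary example: 20 degC, 15 MJ/m2/day net radiation, 2 m/s wind, 60% relative humidity
print(round(shuttleworth_evaporation(20.0, 15.0, 2.0, 0.60), 2), "mm/day")

With these example numbers the sketch returns a value of a few millimetres per day, which is the order of magnitude expected for open-water evaporation under mild conditions.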
No, this text is not related with defense topics | Bioproducts engineering or bioprocess engineering refers to the engineering of bioproducts from renewable bioresources. This pertains to the design and development of processes and technologies for the sustainable manufacture of bioproducts (materials, chemicals and energy) from renewable biological resources. Bioproducts engineers harness the molecular building blocks of renewable resources to design, develop and manufacture environmentally friendly industrial and consumer products. From biofuels, renewable energy, and bioplastics to paper products and "green" building materials such as bio-based composites, bioproducts engineers are developing sustainable solutions to meet the world's growing materials and energy demand. Conventional and emerging bioproducts are the two broad categories used to classify bioproducts. Examples of conventional bio-based products include building materials, pulp and paper, and forest products. Examples of emerging bioproducts or biobased products include biofuels, bioenergy, starch-based and cellulose-based ethanol, bio-based adhesives, biochemicals, biodegradable plastics, etc. Bioproducts engineers play a major role in the design and development of "green" products including biofuels, bioenergy, biodegradable plastics, biocomposites, building materials, paper and chemicals. Bioproducts engineers also develop energy-efficient, environmentally friendly manufacturing processes for these products as well as effective end-use applications. Bioproducts engineers play a critical role in a sustainable 21st-century bio-economy by using renewable resources to design, develop, and manufacture the products we use every day. The career outlook for bioproducts engineers is very bright, with employment opportunities in a broad range of industries, including pulp and paper, alternative energy, renewable plastics, and other fiber, forest products, building materials and chemical-based industries. Also commonly referred to as bioprocess engineering, the field is a specialization of biotechnology, biological engineering, chemical engineering or agricultural engineering. It deals with the design and development of equipment and processes for the manufacturing of products such as food, feed, pharmaceuticals, nutraceuticals, chemicals, and polymers and paper from biological materials. Bioprocess engineering combines mathematics, biology and industrial design, and covers areas such as the design and study of fermentors (modes of operation, etc.). It also deals with studying the various biotechnological processes used in industry for large-scale production of biological products, with the aim of optimizing the yield and the quality of the end product. Bioprocess engineering may include the work of mechanical, electrical and industrial engineers, who apply principles of their disciplines to processes based on the use of living cells or subcomponents of such cells.
See also Biochemicals Biofact (biology) Biogas Biomass Biomass (ecology) Biorefining Bioresource engineering Forest Non-timber forest product Outline of forestry Colleges and universities University of Minnesota (Bioproducts and Biosystems Engineering) SUNY-ESF (Bioprocess Engineering Program) Université de Sherbrooke UC Berkeley Savannah Technical College East Carolina University Institute of Chemical Technology (ICT) Mumbai Jadavpur University Universidade Federal do Rio de Janeiro Universidade Estadual do Rio Grande do Sul University of Stellenbosch NC State University Virginia Tech Washington State University University of Washington University of Maine References Further reading Bowyer, J.L., Ramaswamy, S. Bioenergy development: Alignment is essential, Part 1, Bioenergy Technologies Tappi Publication, January 2009, p14-17 Bowyer, J.L., Ramaswamy, S. Bioenergy development: Alignment is essential for Bioenergy Development, Part II, Exploring possible scenarios resulting from a supply gap, and possible effects of bioenergy development in environmental quality Tappi Publication, March 2009, p16-19 External links U.S. DOE Biomass Program Energy Title (Title IX) of the Farm Security and Rural Investment Act of 2002 Sustainable agriculture Biotechnology Sustainable business Sustainable forest management |
No, this text is not related with defense topics | Arven Pharmaceuticals is a Turkish pharmaceutical corporation headquartered in Istanbul, established as a subsidiary of the Toksöz Group in 2013. Arven’s primary focus is the development and production of high-technology inhaler and biotechnology products. The company specializes in difficult-to-make products and strives to develop quality products. Arven is the first Turkish company developing biosimilars for global markets, including the USA and the EU. Arven obtained a marketing authorization in 2016 for its biosimilar of Filgrastim, marketed as Fraven, which is the first biosimilar drug developed and manufactured from cell to final product in Turkey. Additionally, Arven is the first Turkish company to design and develop a patented dry powder inhaler (DPI) device, under the Arvohaler trademark, and it introduced globally the Cyplos (salmeterol/fluticasone) product inhaled with the Arvohaler device. Some Ministry of Health institutions visited the company regarding its vaccine manufacturing potential during the COVID-19 pandemic. History In 2007 the Toksöz Group launched investments to contribute to the development of biotechnological products and the advancement of the pharmaceutical field in Turkey. A Biotechnology Division was first established in the Sanovel Silivri facility within the same year, and research and development work was initiated to produce Turkey’s first biosimilar product. In the following years, the Toksöz Group brought together high-technology inhaler and biotechnology products under a separate legal entity, “Arven İlaç San ve Tic. A.Ş.” Continuing its investments, the group then decided in 2013 to build a dedicated factory for the manufacturing of Arven products. Facilities Arven Kırklareli factory As of 2020, Arven employs 200 staff in its factory and R&D facility. The factory was built in the Kırklareli Industrial Zone close to the western borders of Turkey and started operations after obtaining a manufacturing license from the Pharmaceutical and Medical Devices Agency of Turkey in 2017. The Arven Kırklareli factory was constructed on a 30,000-square-meter site. Arven R&D Center The R&D facility in Selimpaşa, Istanbul was established in the same period as the manufacturing facility in Kırklareli. Its R&D Center certificate was granted in 2017 by the Ministry of Industry, and the center started to develop new products. The Arven R&D Center is currently carrying out research and development activities on inhaler (respiratory) medicines and biosimilar drugs. Biotechnology The research and development activity on biotechnological and biosimilar drugs is divided into two main groups: microbial drugs and mammalian cell culture-based drugs. The biotechnology team at the Arven R&D Center comprises Microbial Manufacturing, Mammalian Production, Biotechnology R&D and Biotechnology Quality Control divisions. A biosimilar development project generally includes the following basic steps: characterization of the reference product, cell line development, analytical method development, process development, head-to-head comparability studies, manufacturing of the product at different scales for toxicology studies and clinical trials, stability studies, animal studies, and phase 1 clinical trials.
The facility is utilized for upstream processes, including inoculation, fermentation, cell disruption and harvesting, and downstream processes, including advanced-technology filtration and chromatographic purification techniques, and has the capacity to manufacture bulk products in controlled, GMP-classified areas. Following the bulk manufacture of products, the final product is obtained through syringe filling under aseptic conditions in GMP areas using validated processes. It was the first GMP-certified biotechnology manufacturing area in Turkey. Fraven, a biosimilar of Filgrastim, is the first biosimilar drug developed from cell to finished product in Turkey and was approved by the Turkish Ministry of Health in April 2016. The manufacturing protocol of the product was successfully patented before the European Patent Office. This success has also led the company to move its R&D projects towards more complicated biomolecules, such as monoclonal antibodies. Inhaler Arven was the first Turkish pharmaceutical company to develop a medical device and a corresponding anti-asthmatic product as a dry powder inhaler (DPI) that complies with international guidelines and regulations, including those of the WHO, ICH, FDA and EMA. The company has new R&D projects on other inhalable molecules in the pipeline. Salmeterol/fluticasone propionate combined doses are manufactured as 50 mcg/500 mcg, 50 mcg/250 mcg and 50 mcg/100 mcg inhalation powder forms. Development of the Cyplos Arvohaler products started in 2006, the authorization step in Turkey was completed in 2011, and the products were launched on the Turkish market in 2012. Product development first started with the design and development of a plastic inhalation device (Arvohaler) in 2006, with 100% domestic capital in Turkey. All research and design studies were conducted internally by a team of people from the R&D, IP and other related departments of Arven. Development stages were completed together with a local mold and device manufacturer with a clean room in Turkey. Today, after the development stage, Arvohaler is a worldwide-patented multi-unit dose dry powder inhalation (DPI) device developed by Arven. The device itself is protected in Turkey, the EU, the US and many other countries globally by a number of patent families. Arvohaler consists of 18 components, including 16 plastic parts and 2 stainless steel springs. All material selection and documentation of the development stages were performed according to international guidelines and regulations, including those of the WHO, ICH, FDA and EMA. As a finished product, Cyplos Arvohaler is manufactured in the DPI Production Unit located in Arven’s GMP-approved production facilities under clean-room conditions. Production consists of the following stages. Production of the plastic inhaler device Arvohaler: Arvohaler plastic components are produced from medical-grade plastic materials at a well-equipped device manufacturer under clean-room conditions. Production of the drug product Cyplos Arvohaler: weighing, mixing, blister filling of the drug product, and coiling of blister strips into the device. Final processing, including final assembly, labeling and secondary packaging. Every stage of production, as well as the finished product, is controlled by in-process and chemical tests. Products Fraven (Filgrastim) Cyplos Arvohaler (salmeterol/fluticasone) Tutast Arvohaler (tiotropium bromide) Patents Arven has filed 571 patent applications before the patent offices since its foundation.
298 of the patents belong to the inhaler device (Arvohaler) and the corresponding formulation technology inventions. Arven’s innovator inhaler device, Arvohaler, has obtained patent grants from the European Patent Office, the USPTO, the Japan Patent Office and the Chinese State Patent Office. Additionally, the first biotechnology drug patent application in Turkey was filed by Arven Pharmaceuticals. Arven has 23 granted European patents and ranks tenth among all Turkish companies in terms of the number of registered European patents. References Biotechnology Pharmaceutical industry Pharmaceutical companies of Turkey
No, this text is not related with defense topics | In the subject area of control theory, an internal model is a process that simulates the response of the system in order to estimate the outcome of a system disturbance. The internal model principle was first articulated in 1976 by B. A. Francis and W. M. Wonham as an explicit formulation of the Conant and Ashby good regulator theorem. It stands in contrast to classical control, in that the classical feedback loop fails to explicitly model the controlled system (although the classical controller may contain an implicit model). The internal model theory of motor control argues that the motor system is controlled by the constant interactions of the “plant” and the “controller.” The plant is the body part being controlled, while the internal model itself is considered part of the controller. Information from the controller, such as information from the central nervous system (CNS), feedback information, and the efference copy, is sent to the plant, which moves accordingly. Internal models can be controlled through either feed-forward or feedback control. Feed-forward control computes its input into a system using only the current state and its model of the system. It does not use feedback, so it cannot correct for errors in its control. In feedback control, some of the output of the system can be fed back into the system's input, and the system is then able to make adjustments or compensate for errors from its desired output. Two primary types of internal models have been proposed: forward models and inverse models. In simulations, models can be combined to solve more complex movement tasks. Forward models In their simplest form, forward models take the input of a motor command to the “plant” and output a predicted position of the body. The motor command input to the forward model can be an efference copy, as seen in Figure 1. The output from that forward model, the predicted position of the body, is then compared with the actual position of the body. The actual and predicted position of the body may differ due to noise introduced into the system by either internal (e.g. body sensors are not perfect, sensory noise) or external (e.g. unpredictable forces from outside the body) sources. If the actual and predicted body positions differ, the difference can be fed back as an input into the entire system again so that an adjusted set of motor commands can be formed to create a more accurate movement. Inverse models Inverse models use the desired and actual position of the body as inputs to estimate the necessary motor commands which would transform the current position into the desired one. For example, in an arm reaching task, the desired position (or a trajectory of consecutive positions) of the arm is input into the postulated inverse model, and the inverse model generates the motor commands needed to control the arm and bring it into this desired configuration (Figure 2). Inverse internal models are also closely connected with the uncontrolled manifold hypothesis (UCM). Combined forward and inverse models Theoretical work has shown that in models of motor control, when inverse models are used in combination with a forward model, the efference copy of the motor command output from the inverse model can be used as an input to a forward model for further predictions (a toy numerical sketch of this forward/inverse pairing follows the references below).
For example, if, in addition to reaching with the arm, the hand must be controlled to grab an object, an efference copy of the arm motor command can be input into a forward model to estimate the arm's predicted trajectory. With this information, the controller can then generate the appropriate motor command telling the hand to grab the object. It has been proposed that if they exist, this combination of inverse and forward models would allow the CNS to take a desired action (reach with the arm), accurately control the reach and then accurately control the hand to grip an object. Adaptive Control theory With the assumption that new models can be acquired and pre-existing models can be updated, the efference copy is important for the adaptive control of a movement task. Throughout the duration of a motor task, an efference copy is fed into a forward model known as a dynamics predictor whose output allows prediction of the motor output. When applying adaptive control theory techniques to motor control, efference copy is used in indirect control schemes as the input to the reference model. Scientists A wide range of scientists contribute to progress on the internal model hypothesis. Michael I. Jordan, Emanuel Todorov and Daniel Wolpert contributed significantly to the mathematical formalization. Sandro Mussa-Ivaldi, Mitsuo Kawato, Claude Ghez, Reza Shadmehr, Randy Flanagan and Konrad Kording contributed with numerous behavioral experiments. The DIVA model of speech production developed by Frank H. Guenther and colleagues uses combined forward and inverse models to produce auditory trajectories with simulated speech articulators. Two interesting inverse internal models for the control of speech production were developed by Iaroslav Blagouchine & Eric Moreau. Both models combine the optimum principles and the equilibrium-point hypothesis (motor commands λ are taken as coordinates of the internal space). The input motor command λ is found by minimizing the length of the path traveled in the internal space, either under the acoustical constraint (the first model), or under the both acoustical and mechanical constraints (the second model). The acoustical constraint is related to the quality of the produced speech (measured in terms of formants), while the mechanical one is related to the stiffness of the tongue's body. The first model, in which the stiffness remains uncontrolled, is in agreement with the standard UCM hypothesis. In contrast, the second optimum internal model, in which the stiffness is prescribed, displays the good variability of speech (at least, in the reasonable range of stiffness) and is in agreement with the more recent versions of the uncontrolled manifold hypothesis (UCM). There is also a rich clinical literature on internal models including work from John Krakauer, Pietro Mazzoni, Maurice A. Smith, Kurt Thoroughman, Joern Diedrichsen, and Amy Bastian. References Motor control Neuroscience Control theory |
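To make the forward/inverse pairing described above concrete, the following Python sketch simulates a toy one-dimensional plant whose true gain differs from the controller's internal estimate. It is purely illustrative and not drawn from the article or from any of the cited models: the plant equation, the gain values and all function names are the editor's assumptions.

```python
# Minimal, illustrative sketch of a forward and an inverse internal model for a
# toy one-dimensional plant x[t+1] = x[t] + k * u[t]. The true gain k is not
# known exactly to the controller; every name and number here is an assumption.

K_TRUE = 0.8    # actual gain of the body/plant (unknown to the controller)
K_MODEL = 1.0   # controller's internal estimate of that gain


def inverse_model(x_current: float, x_desired: float) -> float:
    """Inverse model: estimate the motor command needed to reach the desired position."""
    return (x_desired - x_current) / K_MODEL


def forward_model(x_current: float, u_efference_copy: float) -> float:
    """Forward model: predict the next position from the efference copy of the command."""
    return x_current + K_MODEL * u_efference_copy


def plant(x_current: float, u: float) -> float:
    """The actual body ("plant"), which responds with the true, imperfectly known gain."""
    return x_current + K_TRUE * u


if __name__ == "__main__":
    x, target = 0.0, 1.0
    for step in range(5):
        u = inverse_model(x, target)        # command issued by the inverse model
        predicted = forward_model(x, u)     # prediction made from the efference copy
        x = plant(x, u)                     # what the body actually does
        error = x - predicted               # sensory prediction error (reported here)
        print(f"step {step}: command={u:+.3f}  predicted={predicted:+.3f}  "
              f"actual={x:+.3f}  prediction error={error:+.3f}")
```

Running the loop shows the actual position converging on the target over a few steps even though the internal gain estimate is wrong, because each new command is computed from the sensed position; the printed prediction error is the forward model's mismatch signal, which in fuller treatments is what would drive updating of the internal model itself.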
No, this text is not related with defense topics | Vagina dentata (Latin for toothed vagina) describes a folk tale in which a woman's vagina is said to contain teeth, with the associated implication that sexual intercourse might result in injury, emasculation, or castration for the man involved. The topic of "vagina dentata" may also cover a rare medical condition affecting the vagina, in which case it is more accurately termed a vaginal dermoid cyst. In folklore Such folk stories are frequently told as cautionary tales warning of the dangers of unknown women and to discourage rape. The psychologist Erich Neumann wrote that in one such myth, "...a fish inhabits the vagina of the Terrible Mother; the hero is the man who overcomes the Terrible Mother, breaks the teeth out of her vagina, and so makes her into a woman." South America The legend also appears in the mythology of the Chaco and Guiana tribes of South America. In some versions, the hero leaves one tooth. North America The Ponca-Otoe tell a story in which Coyote outwits a wicked old woman who placed teeth in the vaginas of her daughter and another young woman she kept prisoner, in order to seduce, kill, and rob young men. Coyote kills the woman and her daughter but marries the other young woman, after knocking out the teeth in her vagina "except for one blunt tooth that was very thrilling when making love". Hinduism In Hinduism, the asura Andhaka, son of Shiva and Parvati (but not aware of it), is killed by Shiva when he tries to force the disguised Shiva into surrendering Parvati. Andhaka's son Adi, also an asura, takes the form of Parvati to seduce and kill Shiva with a toothed vagina in order to avenge Andhaka, but is also slain. Ainu legends The Ainu legend is that a sharp-toothed demon hid inside the vagina of a young woman and emasculated two young men on their wedding nights. Consequently, the woman sought help from a blacksmith who fashioned an iron phallus to break the demon's teeth. Māori mythology In Māori mythology, the trickster Māui tries to grant mankind immortality by reversing the birth process, turning into a worm and crawling into the vagina of Hine-nui-te-pō, the goddess of night and of death, and out through her mouth while she sleeps. His trick is ruined when a fantail (pīwakawaka) laughs at the sight of his entry, awakening Hine-nui-te-pō, who bites the worm to death with her obsidian vaginal teeth. Western Asia Arabs from South-Eastern Iran and islands in the Strait of Hormuz have a legend about Menmendas, a creature that looks like a beautiful young woman with spikes on her thighs. She walks in the coastal mountains with a small box of jewels and attracts every man on her way. Menmendas goes with an attracted man into an empty house, puts the box of jewels under her head and lies down with her legs spread. If the man understands who this woman is, he can cast a fistful of sand in her eyes and run away with the box. If the man is attracted and aroused, the woman cuts him in half with her legs. Metaphorical usage In her book Sexual Personae (1991), Camille Paglia wrote: "The toothed vagina is no sexist hallucination: every penis is made less in every vagina, just as mankind, male and female, is devoured by mother nature." In his book The Wimp Factor, Stephen J. Ducat expresses a similar view, that these myths express the threat sexual intercourse poses for men who, although entering triumphantly, always leave diminished.
In popular culture In the novel Snow Crash by Neal Stephenson, the vagina of Y.T., a female character, is equipped with a dentata, a device which injects a powerful soporific to whatever penetrates it, in order to prevent rape. The folk tale is the basis for the 2007 American comedy horror film Teeth, written and directed by Mitchell Lichtenstein. In the film, Jess Weixler plays Dawn O'Keefe, a teenage spokesperson for a Christian abstinence group, who has vagina dentata and employs it to fight back against rape and sexual abuse. Medical In rare instances, dermoid cysts (a type of tumor) may grow in the vagina. Dermoid cysts are formed from the outer layers of embryonic skin cells. These cells are able to mature into many different types of tissues, and these cysts are able to form anywhere the skin is or where the skin folds inwards to become another organ, such as in the ear or the vagina. However, when dermoid cysts occur in the vagina, they are covered by a layer of normal vaginal tissue and therefore appear as a lump, not as recognizable teeth. See also References External links Article at BBC - h2g2 Folklore Latin words and phrases Sexual urban legends Vagina Fictional body parts |
No, this text is not related with defense topics | Hospital medicine is a medical specialty that exists in some countries as a branch of internal or family medicine, dealing with the care of acutely ill hospitalized patients. Physicians whose primary professional focus is caring for hospitalized patients only while they are in the hospital are called hospitalists. Originating in the United States, this type of medical practice has extended into Australia and Canada. The vast majority of physicians who refer to themselves as hospitalists focus their practice upon hospitalized patients. Hospitalists are not necessarily required to have separate board certification in hospital medicine. The term hospitalist was first coined by Robert Wachter and Lee Goldman in a 1996 New England Journal of Medicine article. The scope of hospital medicine includes acute patient care, teaching, research, and executive leadership related to the delivery of hospital-based care. Hospital medicine, like emergency medicine, is a specialty organized around the location of care (the hospital), rather than an organ (like cardiology), disease (like oncology), or a patient’s age (like pediatrics). The emergence of hospital medicine in the United States can be compared and contrasted with the parallel development of acute medicine in the United Kingdom, reflecting health system differences. Training Hospitalists are physicians with a Doctor of Medicine (M.D.), Doctor of Osteopathic Medicine (D.O.), or a Bachelor of Medicine/Bachelor of Surgery (MBBS/MBChB) degree. Most hospitalists practicing in hospitals in the United States lack board certification in hospital medicine. To address this, residency programs are starting to develop hospitalist tracks with more tailored education. Several universities have also started fellowship programs specifically geared toward hospital medicine. According to the State of Hospital Medicine Survey by the Medical Group Management Association and the Society of Hospital Medicine, 89.60% of hospitalists specialize in general internal medicine, 5.5% in a pediatrics subspecialty, 3.7% in family practice and 1.2% in internal medicine pediatrics. Data from the survey also reported that 53.5% of hospitalists are employed by hospitals/integrated delivery system and 25.3% are employed by independent hospitalists groups. According to recent data, there are more than 50,000 hospitalists practicing in approximately 75% of U.S. hospitals, including all highly ranked academic medical centers. Australia In Australia, Hospitalists are career hospital doctors; they are generalist medical practitioners whose principal focus is the provision of clinical care to patients in hospitals; they are typically beyond the internship-residency phase of their career, but have decidedly chosen as a conscious career choice not to partake in vocational-specialist training to acquire fellowship specialist qualification. Whilst not specialists, these clinicians are nonetheless experienced in their years of medical practice, and depending on their scope of practice, they typically work with a reasonable degree of independence and autonomy under the auspices of their specialist colleagues and supervisors. Hospitalists form a demographically small but important workforce of doctors in hospitals across Australia where on-site specialist coverage is otherwise unavailable. Hospitalists are typically employed in a variety of public and private hospital settings on a contractual or salaried basis. 
Depending on their place of employment and duties, the responsibilities and remuneration of non-specialist hospitalists usually fall somewhere between those of registrars and consultants. Despite the common trend for clinicians to specialise nowadays, non-specialist hospitalist clinicians have an important role in filling shortages in the medical workforce, especially when specialist coverage or access is unavailable, in areas of need, or where after-hours or on-site medical care is required. These clinicians are employed across Australia in a variety of environments which include Medical & Surgical Wards, Intensive Care Units and Emergency Departments. Nonetheless, these clinicians work closely and continually consult with the relevant attending specialists on-call; that is, final responsibility and care for the patient ultimately still rests with the attending specialist. They are also known as: Career Medical Officers (CMO), Senior Medical Officers (SMO) and Multi-skilled Medical Officers (MMO). Hospitalists are represented by the Australian Medical Association (AMA), Australasian Society of Career Medical Officers (ASCMO) and Australian Salaried Medical Officers Federation (AMSOF). Despite being non-specialist clinicians, they are still required to meet continuing professional development requirements and frequently attend courses facilitated by these organisations and hospitals to keep their practice and skill sets up-to-date alongside their specialist registered colleagues. Canada In Canada, there are currently no official residency programs specializing in hospital medicine. Nevertheless, some universities, such as McGill University in Montreal, have developed family medicine enhanced-skills programs focused on hospital medicine. This program, which is available to practicing physicians and family medicine residents, has a duration of six or twelve months. The main goal behind the program is to prepare medical doctors with training in family practice to assume shared care roles with other specialists, such as cardiologists, neurologists, and nephrologists, in a hospital setting. Moreover, the program prepares family physicians by giving them the set of skills required to care for their more complicated hospitalized patients. History Hospital medicine is a relatively new phenomenon in American medicine and as such is the fastest growing specialty in the history of medicine. Almost unheard of a generation ago, this type of practice arose from three powerful shifts in medical practice: Nearly all states, as well as the national residency accreditation organizations, the Accreditation Council for Graduate Medical Education (ACGME) and the American Osteopathic Association (AOA), have established limitations on house staff duty hours, the number of hours that interns and residents can work. Many hospitalists are coming to perform the same tasks formerly performed by residents, although a physician in this role is usually referred to as a house officer rather than a hospitalist. The fundamental difference between a hospitalist and a house officer is that the hospitalist is the attending physician of a patient while that patient is hospitalized. The house officer admits the patient for another attending physician and cares for that patient until the attending physician can see the patient. Most primary care physicians are experiencing a shrinking role in hospital care.
Many primary care physicians find they can generate more revenue in the office during the hour or more they would have spent on inpatient rounds, including traveling to and from the hospital. In addition to patient care duties, hospitalists are often involved in developing and managing aspects of hospital operations such as inpatient flow and quality improvement. The formation of hospitalist training tracks in residency programs has been driven in part by the need to educate future hospitalists about business and operational aspects of medicine, as these topics are not covered in traditional residencies. Certification As a relatively new specialty, only recently has certification for specialty experience and training for hospital medicine been offered. The American Board of Hospital Medicine (ABHM), a Member Board of the American Board of Physician Specialties (ABPS), was founded in 2009. The ABHM was North America’s first board of certification devoted exclusively to hospital medicine. In September 2009, the American Board of Internal Medicine (ABIM) created a program that provides general internists practicing in hospital settings the opportunity to maintain Internal Medicine Certification with a Focused Practice in Hospital Medicine (FPHM). Quality initiatives Research shows that hospitalists reduce the length of stay, treatment costs and improve the overall efficiency of care for hospitalized patients. Hospitalists are leaders on several quality improvement initiatives in key areas including transitions of care, co-management of patients, reducing hospital acquired diseases and optimizing the care of patients. Employment The number of available hospitalists positions grew exponentially from 2006 to 2010 but has since then leveled off. However, the job market still remained very active with some hospitals maintaining permanent openings for capable hospitalists. Salaries are generally very competitive, averaging almost $230,000 per year for adult hospitalists. Hospitalists who are willing to work night shifts only (nocturnists) are generally compensated higher than their day shift peers. Related terminology Though hospital medicine is a young field, there have been attempts at further division of labor in the field. A nocturnist is a hospitalist who typically covers the twelve-hour shift at night and admits patients as well as receives calls about already admitted patients. A proceduralist is generally defined as a hospitalist who primarily does procedures in the hospital such as central venous catheter insertions, lumbar punctures, and paracenteses. A neurohospitalist cares for hospitalized patients with or at risk for neurological problems. A surgicalist is a surgeon who specializes and focuses on surgical care in the hospital setting. The following are other commonly used (negative) nicknames: An admitologist or admitter is a hospitalist who only admits patients and does not round on the already admitted ones, or discharge the admitted patients. A dischargologist A rounder is a hospitalist who only sees the already admitted patients. See also Society of Hospital Medicine Obstetric hospitalist Lists of hospitals References Further reading "What Is a Hospitalist? A Guide for Family Caregivers", free consumer guide available in four languages External links American Board of Hospital Medicine Society of Hospital Medicine American Board of Internal Medicine Hospitals |
No, this text is not related with defense topics | The ecosystem approach is a conceptual framework for resolving ecosystem issues. The idea is to protect and manage the environment through the use of scientific reasoning. Another aim of the ecosystem approach is to preserve the Earth and its inhabitants from potential harm or permanent damage to the planet itself. By preserving and managing the planet through an ecosystem approach, future monetary and planetary gains become by-products of sustaining or increasing the capacity of a particular environment. This is possible because the ecosystem approach incorporates humans, the economy, and ecology into the solution of any given problem. The initial idea for an ecosystem approach came to light during the second meeting (November 1995) of the Conference of the Parties (COP), where it was the central topic of the implementation framework for the Convention on Biological Diversity (CBD); the meeting further elaborated the ecosystem approach as the use of various methodologies for solving complex issues. Through the use and incorporation of ecosystem approaches, two similar terms have since been created: ecosystem-based management and ecosystem management. The Convention on Biological Diversity treats ecosystem-based management as a supporting concept for the ecosystem approach. Similarly, ecosystem management differs only slightly from the other two terms. Conceptually, the differences between the three terms come from the framework structure and the different methods used in solving complex issues. The key component shared by the three terms is the concept of conservation and protection of the ecosystem. The ecosystem approach has been incorporated into the management of water, land, and living-organism ecosystems, advocating the nourishment and sustainment of those ecological spaces. Since the ecosystem approach is a conceptual model for solving problems, the key idea can be applied to a variety of problems. History On December 29, 1993, the Convention on Biological Diversity (CBD) entered into force as a multilateral treaty, with the purpose of achieving: conservation of biodiversity, sustainability of species diversity, and promotion of genetic diversity (e.g. maintaining livestock, crops, and wildlife). Two years after the CBD entered into force, during the second meeting of the Conference of the Parties (November 1995), the representatives of the treaty parties agreed to employ a strategy to address intricate and actively changing ecosystems. The ecosystem approach would serve as the common framework for obtaining knowledge and creating countermeasures to prevent the endangerment of any ecological environment. With the acknowledgment of the ecosystem approach, during the fifth meeting of the Conference of the Parties, a consensus was reached that a concrete definition and elaboration of the ecosystem approach were needed, and the Parties requested the Subsidiary Body on Scientific, Technical and Technological Advice (SBSTTA) to create a guideline with 12 principles and a description of the ecosystem approach. The final results are given in the COP 5 Decision V/6 summary. During the seventh meeting of the Conference of the Parties, further iteration of the ecosystem approach was seen as a priority; during the meeting the parties agreed that new implementation and strategic developments could be incorporated with the ecosystem approach into the CBD.
Furthermore, creating a new relationship between sustainable forest management and the ecosystem approach was discussed. All topics and discussion regarding the seventh meeting are given in the COP 7 Decision VII/11 summary. Ecosystem approach and management With the development and use of the ecosystem approach, different variations of that form have been created and used. The two are ecosystem management and ecosystem-based management; the framework of the three methods is still the same (the conservation and protection of the ecosystem). The distinguishing part begins with how the approach to solving the problem is initiated. Ecosystem-based management (EBM) is used for projects that incorporate interaction at different levels: organisms, the ecosystem, and the human component; however, it varies from the other methods in that the scale of the problem is larger and more intricate. The objectives should be straightforward and condensed, with important systematic information. EBM also incorporates social and cultural aspects into the solution, not just scientific reasoning. With ecosystem management, the process is similar to EBM; however, factors such as socioeconomics and politics can impact the decision and solution. A cultural aspect is also considered when creating a solution. Ecosystem approach to fisheries The ecosystem approach is currently being used in the fields of environmental and ocean management (i.e. the ecosystem approach does not stop there; it is being used in various fields and sub-fields as well). The goal is to address the current problems facing those fields through the use of conceptual thinking and an approach that can determine a viable and sustainable solution. One particular example pertains to fishery (the commercial industry of capturing and selling fish). Inland fisheries have more than quintupled, from 2 million metric tons to 11 million metric tons, over the 60 years from 1950 to 2010. The relevant form of the ecosystem approach is the ecosystem approach to fisheries (EAF), sometimes referred to as ecosystem-based fisheries. EAF is seen as a framework for creating local strategies for each specific fishery ecosystem and implementing the new strategies gradually alongside already existing rules and regulations. If EAF is used successfully, fishery industries could generate substantial income as well as improve the fragile ecosystems of aquatic species. See also Ecosystem Convention on Biological Diversity The Conference of the Parties Ecosystem management Ecosystem-based management Fishery Ecosystem based fisheries References Ecology Natural resource management
No, this text is not related with defense topics | Photowalking is a communal activity of camera enthusiasts who gather in a group to walk around with a camera for the main purpose of taking pictures of things that interest them. The word is sometimes used incorrectly in the marketing title of a photography class or workshop. Although the term implies the single activity of taking pictures while walking, the more modern use of the term specifically relates to a communal activity of camera enthusiasts. The activity is typically organized by camera clubs, ad hoc gatherings from online forums such as Facebook or Twitter, or sponsored by commercial organizations or photographers such as the Scott Kelby's yearly Worldwide Photowalk, and the now-canceled yearly 500px Global Photowalk. Photowalks vs street photography Photowalking is sometimes compared to street photography, a type of documentary photography activity. However, although a person participating in a photowalk may practice street photography, they are not limited to that scope; they may also practice Macro photography, Architectural photography, Nature photography, etc. Also, street photography is typically an activity practiced as an individual photographer rather than in a group. History The activity of walking and photographing with a group of other photographers dates back more than 100 years. An early example of photography clubs is The Camera Club of New York, established in 1884. The Eastman Kodak Company of New York launched the Brownie camera in 1900. The camera sold for $1 and put photography in the hands of the average consumer. As photography became part of people's daily lives, photography clubs such as societies and associations, proliferated around the world to support this emerging technology. Adding to the confusion is that "photo walk" and "photowalk" are used interchangeably. Scott Kelby, one of the most well-known photowalking event leaders, has used both forms of the word since 2007. Social and cultural implications Because it is so tightly coupled with the social aspect of group photography, photowalking can have benefits other than just exercise and photography practice. For example, in some situations, there is safety in numbers and the photography experience can be more enjoyable in a group. Without tight controls and good organization, events can get out of control and in some cases involve the police. Many photowalk organizations also participate in charities. Since 2008, Scott Kelby's Worldwide Photo Walk has sponsored the Springs Of Hope Orphanage in Kenya. Other organizations donate money to the locations or museums they visit. References Photography by genre 2000s neologisms Walking |
No, this text is not related with defense topics | The Graduate School of Media Communication and Performing Arts (, or ALMED) is an Italian educational institution of Università Cattolica del Sacro Cuore. History Mario Apollonio was the founder of the School of Journalism and Audiovisual Media in Bergamo in 1961. Later, the school moved to Milan where it focused on teaching and research at the Università Cattolica del Sacro Cuore. It was renamed as the School of Specialization in Communications and offers degrees in studies of Journalism, Advertising and Entertainment. It was again renamed to the School of Specialization in Analysis and Communication Management in 1998. In 2002, it joined the system of the Postgraduate School's Cattolica with its current name. Courses The school offers master's degrees in: Musical communication Communication and marketing of film Communications, digital marketing and interactive advertising Master of Cultural Events (MEC): Design and planning of cultural events, art, cinema, entertainment Art Events: Planning of art, culture and design for cities, businesses and territories (organized in collaboration with the Polytechnic of Milan) The enterprise culture: management, finance, communication culture of the area Audiovisual production for film and digital media FareTV: management, development, communication Media relations and corporate communications Projects Magzine is the online newspaper written by the School of Journalism since 2002. Forty journalists work in the newsroom. References External links Università Cattolica del Sacro Cuore Graduate schools in Italy Universities and colleges in Milan Media studies Educational institutions established in 2002 2002 establishments in Italy |
No, this text is not related with defense topics | A plaquette (, small plaque) is a small low relief sculpture in bronze or other materials. These were popular in the Italian Renaissance and later. They may be commemorative, but especially in the Renaissance and Mannerist periods were often made for purely decorative purposes, with often crowded scenes from religious, historical or mythological sources. Only one side is decorated, giving the main point of distinction with the artistic medal, where both sides are normally decorated. Most are rectangular or circular, but other shapes are found, as in the example illustrated. Typical sizes range from about two inches up to about seven across a side, or as the diameter, with the smaller end or middle of that range more common. They "typically fit within the hand", as Grove puts it. At the smaller end they overlap with medals, and at the larger they begin to be called plaques. The form began in the 1440s in Italy, but spread across Europe in the next century, especially to France, Germany and the Low Countries. By about 1550 it had fallen from fashion in Italy, but French plaquettes were entering their best period, and there and in Germany they continued to be popular into the 17th century. The form continued to be made at a low level, with something of a revival from about 1850. They have always been closely related to the medal, and many awards today are in the form of plaquettes, but plaquettes were less restricted in their subject-matter than the medal, and allowed the artist more freedom. Usage The purpose and use of decorative plaquettes was evidently varied and remains somewhat unclear; their creation and use is relatively poorly documented. Some were mounted in furniture, boxes or other objects such as lamps, and many examples have holes for hanging on walls, added later. Other copies have three or four holes, for holding in a setting. Religious subjects in a pair or set might be set into the doors of tabernacles, and many were used for paxes, sometimes after being given a frame. Some shapes were designed for particular roles such as decorating sword hilts, though perhaps not all copies made were used in this way. Others were framed for hanging, but many were probably just kept and displayed loose, perhaps propped up on a shelf or desk, or in drawers or boxes. Many images show signs of wear. Devotional images were probably often carried around in a pocket, a habit that became common with crucifixes in Florence after a plague in 1373. A large part of the market was probably other artists and craftsmen looking for models for other forms. Plaquette bindings are leather bookbindings that incorporate plaquette casts in gesso, often of designs that are also found in metal. Plaquettes were also collected, and in particular 16th-century examples are often crowded with figures, making the scenes hard to read. They are best appreciated when held in the hand near a good light source, and were probably passed round when a collection was shown to fellow connoisseurs. The difficulty of reading the scenes, and an often obscure choice of subjects, suggest that a self-conscious display of classical learning was part of their appeal, for collectors and artists alike. They were one of the types of objects often found in the – normally male – environment of the studiolo and cabinet of curiosities, along with other small forms such as classical coins and engraved gems. 
The artists who made them tended to be either sculptors in bronze, also making small figures and objects such as inkwells, or goldsmiths, who often practised in the related field of engraving. They were relatively cheap and transportable, and were soon disseminated widely across Europe, offering an opportunity for artists to display their virtuosity and sophistication, and promote themselves beyond their own city. The same factors, combined with their modern display behind glass, make them relatively little appreciated today. The moulds were also sometimes re-used at considerable distances from their time and place of creation, or new moulds were made from a plaquette. German 17th-century plaquettes were still being used as models for silverware in Regency London. Plaquettes, like prints, played an important part in the diffusion of styles and trends in iconography, especially for classical subjects. Some drawings for plaquette designs survive; others copied prints, book illustrations and designs in other media, including classical engraved gems and sculpture. In Germany models in wood or limestone might be made. They were often made in sets, illustrating a story, or set of figures. Materials and technique As with medals, Renaissance plaquettes were normally made using the lost wax technique of casting, and numbers of copies were presumably normally made, although many now only survive in a unique copy, and perhaps never had others. The quality of individual castings can vary considerably, and the time and locations of individual castings from the same mould my vary considerably. Some designs can be shown to have had different generations of casts made from casts. Most are in bronze, but silver and gold, in solid or plated and gilded forms, are also found, as well as other metals. Often plaquettes with copies in precious metal also exist in bronze copies. In early 16th-century Nuremberg, which was the main German centre, plaquettes, like other metalwork types of objects, were often made in the relatively plebeian material of brass, even by top artists like the Vischer family and Peter Flötner. Lead was also used, especially in German castings intended as artisan's models rather than for collectors. From the 19th century on cast iron was also used, especially in Germany. In Italy lead was also used for an initial trial cast. The castings were normally not worked much further with tools, beyond polishing and often giving an artificial patina. History The word plaquette is a 19th-century invention by the French art historian Eugene Piot. Les Bronzes de la Renaissance. Les Plaquettes by Émile Molinier of 1886 was the first large study, and these two between them defined the form as it is understood today. To Renaissance Italians plaquettes were known, along with other similar types of objects, by a variety of somewhat vague terms such as piastra and medaglietti, rilievi, or modelli. Italy Plaquettes grew from two rather different Italian origins. In Rome in the 1440s and 1450s they began as a way of reproducing the designs of classical engraved gems, by taking a wax impression of them. The Venetian Pietro Barbo (1417–1471) became a cardinal when his uncle was elected Pope Eugenius IV in 1431. He became an enthusiastic pioneer of this form, maintaining a foundry in his new Palazzo Venezia, and perhaps participating in the casting himself. These plaquettes had the same small size and classical subject matter as the gems they replicated. 
Around the same time north Italian artists began making plaquettes, often much larger and with religious subject matter. Padua, already an important centre of metalworking, is seen by many historians as the crucial location. Two significant works, neither typical of later examples, were the self-portrait head by Leon Battista Alberti, oval and 20 cm high, and a slightly larger circular Madonna and Child with putti by Donatello (Victoria and Albert Museum, London). This remained highly unusual in that the reverse is concave and repeats the design. Other larger religious reliefs by Donatello were copied or adapted in a smaller plaquette format by other artists, probably including his own workshop. These grew out of a wider context of small religious images that represented mass-produced versions for the middle classes of the larger and unique religious art made for the rich and for churches. Also in the 1440s Pisanello was establishing the genre of the double-sided portrait medal, followed by Matteo de' Pasti and others. By the later decades of the century medals and plaquettes were being produced in most of the north Italian artistic centres. Significant later artists included Moderno (as he signed many of his works), who was very likely Galleazzo Mondella, a goldsmith from Verona recorded in Rome around 1500. Some 45 plaquettes are signed by or attributed to him (and hardly any medals), and a number of members of his workshop have been identified by their styles. Andrea Riccio, Giovanni Bernardi, Francesco di Giorgio Martini, Valerio Belli, and Leone Leoni, are among the artists to whom a clear name can be attached. Many significant unidentified masters are given notnames by art historians, such as Moderno and Master IO.F.F., who often signed their works. Belli and Bernardi were the leaders in the luxury form of small intaglios engraved in rock crystal, and several of these were reproduced in plaquette form around 1520–40, some cast from wax impressions taken off the crystals. Riccio was also a sculptor of small bronzes, and his plaquettes tended to have a relatively high relief. He had a large workshop and many followers. Germany German production began in Nuremberg, around 1500, but by 1600 Augsburg was the main centre. German examples tended to draw their designs from prints, and were in turn frequently reused in other media, and perhaps more often produced primarily as models for other trades. The repeated reuse of moulds, and their distribution far from their place of making, are especially typical of south German plaquettes. Even fewer of the artists involved are known than in Italy. Production lasted well into the 17th century, when it became involved in the "Dürer revival", with several of his prints being turned into plaquettes. France and the Netherlands Further north plaquettes were produced from around 1550, initially under influence more from Germany than Italy. Artists (often Huguenot in France) included Étienne Delaune, who mostly lived in Strasbourg, and François Briot from Lorraine. François Duquesnoy from Brussels worked as a sculptor in Rome from 1618, and influenced Flemish plaquettes. Later history The form saw a small revival in the 19th century; examples from this period are typically rather larger than in the Renaissance. Artists such as, in America, Augustus Saint-Gaudens and Emil Fuchs made commemorative portrait plaquettes of figures such as Leo Tolstoy and Mark Twain (both by Saint-Gaudens). 
Especially in France and Germany, commemorative plaquettes for industry and institutions involved a wide range of contemporary subject matter. A number of artists produced examples purely because they were attracted by the form, or the possibility of reaching a wider market. A number of regular awards by institutions chose the plaquette form, though often retaining "medal" in the name of the award. The circular so-called "death penny" (the Memorial Plaque) minted in the UK after World War I is a large twentieth-century commemorative example. Collections Many major museums have collections, which are not always given room in the gallery displays. The National Gallery of Art in Washington D.C., despite being essentially a collection of paintings, has what is recognised as the finest single collection, especially of Italian Renaissance work, which includes over 450 plaquettes, and is very well displayed on the ground floor. The Washington collection of medals, plaquettes and small bronzes includes the leading French collection assembled by Gustave Dreyfus (1837–1914), which was bought by Samuel H. Kress (1863–1955). In 1945 the Kress Foundation added over 1,300 bronzes collected by the British art dealer Lord Duveen, and donated all its collection to the museum in 1957. Joseph E. Widener had already given the museum a significant collection in 1942. The Wallace Collection in London has a good smaller display, as do the Victoria and Albert Museum, the Cabinet des médailles, Paris, the Hermitage Museum, the Ashmolean in Oxford, and a number of German museums, although the outstanding Berlin collection was lost in World War II. Not much of the British Museum's important collection is on display, nor that of the Vatican Museums. The Bargello in Florence has some 400 plaquettes, about half from the collection of the Medici family, who played an important role in the development of the form. Most of the rest are from the collection of Louis Carrand, who bequeathed it to Florence. After that of Drefus, this is the next most important collection assembled in Paris in the 19th century and still intact. Paris was then the centre of plaquette collecting. See also Royal Copenhagen 2010 plaquettes modern ceramic examples Notes References Bober, Phyllis Pray, review of Italian Plaquettes by Alison Luchs, Renaissance Quarterly, Vol. 44, No. 3 (Autumn, 1991), pp. 590–593, The University of Chicago Press on behalf of the Renaissance Society of America, Article DOI: 10.2307/2862612, JSTOR "Grove": "Plaquette" in The Grove Encyclopedia of Decorative Arts, Volume 1, Editor, Gordon Campbell, pp. 220–223, 2006, Oxford University Press, , 9780195189483, Google books Hayward, J.F., review of Deutsche, Niederländische und Französische Plaketten 1500–1650, 2 Vols by Ingrid Weber, The Burlington Magazine, Vol. 118, No. 884 (Nov., 1976), pp. 779–780, JSTOR Marks, P.J.M., Beautiful Bookbindings, A Thousand Years of the Bookbinder's Art, 2011, British Library, Palmer, Allison Lee, The Walters' "Madonna and Child" Plaquette and Private Devotional Art in Early Renaissance Italy, The Journal of the Walters Art Museum, Vol. 59, Focus on the Collections (2001), pp. 73–84, The Walters Art Museum, JSTOR Syson, Luke and Thornton, Dora, Objects of Virtue: Art in Renaissance Italy, 2001, Getty Trust Publications: J. Paul Getty Museum, , 9780892366576, google books Warren, Jeremy, Review of Placchette, secoli XV-XVIII nel Museo Nazionale del Bargello by Giuseppe Toderi, The Burlington Magazine, Vol. 138, No. 1125 (Dec., 1996), pp. 
832–833, JSTOR Wilson, Carolyn C., Renaissance Small Bronze Sculpture and Associated Decorative Arts, 1983, National Gallery of Art (Washington), Further reading Studies in the History of Art, Vol. 22, Symposium Papers IX: Italian Plaquettes (1989) External links European sculpture and metalwork, a collection catalogue from The Metropolitan Museum of Art Libraries (fully available online as PDF), which contains material on plaquettes (see index) Italian Renaissance Bronze sculptures Sculpture Award items |
No, this text is not related with defense topics | Déjà vu ( , ; "already seen") is a French loanword expressing when a person has done something and they experience the same feelings or the feeling that one has lived through the present situation before. Although some interpret déjà vu in a paranormal context, mainstream scientific approaches reject the explanation of déjà vu as "precognition" or "prophecy". It is an anomaly of memory whereby, despite the strong sense of recollection, the time, place, and practical context of the "previous" experience are uncertain or believed to be impossible. Two types of déjà vu are recognized: the pathological déjà vu usually associated with epilepsy or that which, when unusually prolonged or frequent, or associated with other symptoms such as hallucinations, may be an indicator of neurological or psychiatric illness, and the non-pathological type characteristic of healthy people, about two-thirds of whom have had déjà vu experiences. People who travel often or frequently watch films are more likely to experience déjà vu than others. Furthermore, people also tend to experience déjà vu more in fragile conditions or under high pressure, and research shows that the experience of déjà vu also decreases with age. Etymology The expression "sensation de déjà-vu" (sensation of déjà vu) was coined in 1876 by the French philosopher Émile Boirac (1851-1917), who used it in his book L’Avenir des sciences psychiques, it is now used internationally. Medical disorders Déjà vu is associated with temporal lobe epilepsy. This experience is a neurological anomaly related to epileptic electrical discharge in the brain, creating a strong sensation that an event or experience currently being experienced has already been experienced in the past. Migraines with aura are also associated with déjà vu. Early researchers tried to establish a link between déjà vu and mental disorders such as anxiety, dissociative identity disorder and schizophrenia but failed to find correlations of any diagnostic value. No special association has been found between déjà vu and schizophrenia. A 2008 study found that déjà vu experiences are unlikely to be pathological dissociative experiences. Some research has looked into genetics when considering déjà vu. Although there is not currently a gene associated with déjà vu, the LGI1 gene on chromosome 10 is being studied for a possible link. Certain forms of the gene are associated with a mild form of epilepsy, and, though by no means a certainty, déjà vu, along with jamais vu, occurs often enough during seizures (such as simple partial seizures) that researchers have reason to suspect a link. Pharmacology Certain drugs increase the chances of déjà vu occurring in the user, resulting in a strong sensation that an event or experience currently being experienced has already been experienced in the past. Some pharmaceutical drugs, when taken together, have also been implicated in the cause of déjà vu. Taiminen and Jääskeläinen (2001) reported the case of an otherwise healthy male who started experiencing intense and recurrent sensations of déjà vu upon taking the drugs amantadine and phenylpropanolamine together to relieve flu symptoms. He found the experience so interesting that he completed the full course of his treatment and reported it to the psychologists to write up as a case study. Because of the dopaminergic action of the drugs and previous findings from electrode stimulation of the brain (e.g. 
Bancaud, Brunet-Bourgin, Chauvel, & Halgren, 1994), Taiminen and Jääskeläinen speculate that déjà vu occurs as a result of hyperdopaminergic action in the medial temporal areas of the brain. Explanations Split perception explanation Déjà vu may happen if a person experiences the current sensory input twice in quick succession. The first input is brief, degraded, occluded, or received while the person is distracted. Immediately following that, the second perception might feel familiar because the person naturally relates it to the first input. One possibility behind this mechanism is that the first input involves shallow processing, which means that only some superficial physical attributes are extracted from the stimulus. Memory-based explanation Implicit memory Research has associated déjà vu experiences with good memory functions. Recognition memory enables people to realize that the event or activity they are experiencing has happened before. When people experience déjà vu, they may have their recognition memory triggered by certain situations which they have never encountered. The similarity between a déjà-vu-eliciting stimulus and an existing, or non-existing but different, memory trace may lead to the sensation that an event or experience currently being experienced has already been experienced in the past. Thus, encountering something that evokes the implicit associations of an experience or sensation that cannot be remembered may lead to déjà vu. In an effort to reproduce the sensation experimentally, Banister and Zangwill (1941) used hypnosis to give participants posthypnotic amnesia for material they had already seen. When this was later re-encountered, the restricted activation caused thereafter by the posthypnotic amnesia resulted in three of the 10 participants reporting what the authors termed "paramnesias". Researchers use two approaches to study feelings of previous experience: the processes of recollection and familiarity. Recollection-based recognition refers to an ostensible realization that the current situation has occurred before. Familiarity-based recognition refers to the feeling of familiarity with the current situation without being able to identify any specific memory or previous event that could be associated with the sensation. In 2010, O’Connor, Moulin, and Conway developed another laboratory analog of déjà vu based on two contrasting groups of carefully selected participants: a group under a posthypnotic amnesia (PHA) condition and a group under a posthypnotic familiarity (PHF) condition. The idea for the PHA group was based on the work of Banister and Zangwill (1941), and the PHF group was built on the research results of O’Connor, Moulin, and Conway (2007). They used the same puzzle game for both groups, "Railroad Rush Hour", a game in which one aims to slide a red car through the exit by rearranging and shifting the other trucks and cars blocking the road. After completing the puzzle, each participant in the PHA group received a posthypnotic suggestion under hypnosis to forget the game. Participants in the PHF group were not given the puzzle but instead received a posthypnotic suggestion under hypnosis that they would find the game familiar. After the hypnosis, all participants were asked to play the puzzle (a second time for the PHA group) and to report how playing it felt.
In the PHA condition, if a participant reported no memory of completing the puzzle game during hypnosis, researchers scored the participant as passing the suggestion. In the PHF condition, if participants reported that the puzzle game felt familiar, researchers scored the participant as passing the suggestion. In both the PHA and PHF conditions, five participants passed the suggestion and one did not, meaning 83.33% of each group passed. More participants in the PHF group felt a strong sense of familiarity, with comments such as "I think I have done this several years ago." Furthermore, more participants in the PHF group experienced a strong déjà vu, for example, "I think I have done the exact puzzle before." Three out of six participants in the PHA group felt a sense of déjà vu, and none of them experienced a strong sense of it. These figures are consistent with Banister and Zangwill's findings. Some participants in the PHA group related the familiarity they felt when completing the puzzle to an exact event that had happened before, which is more likely to be a phenomenon of source amnesia. Other participants started to realize that they may have completed the puzzle game during hypnosis, which is more akin to the phenomenon of breaching. In contrast, participants in the PHF group reported that they felt confused about the strong familiarity of the puzzle, with the feeling of playing it just sliding across their minds. Overall, the experiences of participants in the PHF group are more likely to resemble déjà vu in everyday life, while the experiences of participants in the PHA group are unlikely to be real déjà vu. A 2012 study in the journal Consciousness and Cognition, which used virtual reality technology to study reported déjà vu experiences, supported this idea. This virtual reality investigation suggested that similarity between a new scene's spatial layout and the layout of a previously experienced scene in memory (which fails to be recalled) may contribute to the déjà vu experience. When the previously experienced scene fails to come to mind in response to viewing the new scene, that previously experienced scene in memory can still exert an effect: that effect may be a feeling of familiarity with the new scene that is subjectively experienced as a feeling that an event or experience currently being experienced has already been experienced in the past, or of having been there before despite knowing otherwise. Cryptomnesia Another possible explanation for the phenomenon of déjà vu is "cryptomnesia", in which information learned is forgotten but nevertheless stored in the brain, and similar occurrences invoke the stored knowledge, leading to a feeling of familiarity, experienced as "déjà vu", because the current event seems to have already been experienced in the past. Some experts suggest that memory is a process of reconstruction, rather than a recollection of fixed, established events. This reconstruction comes from stored components, involving elaborations, distortions, and omissions. Each successive recall of an event is merely a recall of the last reconstruction. The proposed sense of recognition (déjà vu) involves achieving a good match between the present experience and the stored data. This reconstruction, however, may now differ so much from the original event that it is as though it had never been experienced before, even though it seems familiar.
Dual neurological processing In 1964, Robert Efron of Boston's Veterans Hospital proposed that déjà vu is caused by dual neurological processing arising from delayed signals. Efron found that the brain's sorting of incoming signals is done in the temporal lobe of the brain's left hemisphere. However, signals enter the temporal lobe twice before processing, once from each hemisphere of the brain, normally with a slight delay of milliseconds between them. Efron proposed that if the two signals were occasionally not synchronized properly, then they would be processed as two separate experiences, with the second seeming to be a re-living of the first. Dream-based explanation Dreams can also be used to explain the experience of déjà vu, and the two are related in three different ways. Firstly, some déjà vu experiences duplicate situations from dreams rather than from waking life, according to the survey done by Brown (2004): twenty percent of respondents reported that their déjà vu experiences were from dreams, and 40% reported that they were from both reality and dreams. Secondly, people may experience déjà vu because some elements of their remembered dreams reappear. Research done by Zuger (1966) supported this idea by investigating the relationship between remembered dreams and déjà vu experiences, and suggested that there is a strong correlation. Thirdly, people may experience déjà vu during a dream state, which links déjà vu with dream frequency. Related terms Jamais vu Jamais vu (from French, meaning "never seen") is any familiar situation which is not recognized by the observer. Often described as the opposite of déjà vu, jamais vu involves a sense of eeriness and the observer's impression of seeing the situation for the first time despite rationally knowing that they have been in the situation before. Jamais vu is more commonly explained as when a person momentarily does not recognize a word, person or place that they already know. Jamais vu is sometimes associated with certain types of aphasia, amnesia, and epilepsy. Theoretically, a jamais vu feeling in a sufferer of a delirious disorder or intoxication could result in a delirious explanation of it, such as in the Capgras delusion, in which the patient takes a known person for a false double or impostor. If the impostor is himself, the clinical setting would be the same as the one described as depersonalization; hence jamais vus of oneself or of the "reality of reality" are termed depersonalization (or surreality) feelings. The feeling has been evoked through semantic satiation. Chris Moulin of the University of Leeds asked 95 volunteers to write the word "door" 30 times in 60 seconds. Sixty-eight percent of the subjects reported symptoms of jamais vu, with some beginning to doubt that "door" was a real word. The experience has also been named "vuja de" and "véjà du". Déjà vécu Déjà vécu (from French, meaning "already lived") is an intense, but false, feeling of having already lived through the present situation. Recently, it has been considered a pathological form of déjà vu. However, unlike déjà vu, déjà vécu has behavioral consequences. Because of the intense feeling of familiarity, patients experiencing déjà vécu may withdraw from their current events or activities. Patients may justify their feelings of familiarity with beliefs bordering on delusion.
Presque vu Presque vu (from French, meaning "almost seen") is the intense feeling of being on the very brink of a powerful epiphany, insight, or revelation, without actually achieving the revelation. The feeling is therefore often associated with a frustrating, tantalizing sense of incompleteness or near-completeness. Déjà rêvé Déjà rêvé (from French, meaning "already dreamed") is the feeling of having already dreamed something that is currently being experienced. Déjà entendu Déjà entendu (literally "already heard") is the experience of feeling sure about having already heard something, even though the exact details are uncertain or were perhaps imagined. See also Intuition (knowledge) Repression (psychology) Scientific skepticism Screen memory Uncanny References Further reading Neppe, Vernon. (1983). The Psychology of Déjà vu: Have We Been Here Before?. Witwatersrand University Press. External links Anne Cleary discussing a virtual reality investigation of déjà vu Dream Déjà Vu - Psychology Today Chronic déjà vu - Quirks and Quarks episode (mp3) Déjà vu - The Skeptic's Dictionary How Déjà Vu Works — a Howstuffworks article Déjà Experience Research — a website dedicated to providing déjà experience information and research Nikhil Swaminathan, Think You've Previously Read About This?, Scientific American, June 8, 2007 Deborah Halber, Research Deciphers Déjà Vu Brain Mechanics, MIT Report, June 7, 2007 Memory Philosophy of mind Perception French words and phrases Time in life
No, this text is not related with defense topics | Túath (plural túatha) is the Old Irish term for the basic political and jurisdictional unit of Gaelic Ireland. Túath can refer both to a geographical territory and to the people who lived in that territory. Social structure In ancient Irish terms, a household was reckoned at about 30 people per dwelling. A trícha cét ("thirty hundreds") was an area comprising 100 dwellings or, roughly, 3,000 people. A túath consisted of a number of allied trícha céta, and therefore referred to no fewer than 6,000 people. Probably a more accurate number for a túath would be no fewer than 9,000 people. Each túath was a self-contained unit, with its own executive, assembly, courts system and defence force. Túatha were grouped together into confederations for mutual defence. There was a hierarchy of túatha statuses, depending on geographical position and connection to the ruling dynasties of the region. The organisation of túatha is covered to a great extent within the Brehon laws, Irish laws written down in the 7th century, also known as the Fénechas. The old Irish political system was altered during and after the Elizabethan conquest, being gradually replaced by a system of baronies and counties under the new colonial system. Due to a loss of knowledge, there has been some confusion regarding old territorial units in Ireland, mainly between trícha céta and túatha, which in some cases seem to be overlapping units, and in others, different measurements altogether. The trícha céta were primarily for reckoning military units; specifically, the number of fighting forces a particular population could rally. Some scholars equate the túath with the modern parish, whereas others equate it with the barony. This partly depends on how the territory was first incorporated into the county system. In cases where surrender and regrant was the method, the match between the old túath and the modern barony is reasonably close. In cases like Ulster, by contrast, which involved large-scale colonisation and confiscation of land, the shape of the original divisions is not always clear or recoverable. It has been suggested that the baronies are, for the most part, divided along the boundaries of the ancient túatha, as many bog bodies and offerings, such as bog butter, are primarily found along present-day baronial boundaries. This implies that the territorial divisions of the petty kingdoms of Ireland have been more or less the same since at least the Iron Age. Etymology Túath in Old Irish means "the people", "country, territory", and "territory, petty kingdom, the political and jurisdictional unit of ancient Ireland". The word possibly derives from Proto-Celtic *toutā ("tribe, tribal homeland"; cognate roots may be found in the Gaulish god name Toutatis), which is perhaps from Proto-Indo-European *tewtéh₂ ("tribesman, tribal citizen"). In Modern Irish it is spelled tuath, without the fada accent, and is usually used to refer to "rural districts" or "the country" (as in "the countryside"); however, the historical meaning is still understood and employed as well.
Historical examples Cairbre Drom Cliabh Tir Fhiacrach Muaidhe Tir Olliol Corann Dartraighe Osraige - túath that later became the kingdom of the same name in the Christian era Dál Riata - the túath that became a confederation of túatha and eventually settled in Alba, creating the modern nation of Scotland Clandonnell, Glenconkeyne, Killetra, Melanagh, Tarraghter, and Tomlagh, which all once formed the ancient territory of Loughinsholin See also Trícha cét List of Irish kingdoms Gaelic Ireland Tuatha Dé Danann History of Ireland References Further reading Colonisation under early kings of Tara, Eoin Mac Neill, Journal of the Galway Archaeological and Historical Society, volume 16, pp. 101–124, 1935 Corpus genealogiarum Hibernia, i, M.A. O'Brien, Dublin, 1962 Early Irish Society Francis John Byrne, in The Course of Irish History, ed. T.W. Moody and F.X. Martin, pp. 43–60, Cork, 1967 Hui Failgi relations with the Ui Neill in the century after the loss of the plain of Mide, A. Smyth, Etudes Celtic 14:2, pp. 502–23 Tribes and Tribalism in early Ireland, Francis John Byrne, Eiru 22, 1971, pp. 128–166. Origins of the Eóganachta, David Sproule, Eiru 35, pp. 31–37, 1974 Some Early Connacht Population-Groups, Nollaig O Muraile, in Seanchas:Studies in Early and Medieval Irish Archaeology, History and Literature in Honour of Francis John Byrne, pp. 161–177, ed. Alfred P. Smyth, Four Courts Press, Dublin, 2000 The Airgialla Charter Poem:The Political Context, Edel Bhreathnach, in The Kingship and Landscape of Tara, ed. Edel Bhreathnach, pp. 95–100, 2005 Cultural anthropology Irish words and phrases Former subdivisions of Ireland Medieval Ireland Gaelic nobility of Ireland Historic Gaelic Territories |
No, this text is not related with defense topics | Worldwide, two to three million people are estimated to be permanently disabled because of leprosy. India has the greatest number of cases, with Brazil second and Indonesia third. In 1999, the world incidence of Hansen's disease was estimated to be 640,000. In 2000, 738,284 new cases were identified. In 2000, the World Health Organization (WHO) listed 91 countries in which Hansen's disease is endemic. India, Myanmar and Nepal contained 70% of cases. India reports over 50% of the world's leprosy cases. In 2002, 763,917 new cases were detected worldwide, and in that year the WHO listed India, Brazil, Madagascar, Mozambique, Tanzania and Nepal as having 90% of Hansen's disease cases. According to recent figures from the WHO, 208,619 new cases of leprosy were reported in 2018 from 127 countries. A total of 16,000 new child cases were detected in 2018. In the United States, Hansen's disease is tracked by the Centers for Disease Control and Prevention (CDC), with a total of 92 cases being reported in 2002. Although the number of cases worldwide continues to fall, pockets of high prevalence continue in certain areas such as Brazil, South Asia (India, Nepal), some parts of Africa (Tanzania, Madagascar, Mozambique) and the western Pacific. Risk groups At highest risk are those living in endemic areas with poor conditions such as inadequate bedding, contaminated water and insufficient diet, or other diseases (such as HIV) that compromise immune function. Recent research suggests that there is a defect in cell-mediated immunity that causes susceptibility to the disease. Less than ten percent of the world's population is actually capable of acquiring the disease. The region of DNA responsible for this variability is also involved in Parkinson disease, giving rise to current speculation that the two disorders may be linked in some way at the biochemical level. In addition, men are twice as likely to contract leprosy as women. According to The Leprosy Mission Canada, most people (about 95% of the population) are naturally immune. Disease burden Although the number of new leprosy cases occurring each year is important as a measure of transmission, it is difficult to measure in leprosy due to its long incubation period, delays in diagnosis after onset of the disease and the lack of laboratory tools to detect leprosy in its very early stages. Instead, the registered prevalence is used. Registered prevalence is a useful proxy indicator of the disease burden as it reflects the number of active leprosy cases diagnosed with the disease and receiving treatment with MDT at a given point in time. The prevalence rate is defined as the number of cases registered for MDT treatment among the population in which the cases have occurred, again at a given point in time. New case detection is another indicator of the disease that is usually reported by countries on an annual basis. It includes cases diagnosed with onset of disease in the year in question (true incidence) and a large proportion of cases with onset in previous years (termed a backlog prevalence of undetected cases). Endemic countries also report the number of new cases with established disabilities at the time of detection, as an indicator of the backlog prevalence. Determination of the time of onset of the disease is generally unreliable, is very labor-intensive and is seldom done in recording these statistics.
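The per-10,000 convention behind these indicators can be made concrete with a small arithmetic sketch. The case counts and population below are invented placeholders, not WHO data, and the helper names are hypothetical; the sketch only shows how a registered prevalence per 10,000 population and the "fewer than 1 case per 10,000" elimination threshold used later in this article are computed.

def prevalence_per_10000(registered_cases: int, population: int) -> float:
    """Registered prevalence expressed per 10,000 population."""
    return registered_cases / population * 10_000

def meets_elimination_threshold(registered_cases: int, population: int) -> bool:
    """WHO 'elimination as a public health problem': prevalence below 1 per 10,000."""
    return prevalence_per_10000(registered_cases, population) < 1.0

# Hypothetical country: 4,200 registered cases in a population of 30 million.
cases, population = 4_200, 30_000_000
print(f"Registered prevalence: {prevalence_per_10000(cases, population):.2f} per 10,000")  # 1.40
print(f"Elimination reached: {meets_elimination_threshold(cases, population)}")           # False

The annual new-case detection rate described above is computed the same way, except that the numerator is the number of cases newly detected during the reporting year rather than the number on the register at a point in time.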
Global situation As reported to WHO by 115 countries and territories in 2006, and published in the Weekly Epidemiological Record, the global registered prevalence of leprosy at the beginning of the year was 219,826 cases. New-case detection during the previous year (2005 – the last year for which full country information is available) was 296,499. Annual detections can be higher than the prevalence at the end of the year because a proportion of new cases complete their treatment within the year and therefore no longer remain on the registers. The global detection of new cases continues to show a sharp decline, falling by 110,000 cases (27%) during 2005 compared with the previous year. Table 1 shows that global annual detection has been declining since 2001. The African region reported an 8.7% decline in the number of new cases compared with 2004. The comparable figure for the Americas was 20.1%, for South-East Asia 32%, and for the Eastern Mediterranean 7.6%. The Western Pacific area, however, showed a 14.8% increase during the same period. Table 2 shows the leprosy situation in the four major countries that have yet to achieve the goal of elimination at the national level. Elimination is defined as a prevalence of less than 1 case per 10,000 population. Madagascar reached elimination at the national level in September 2006. For Nepal, detection was reported from mid-November 2004 to mid-November 2005. D.R. Congo officially reported to WHO in 2008 that it had reached elimination by the end of 2007 at the national level. South America Argentina In 2013, there were 550 cases, of which 277 were considered new cases. Between 350 and 400 cases of leprosy are diagnosed every year. In 1926, Law 11,359 ordered compulsory quarantine for people with leprosy; it was repealed by Law 22,964 in 1983, which ordered compulsory quarantine only if patients refused medical indications and posed some risk to the healthy population. Five leper colonies were built: Pedro L. Baliña, Posadas, Misiones, opened on February 6, 1938. José J. Puente, San Francisco del Chañar, Córdoba, opened on March 18, 1939. Maximiliano Aberastury, Isla del Cerrito, Chaco, opened on March 30, 1939. Baldomero Sommer, General Rodríguez, Buenos Aires, opened on November 21, 1941. Enrique Fidanza, Colonia Ensayo, Entre Ríos opened on March 28, 1948. Only the Baldomero Sommer continues serving as a leper colony. Brazil Leprosy is a serious public health problem in Brazil: between 2001 and 2013, over 500,000 cases were reported, and in 2015 alone over 25,000 cases were diagnosed. Asia People's Republic of China The People's Republic of China has many recovered leprosy patients who have been isolated from the rest of society. In the 1950s the Chinese Communist government created "Recovered Villages" on remote rural mountaintops for the recovered patients. Although leprosy is now curable with the advent of multi-drug treatment, the villagers remain because they have been stigmatized by the outside world. Health NGOs such as Joy in Action have arisen in China to focus especially on improving the conditions of "Recovered Villages". The number of leprosy cases in China has shown a steady decline over recent years, with a prevalence of 2,697 registered cases in 2017, a 55.3% reduction compared with 2010. India British India enacted the Leprosy Act of 1898, which institutionalized those affected and segregated them by sex to prevent reproduction.
The Act was difficult to enforce but was repealed in 1983 only after MDT had become widely available. In 1983, the National Leprosy Elimination Programme, previously the National Leprosy Control Programme, changed its methods from surveillance to the treatment of people with leprosy. India reported a far larger decline in leprosy cases than any other country – from 473,658 new cases in 2002 to 161,457 in 2005. According to the WHO, 16 million people worldwide have been cured of leprosy over the past 20 years. India has an estimated three million people with disabilities or health issues stemming from leprosy. India announced that leprosy had been "eliminated as a public health problem," meaning that there would be fewer than one case per 10,000 people (as defined by the WHO). Reported new cases exceed 125,000 per year (60% of the world total). 135,485 new leprosy cases were detected in India in 2017. Malaysia Malaysia was declared by the WHO to have eliminated leprosy in 1994, signifying a reduction in the prevalence rate of the disease to less than 1 case per 10,000 people. However, a rise in incidence across the country has been reported over recent years, reaching 1.02 cases per 10,000 people in 2014. Europe Portugal The Hospital-Colónia Rovisco Pais (the Rovisco Pais Hospital–Colony) was founded in Portugal in 1947 as a national center for the treatment of leprosy. It was renamed in 2007 as the Centro de Medicina de Reabilitação da Região Centro-Rovisco Pais. It still retains a leprosy service in which 25 ex-patients live. Between 1988 and 2003, 102 patients were treated for leprosy in Portugal. Spain The Sanatorio de Fontilles (Fontilles Sanatorium) in Spain was founded in 1902 and admitted its first patient in 1909. In 2002, the Sanatorio had 68 in-patients and more than 150 people receiving out-patient treatment. A small number of cases continue to be reported. Greece Two indigenous cases were reported from Greece in 2009. France One case was reported in France in 2009. Germany Leprosy was almost eradicated in most of Europe by 1700, but sometime after 1850 it was reintroduced into East Prussia by Lithuanian rural workers immigrating from the Russian Empire. The first leprosarium was founded in 1899 in Memel (now Klaipėda in Lithuania). Legislation was introduced in 1900 and 1904 requiring patients to be isolated and not allowed to work with others. United Kingdom The last confirmed case of leprosy being transmitted within the UK was in 1953. Between 2003 and 2012, an average of 139 cases of leprosy per year were diagnosed (and notified) in the UK, none of which are believed to have been acquired within the UK. The UK's national referral service for leprosy is run by Prof. Diana Lockwood, the UK's only leprologist, at the Hospital for Tropical Diseases, London. Malta The first documented case of leprosy (erga corpore morbi leprae) in Malta, in a Gozitan woman (Garita Xejbais), was in 1492, but it is certain that the disease was present on the island before this time. The next recorded case was in 1630 in a Dominican friar. A report in 1687 recorded five cases. A further three cases were reported in 1808. Between 1839 and 1858 an additional seven cases were recorded. In 1890 a population survey recorded a total of 69 cases. A later survey in 1957 identified 151 people infected with leprosy. In June 1972 an eradication programme was started. The project was based on the work of Enno Freerksen, Director of the Borstel Institute in Hamburg.
Dr Freerksen's earlier trial had used rifampicin, isoniazid, dapsone and prothionamide. The Malta project used rifampicin, dapsone and clofazimine. The project formally concluded in 1999, having treated about 300 patients. Romania The last leper colony in Europe is at Tichileşti, Romania. Until 1991 patients were not allowed to leave the colony. At this colony patients get food, a place to sleep, clothes and medical attention. Some live in long pavilions and others in houses with vegetable and flower gardens. There are two churches in the colony – Orthodox and Baptist – and a farm where the colony grows its own corn. North America Canada There were cases of leprosy in Atlantic Canada in the nineteenth century, beginning in 1815. The patients were first housed on Sheldrake Island in the Miramichi River and later transferred to Tracadie. Catholic nuns (the religieuses hospitalières de Saint-Joseph, RHSJ) came to take care of the sick. They opened the first French-language hospital in New Brunswick and many more followed. Many hospitals opened by the RHSJ nuns are still in use today. The last hospital to house lepers in Tracadie was demolished in 1991. Its lazaretto section had been closed since 1965. In a century of existence, it had housed not only Acadian victims of the disease, but people from all over Canada as well as sick immigrants from Iceland, Russia and China, among other nations. Cape Breton Island also suffered an outbreak, in the Lake Ainslie region. Nine people were affected, but it died out soon after 1882. Another outbreak of twenty cases occurred in the Lake O'Law region around 1852. United States In the United States, the first definite reference to the disease was in Florida in 1758. In 2004, there were 131 total cases of the disease in the United States. Of the 131 cases, two-thirds were male, and 25 (19%) were individuals who were born in the country. Mexico (18.3%), Micronesia (11.5%), Brazil (9.2%), and the Philippines (7.6%) were the next most common countries of birth among those with the disease. A total of 20 cases were found to be white, not of Hispanic origin. As of October 2005, 3,604 patients on the United States registry were receiving care. In 2018 there were about 5,000 people who no longer had leprosy but had long-term complications of the disease and continued to receive care. The disease is tracked by the Centers for Disease Control and Prevention (CDC), with a total of 166 new cases reported in the US in 2005. Most (100 or 60%) of these new cases were reported in California, Louisiana, Massachusetts, New York, and Texas. References External links Leprosy Leprosy
No, this text is not related with defense topics | In computing, a compound document is a document that “combines multiple document formats, either by reference, by inclusion, or both.” Compound documents are often produced using word processing software, and may include text and non-text elements such as barcodes, spreadsheets, pictures, digital videos, digital audio, and other multimedia features. Compound document technologies are commonly utilized on top of a software componentry framework, but the idea of software componentry includes several other concepts apart from compound documents, and software components alone do not enable compound documents. Well-known technologies for compound documents include: ActiveX Documents Bonobo by Ximian (primarily used by GNOME) KParts in KDE Mixed Object Document Content Architecture Multipurpose Internet Mail Extensions (MIME) Object linking and embedding (OLE) by Microsoft; see Compound File Binary Format Open Document Architecture from ITU-T (not used) OpenDoc by IBM and Apple Computer (now defunct) Verdantium XML and XSL are encapsulation formats used for compound documents of all kinds The first public implementation of compound documents was on the Xerox Star workstation, released in 1981. See also COM Structured Storage Transclusion References Electronic documents Multimedia
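To make the "by reference, by inclusion, or both" distinction above concrete, here is a minimal sketch using MIME, one of the technologies listed, via Python's standard email library. The report text, chart bytes, and external URL are invented placeholders rather than any particular product's format; a framework such as OLE or OpenDoc would express the same idea with its own container.

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

# Root of the compound document: multipart/related groups the parts together.
doc = MIMEMultipart("related")

# The text part pulls in one component by inclusion (the cid: image attached
# below travels inside the container) and one by reference (a plain hyperlink
# to data that stays external and is resolved when the document is opened).
html = MIMEText(
    '<html><body>'
    '<p>Quarterly report</p>'
    '<img src="cid:chart1">'                               # by inclusion
    '<a href="https://example.org/data.csv">raw data</a>'  # by reference
    '</body></html>',
    "html",
)
doc.attach(html)

# The included part, addressed by its Content-ID.
chart_bytes = b"\x89PNG\r\n\x1a\n..."  # placeholder bytes, not a real image
chart = MIMEImage(chart_bytes, _subtype="png")
chart.add_header("Content-ID", "<chart1>")
doc.attach(chart)

print(doc.as_string()[:200])  # serialized compound document in MIME wire format

The same trade-off runs through all of the technologies listed above: included parts make the document self-contained but larger, while referenced parts keep it small at the cost of depending on the availability of the external resource.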