Tinea cruris
Tinea cruris, also known as jock itch, is a common type of contagious, superficial fungal infection of the groin region, which occurs predominantly but not exclusively in men and in hot-humid climates. Typically, over the upper inner thighs, there is an intensely itchy red raised rash with a scaly, well-defined curved border. It is often associated with athlete's foot and fungal nail infections, excessive sweating, and sharing of infected towels or sports clothing. It is uncommon in children. Its appearance may be similar to some other rashes that occur in skin folds, including candidal intertrigo, erythrasma, inverse psoriasis and seborrhoeic dermatitis. Tests may include microscopy and culture of skin scrapings. Treatment is with topical antifungal medications and is particularly effective if symptoms have recent onset. Prevention of recurrences includes treating concurrent fungal infections and taking measures to avoid moisture build-up, including keeping the groin region dry, avoiding tight clothing and losing weight if obese.

Names

Other names include "jock rot", "dhobi itch", "crotch itch", "scrot rot", "gym itch", "ringworm of groin" and "eczema marginatum".

Signs and symptoms

Typically, over the upper inner thighs, there is a red raised rash with a scaly, well-defined border. There may be some blistering and weeping, and the rash can reach near to the anus. The distribution is usually on both sides of the groin, and the center may be lighter in colour. The rash may appear reddish, tan, or brown, with flaking, rippling, peeling, iridescence, or cracking skin. If the person is hairy, hair follicles can become inflamed, resulting in some bumps (papules, nodules and pustules) within the plaque. The plaque may reach the scrotum in men and the labia majora and mons pubis in women. The penis is usually unaffected unless there is immunodeficiency or there has been use of steroids. Affected people usually experience intense itching in the groin, which can extend to the anus.

Causes

Tinea cruris is often associated with athlete's foot and fungal nail infections. Rubbing from clothing, excessive sweating, diabetes and obesity are risk factors. It is contagious and can be transmitted person-to-person by skin-to-skin contact or by contact with contaminated sports clothing and shared towels. The type of fungus involved may vary in different parts of the world; for example, Trichophyton rubrum and Epidermophyton floccosum are common in New Zealand. Less commonly, Trichophyton mentagrophytes and Trichophyton verrucosum are involved. Trichophyton interdigitale has also been implicated.

Diagnosis

Tests are usually not needed to make a diagnosis but, if required, may include microscopy and culture of skin scrapings, a KOH examination to check for fungus, or skin biopsy.

Differential diagnosis

The symptoms of tinea cruris may be similar to other causes of itch in the groin. Its appearance may be similar to some other rashes that occur in skin folds, including candidal intertrigo, erythrasma, inverse psoriasis and seborrhoeic dermatitis.

Prevention

To prevent recurrences of tinea cruris, concurrent fungal infections such as athlete's foot need to be treated. Also advised are measures to avoid moisture build-up, including keeping the groin region dry, avoiding tight clothing and losing weight if obese. People with athlete's foot or tinea cruris can prevent spread by not lending their towels to others.

Treatment

Tinea cruris is treated by applying antifungal medications of the allylamine or azole type to the groin region. Studies suggest that allylamines (naftifine and terbinafine) are a quicker but more expensive form of treatment compared to azoles (clotrimazole, econazole, ketoconazole, oxiconazole, miconazole, sulconazole). If the symptoms have been present for long or the condition worsens despite applying creams, terbinafine or itraconazole can be given by mouth. The benefit of using topical steroids in addition to an antifungal is unclear; there might be a greater cure rate, but no guidelines currently recommend their addition. The effect of Whitfield's ointment is also unclear, but when given, it is prescribed at half strength. Wearing cotton underwear and socks, in addition to keeping the groin dry and using antifungal powders, is helpful.

Prognosis and complications

Tinea cruris is not life-threatening, and treatment is effective, particularly if the symptoms have not been present for long. However, recurrence may occur. The intense itch may lead to lichenification and secondary bacterial infection. Irritant and allergic contact dermatitis may be caused by applied medications.

Epidemiology

Tinea cruris is common in hot-humid climates and is the second most common clinical presentation for dermatophytosis. It is uncommon in children.
Tinea nigra
Tinea nigra, also known as superficial phaeohyphomycosis and tinea nigra palmaris et plantaris, is a superficial fungal infection, a type of phaeohyphomycosis rather than a tinea, that usually causes a single 1–5 cm dark brown-black, non-scaly, flat, painless patch on the palms of the hands and the soles of the feet of healthy people. There may be multiple spots. The macules occasionally extend to the fingers, toes, and nails, and may be reported on the chest, neck, or genital area. Tinea nigra infections can present with multiple macules that can be mottled or velvety in appearance, and may be oval or irregular in shape. The macules can be anywhere from a few mm to several cm in size. Most cases are caused by Hortaea werneckii, a pigmented fungus, which is a dark yeast found in sewage, soil, rotting vegetation and wood, and in places with a high salt content such as moldy salted fish and on beaches, where contact with sand may result in transmission. Infection is by direct contact, and the fungus enters and remains in the outer dead layer of skin with little or no skin inflammation. The infection does not invade deeper tissues. Diagnosis is by visualisation, dermoscopy, and microscopy and culture of skin scrapings. Differential diagnosis includes Addison's disease, syphilis, pinta, yaws, melanoma, lentigines, lichen planus of the palms, and junctional melanocytic nevus. Treatment is with topical Whitfield's ointment or salicylic acid ointment. Topical antifungals or oral itraconazole are other options. Scraping the lesion can be curative. Prevention is by general hygiene measures. It is uncommon. It generally occurs in tropical and subtropical countries of Central and South America, the Caribbean, Europe, South East Asia, Australia and the Far East. The disease was first described by Alexandre Cerqueira from Brazil in 1891. No cases in animals have been reported.

Causes

This infection is caused by the fungus formerly classified as Cladosporium werneckii, but more recently classified as Hortaea werneckii. The causative organism has also been described as Phaeoannellomyces werneckii. Tinea nigra is extremely superficial and can be removed from the skin by forceful scraping. It tends to appear in areas where eccrine sweat glands are highly concentrated. Infections generally start to appear on the skin around 2–7 weeks post inoculation. The ability of H. werneckii to tolerate high salt concentrations and acidic conditions allows it to flourish inside the stratum corneum. H. werneckii tends to remain localized in one spot or region, and produces darkly colored brown macules on the skin due to the production of a melanin-like substance.

Diagnosis

Diagnosis of tinea nigra is made based on microscopic examination of stratum corneum skin scrapings obtained using a scalpel. The scrapings are mixed with potassium hydroxide (KOH), which lyses the non-fungal debris. The skin scrapings are cultured on Sabouraud's agar at 25 °C and allowed to grow for about a week. H. werneckii can generally be distinguished by its two-celled yeast form and the presence of septate hyphae with thick, darkly pigmented walls.

Treatment

Treatment consists of topical application of dandruff shampoo, which contains selenium sulfide, over the skin. Topical antifungal imidazoles such as ketoconazole, itraconazole, and miconazole may also be used. Imidazoles are generally used twice daily for a two-week period. This is the same treatment plan as for tinea or pityriasis versicolor. Other treatment methods include the use of epidermal tape stripping, undecylenic acid, and other topical agents such as ciclopirox. Once a tinea nigra infection has been eradicated from the host, it is not likely to recur.

Epidemiology

Tinea nigra is commonly found in Africa, Asia, Central America, and South America. It is typically not found in the United States or Europe, although cases have been documented in the Southeastern United States. People of all ages can be infected; however, it is generally more apparent in children and younger adults. Females are three times more likely than males to become infected.

See also

List of cutaneous conditions
Athlete's foot
Athlete's foot, known medically as tinea pedis, is a common skin infection of the feet caused by a fungus. Signs and symptoms often include itching, scaling, cracking and redness. In rare cases the skin may blister. Athlete's foot fungus may infect any part of the foot, but most often grows between the toes. The next most common area is the bottom of the foot. The same fungus may also affect the nails or the hands. It is a member of the group of diseases known as tinea. Athlete's foot is caused by a number of different fungi, including species of Trichophyton, Epidermophyton, and Microsporum. The condition is typically acquired by coming into contact with infected skin, or fungus in the environment. Common places where the fungi can survive are around swimming pools and in locker rooms. They may also be spread from other animals. Usually diagnosis is made based on signs and symptoms; however, it can be confirmed either by culture or by seeing hyphae using a microscope. Athlete's foot is not limited to just athletes: it can be caused by going barefoot in public showers, letting toenails grow too long, wearing shoes that are too tight, and not changing socks daily. It can be treated with topical antifungal medications such as clotrimazole or, for persistent infections, with oral antifungal medications such as terbinafine. Topical creams are typically recommended to be used for four weeks. Keeping infected feet dry and wearing sandals also assists with treatment. Athlete's foot was first medically described in 1908. Globally, athlete's foot affects about 15% of the population. Males are more often affected than females. It occurs most frequently in older children or younger adults. Historically, it is believed to have been a rare condition that became more frequent in the 20th century due to the greater use of shoes, health clubs, war, and travel.

Signs and symptoms

Athlete's foot is divided into four categories or presentations: chronic interdigital, plantar (chronic scaly; also called "moccasin foot"), acute ulcerative, and vesiculobullous. "Interdigital" means between the toes. "Plantar" here refers to the sole of the foot. The ulcerative condition includes macerated lesions with scaly borders. Maceration is the softening and breaking down of skin due to extensive exposure to moisture. A vesiculobullous disease is a type of mucocutaneous disease characterized by vesicles and bullae (blisters). Both vesicles and bullae are fluid-filled lesions, and they are distinguished by size (vesicles being less than 5–10 mm and bullae being larger than 5–10 mm, depending upon what definition is used). Athlete's foot occurs most often between the toes (interdigital), with the space between the fourth and fifth digits (the fourth toe and the little toe) most commonly affected. Cases of interdigital athlete's foot caused by Trichophyton rubrum may be symptomless, may itch, or the skin between the toes may appear red or ulcerative (scaly, flaky, with soft and white skin if it has been kept wet), with or without itching. An acute ulcerative variant of interdigital athlete's foot caused by T. mentagrophytes is characterized by pain, maceration of the skin, erosions and fissuring of the skin, crusting, and an odor due to secondary bacterial infection. Plantar athlete's foot (moccasin foot) is also caused by T.
rubrum, which typically causes asymptomatic, slightly erythematous plaques (areas of redness of the skin) to form on the plantar surface (sole) of the foot; these are often covered by fine, powdery hyperkeratotic scales. The vesiculobullous type of athlete's foot is less common, is usually caused by T. mentagrophytes, and is characterized by a sudden outbreak of itchy blisters and vesicles on an erythematous base, usually appearing on the sole of the foot. This subtype of athlete's foot is often complicated by secondary bacterial infection by Streptococcus pyogenes or Staphylococcus aureus.

Complications

As the disease progresses, the skin may crack, leading to bacterial skin infection and inflammation of the lymphatic vessels. If allowed to grow for too long, athlete's foot fungus may spread to infect the toenails, feeding on the keratin in them, a condition called onychomycosis. Because athlete's foot may itch, it may also elicit the scratch reflex, causing the host to scratch the infected area before they realize it. Scratching can further damage the skin and worsen the condition by allowing the fungus to more easily spread and thrive. The itching sensation associated with athlete's foot can be so severe that it may cause hosts to scratch vigorously enough to inflict excoriations (open wounds), which are susceptible to bacterial infection. Further scratching may remove scabs, inhibiting the healing process. Scratching infected areas may also spread the fungus to the fingers and under the fingernails. If not washed away soon enough, it can infect the fingers and fingernails, growing in the skin and in the nails (not just underneath). After scratching, it can be spread to wherever the person touches, including other parts of the body and one's environment. Scratching also causes infected skin scales to fall off into one's environment, leading to further possible spread. When athlete's foot fungus or infested skin particles spread to one's environment (such as to clothes, shoes, bathroom, etc.), whether through scratching, falling, or rubbing off, not only can they infect other people, they can also reinfect (or further infect) the host they came from. For example, infected feet infest one's socks and shoes, which further expose the feet to the fungus and its spores when worn again. The ease with which the fungus spreads to other areas of the body (on one's fingers) poses another complication. When the fungus is spread to other parts of the body, it can easily be spread back to the feet after the feet have been treated. And because the condition is called something else in each place it takes hold (e.g., tinea corporis (ringworm) or tinea cruris (jock itch)), persons infected may not be aware it is the same disease. Some individuals may experience an allergic response to the fungus, called an id reaction, in which blisters or vesicles can appear in areas such as the hands, chest, and arms. Treatment of the underlying infection typically results in the disappearance of the id reaction.

Causes

Athlete's foot is a form of dermatophytosis (fungal infection of the skin), caused by dermatophytes, fungi (most of which are molds) that inhabit dead layers of skin and digest keratin. Dermatophytes are anthropophilic, meaning these parasitic fungi prefer human hosts. Athlete's foot is most commonly caused by the molds known as Trichophyton rubrum and T. mentagrophytes, but may also be caused by Epidermophyton floccosum. Most cases of athlete's foot in the general population are caused by T.
rubrum; however, the majority of athlete's foot cases in athletes are caused by T. mentagrophytes.

Transmission

According to the UK's National Health Service, "Athlete's foot is very contagious and can be spread through direct and indirect contact." The disease may spread to others directly when they touch the infection. People can contract the disease indirectly by coming into contact with contaminated items (clothes, towels, etc.) or surfaces (such as bathroom, shower, or locker room floors). The fungi that cause athlete's foot can easily spread to one's environment. Fungi rub off of fingers and bare feet, but also travel on the dead skin cells that continually fall off the body. Athlete's foot fungi and infested skin particles and flakes may spread to socks, shoes, clothes, to other people, pets (via petting), bed sheets, bathtubs, showers, sinks, counters, towels, rugs, floors, and carpets. When the fungus has spread to pets, it can subsequently spread to the hands and fingers of people who pet them. If a pet frequently gnaws upon itself, it might not be fleas it is reacting to; it may be the insatiable itch of tinea. One way to contract athlete's foot is to get a fungal infection somewhere else on the body first. The fungi causing athlete's foot may spread from other areas of the body to the feet, usually by touching or scratching the affected area, thereby getting the fungus on the fingers, and then touching or scratching the feet. While the fungus remains the same, the name of the condition changes based on where on the body the infection is located. For example, the infection is known as tinea corporis ("ringworm") when the torso or limbs are affected, or tinea cruris (jock itch or dhobi itch) when the groin is affected. Clothes (or shoes), body heat, and sweat can keep the skin warm and moist, just the environment the fungus needs to thrive.

Risk factors

Besides being exposed to any of the modes of transmission presented above, there are additional risk factors that increase one's chance of contracting athlete's foot. Persons who have had athlete's foot before are more likely to become infected than those who have not. Adults are more likely to catch athlete's foot than children. Men have a higher chance of getting athlete's foot than women. People with diabetes or weakened immune systems are more susceptible to the disease. HIV/AIDS hampers the immune system and increases the risk of acquiring athlete's foot. Hyperhidrosis (abnormally increased sweating) increases the risk of infection and makes treatment more difficult.

Diagnosis

When visiting a doctor, the basic diagnosis procedure applies. This includes checking the patient's medical history and medical record for risk factors, a medical interview during which the doctor asks questions (such as about itching and scratching), and a physical examination. Athlete's foot can usually be diagnosed by visual inspection of the skin and by identifying less obvious symptoms such as itching of the affected area. If the diagnosis is uncertain, direct microscopy of a potassium hydroxide preparation of a skin scraping (known as a KOH test) can confirm the diagnosis of athlete's foot and help rule out other possible causes, such as candidiasis, pitted keratolysis, erythrasma, contact dermatitis, eczema, or psoriasis.
Dermatophytes known to cause athlete's foot will demonstrate multiple septate branching hyphae on microscopy. A Wood's lamp (black light), although useful in diagnosing fungal infections of the scalp (tinea capitis), is not usually helpful in diagnosing athlete's foot, since the common dermatophytes that cause this disease do not fluoresce under ultraviolet light.

Prevention

There are several preventive foot hygiene measures that can prevent athlete's foot and reduce recurrence. Some of these include: keeping the feet dry; clipping toenails short; using a separate nail clipper for infected toenails; using socks made from well-ventilated cotton or synthetic moisture-wicking materials (to soak moisture away from the skin to help keep it dry); avoiding tight-fitting footwear; changing socks frequently; and wearing sandals while walking through communal areas such as gym showers and locker rooms. According to the Centers for Disease Control and Prevention, "Nails should be clipped short and kept clean. Nails can house and spread the infection." Recurrence of athlete's foot can be prevented with the use of antifungal powder on the feet. The fungi (molds) that cause athlete's foot require warmth and moisture to survive and grow. There is an increased risk of infection with exposure to warm, moist environments (e.g., occlusive footwear such as shoes or boots that enclose the feet) and in shared humid environments such as communal showers, shared pools, and treatment tubs. Chlorine bleach is a disinfectant and common household cleaner that kills mold. Cleaning surfaces with a chlorine bleach solution prevents the disease from spreading from subsequent contact. Cleaning bathtubs, showers, bathroom floors, sinks, and counters with bleach helps prevent the spread of the disease, including reinfection. Keeping socks and shoes clean (using bleach in the wash) is one way to prevent fungi from taking hold and spreading. Avoiding the sharing of boots and shoes is another way to prevent transmission. Athlete's foot can be transmitted by sharing footwear with an infected person. Hand-me-downs and purchasing used shoes are other forms of shoe-sharing. Not sharing also applies to towels because, though less common, fungi can be passed along on towels, especially damp ones.

Treatment

Athlete's foot resolves without medication (resolves by itself) in 30–40% of cases. Topical antifungal medication consistently produces much higher rates of cure. Conventional treatment typically involves thoroughly washing the feet daily or twice daily, followed by the application of a topical medication. Because the outer skin layers are damaged and susceptible to reinfection, topical treatment generally continues until all layers of the skin are replaced, about 2–6 weeks after symptoms disappear. Keeping feet dry and practicing good hygiene (as described in the above section on prevention) is crucial for killing the fungus and preventing reinfection. Treating the feet is not always enough. Once socks or shoes are infested with fungi, wearing them again can reinfect (or further infect) the feet. Socks can be effectively cleaned in the wash by adding bleach or by washing in water at 60 °C (140 °F). Washing with bleach may help with shoes, but the only way to be absolutely certain that one cannot contract the disease again from a particular pair of shoes is to dispose of those shoes. To be effective, treatment includes all infected areas (such as toenails, hands, torso, etc.).
Otherwise, the infection may continue to spread, including back to treated areas. For example, leaving fungal infection of the nail untreated may allow it to spread back to the rest of the foot, becoming athlete's foot once again. Allylamines such as terbinafine are considered more efficacious than azoles for the treatment of athlete's foot. Severe or prolonged fungal skin infections may require treatment with oral antifungal medication.

Topical treatments

There are many topical antifungal drugs useful in the treatment of athlete's foot, including miconazole nitrate, clotrimazole, tolnaftate (a synthetic thiocarbamate), terbinafine hydrochloride, butenafine hydrochloride and undecylenic acid. The fungal infection may be treated with topical antifungal agents, which can take the form of a spray, powder, cream, or gel. Topical application of an antifungal cream such as butenafine once daily for one week, or terbinafine once daily for two weeks, is effective in most cases of athlete's foot and is more effective than application of miconazole or clotrimazole. Plantar-type athlete's foot is more resistant to topical treatments due to the presence of thickened hyperkeratotic skin on the sole of the foot. Keratolytic and humectant medications such as urea, salicylic acid (Whitfield's ointment), and lactic acid are useful adjunct medications that improve penetration of antifungal agents into the thickened skin. Topical glucocorticoids are sometimes prescribed to alleviate inflammation and itching associated with the infection. A solution of 1% potassium permanganate dissolved in hot water is an alternative to antifungal drugs. Potassium permanganate is a salt and a strong oxidizing agent.

Oral treatments

For severe or refractory cases of athlete's foot, oral terbinafine is more effective than griseofulvin. Fluconazole or itraconazole may also be taken orally for severe athlete's foot infections. The most commonly reported adverse effect from these medications is gastrointestinal upset.

Epidemiology

Globally, fungal infections affect about 15% of the population and 20% of adults. Athlete's foot is common in individuals who wear unventilated (occlusive) footwear, such as rubber boots or vinyl shoes. Countries and regions where going barefoot is more common experience much lower rates of athlete's foot than do populations which habitually wear shoes; as a result, the disease has been called "a penalty of civilization". Studies have demonstrated that men are infected 2–4 times more often than women.

See also

Toenail fungus (tinea unguium), an infection affecting the toenails
Trench foot, due to moisture and decay
Tinea versicolor
Tinea versicolor (also pityriasis versicolor) is a condition characterized by a skin eruption on the trunk and proximal extremities. The majority of tinea versicolor is caused by the fungus Malassezia globosa, although Malassezia furfur is responsible for a small number of cases. These yeasts are normally found on the human skin and become troublesome only under certain circumstances, such as a warm and humid environment, although the exact conditions that cause initiation of the disease process are poorly understood. The condition pityriasis versicolor was first identified in 1846. Versicolor comes from the Latin versāre, "to turn", plus color. It is also commonly referred to as Peter Elam's disease in many parts of South Asia.

Signs and symptoms

The symptoms of this condition include:

Occasional fine scaling of the skin producing a very superficial ash-like scale
Pale, dark tan, or pink coloring, with a reddish undertone that can darken when the patient is overheated, such as in a hot shower or during/after exercise (tanning typically makes the affected areas contrast more starkly with the surrounding skin)
A sharp border

Pityriasis versicolor is more common in hot, humid climates or in those who sweat heavily, so it may recur each summer. The yeasts can often be seen under the microscope within the lesions and typically have a so-called "spaghetti and meatball appearance", as the round yeasts produce filaments. In people with dark skin tones, pigmentary changes such as hypopigmentation (loss of color) are common, while in those with lighter skin color, hyperpigmentation (increase in skin color) is more common. These discolorations have led to the term "sun fungus".

Pathophysiology

In cases of tinea versicolor caused by the fungus Malassezia furfur, lightening of the skin occurs due to the fungus's production of azelaic acid, which has a slight bleaching effect.

Diagnosis

Tinea versicolor may be diagnosed by a potassium hydroxide (KOH) preparation, and lesions may fluoresce copper-orange when exposed to a Wood's lamp. The differential diagnosis for tinea versicolor infection includes:

Progressive macular hypomelanosis
Pityriasis alba
Pityriasis rosea
Seborrheic dermatitis
Erythrasma
Vitiligo
Leprosy
Syphilis
Post-inflammatory hypopigmentation

Treatment

Treatments for tinea versicolor include:

Topical antifungal medications containing selenium sulfide are often recommended.
Ketoconazole (Nizoral ointment and shampoo) is another treatment. It is normally applied to dry skin and washed off after 10 minutes, repeated daily for two weeks.
Ciclopirox (ciclopirox olamine) is an alternative treatment to ketoconazole, as it suppresses growth of the yeast Malassezia furfur. Initial results show similar efficacy to ketoconazole, with a relative increase in subjective symptom relief due to its inherent anti-inflammatory properties.
Other topical antifungal agents such as clotrimazole, miconazole, terbinafine, or zinc pyrithione can lessen symptoms in some patients. Additionally, hydrogen peroxide has been known to lessen symptoms and, on certain occasions, remove the problem, although permanent scarring has occurred with this treatment in some people. Clotrimazole is also used combined with selenium sulfide.
Oral medications are viewed as a second line of treatment for pityriasis versicolor in the event of widespread, severe, recalcitrant or recurrent cases.
Systemic therapies include itraconazole (200 mg daily for seven days) and fluconazole (150 to 300 mg weekly for two to four weeks), which are preferred to oral ketoconazole, no longer approved due to its potential hepatotoxic side effects. The single-dose regimens and pulse therapy regimens can be made more effective by having the patient exercise 1–2 hours after the dose, to induce sweating. The sweat is allowed to evaporate, and showering is delayed for a day, leaving a film of the medication on the skin.

Epidemiology

This skin disease commonly affects adolescents and young adults, especially in warm and humid climates. The yeast is thought to feed on skin oils (lipids), as well as dead skin cells. Infections are more common in people who have seborrheic dermatitis, dandruff, and hyperhidrosis.
Tocolytic
Tocolytics (also called anti-contraction medications or labor suppressants) are medications used to suppress premature labor (from Greek τόκος tókos, "childbirth", and λύσις lúsis, "loosening"). Preterm birth accounts for 70% of neonatal deaths. Therefore, tocolytic therapy is provided when delivery would result in premature birth, postponing delivery long enough for the administration of glucocorticoids, which accelerate fetal lung maturity but may require one to two days to take effect. Commonly used tocolytic medications include β2 agonists, calcium channel blockers, NSAIDs, and magnesium sulfate. These can assist in delaying preterm delivery by suppressing uterine muscle contractions, and their use is intended to reduce fetal morbidity and mortality associated with preterm birth. The suppression of contractions is often only partial, and tocolytics can be relied on to delay birth only for a matter of days. Depending on the tocolytic used, the pregnant woman or fetus may require monitoring (e.g., blood pressure monitoring when nifedipine is used, as it reduces blood pressure; cardiotocography to assess fetal well-being). In any case, the risk of preterm labor alone justifies hospitalization.

Indications

Tocolytics are used in preterm labor, that is, when labor begins too early, before 37 weeks of pregnancy. As preterm birth represents one of the leading causes of neonatal morbidity and mortality, the goal is to prevent neonatal morbidity and mortality by delaying delivery and increasing gestational age, gaining time for other management strategies, such as corticosteroid therapy, that may help with fetal lung maturity. Tocolytics are considered for women with confirmed preterm labor between 24 and 34 weeks of gestational age, and are used in conjunction with other therapies that may include corticosteroid administration, fetal neuroprotection, and safe transfer to facilities.

Types of agents

There is no clear first-line tocolytic agent. Current evidence suggests that first-line treatment with β2 agonists, calcium channel blockers, or NSAIDs to prolong pregnancy for up to 48 hours is the best course of action to allow time for glucocorticoid administration. Various types of agents are used, with varying success rates and side effects. Some medications are not specifically approved by the U.S. Food and Drug Administration (FDA) for use in stopping uterine contractions in preterm labor, instead being used off-label. Calcium-channel blockers (such as nifedipine) and oxytocin antagonists (such as atosiban) may delay delivery by 2 to 7 days, depending on how quickly the medication is administered. NSAIDs (such as indomethacin) and calcium channel blockers (such as nifedipine) are the most likely to delay delivery for 48 hours, with the least amount of maternal and neonatal side effects. Otherwise, tocolysis is rarely successful beyond 24 to 48 hours, because current medications do not alter the fundamentals of labor activation.
However, postponing premature delivery by 48 hours appears sufficient to allow pregnant women to be transferred to a center specialized in the management of preterm deliveries and to administer corticosteroids, with the possibility of reducing neonatal organ immaturity. In terms of efficacy, β-adrenergic agonists, atosiban, and indomethacin decrease the odds of delivery within 24 hours (odds ratio (OR) 0.54, 95% confidence interval (CI) 0.32–0.91) and within 48 hours (OR 0.47, 95% CI 0.30–0.75); an illustrative calculation of such figures appears at the end of this article. Antibiotics were thought to delay delivery, but no studies have shown any evidence that using antibiotics during preterm labor effectively delays delivery or reduces neonatal morbidity. Antibiotics are used in people with premature rupture of membranes, but this is not characterized as tocolysis.

Contraindications to tocolytics

In addition to drug-specific contraindications, several general factors may contraindicate delaying childbirth with the use of tocolytic medications:

The fetus is older than 34 weeks gestation
The fetus weighs less than 2.5 kg, or has intrauterine growth restriction (IUGR) or placental insufficiency
Lethal congenital or chromosomal abnormalities
Cervical dilation greater than 4 centimeters
Chorioamnionitis or intrauterine infection is present
The pregnant woman has severe pregnancy-induced hypertension, severe eclampsia/preeclampsia, active vaginal bleeding, placental abruption, a cardiac disease, or another condition which indicates that the pregnancy should not continue
Maternal hemodynamic instability with bleeding
Intrauterine fetal demise, lethal fetal anomaly, or non-reassuring fetal status

Future direction of tocolytics

Most tocolytics are currently used off-label. Future development of tocolytic agents should be directed toward better efficacy in intentionally prolonging pregnancy, potentially resulting in fewer maternal, fetal, and neonatal adverse effects when delaying preterm childbirth. A few tocolytic alternatives worth pursuing include barusiban, a last-generation oxytocin receptor antagonist, as well as COX-2 inhibitors. Further studies on the use of multiple tocolytics should address overall health outcomes rather than solely pregnancy prolongation.

See also

Labor induction
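As a note on the efficacy statistics quoted above: an odds ratio and its confidence interval are derived from a 2×2 table of outcomes. The sketch below is purely illustrative, with hypothetical counts (not data from any study cited here), and assumes the standard normal approximation on the log odds ratio.

import math

# Hypothetical 2x2 table (NOT data from any study cited above):
# rows = tocolytic vs. placebo; columns = delivered within 48 h or not.
a, b = 30, 70   # tocolytic group: delivered, not delivered
c, d = 50, 50   # placebo group:   delivered, not delivered

odds_ratio = (a * d) / (b * c)                 # (30*50)/(70*50) = 0.43
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# -> OR = 0.43, 95% CI 0.24 to 0.77

With these hypothetical counts, an OR below 1 whose confidence interval excludes 1 indicates significantly reduced odds of early delivery, which is the pattern reported for the tocolytics discussed above.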
Tourette syndrome
Tourette syndrome or Tourette's syndrome (abbreviated as TS or Tourette's) is a common neurodevelopmental disorder that begins in childhood or adolescence. It is characterized by multiple movement (motor) tics and at least one vocal (phonic) tic. Common tics are blinking, coughing, throat clearing, sniffing, and facial movements. These are typically preceded by an unwanted urge or sensation in the affected muscles, known as a premonitory urge, can sometimes be suppressed temporarily, and characteristically change in location, strength, and frequency. Tourette's is at the more severe end of a spectrum of tic disorders. The tics often go unnoticed by casual observers. Tourette's was once regarded as a rare and bizarre syndrome and has popularly been associated with coprolalia (the utterance of obscene words or socially inappropriate and derogatory remarks). It is no longer considered rare; about 1% of school-age children and adolescents are estimated to have Tourette's, and coprolalia occurs only in a minority. There are no specific tests for diagnosing Tourette's; it is not always correctly identified, because most cases are mild, and the severity of tics decreases for most children as they pass through adolescence. Therefore, many go undiagnosed or may never seek medical attention. Extreme Tourette's in adulthood, though sensationalized in the media, is rare, but for a small minority, severely debilitating tics can persist into adulthood. Tourette's does not affect intelligence or life expectancy. There is no cure for Tourette's and no single most effective medication. In most cases, medication for tics is not necessary, and behavioral therapies are the first-line treatment. Education is an important part of any treatment plan, and explanation alone often provides sufficient reassurance that no other treatment is necessary. Other conditions, such as attention deficit hyperactivity disorder (ADHD) and obsessive–compulsive disorder (OCD), are more likely to be present among those who are referred to specialty clinics than they are among the broader population of persons with Tourette's. These co-occurring conditions often cause more impairment to the individual than the tics; hence it is important to correctly distinguish co-occurring conditions and treat them. Tourette syndrome was named by French neurologist Jean-Martin Charcot for his intern, Georges Gilles de la Tourette, who published in 1885 an account of nine patients with a "convulsive tic disorder". While the exact cause is unknown, it is believed to involve a combination of genetic and environmental factors. The mechanism appears to involve dysfunction in neural circuits between the basal ganglia and related structures in the brain.

Classification

Most published research on Tourette syndrome originates in the United States; in international TS research and clinical practice, the Diagnostic and Statistical Manual of Mental Disorders (DSM) is preferred over the World Health Organization (WHO) classification, which is criticized in the 2021 European Clinical Guidelines. In the fifth version of the DSM (DSM-5), published in 2013, Tourette syndrome is classified as a motor disorder (a disorder of the nervous system that causes abnormal and involuntary movements). It is listed in the neurodevelopmental disorder category. Tourette's is at the more severe end of the spectrum of tic disorders; its diagnosis requires multiple motor tics and at least one vocal tic to be present for more than a year.
Tics are sudden, repetitive, nonrhythmic movements that involve discrete muscle groups, while vocal (phonic) tics involve laryngeal, pharyngeal, oral, nasal or respiratory muscles to produce sounds. The tics must not be explained by other medical conditions or substance use. Other conditions on the spectrum include persistent (chronic) motor or vocal tic disorder, in which one type of tic (motor or vocal, but not both) has been present for more than a year, and provisional tic disorder, in which motor or vocal tics have been present for less than one year. The fifth edition of the DSM replaced what had been called transient tic disorder with provisional tic disorder, recognizing that "transient" can only be defined in retrospect. Some experts believe that TS and persistent (chronic) motor or vocal tic disorder should be considered the same condition, because vocal tics are also motor tics in the sense that they are muscular contractions of nasal or respiratory muscles. Tourette syndrome is defined only slightly differently by the WHO; in its ICD-11, the International Statistical Classification of Diseases and Related Health Problems, Tourette syndrome is classified as a disease of the nervous system and a neurodevelopmental disorder, and only one motor tic is required for diagnosis. Older versions of the ICD called it "combined vocal and multiple motor tic disorder [de la Tourette]". Genetic studies indicate that tic disorders cover a spectrum that is not recognized by the clear-cut distinctions in the current diagnostic framework. Since 2008, studies have suggested that Tourette's is not a unitary condition with a distinct mechanism, as described in the existing classification systems. Instead, the studies suggest that subtypes should be recognized to distinguish "pure TS" from TS that is accompanied by attention deficit hyperactivity disorder (ADHD), obsessive–compulsive disorder (OCD) or other disorders, similar to the way that subtypes have been established for other conditions, such as type 1 and type 2 diabetes. Elucidation of these subtypes awaits fuller understanding of the genetic and other causes of tic disorders.

Characteristics

Tics

Tics are movements or sounds that take place "intermittently and unpredictably out of a background of normal motor activity", having the appearance of "normal behaviors gone wrong". The tics associated with Tourette's wax and wane; they change in number, frequency, severity, anatomical location, and complexity; each person experiences a unique pattern of fluctuation in their severity and frequency. Tics may also occur in "bouts of bouts", which also vary among people. The variation in tic severity may occur over hours, days, or weeks. Tics may increase when someone is experiencing stress, fatigue, anxiety, or illness, or even when engaged in relaxing activities like watching TV. They sometimes decrease when an individual is engrossed in or focused on an activity like playing a musical instrument. In contrast to the abnormal movements associated with other movement disorders, the tics of Tourette's are nonrhythmic, often preceded by an unwanted urge, and temporarily suppressible. Over time, about 90% of individuals with Tourette's feel an urge preceding the tic, similar to the urge to sneeze or scratch an itch. The urges and sensations that precede the expression of a tic are referred to as premonitory sensory phenomena or premonitory urges.
People describe the urge to express the tic as a buildup of tension, pressure, or energy which they ultimately choose consciously to release, as if they "had to do it" to relieve the sensation or until it feels "just right". The urge may cause a distressing sensation in the part of the body associated with the resulting tic; the tic is a response that relieves the urge in the anatomical location of the tic. Examples of this urge are the feeling of having something in one's throat, leading to a tic to clear one's throat, or a localized discomfort in the shoulders, leading to shrugging the shoulders. The actual tic may be felt as relieving this tension or sensation, similar to scratching an itch or blinking to relieve an uncomfortable feeling in the eye. Some people with Tourette's may not be aware of the premonitory urge associated with tics. Children may be less aware of it than adults, but their awareness tends to increase with maturity; by the age of ten, most children recognize the premonitory urge. Premonitory urges which precede the tic make suppression of the impending tic possible. Because of the urges that precede them, tics are described as semi-voluntary or "unvoluntary", rather than specifically involuntary; they may be experienced as a voluntary, suppressible response to the unwanted premonitory urge. The ability to suppress tics varies among individuals, and may be more developed in adults than children. People with tics are sometimes able to suppress them for limited periods of time, but doing so often results in tension or mental exhaustion. People with Tourette's may seek a secluded spot to release the suppressed urge, or there may be a marked increase in tics after a period of suppression at school or work. Children may suppress tics while in the doctor's office, so they may need to be observed when they are not aware of being watched. Complex tics related to speech include coprolalia, echolalia and palilalia. Coprolalia is the spontaneous utterance of socially objectionable or taboo words or phrases. Although it is the most publicized symptom of Tourette's, only about 10% of people with Tourette's exhibit it, and it is not required for a diagnosis. Echolalia (repeating the words of others) and palilalia (repeating one's own words) occur in a minority of cases. Complex motor tics include copropraxia (obscene or forbidden gestures, or inappropriate touching), echopraxia (repetition or imitation of another person's actions) and palipraxia (repeating one's own movements).

Onset and progression

There is no typical case of Tourette syndrome, but the age of onset and the severity of symptoms follow a fairly reliable course. Although onset may occur anytime before eighteen years, the typical age of onset of tics is from five to seven, and is usually before adolescence. A 1998 study from the Yale Child Study Center showed that tic severity increased with age until it reached its highest point between ages eight and twelve. Severity declines steadily for most children as they pass through adolescence, when half to two-thirds of children see a dramatic decrease in tics. In people with TS, the first tics to appear usually affect the head, face, and shoulders, and include blinking, facial movements, sniffing and throat clearing. Vocal tics often appear months or years after motor tics, but can appear first. Among people who experience more severe tics, complex tics may develop, including "arm straightening, touching, tapping, jumping, hopping and twirling".
Different movements occur in contrasting disorders (for example, the autism spectrum disorders), such as self-stimulation and stereotypies. The severity of symptoms varies widely among people with Tourette's, and many cases may go undetected. Most cases are mild and almost unnoticeable; many people with TS may not realize they have tics. Because tics are more commonly expressed in private, Tourette syndrome may go unrecognized, and casual observers might not notice tics. Most studies of TS involve males, who have a higher prevalence of TS than females, and gender-based differences are not well studied; a 2021 review suggested that the characteristics and progression for females, particularly in adulthood, may differ, and that better studies are needed. Most adults with TS have mild symptoms and do not seek medical attention. While tics subside for the majority after adolescence, some of the "most severe and debilitating forms of tic disorder are encountered" in adults. In some cases, what appear to be adult-onset tics can be childhood tics re-surfacing.

Co-occurring conditions

Because people with milder symptoms are unlikely to be referred to specialty clinics, studies of Tourette's have an inherent bias towards more severe cases. When symptoms are severe enough to warrant referral to clinics, ADHD and OCD are often also found. In specialty clinics, 30% of those with TS also have mood or anxiety disorders or disruptive behaviors. In the absence of ADHD, tic disorders do not appear to be associated with disruptive behavior or functional impairment, while impairment in school, family, or peer relations is greater in those who have more comorbid conditions. When ADHD is present along with tics, the occurrence of conduct disorder and oppositional defiant disorder increases. Aggressive behaviors and angry outbursts in people with TS are not well understood; they are not associated with severe tics, but are connected with the presence of ADHD. ADHD may also contribute to higher rates of anxiety, and aggression and anger control problems are more likely when both OCD and ADHD co-occur with Tourette's. Compulsions that resemble tics are present in some individuals with OCD; "tic-related OCD" is hypothesized to be a subgroup of OCD, distinguished from non-tic-related OCD by the type and nature of obsessions and compulsions. Compared to the more typical compulsions of OCD without tics, which relate to contamination, tic-related OCD presents with more "counting, aggressive thoughts, symmetry and touching" compulsions. Compulsions associated with OCD without tics are usually related to obsessions and anxiety, while those in tic-related OCD are more likely to be a response to a premonitory urge. There are increased rates of anxiety and depression in those adults with TS who also have OCD. Among individuals with TS studied in clinics, between 2.9% and 20% had autism spectrum disorders, but one study indicates that a high association of autism and TS may be partly due to difficulties distinguishing between tics and the tic-like behaviors or OCD symptoms seen in people with autism. Not all people with Tourette's have ADHD or OCD or other comorbid conditions, and estimates of the rate of pure TS or TS-only vary from 15% to 57%; in clinical populations, a high percentage of those under care do have ADHD. Children and adolescents with pure TS are not significantly different from their peers without TS on ratings of aggressive behaviors or conduct disorders, or on measures of social adaptation.
Similarly, adults with pure TS do not appear to have the social difficulties present in those with TS plus ADHD. Among those with an older age of onset, more substance abuse and mood disorders are found, and there may be self-injurious tics. Adults who have severe, often treatment-resistant tics are more likely to also have mood disorders and OCD. Coprolalia is more likely in people with severe tics plus multiple comorbid conditions.

Neuropsychological function

There are no major impairments in neuropsychological function among people with Tourette's, but conditions that occur along with tics can cause variation in neurocognitive function. A better understanding of comorbid conditions is needed to untangle any neuropsychological differences between TS-only individuals and those with comorbid conditions. Only slight impairments are found in intellectual ability, attentional ability, and nonverbal memory, but ADHD, other comorbid disorders, or tic severity could account for these differences. In contrast with earlier findings, visual motor integration and visuoconstructive skills are not found to be impaired, while comorbid conditions may have a small effect on motor skills. Comorbid conditions and severity of tics may account for variable results in verbal fluency, which can be slightly impaired. There might be slight impairment in social cognition, but not in the ability to plan or make decisions. Children with TS-only do not show cognitive deficits. They are faster than average for their age on timed tests of motor coordination, and constant tic suppression may lead to an advantage in switching between tasks because of increased inhibitory control. Learning disabilities may be present, but whether they are due to tics or comorbid conditions is controversial; older studies that reported higher rates of learning disability did not control well for the presence of comorbid conditions. There are often difficulties with handwriting, and disabilities in written expression and math are reported in those with TS plus other conditions.

Causes

The exact cause of Tourette's is unknown, but it is well established that both genetic and environmental factors are involved. Genetic epidemiology studies have shown that Tourette's is highly heritable, and 10 to 100 times more likely to be found among close family members than in the general population. The exact mode of inheritance is not known; no single gene has been identified, and hundreds of genes are likely involved. Genome-wide association studies were published in 2013 and 2015 in which no finding reached a threshold for significance; a 2019 meta-analysis found only a single genome-wide significant locus on chromosome 13, but that result was not found in broader samples. Twin studies show that 50 to 77% of identical twins share a TS diagnosis, while only 10 to 23% of fraternal twins do. But not everyone who inherits the genetic vulnerability will show symptoms. A few rare, highly penetrant genetic mutations have been found that explain only a small number of cases in single families (the SLITRK1, HDC, and CNTNAP2 genes). Psychosocial or other non-genetic factors, while not causing Tourette's, can affect the severity of TS in vulnerable individuals and influence the expression of the inherited genes. Pre-natal and peri-natal events increase the risk that a tic disorder or comorbid OCD will be expressed in those with the genetic vulnerability.
These include paternal age; forceps delivery; stress or severe nausea during pregnancy; and use of tobacco, caffeine, alcohol, and cannabis during pregnancy. Babies who are born premature with low birthweight, or who have low Apgar scores, are also at increased risk; in premature twins, the lower-birthweight twin is more likely to develop TS. Autoimmune processes may affect the onset of tics or exacerbate them. Both OCD and tic disorders are hypothesized to arise in a subset of children as a result of a post-streptococcal autoimmune process. The potential effect is described by the controversial hypothesis called PANDAS (pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections), which proposes five criteria for diagnosis in children. PANDAS and the newer pediatric acute-onset neuropsychiatric syndrome (PANS) hypotheses are the focus of clinical and laboratory research, but remain unproven. There is also a broader hypothesis that links immune-system abnormalities and immune dysregulation with TS. Some forms of OCD may be genetically linked to Tourette's, although the genetic factors in OCD with and without tics may differ. The genetic relationship of ADHD to Tourette syndrome, however, has not been fully established. A genetic link between autism and Tourette's had not been established as of 2017.

Mechanism

The exact mechanism affecting the inherited vulnerability to Tourette's is not well established. Tics are believed to result from dysfunction in cortical and subcortical brain regions: the thalamus, basal ganglia and frontal cortex. Neuroanatomic models suggest failures in circuits connecting the brain's cortex and subcortex; imaging techniques implicate the frontal cortex and basal ganglia. In the 2010s, neuroimaging and postmortem brain studies, as well as animal and genetic studies, made progress towards better understanding the neurobiological mechanisms leading to Tourette's. These studies support the basal ganglia model, in which neurons in the striatum are activated and inhibit outputs from the basal ganglia. Cortico-striato-thalamo-cortical (CSTC) circuits, or neural pathways, provide inputs to the basal ganglia from the cortex. These circuits connect the basal ganglia with other areas of the brain to transfer information that regulates planning and control of movements, behavior, decision-making, and learning. Behavior is regulated by cross-connections that "allow the integration of information" from these circuits. Involuntary movements may result from impairments in these CSTC circuits, including the sensorimotor, limbic, language and decision-making pathways. Abnormalities in these circuits may be responsible for tics and premonitory urges. The caudate nuclei may be smaller in subjects with tics compared to those without tics, supporting the hypothesis of pathology in CSTC circuits in Tourette's. The ability to suppress tics depends on brain circuits that "regulate response inhibition and cognitive control of motor behavior". Children with TS are found to have a larger prefrontal cortex, which may be the result of an adaptation to help regulate tics. It is likely that tics decrease with age as the capacity of the frontal cortex increases. Cortico-basal ganglia (CBG) circuits may also be impaired, contributing to "sensory, limbic and executive" features.
The release of dopamine in the basal ganglia is higher in people with Tourette's, implicating biochemical changes from "overactive and dysregulated dopaminergic transmissions". Histamine and the H3 receptor may play a role in the alterations of neural circuitry. A reduced level of histamine signalling at the H3 receptor may result in an increase in other neurotransmitters, causing tics. Postmortem studies have also implicated "dysregulation of neuroinflammatory processes".

Diagnosis

According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), Tourette's may be diagnosed when a person exhibits both multiple motor tics and one or more vocal tics over a period of one year. The motor and vocal tics need not be concurrent. The onset must have occurred before the age of 18 and cannot be attributed to the effects of another condition or substance (such as cocaine). Hence, other medical conditions that include tics or tic-like movements (for example, autism or other causes of tics) must be ruled out. Patients referred for a tic disorder are assessed based on their family history of tics, vulnerability to ADHD, obsessive–compulsive symptoms, and a number of other chronic medical, psychiatric and neurological conditions. In individuals with a typical onset and a family history of tics or OCD, a basic physical and neurological examination may be sufficient. There are no specific medical or screening tests that can be used to diagnose Tourette's; the diagnosis is usually made based on observation of the individual's symptoms and family history, and after ruling out secondary causes of tic disorders (tourettism). Delayed diagnosis often occurs because professionals mistakenly believe that TS is rare, always involves coprolalia, or must be severely impairing. The DSM has recognized since 2000 that many individuals with Tourette's do not have significant impairment; diagnosis does not require the presence of coprolalia or a comorbid condition, such as ADHD or OCD. Tourette's may be misdiagnosed because of the wide range of severity, from mild (the majority of cases) or moderate, to severe (the rare but more widely recognized and publicized cases). About 20% of people with Tourette syndrome do not realize that they have tics. Tics that appear early in the course of TS are often confused with allergies, asthma, vision problems, and other conditions. Pediatricians, allergists and ophthalmologists are among the first to see or identify a child as having tics, although the majority of tics are first identified by the child's parents. Coughing, blinking, and tics that mimic unrelated conditions such as asthma are commonly misdiagnosed. In the UK, there is an average delay of three years between symptom onset and diagnosis.

Differential diagnosis

Tics that may appear to mimic those of Tourette's, but are associated with disorders other than Tourette's, are known as tourettism and are ruled out in the differential diagnosis for Tourette syndrome. The abnormal movements associated with choreas, dystonias, myoclonus, and dyskinesias are distinct from the tics of Tourette's in that they are more rhythmic, not suppressible, and not preceded by an unwanted urge. Developmental and autism spectrum disorders may manifest tics, other stereotyped movements, and stereotypic movement disorder.
The stereotyped movements associated with autism typically have an earlier age of onset; are more symmetrical, rhythmical and bilateral; and involve the extremities (for example, flapping the hands). If another condition might better explain the tics, tests may be done; for example, if there is diagnostic confusion between tics and seizure activity, an EEG may be ordered. An MRI can rule out brain abnormalities, but such brain imaging studies are not usually warranted. Measuring thyroid-stimulating hormone blood levels can rule out hypothyroidism, which can be a cause of tics. If there is a family history of liver disease, serum copper and ceruloplasmin levels can rule out Wilsons disease. The typical age of onset of TS is before adolescence. In teenagers and adults with an abrupt onset of tics and other behavioral symptoms, a urine drug screen for stimulants might be requested. Increasing episodes of tic-like behavior among teenagers (predominantly adolescent girls) were reported in several countries during the COVID-19 pandemic. Researchers linked their occurrence to followers of certain TikTok or YouTube artists. Described in 2006 as psychogenic, abrupt-onset movements resembling tics are referred to as a functional movement disorder or functional tic-like movements. Functional tic-like movements can be difficult to distinguish from tics that have an organic (rather than psychological) cause. They may occur alone or co-exist in individuals with tic disorders. These tics are inconsistent with the classic tics of TS in several ways: the premonitory urge (present in 90% of those with tic disorders) is absent in functional tic-like movements; the suppressibility seen in tic disorders is lacking; there is no family or childhood history of tics and there is a female predominance in functional tics, with a later-than-typical age of first presentation; onset is more abrupt than typical with movements that are more suggestible; and there is less co-occurring OCD or ADHD and more co-occurring disorders. Functional tics are "not fully stereotypical", do not respond to medications, do not demonstrate the classic waxing and waning pattern of Tourettic tics, and do not progress in the typical fashion, in which tics often first appear in the face and gradually move to limbs. Other conditions that may manifest tics include Sydenhams chorea; idiopathic dystonia; and genetic conditions such as Huntingtons disease, neuroacanthocytosis, pantothenate kinase-associated neurodegeneration, Duchenne muscular dystrophy, Wilsons disease, and tuberous sclerosis. Other possibilities include chromosomal disorders such as Down syndrome, Klinefelter syndrome, XYY syndrome and fragile X syndrome. Acquired causes of tics include drug-induced tics, head trauma, encephalitis, stroke, and carbon monoxide poisoning. The extreme self-injurious behaviors of Lesch-Nyhan syndrome may be confused with Tourette syndrome or stereotypies, but self-injury is rare in TS even in cases of violent tics. Most of these conditions are rarer than tic disorders and a thorough history and examination may be enough to rule them out without medical or screening tests. Screening for other conditions Although not all those with Tourettes have comorbid conditions, most presenting for clinical care exhibit symptoms of other conditions along with their tics. ADHD and OCD are the most common, but autism spectrum disorders or anxiety, mood, personality, oppositional defiant, and conduct disorders may also be present. 
Learning disabilities and sleep disorders may be present; higher rates of sleep disturbance and migraine than in the general population are reported. A thorough evaluation for comorbidity is called for when symptoms and impairment warrant, and careful assessment of people with TS includes comprehensive screening for these conditions.Comorbid conditions such as OCD and ADHD can be more impairing than tics, and cause greater impact on overall functioning. Disruptive behaviors, impaired functioning, or cognitive impairment in individuals with comorbid Tourettes and ADHD may be accounted for by the ADHD, highlighting the importance of identifying comorbid conditions. Children and adolescents with TS who have learning difficulties are candidates for psychoeducational testing, particularly if the child also has ADHD. Management There is no cure for Tourettes. There is no single most effective medication, and no one medication effectively treats all symptoms. Most medications prescribed for tics have not been approved for that use, and no medication is without the risk of significant adverse effects. Treatment is focused on identifying the most troubling or impairing symptoms and helping the individual manage them. Because comorbid conditions are often a larger source of impairment than tics, they are a priority in treatment. The management of Tourettes is individualized and involves shared decision-making between the clinician, patient, family and caregivers. Practice guidelines for the treatment of tics were published by the American Academy of Neurology in 2019.Education, reassurance and psychobehavioral therapy are often sufficient for the majority of cases. In particular, psychoeducation targeting the patient and their family and surrounding community is a key management strategy. Watchful waiting "is an acceptable approach" for those who are not functionally impaired. Symptom management may include behavioral, psychological and pharmacological therapies. Pharmacological intervention is reserved for more severe symptoms, while psychotherapy or cognitive behavioral therapy (CBT) may ameliorate depression and social isolation, and improve family support. The decision to use behavioral or pharmacological treatment is "usually made after
the educational and supportive interventions have been in place for a period of months, and it is clear that the tic symptoms are persistently severe and are themselves a source of impairment in terms of self-esteem, relationships with the family or peers, or school performance". Psychoeducation and social support Knowledge, education and understanding are uppermost in management plans for tic disorders, and psychoeducation is the first step. A childs parents are typically the first to notice their tics; they may feel worried, imagine that they are somehow responsible, or feel burdened by misinformation about Tourettes. Effectively educating parents about the diagnosis and providing social support can ease their anxiety. This support can also lower the chance that their child will be unnecessarily medicated or experience an exacerbation of tics due to their parents emotional state.People with Tourettes may suffer socially if their tics are viewed as "bizarre". If a child has disabling tics, or tics that interfere with social or academic functioning, supportive psychotherapy or school accommodations can be helpful. Even children with milder tics may be angry, depressed or have low self-esteem as a result of increased teasing, bullying, rejection by peers or social stigmatization, and this can lead to social withdrawal. Some children feel empowered by presenting a peer awareness program to their classmates. It can be helpful to educate teachers and school staff about typical tics, how they fluctuate during the day, how they impact the child, and how to distinguish tics from naughty behavior. By learning to identify tics, adults can refrain from asking or expecting a child to stop ticcing, because "tic suppression can be exhausting, unpleasant, and attention-demanding and can result in a subsequent rebound bout of tics".Adults with TS may withdraw socially to avoid stigmatization and discrimination because of their tics. Depending on their countrys healthcare system, they may receive social services or help from support groups. Behavioral Behavioral therapies using habit reversal training (HRT) and exposure and response prevention (ERP) are first-line interventions in the management of Tourette syndrome, and have been shown to be effective. Because tics are somewhat suppressible, when people with TS are aware of the premonitory urge that precedes a tic, they can be trained to develop a response to the urge that competes with the tic. Comprehensive behavioral intervention for tics (CBIT) is based on HRT, the best researched behavioral therapy for tics. TS experts debate whether increasing a childs awareness of tics with HRT/CBIT (as opposed to ignoring tics) can lead to more tics later in life.When disruptive behaviors related to comorbid conditions exist, anger control training and parent management training can be effective. CBT is a useful treatment when OCD is present. Relaxation techniques, such as exercise, yoga and meditation may be useful in relieving the stress that can aggravate tics. Beyond HRT, the majority of behavioral interventions for Tourettes (for example, relaxation training and biofeedback) have not been systematically evaluated and are not empirically supported. Medication Children with tics typically present when their tics are most severe, but because the condition waxes and wanes, medication is not started immediately or changed often. Tics may subside with education, reassurance and a supportive environment. 
When medication is used, the goal is not to eliminate symptoms. Instead, the lowest dose that manages symptoms without adverse effects is used, because adverse effects may be more disturbing than the symptoms being treated with medication. The classes of medication with proven efficacy in treating tics—typical and atypical neuroleptics—can have long-term and short-term adverse effects. Some antihypertensive agents are also used to treat tics; studies show variable efficacy but a lower side-effect profile than the neuroleptics. The antihypertensives clonidine and guanfacine are typically tried first in children; they can also help with ADHD symptoms, but there is less evidence that they are effective for adults. The neuroleptics risperidone and aripiprazole are tried when antihypertensives are not effective, and are generally tried first for adults. Because of its lower risk of side effects, aripiprazole is preferred over other antipsychotics. The most effective medication for tics is haloperidol, but it has a higher risk of side effects. Methylphenidate can be used to treat ADHD that co-occurs with tics, and can be used in combination with clonidine. Selective serotonin reuptake inhibitors are used to manage anxiety and OCD. Other Complementary and alternative medicine approaches, such as dietary modification, neurofeedback, and allergy testing and control, have popular appeal, but they have no proven benefit in the management of Tourette syndrome. Despite this lack of evidence, up to two-thirds of parents, caregivers and individuals with TS use dietary approaches and alternative treatments and do not always inform their physicians. There is low confidence that tics are reduced with tetrahydrocannabinol, and insufficient evidence for other cannabis-based medications in the treatment of Tourettes. There is no good evidence supporting the use of acupuncture or transcranial magnetic stimulation; neither is there evidence supporting intravenous immunoglobulin, plasma exchange, or antibiotics for the treatment of PANDAS. Deep brain stimulation (DBS) has become a valid option for individuals with severe symptoms that do not respond to conventional therapy and management, although it is an experimental treatment. Selecting candidates who may benefit from DBS is challenging, and the appropriate lower age range for surgery is unclear; it is potentially useful in less than 3% of individuals. The ideal brain location to target has not been identified as of 2019. Pregnancy A quarter of women report that their tics increase before menstruation; however, studies have not shown consistent evidence of a change in frequency or severity of tics related to pregnancy or hormonal levels. Overall, symptoms in women respond better to haloperidol than they do in men. Most women find they can withdraw from medication during pregnancy without much trouble. When needed, medications are used at the lowest doses possible. During pregnancy, neuroleptic medications are avoided when possible because of the risk of pregnancy complications. When needed, olanzapine, risperidone and quetiapine are most often used as they have not been shown to cause fetal abnormalities. One report found that haloperidol could be used during pregnancy to minimize maternal side effects such as low blood pressure and anticholinergic effects, although it may cross the placenta. If severe tics might interfere with administration of local anesthesia, other anesthesia options are considered. 
Neuroleptics in low doses may not affect the breastfed infant, but most medications are avoided. Clonidine and amphetamines may be present in breast milk. Prognosis Tourette syndrome is a spectrum disorder—its severity ranges from mild to severe. Symptoms typically subside as children pass through adolescence. In a group of ten children at the average age of highest tic severity (around ten or eleven), almost four will see complete remission by adulthood. Another four will have minimal or mild tics in adulthood, but not complete remission. The remaining two will have moderate or severe tics as adults, but only rarely will their symptoms in adulthood be more severe than in childhood.Regardless of symptom severity, individuals with Tourettes have a normal life span. Symptoms may be lifelong and chronic for some, but the condition is not degenerative or life-threatening. Intelligence among those with pure TS follows a normal curve, although there may be small differences in intelligence in those with comorbid conditions. The severity of tics early in life does not predict their severity in later life. There is no reliable means of predicting the course of symptoms for a particular individual, but the prognosis is generally favorable. By the age of fourteen to sixteen, when the highest tic severity has typically passed, a more reliable prognosis might be made.Tics may be at their highest severity when they are diagnosed, and often improve as an individuals family and friends come to better understand the condition. Studies report that almost eight out of ten children with Tourettes experience a reduction in the severity of their tics by adulthood, and some adults who still have tics may not be aware that they have them. A study that used video to record tics in adults found that nine out of ten adults still had tics, and half of the adults who considered themselves tic-free displayed evidence of mild tics. Quality of life People with Tourettes are affected by the consequences of tics and by the efforts to suppress them. Head and eye tics can interfere with reading or lead to headaches, and forceful tics can lead to repetitive strain injury. Severe tics can lead to pain or injuries; as an example, a rare cervical disc herniation was reported from a neck tic. Some people may learn to camouflage socially inappropriate tics or channel the energy of their tics into a functional endeavor.A supportive family and environment generally give those with Tourettes the skills to manage the disorder. Outcomes in adulthood are associated more with the perceived significance of having tics as a child than with the actual severity of the tics. A person who was misunderstood, punished or teased at home or at school is likely to fare worse than a child who enjoyed an understanding environment. The long-lasting effects of bullying and teasing can influence self-esteem, self-confidence, and even employment choices and opportunities. Comorbid ADHD can severely affect the childs well-being in all realms, and extend into adulthood.Factors impacting quality of life change over time, given the natural fluctuating course of tic disorders, the development of coping strategies, and a persons age. As ADHD symptoms improve with maturity, adults report less negative impact in their occupational lives than do children in their educational lives. Tics have a greater impact on adults psychosocial function, including financial burdens, than they do on children. 
Adults are more likely to report a reduced quality of life due to depression or anxiety; depression contributes a greater burden than tics to adults quality of life compared to children. As coping strategies become more effective with age, the impact of OCD symptoms seems to diminish. Epidemiology Tourette syndrome is a common but underdiagnosed condition that reaches across all social, racial and ethnic groups. It is three to four times more frequent in males than in females. Observed prevalence rates are higher among children than adults because tics tend to remit or subside with maturity and a diagnosis may no longer be warranted for many adults. Up to 1% of the overall population experiences tic disorders, including chronic tics and transient (provisional or unspecified) tics in childhood. Chronic tics affect 5% of children and transient tics affect up to 20%. Many individuals with tics do not know they have tics, or do not seek a diagnosis, so epidemiological studies of TS "reflect a strong ascertainment bias" towards those with co-occurring conditions. The reported prevalence of TS varies "according to the source, age, and sex of the sample; the ascertainment procedures; and diagnostic system", with a range reported between 0.15% and 3.0% for children and adolescents. Sukhodolsky et al. wrote in 2017 that the best estimate of TS prevalence in children was 1.4%. Both Robertson and Stern state that the prevalence in children is 1%. The prevalence of TS in the general population is estimated as 0.3% to 1.0%. According to turn-of-the-century census data, these prevalence estimates translated to half a million children in the US with TS and half a million people in the UK with TS, although symptoms in many older individuals would be almost unrecognizable. Tourette syndrome was once thought to be rare: in 1972, the US National Institutes of Health (NIH) believed there were fewer than 100 cases in the United States, and a 1973 registry reported only 485 cases worldwide. However, numerous studies published since 2000 have consistently demonstrated that the prevalence is much higher. Recognizing that tics may often be undiagnosed and hard to detect, newer studies use direct classroom observation and multiple informants (parents, teachers and trained observers), and therefore record more cases than older studies. As the diagnostic threshold and assessment methodology have moved towards recognition of milder cases, the estimated prevalence has increased. Because of the high male prevalence of TS, there is limited data on females from which conclusions about gender-based differences can be drawn; caution may be warranted in extending conclusions to females regarding the characteristics and treatment of tics based on studies of mostly males. A 2021 review stated that females may see a later peak than males in symptoms, with less remission over time, along with a higher prevalence of anxiety and mood disorders. History A French doctor, Jean Marc Gaspard Itard, reported the first case of Tourette syndrome in 1825, describing the Marquise de Dampierre, an important woman of nobility in her time. In 1884, Jean-Martin Charcot, an influential French physician, assigned his student and intern Georges Gilles de la Tourette to study patients with movement disorders at the Salpêtrière Hospital, with the goal of defining a condition distinct from hysteria and chorea. 
In 1885, Gilles de la Tourette published an account in Study of a Nervous Affliction of nine people with "convulsive tic disorder", concluding that a new clinical category should be defined. The eponym was bestowed by Charcot after and on behalf of Gilles de la Tourette, who later became Charcots senior resident.Following the 19th-century descriptions, a psychogenic view prevailed and little progress was made in explaining or treating tics until well into the 20th century. The possibility that movement disorders, including Tourette syndrome, might have an organic origin was raised when an encephalitis lethargica epidemic from 1918 to 1926 was linked to an increase in tic disorders.During the 1960s and 1970s, as the beneficial effects of haloperidol on tics became known, the psychoanalytic approach to Tourette syndrome was questioned. The turning point came in 1965, when Arthur K. Shapiro—described as "the father of modern tic disorder research"—used haloperidol to treat a person with Tourettes, and published a paper criticizing the psychoanalytic approach. In 1975, The New York Times headlined an article with "Bizarre outbursts of Tourettes disease victims linked to chemical disorder in brain", and Shapiro said: "The bizarre symptoms of this illness are rivaled only by the bizarre treatments used to treat it."During the 1990s, a more neutral view of Tourettes emerged, in which a genetic predisposition is seen to interact with non-genetic and environmental factors. The fourth revision of the DSM (DSM-IV) in 1994 added a diagnostic requirement for "marked distress or significant impairment in social, occupational, or other important areas of functioning", which led to an outcry from TS experts and researchers, who noted that many people were not even aware they had TS, nor were they distressed by their tics; clinicians and researchers resorted to using the older criteria in research and practice. In 2000, the American Psychiatric Association revised its diagnostic criteria in the fourth text revision of the DSM (DSM-IV-TR) to remove the impairment requirement, recognizing that clinicians often see people who have Tourettes without distress or impairment. Society and culture Not everyone with Tourettes wants treatment or a cure, especially if that means they may lose something else in the process. The researchers Leckman and Cohen believe that there may be latent advantages associated with an individuals genetic vulnerability to developing Tourette syndrome that may have adaptive value, such as heightened awareness and increased attention to detail and surroundings.Accomplished musicians, athletes, public speakers and professionals from all walks of life are found among people with Tourettes. The athlete Tim Howard, described by the Chicago Tribune as the "rarest of creatures—an American soccer hero", and by the Tourette Syndrome Association as the "most notable individual with Tourette Syndrome around the world", says that his neurological makeup gave him an enhanced perception and an acute focus that contributed to his success on the field.Samuel Johnson is a historical figure who likely had Tourette syndrome, as evidenced by the writings of his friend James Boswell. Johnson wrote A Dictionary of the English Language in 1747, and was a prolific writer, poet, and critic. 
There is little support for speculation that Mozart had Tourettes: the potentially coprolalic aspect of vocal tics is not transferred to writing, so Mozarts scatological writings are not relevant; the composers available medical history is not thorough; the side effects of other conditions may be misinterpreted; and "the evidence of motor tics in Mozarts life is doubtful". Likely portrayals of TS or tic disorders in fiction predating Gilles de la Tourettes work are "Mr. Pancks" in Charles Dickenss Little Dorrit and "Nikolai Levin" in Leo Tolstoys Anna Karenina. The entertainment industry has been criticized for depicting those with Tourette syndrome as social misfits whose only tic is coprolalia, which has furthered the publics misunderstanding and stigmatization of those with Tourettes. The coprolalic symptoms of Tourettes are also fodder for radio and television talk shows in the US and for the British media. High-profile media coverage focuses on treatments that do not have established safety or efficacy, such as deep brain stimulation, and alternative therapies with unstudied efficacy and side effects are pursued by many parents. Research directions Research since 1999 has advanced knowledge of Tourettes in the areas of genetics, neuroimaging, neurophysiology, and neuropathology, but questions remain about how best to classify it and how closely it is related to other movement or psychiatric disorders. Modeled after genetic breakthroughs seen with large-scale efforts in other neurodevelopmental disorders, three groups are collaborating in research of the genetics of Tourettes: The Tourette Syndrome Association International Consortium for Genetics (TSAICG) Tourette International Collaborative Genetics Study (TIC Genetics) European Multicentre Tics in Children Studies (EMTICS) Compared to the progress made in gene discovery in certain neurodevelopmental or mental health disorders—autism, schizophrenia and bipolar disorder—the scale of related TS research is lagging in the United States due to limited funding. 
Toxoplasmosis
Toxoplasmosis is a parasitic disease caused by Toxoplasma gondii, an apicomplexan. Infections with toxoplasmosis are associated with a variety of neuropsychiatric and behavioral conditions. Occasionally, people may have a few weeks or months of mild, flu-like illness such as muscle aches and tender lymph nodes. In a small number of people, eye problems may develop. In those with a weak immune system, severe symptoms such as seizures and poor coordination may occur. If a person becomes infected during pregnancy, a condition known as congenital toxoplasmosis may affect the child. Toxoplasmosis is usually spread by eating poorly cooked food that contains cysts, by exposure to infected cat feces, and from an infected woman to their baby during pregnancy. Rarely, the disease may be spread by blood transfusion. It is not otherwise spread between people. The parasite is known to reproduce sexually only in the cat family. However, it can infect most types of warm-blooded animals, including humans. Diagnosis is typically by testing blood for antibodies or by testing the amniotic fluid in pregnant people for the parasites DNA. Prevention is by properly preparing and cooking food. Pregnant people are also recommended not to clean cat litter boxes or, if they must, to wear gloves and wash their hands afterwards. Treatment of otherwise healthy people is usually not needed. During pregnancy, spiramycin or pyrimethamine/sulfadiazine and folinic acid may be used for treatment. Up to half of the worlds population is infected by toxoplasmosis but has no symptoms. In the United States, approximately 11% of people have been infected, while in some areas of the world this is more than 60%. Approximately 200,000 cases of congenital toxoplasmosis occur each year. Charles Nicolle and Louis Manceaux first described the organism in 1908. In 1941, transmission during pregnancy from a pregnant parent to their baby was confirmed. There is tentative evidence that infection may affect peoples behavior. Signs and symptoms Infection has three stages: Acute Acute toxoplasmosis is often asymptomatic in healthy adults. However, symptoms may manifest and are often influenza-like: swollen lymph nodes, headaches, fever, and fatigue, or muscle aches and pains that last for a month or more. It is rare for a human with a fully functioning immune system to develop severe symptoms following infection. People with weakened immune systems are likely to experience headache, confusion, poor coordination, seizures, lung problems that may resemble tuberculosis or Pneumocystis jiroveci pneumonia (a common opportunistic infection that occurs in people with AIDS), or chorioretinitis caused by severe inflammation of the retina (ocular toxoplasmosis). Young children and immunocompromised people, such as those with HIV/AIDS, those taking certain types of chemotherapy, or those who have recently received an organ transplant, may develop severe toxoplasmosis. This can cause damage to the brain (encephalitis) or the eyes (necrotizing retinochoroiditis). Infants infected via placental transmission may be born with either of these problems, or with nasal malformations, although these complications are rare in newborns. The toxoplasmic trophozoites causing acute toxoplasmosis are referred to as tachyzoites, and are typically found in various tissues and body fluids, but rarely in blood or cerebrospinal fluid. Swollen lymph nodes are commonly found in the neck or under the chin, followed by the armpits and the groin. 
Swelling may occur at different times after the initial infection, persist, and recur for various times independently of antiparasitic treatment. It is usually found at single sites in adults, but in children, multiple sites may be more common. Enlarged lymph nodes will resolve within 1–2 months in 60% of cases. However, a quarter of those affected take 2–4 months to return to normal, and 8% take 4–6 months. A substantial number (6%) do not return to normal until much later. Latent Due to the absence of obvious symptoms, hosts easily become infected with T. gondii and develop toxoplasmosis without knowing it. Although mild, flu-like symptoms occasionally occur during the first few weeks following exposure, infection with T. gondii produces no readily observable symptoms in healthy human adults. In most immunocompetent people, the infection enters a latent phase, during which only bradyzoites (in tissue cysts) are present; these tissue cysts and even lesions can occur in the retinas, alveolar lining of the lungs (where an acute infection may mimic a Pneumocystis jirovecii infection), heart, skeletal muscle, and the central nervous system (CNS), including the brain. Cysts form in the CNS (brain tissue) upon infection with T. gondii and persist for the lifetime of the host. Most infants who are infected while in the womb have no symptoms at birth, but may develop symptoms later in life.Reviews of serological studies have estimated that 30–50% of the global population has been exposed to and may be chronically infected with latent toxoplasmosis, although infection rates differ significantly from country to country. This latent state of infection has recently been associated with numerous disease burdens, neural alterations, and subtle gender-dependent behavioral changes in immunocompetent humans, as well as an increased risk of motor vehicle collisions. Skin While rare, skin lesions may occur in the acquired form of the disease, including roseola and erythema multiforme-like eruptions, prurigo-like nodules, urticaria, and maculopapular lesions. Newborns may have punctate macules, ecchymoses, or "blueberry muffin" lesions. Diagnosis of cutaneous toxoplasmosis is based on the tachyzoite form of T. gondii being found in the epidermis. It is found in all levels of the epidermis, is about 6 by 2 μm and bow-shaped, with the nucleus being one-third of its size. It can be identified by electron microscopy or by Giemsa staining tissue where the cytoplasm shows blue, the nucleus red. Cause Parasitology In its lifecycle, T. gondii adopts several forms. Tachyzoites are responsible for acute infection; they divide rapidly and spread through the tissues of the body. Tachyzoites are also known as "tachyzoic merozoites", a descriptive term that conveys more precisely the parasitological nature of this stage. After proliferating, tachyzoites convert into bradyzoites, which are inside latent intracellular tissue cysts that form mainly in the muscles and brain. The formation of cysts is in part triggered by the pressure of the host immune system. The bradyzoites (also called "bradyzoic merozoites") are not responsive to antibiotics. Bradyzoites, once formed, can remain in the tissues for the lifespan of the host. In a healthy host, if some bradyzoites convert back into active tachyzoites, the immune system will quickly destroy them. 
However, in immunocompromised individuals, or in fetuses, which lack a developed immune system, the tachyzoites can run rampant and cause significant neurological damage. The parasites survival is dependent on a balance between host survival and parasite proliferation. T. gondii achieves this balance by manipulating the hosts immune response, reducing the hosts immune response, and enhancing the parasites reproductive advantage. Once it infects a normal host cell, it resists damage caused by the hosts immune system, and changes the hosts immune processes. As it forces its way into the host cell, the parasite forms a parasitophorous vacuole (PV) membrane from the membrane of the host cell. The PV encapsulates the parasite, is resistant to the activity of the endolysosomal system, and can take control of the hosts mitochondria and endoplasmic reticulum. When first invading the cell, the parasite releases ROP proteins from the bulb of the rhoptry organelle. These proteins translocate to the nucleus and the surface of the PV membrane where they can activate STAT pathways to modulate the expression of cytokines at the transcriptional level, and bind and inactivate PV-membrane-destroying IRG proteins, among other possible effects. Additionally, certain strains of T. gondii can secrete a protein known as GRA15, activating the NF-κB pathway, which upregulates the pro-inflammatory cytokine IL-12 in the early immune response, possibly leading to the parasites latent phase. The parasites ability to secrete these proteins depends on its genotype and affects its virulence. The parasite also influences an anti-apoptotic mechanism, allowing the infected host cells to persist and replicate. One method of apoptosis resistance is to disrupt pro-apoptosis effector proteins, such as BAX and BAK. To disrupt these proteins, T. gondii causes conformational changes to the proteins, which prevent the proteins from being transported to various cellular compartments where they initiate apoptosis events. T. gondii does not, however, cause downregulation of the pro-apoptosis effector proteins. T. gondii also has the ability to initiate autophagy of the hosts cells. This leads to a decrease in healthy, uninfected cells, and consequently fewer host cells to attack the infected cells. Research by Wang et al. found that infection leads to higher levels of autophagosomes in both normal and infected cells. Their research reveals that T. gondii causes host cell autophagy using a calcium-dependent pathway. Another study suggests that the parasite can directly affect calcium being released from calcium stores, which are important for the signalling processes of cells. The mechanisms above allow T. gondii to persist in a host. One limiting factor for the parasite is that its influence on the host cells is stronger in a weak immune system and is quantity-dependent, so a large number of T. gondii per host cell causes a more severe effect. The effect on the host also depends on the strength of the host immune system. Immunocompetent individuals do not normally show severe symptoms, or any at all, while severe complications, or even death, can occur in immunocompromised individuals. Since the parasite can change the hosts immune response, it may also have an effect, positive or negative, on the immune response to other pathogenic threats. This includes, but is not limited to, the responses to infections by Helicobacter felis, Leishmania major, or other parasites, such as Nippostrongylus brasiliensis. 
Transmission Toxoplasmosis is generally transmitted through the mouth when Toxoplasma gondii oocysts or tissue cysts are accidentally eaten. Congenital transmission from mother to fetus can also occur. Transmission may also occur during the solid organ transplant process or hematopoietic stem cell transplants. Oral transmission may occur through: Ingestion of raw or partly cooked meat, especially pork, lamb, or venison containing Toxoplasma cysts: Infection prevalence in countries where undercooked meat is traditionally eaten has been related to this transmission method. Tissue cysts may also be ingested during hand-to-mouth contact after handling undercooked meat, or from using knives, utensils, or cutting boards contaminated by raw meat. Ingestion of unwashed fruit or vegetables that have been in contact with contaminated soil containing infected cat feces. Ingestion of cat feces containing oocysts: This can occur through hand-to-mouth contact following gardening, cleaning a cats litter box, contact with childrens sandpits; the parasite can survive in the environment for months. Ingestion of untreated, unfiltered water through direct consumption or utilization of water for food preparation. Ingestion of unpasteurized milk and milk products, particularly goats milk. Ingestion of raw seafood. Cats excrete the pathogen in their feces for a number of weeks after contracting the disease, generally by eating an infected intermediate host that could include mammals (like rodents) or birds. Oocyst shedding usually starts from the third day after ingestion of infected intermediate hosts, and may continue for weeks. The oocysts are not infective when excreted. After about a day, the oocyst undergoes a process called sporulation and becomes potentially pathogenic. In addition to cats, birds and mammals including human beings are also intermediate hosts of the parasite and are involved in the transmission process. However, the pathogenicity varies with the age and species involved in infection and the mode of transmission of T. gondii. Toxoplasmosis may also be transmitted through solid organ transplants. Toxoplasma-seronegative recipients who receive organs from recently infected Toxoplasma-seropositive donors are at risk. Organ recipients who have latent toxoplasmosis are at risk of the disease reactivating in their system due to the immunosuppression occurring during solid organ transplant. Recipients of hematopoietic stem cell transplants may experience higher risk of infection due to longer periods of immunosuppression. Heart and lung transplants provide the highest risk for toxoplasmosis infection due to the striated muscle making up the heart, which can contain cysts, and risks for other organs and tissues vary widely. Risk of transmission can be reduced by screening donors and recipients prior to the transplant procedure and providing treatment. Pregnancy precautions Congenital toxoplasmosis is a specific form of toxoplasmosis in which an unborn fetus is infected via the placenta. Congenital toxoplasmosis is associated with fetal death and miscarriage, and in infants, it is associated with hydrocephalus, cerebral calcifications and chorioretinitis, leading to encephalopathy and possibly blindness. A positive antibody titer indicates previous exposure and immunity, and largely ensures the unborn fetus safety. A simple blood draw at the first prenatal doctor visit can determine whether or not a woman has had previous exposure and therefore whether or not she is at risk. 
If a woman receives her first exposure to T. gondii while pregnant, the fetus is at particular risk. Little evidence exists on the effect of education before pregnancy to prevent congenital toxoplasmosis. However, educating parents before the baby is born has been suggested to be effective because it may improve food, personal and pet hygiene. More research is needed to determine whether antenatal education can reduce congenital toxoplasmosis. For pregnant women with negative antibody titers, indicating no previous exposure to T. gondii, serology testing as frequently as monthly is advisable, as treatment during pregnancy for those women exposed to T. gondii for the first time dramatically decreases the risk of passing the parasite to the fetus. Since a babys immune system does not develop fully for the first year of life, and the resilient cysts that form throughout the body are very difficult to eradicate with antiprotozoans, an infection can be very serious in the young. Despite these risks, pregnant women are not routinely screened for toxoplasmosis in most countries, for reasons of cost-effectiveness and the high number of false positives generated; Portugal, France, Austria, Uruguay, and Italy are notable exceptions, and some regional screening programmes operate in Germany, Switzerland and Belgium. As invasive prenatal testing incurs some risk to the fetus (18.5 pregnancy losses per toxoplasmosis case prevented), postnatal or neonatal screening is preferred. The exceptions are cases where fetal abnormalities are noted, and thus screening can be targeted. Pregnant women should avoid handling raw meat and drinking raw milk (especially goat milk), and are advised not to eat raw or undercooked meat regardless of type. Because of the obvious relationship between Toxoplasma and cats, it is also often advised to avoid exposure to cat feces, and refrain from gardening (cat feces are common in garden soil) or at least wear gloves when so engaged. Most cats are not actively shedding oocysts, since they get infected in the first six months of their life, when they shed oocysts for a short period of time (1–2 weeks). However, these oocysts get buried in the soil, sporulate and remain infectious for periods ranging from several months to more than a year. Numerous studies have shown living in a household with a cat is not a significant risk factor for T. gondii infection, though living with several kittens has some significance. In 2006, a Czech research team discovered that women with high levels of toxoplasmosis antibodies were significantly more likely to give birth to baby boys than baby girls. In most populations, the birth rate is around 51% boys, but people infected with T. gondii had up to a 72% chance of a boy. Diagnosis Diagnosis of toxoplasmosis in humans is made by biological, serological, histological, or molecular methods, or by some combination of the above. Toxoplasmosis can be difficult to distinguish from primary central nervous system lymphoma. It mimics several other infectious diseases, so clinical signs are non-specific and are not sufficiently characteristic for a definite diagnosis. As a result, the possibility of an alternate diagnosis is supported by a failed trial of antimicrobial therapy (pyrimethamine, sulfadiazine, and folinic acid (USAN: leucovorin)), i.e., if the drugs produce no effect clinically and no improvement on repeat imaging. T. gondii may also be detected in blood, amniotic fluid, or cerebrospinal fluid by using polymerase chain reaction. T. 
gondii may exist in a host as an inactive cyst that would likely evade detection. Serological testing can detect T. gondii antibodies in blood serum, using methods including the Sabin–Feldman dye test (DT), the indirect hemagglutination assay, the indirect fluorescent antibody assay (IFA), the direct agglutination test, the latex agglutination test (LAT), the enzyme-linked immunosorbent assay (ELISA), and the immunosorbent agglutination assay test (IAAT). The most commonly used tests to measure IgG antibody are the DT, the ELISA, the IFA, and the modified direct agglutination test. IgG antibodies usually appear within a week or two of infection, peak within one to two months, then decline at various rates. Toxoplasma IgG antibodies generally persist for life, and therefore may be present in the bloodstream as a result of either current or previous infection. To some extent, acute toxoplasmosis infections can be differentiated from chronic infections using an IgG avidity test, which is a variation on the ELISA. In the first response to infection, toxoplasma-specific IgG has a low affinity for the toxoplasma antigen; in the following weeks and months, IgG affinity for the antigen increases. Based on the IgG avidity test, if the IgG in the infected individual has a high affinity, it means that the infection began at least three to five months before testing. This is particularly useful in congenital infection, where pregnancy status and gestational age at time of infection determine treatment. In contrast to IgG, IgM antibodies can be used to detect acute infection but generally not chronic infection. The IgM antibodies appear sooner after infection than the IgG antibodies and disappear faster than IgG antibodies after recovery. In most cases, T. gondii-specific IgM antibodies can first be detected approximately a week after acquiring primary infection and decrease within one to six months; 25% of those infected are negative for T. gondii-specific IgM within seven months. However, IgM may be detectable months or years after infection, during the chronic phase, and false positives for acute infection are possible. The most commonly used tests for the measurement of IgM antibody are double-sandwich IgM-ELISA, the IFA test, and the immunosorbent agglutination assay (IgM-ISAGA). Commercial test kits often have low specificity, and the reported results are frequently misinterpreted. In 2021, twenty commercial anti-Toxoplasma IgG assays were evaluated in a systematic review, in comparison with an accepted reference method. Most of them were enzyme-immunoassays, followed by agglutination tests, immunochromatographic tests, and a Western blot assay. The mean sensitivity of IgG assays ranged from 89.7% to 100% for standard titers and from 13.4% to 99.2% for low IgG titers. A few studies pointed out the ability of some methods, especially WB, to detect IgG early after primary infection. The specificity of IgG assays was generally high, ranging from 91.3% to 100%, and was higher than 99% for most EIA assays. The positive predictive value (PPV) was not a discriminant indicator among methods, whereas significant disparities (87.5%–100%) were reported among negative predictive values (NPV), a key parameter assessing the ability to definitively rule out a Toxoplasma infection in patients at risk for opportunistic infections. 
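These predictive values follow directly from an assays sensitivity and specificity together with the seroprevalence in the tested population, which is why disparities in sensitivity (especially at low IgG titers) show up in NPV rather than PPV. As a minimal illustrative sketch in Python (the sensitivity, specificity and seroprevalence figures below are hypothetical values chosen from within the ranges reported in the 2021 review, not the characteristics of any particular assay):

    # Illustrative only: how sensitivity, specificity and prevalence combine
    # into predictive values for a binary serological test.
    def predictive_values(sensitivity, specificity, prevalence):
        """Return (PPV, NPV) for a test applied at a given seroprevalence."""
        tp = sensitivity * prevalence              # true positives
        fp = (1 - specificity) * (1 - prevalence)  # false positives
        fn = (1 - sensitivity) * prevalence        # false negatives
        tn = specificity * (1 - prevalence)        # true negatives
        return tp / (tp + fp), tn / (tn + fn)

    # Two hypothetical assays with identical 99% specificity, one of which
    # misses low-titer IgG, both applied at 30% seroprevalence.
    for sensitivity in (0.99, 0.70):
        ppv, npv = predictive_values(sensitivity, 0.99, 0.30)
        print(f"sensitivity {sensitivity:.0%}: PPV {ppv:.3f}, NPV {npv:.3f}")
    # sensitivity 99%: PPV 0.977, NPV 0.996
    # sensitivity 70%: PPV 0.968, NPV 0.885

Because the reviewed assays differed far more in sensitivity than in specificity, PPV barely separates them in this sketch, while NPV drops sharply for the less sensitive assay; this is consistent with NPV, rather than PPV, being the discriminating parameter when an assay is used to rule out infection in at-risk patients.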
Congenital Recommendations for the diagnosis of congenital toxoplasmosis include: prenatal diagnosis based on testing of amniotic fluid and ultrasound examinations; neonatal diagnosis based on molecular testing of placenta and cord blood and comparative mother-child serologic tests and a clinical examination at birth; and early childhood diagnosis based on neurologic and ophthalmologic examinations and a serologic survey during the first year of life. During pregnancy, serological testing is recommended at three-week intervals. Even though diagnosis of toxoplasmosis heavily relies on serological detection of specific anti-Toxoplasma immunoglobulin, serological testing has limitations. For example, it may fail to detect the active phase of T. gondii infection because the specific anti-Toxoplasma IgG or IgM may not be produced until after several weeks of infection. As a result, a pregnant woman might test negative during the active phase of T. gondii infection, leading to undetected and therefore untreated congenital toxoplasmosis. Also, the test may not detect T. gondii infections in immunocompromised patients because the titers of specific anti-Toxoplasma IgG or IgM may not rise in this type of patient. Many PCR-based techniques have been developed to diagnose toxoplasmosis using clinical specimens that include amniotic fluid, blood, cerebrospinal fluid, and tissue biopsy. The most sensitive PCR-based technique is nested PCR, followed by hybridization of PCR products. The major downside to these techniques is that they are time-consuming and do not provide quantitative data. Real-time PCR is useful in pathogen detection, gene expression and regulation, and allelic discrimination. This PCR technique utilizes the 5′ nuclease activity of Taq DNA polymerase to cleave a nonextendible, fluorescence-labeled hybridization probe during the extension phase of PCR. A second fluorescent dye, e.g., 6-carboxy-tetramethyl-rhodamine, quenches the fluorescence of the intact probe. The nuclease cleavage of the hybridization probe during the PCR releases the quenching effect, resulting in an increase of fluorescence proportional to the amount of PCR product, which can be monitored by a sequence detector. Toxoplasmosis cannot be detected with immunostaining. Lymph nodes affected by Toxoplasma have characteristic changes, including poorly demarcated reactive germinal centers, clusters of monocytoid B cells, and scattered epithelioid histiocytes. The classic triad of congenital toxoplasmosis includes: chorioretinitis, hydrocephalus, and intracranial calcifications. Other consequences include sensorineural deafness, seizures, and intellectual disability. Congenital toxoplasmosis may also impact a childs hearing. Up to 30% of newborns with congenital toxoplasmosis have some degree of sensorineural hearing loss. The childs communication skills may also be affected. A study published in 2010 looked at 106 patients, all of whom received toxoplasmosis treatment prior to 2.5 months of age. Of this group, 26.4% presented with language disorders. Treatment Treatment is recommended for people with serious health problems, such as people with HIV whose CD4 counts are under 200 cells/mm3. Trimethoprim/sulfamethoxazole is the drug of choice to prevent toxoplasmosis, but not for treating active disease. A 2012 study shows a promising new way to treat the active and latent form of this disease using two endochin-like quinolones. 
Acute The medications prescribed for acute toxoplasmosis are the following: Pyrimethamine – an antimalarial medication Sulfadiazine – an antibiotic used in combination with pyrimethamine to treat toxoplasmosis Combination therapy is usually given with folinic acid supplements to reduce incidence of thrombocytopaenia. Combination therapy is most useful in the setting of HIV. Clindamycin Spiramycin – an antibiotic used most often for pregnant women to prevent the infection of their children. (Other antibiotics, such as minocycline, have seen some use as a salvage therapy.) If infected during pregnancy, spiramycin is recommended in the first and early second trimesters, while pyrimethamine/sulfadiazine and leucovorin are recommended in the late second and third trimesters. Latent In people with latent toxoplasmosis, the cysts are immune to these treatments, as the antibiotics do not reach the bradyzoites in sufficient concentration. The medications prescribed for latent toxoplasmosis are: Atovaquone – an antibiotic that has been used to kill Toxoplasma cysts inside AIDS patients Clindamycin – an antibiotic that, in combination with atovaquone, seemed to optimally kill cysts in mice Congenital When a pregnant woman is diagnosed with acute toxoplasmosis, amniocentesis can be used to determine whether the fetus has been infected or not. When a pregnant woman develops acute toxoplasmosis, the tachyzoites have approximately a 30% chance of entering the placental tissue, and from there entering and infecting the fetus. As gestational age at the time of infection increases, the chance of fetal infection also increases. If the parasite has not yet reached the fetus, spiramycin can help to prevent placental transmission. If the fetus has been infected, the pregnant woman can be treated with pyrimethamine and sulfadiazine, with folinic acid, after the first trimester. They are treated after the first trimester because pyrimethamine has an antifolate effect, and lack of folic acid can interfere with fetal brain formation and cause thrombocytopaenia. Infection in earlier gestational stages correlates with poorer fetal and neonatal outcomes, particularly when the infection is untreated. Newborns who undergo 12 months of postnatal anti-toxoplasmosis treatment have a low chance of sensorineural hearing loss. Information regarding treatment milestones for children with congenital toxoplasmosis has been created for this group. Epidemiology T. gondii infections occur throughout the world, although infection rates differ significantly by country. For women of childbearing age, a survey of 99 studies within 44 countries found the areas of highest prevalence are within Latin America (about 50–80%), parts of Eastern and Central Europe (about 20–60%), the Middle East (about 30–50%), parts of Southeast Asia (about 20–60%), and parts of Africa (about 20–55%). In the United States, data from the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2004 found 9.0% of US-born persons 12–49 years of age were seropositive for IgG antibodies against T. gondii, down from 14.1% as measured in the NHANES 1988–1994. In the 1999–2004 survey, 7.7% of US-born and 28.1% of foreign-born women 15–44 years of age were T. gondii seropositive. A trend of decreasing seroprevalence has been observed by numerous studies in the United States and many European countries. Toxoplasma gondii is considered the second leading cause of foodborne-related deaths and the fourth leading cause of foodborne-related hospitalizations in the United States. The protist responsible for toxoplasmosis is T. gondii. There are three major types of T. gondii responsible for the patterns of toxoplasmosis throughout the world: types I, II, and III. These three types of T. gondii have differing effects on certain hosts, mainly mice and humans, due to their variation in genotypes. Type I: virulent in mice and humans, seen in people with AIDS. Type II: non-virulent in mice, virulent in humans (mostly Europe and North America), seen in people with AIDS. Type III: non-virulent in mice, virulent mainly in animals but seen to a lesser degree in humans as well. Current serotyping techniques can only separate type I or III from type II parasites. Because the parasite poses a particular threat to fetuses when it is contracted during pregnancy, much of the global epidemiological data regarding T. gondii comes from seropositivity tests in women of childbearing age. Seropositivity tests look for the presence of antibodies against T. gondii in blood, so while seropositivity guarantees one has been exposed to the parasite, it does not necessarily guarantee one is chronically infected. History Toxoplasma gondii was first described in 1908 by Nicolle and Manceaux in Tunisia, and independently by Splendore in Brazil. Splendore reported the protozoan in a rabbit, while Nicolle and Manceaux identified it in a North African rodent, the gundi (Ctenodactylus gundi). In 1909, Nicolle and Manceaux differentiated the protozoan from Leishmania. Nicolle and Manceaux then named it Toxoplasma gondii after the curved shape of its infectious stage (Greek root toxon = bow). The first recorded case of congenital toxoplasmosis was in 1923, but it was not identified as caused by T. gondii. Janků (1923) described in detail the autopsy results of an 11-month-old boy who had presented to hospital with hydrocephalus. The boy had classic marks of toxoplasmosis including chorioretinitis (inflammation of the choroid and retina of the eye). Histology revealed a number of "sporocytes", though Janků did not identify these as T. gondii. It was not until 1937 that the first detailed scientific analysis of T. gondii took place using techniques previously developed for analyzing viruses. In 1937, Sabin and Olitsky analyzed T. gondii in laboratory monkeys and mice. Sabin and Olitsky showed that T. gondii was an obligate intracellular parasite and that mice fed T. gondii-contaminated tissue also contracted the infection. Thus Sabin and Olitsky demonstrated T. gondii as a pathogen transmissible between animals. T. gondii was first described as a human pathogen in 1939 at Babies Hospital in New York City. Wolf, Cowen and Paige identified T. gondii infection in an infant girl delivered full-term by Caesarean section. The infant developed seizures and had chorioretinitis in both eyes at three days. The infant then developed encephalomyelitis and died at one month of age. Wolf, Cowen and Paige isolated T. gondii from brain tissue lesions. Intracranial injection of brain and spinal cord samples into mice, rabbits and rats produced encephalitis in the animals. Wolf, Cowen and Paige reviewed additional cases and concluded that T. gondii produced recognizable symptoms and could be transmitted from mother to child. The first adult case of toxoplasmosis was reported in 1940 with no neurological signs. 
Pinkerton and Weinman reported the presence of Toxoplasma in a 22-year-old man from Peru who died from a subsequent bacterial infection and fever. In 1948, a serological dye test was created by Sabin and Feldman based on the ability of the patients antibodies to alter staining of Toxoplasma. The Sabin–Feldman dye test is now the gold standard for identifying Toxoplasma infection. Transmission of Toxoplasma by eating raw or undercooked meat was demonstrated by Desmonts et al. in Paris in 1965. Desmonts observed that the therapeutic consumption of raw beef or horse meat in a tuberculosis hospital was associated with a 50% per year increase in Toxoplasma antibodies. This indicated that T. gondii was being transmitted through the raw meat. In 1974, Desmonts and Couvreur showed that infection during the first two trimesters produces most harm to the fetus, that transmission depended on when mothers were infected during pregnancy, that mothers with antibodies before pregnancy did not transmit the infection to the fetus, and that spiramycin lowered the transmission to the fetus. Toxoplasma gained more attention in the 1970s with the rise of immunosuppressant treatment given after organ or bone marrow transplants, and with the AIDS epidemic of the 1980s. Patients with lowered immune system function are much more susceptible to disease. Society and culture "Crazy cat-lady" "Crazy cat-lady syndrome" is a term coined by news organizations to describe scientific findings that link the parasite Toxoplasma gondii to several mental disorders and behavioral problems. The suspected correlation between cat ownership in childhood and later development of schizophrenia suggested that further studies were needed to determine a risk factor for children; however, later studies showed that T. gondii was not a causative factor in later psychoses. Researchers also found that cat ownership does not strongly increase the risk of a T. gondii infection in pregnant women. The term crazy cat-lady syndrome draws on both stereotype and popular cultural reference. It originated as instances of the aforementioned afflictions were noted amongst the populace. A cat lady is a cultural stereotype of a woman who compulsively hoards and dotes upon cats. The biologist Jaroslav Flegr is a proponent of the theory that toxoplasmosis affects human behaviour. Notable cases Tennis player Arthur Ashe developed neurological problems from toxoplasmosis (and was later found to be HIV-positive). Actor Merritt Butrick was HIV-positive and died from toxoplasmosis as a result of his already-weakened immune system. Pedro Zamora, reality television personality and HIV/AIDS activist, was diagnosed with toxoplasmosis as a result of his immune system being weakened by HIV. Prince François, Count of Clermont, pretender to the throne of France, had congenital toxoplasmosis; his disability caused him to be overlooked in the line of succession. Actress Leslie Ash contracted toxoplasmosis in the second month of pregnancy. British middle-distance runner Sebastian Coe contracted toxoplasmosis in 1983, which was probably transmitted by a cat while he trained in Italy. Tennis player Martina Navratilova experienced toxoplasmosis during the 1982 US Open. Other animals Although T. gondii has the capability of infecting virtually all warm-blooded animals, susceptibility and rates of infection vary widely between different genera and species. Rates of infection in populations of the same species can also vary widely due to differences in location, diet, and other factors. 
Although infection with T. gondii has been noted in several species of Asian primates, seroprevalence of T. gondii antibodies was found for the first time in toque macaques (Macaca sinica), which are endemic to the island of Sri Lanka. Australian marsupials are particularly susceptible to toxoplasmosis. Wallabies, koalas, wombats, pademelons and small dasyurids can be killed by it, with eastern barred bandicoots typically dying within about three weeks of infection. It is estimated that 23% of wild swine worldwide are seropositive for T. gondii. Seroprevalence varies across the globe, with the highest seroprevalence in North America (32%) and Europe (26%) and the lowest in Asia (13%) and South America (5%). Geographical regions located at higher latitudes and regions that experience warmer, humid climates are associated with increased seroprevalence of T. gondii among wild boar. Wild boar infected with T. gondii pose a potential health risk for humans who consume their meat. Livestock Among livestock, pigs, sheep and goats have the highest rates of chronic T. gondii infection. The prevalence of T. gondii in meat-producing animals varies widely both within and among countries, and rates of infection have been shown to be dramatically influenced by varying farming and management practices. For instance, animals kept outdoors or in free-ranging environments are more at risk of infection than animals raised indoors or in commercial confinement operations. Pigs Worldwide, the percentage of pigs harboring viable parasites has been measured to be 3–71.43%, and in the United States (via bioassay in mice or cats) to be as high as 92.7% and as low as 0%, depending on the farm or herd. Surveys of seroprevalence (T. gondii antibodies in blood) are more common, and such measurements are indicative of the high relative seroprevalence in pigs across the world. Neonatal piglets can experience the entire range of disease severity, including progression to stillbirth. This was demonstrated in a foundational report by Thiptara et al. (2006) of a litter of three stillborn and six live piglets in Thailand, an observation relevant not only to that country but to toxoplasmosis control in porciculture around the world. Sheep Along with pigs, sheep and goats are among the most commonly infected livestock of epidemiological significance for human infection. Prevalence of viable T. gondii in sheep tissue has been measured (via bioassay) to be as high as 78% in the United States, and a 2011 survey of goats intended for consumption in the United States found a seroprevalence of 53.4%. Chickens Due to a lack of exposure to the outdoors, chickens raised in large-scale indoor confinement operations are not commonly infected with T. gondii. Free-ranging or backyard-raised chickens are much more commonly infected. A survey of free-ranging chickens in the United States found T. gondii prevalence to be 17–100%, depending on the farm. Because chicken meat is generally cooked thoroughly before consumption, poultry is not generally considered to be a significant risk factor for human T. gondii infection. Cattle Although cattle and buffalo can be infected with T. gondii, the parasite is generally eliminated or reduced to undetectable levels within a few weeks following exposure. Tissue cysts are rarely present in buffalo meat or beef, and meat from these animals is considered to be low-risk for harboring viable parasites. Horses Horses are considered resistant to chronic T. gondii infection.
However, viable cells have been isolated from US horses slaughtered for export, and severe human toxoplasmosis in France has been epidemiologically linked to the consumption of horse meat. Domestic cats In 1942, the first case of feline toxoplasmosis was diagnosed and reported in a domestic cat in Middletown, New York. The investigators isolated oocysts from feline feces and found that the oocysts could remain infectious for up to 12 months in the environment. The seroprevalence of T. gondii in domestic cats worldwide has been estimated to be around 30–40%, with significant geographical variation. In the United States, no official national estimate has been made, but local surveys have shown levels varying between 16% and 80%. A 2012 survey of 445 purebred pet cats and 45 shelter cats in Finland found an overall seroprevalence of 48.4%, while a 2010 survey of feral cats from Giza, Egypt found a seroprevalence rate of 97.4%. Another survey from Colombia recorded a seroprevalence of 89.3%, whereas a Chinese (Guangdong) study found just a 2.1% prevalence. T. gondii infection rates in domestic cats vary widely depending on the cats' diets and lifestyles. Feral cats that hunt for their food are more likely to be infected than domestic cats, and infection rates naturally also depend on the prevalence of T. gondii-infected prey such as birds and small mammals. Most infected cats shed oocysts only once in their lifetimes, for a period of about one to two weeks. This shedding can release millions of oocysts, each capable of spreading and surviving for months. An estimated 1% of cats at any given time are actively shedding oocysts. Controlling oocyst shedding in cat populations is difficult because no effective vaccine exists, and the control programs that are available are of questionable efficacy. Rodents Infection with T. gondii has been shown to alter the behavior of mice and rats in ways thought to increase the rodents' chances of being preyed upon by cats. Infected rodents show a reduction in their innate aversion to cat odors; while uninfected mice and rats will generally avoid areas marked with cat urine or with cat body odor, this avoidance is reduced or eliminated in infected animals. Moreover, some evidence suggests this loss of aversion may be specific to feline odors: when given a choice between two predator odors (cat or mink), infected rodents show a significantly stronger preference for cat odors than do uninfected controls. In rodents, T. gondii–induced behavioral changes occur through epigenetic remodeling in neurons associated with the observed behaviors; for example, the parasite modifies epigenetic methylation to induce hypomethylation of arginine vasopressin-related genes in the medial amygdala, greatly decreasing predator aversion. Similar epigenetically induced behavioral changes have also been observed in mouse models of addiction, where changes in the expression of histone-modifying enzymes via gene knockout or enzyme inhibition in specific neurons produced alterations in drug-related behaviors. Widespread histone–lysine acetylation in cortical astrocytes appears to be another epigenetic mechanism employed by T. gondii. T. gondii-infected rodents show a number of behavioral changes beyond altered responses to cat odors. Rats infected with the parasite show increased levels of activity and decreased neophobic behavior. Similarly, infected mice show alterations in patterns of locomotion and exploratory behavior during experimental tests.
These patterns include traveling greater distances, moving at higher speeds, accelerating for longer periods of time, and showing a decreased pause-time when placed in new arenas. Infected rodents have also been shown to have lower anxiety, as assessed by traditional models such as elevated plus mazes, open field arenas, and social interaction tests. Marine mammals A University of California, Davis study of dead sea otters collected from 1998 to 2004 found toxoplasmosis was the cause of death for 13% of the animals. Proximity to freshwater outflows into the ocean was a major risk factor. Ingestion of oocysts from cat feces is considered to be the most likely ultimate source. Surface runoff containing wild cat feces and litter from domestic cats flushed down toilets are possible sources of oocysts. These same sources may have also introduced the toxoplasmosis infection to the endangered Hawaiian monk seal; infection with the parasite has contributed to the deaths of at least four Hawaiian monk seals. A Hawaiian monk seal's infection with T. gondii was first noted in 2004, and the parasite's spread threatens the recovery of this highly endangered pinniped. The parasite has also been found in dolphins and whales. Researchers Black and Massie believe anchovies, which travel from estuaries into the open ocean, may be helping to spread the disease. Giant panda Toxoplasma gondii has been reported as the cause of death of a giant panda kept in a zoo in China, which died in 2014 of acute gastroenteritis and respiratory disease. Although seemingly anecdotal, this report emphasizes that all warm-blooded species are likely to be susceptible to infection by T. gondii, including endangered species such as the giant panda. Research Chronic infection with T. gondii has traditionally been considered asymptomatic in people with normal immune function. Some evidence suggests latent infection may subtly influence a range of human behaviors and tendencies, and that infection may alter the susceptibility to or intensity of a number of psychiatric or neurological disorders. In most of the current studies where positive correlations have been found between T. gondii antibody titers and certain behavioral traits or neurological disorders, T. gondii seropositivity tests are conducted after the onset of the examined disease or behavioral trait; that is, it is often unclear whether infection with the parasite increases the chances of having a certain trait or disorder, or if having a certain trait or disorder increases the chances of becoming infected with the parasite. Groups of individuals with certain behavioral traits or neurological disorders may share certain behavioral tendencies that increase the likelihood of exposure to and infection with T. gondii; as a result, it is difficult to confirm causal relationships between T. gondii infections and associated neurological disorders or behavioral traits. Mental health Some evidence links T. gondii to schizophrenia. Two 2012 meta-analyses found that the rates of antibodies to T. gondii in people with schizophrenia were 2.7 times higher than in controls. T. gondii antibody positivity was therefore considered an intermediate risk factor in relation to other known risk factors. Cautions noted include that the antibody tests do not detect toxoplasmosis directly, most people with schizophrenia do not have antibodies for toxoplasmosis, and publication bias might exist. While the majority of these studies tested people already diagnosed with schizophrenia for T. gondii antibodies, associations between T.
gondii and schizophrenia have been found prior to the onset of schizophrenia symptoms. Sex differences in the age of schizophrenia onset may be explained in part by a second peak of T. gondii infection incidence during ages 25–30 in females only. Although a mechanism supporting the association between schizophrenia and T. gondii infection is unclear, studies have investigated a molecular basis of this correlation. Antipsychotic drugs used in schizophrenia appear to inhibit the replication of T. gondii tachyzoites in cell culture. Supposing a causal link exists between T. gondii and schizophrenia, studies have yet to determine why only some individuals with latent toxoplasmosis develop schizophrenia; some plausible explanations include differing genetic susceptibility, parasite strain differences, and differences in the route of the acquired T. gondii infection. Correlations have also been found between antibody titers to T. gondii and obsessive–compulsive disorder (OCD), as well as suicide among people with mood disorders including bipolar disorder. Positive antibody titers to T. gondii appear to be uncorrelated with major depression or dysthymia. Although there is a correlation between T. gondii and many psychological disorders, the underlying mechanism is unclear. A 2016 study of 236 persons with high levels of toxoplasmosis antibodies found that "there was little evidence that T. gondii was related to increased risk of psychiatric disorder, poor impulse control, personality aberrations or neurocognitive impairment". Neurological disorders Latent infection has been linked to Parkinson's disease and Alzheimer's disease. Individuals with multiple sclerosis show infection rates around 15% lower than those of the general public. Traffic accidents Latent T. gondii infection in humans has been associated with a higher risk of automobile accidents, potentially due to impaired psychomotor performance or enhanced risk-taking personality profiles. Climate change Climate change has been reported to affect the occurrence, survival, distribution and transmission of T. gondii. T. gondii has been identified in the Canadian Arctic, a location that was once too cold for its survival. Higher temperatures increase the survival time of T. gondii. More snowmelt and precipitation can increase the number of T. gondii oocysts that are transported via river flow. Shifts in bird, rodent, and insect populations and migration patterns can affect the distribution of T. gondii because of their roles as reservoirs and vectors. Urbanization and natural environmental degradation are also suggested to affect T. gondii transmission and increase the risk of infection. See also Toxoplasmic chorioretinitis TORCH infection Pyrimethamine References Parts of this article are taken from the public domain CDC factsheet: Toxoplasmosis
External links How a cat-borne parasite infects humans (National Geographic) Toxoplasmosis at Merck Manual of Diagnosis and Therapy Professional Edition Toxoplasmosis at Health Protection Agency (HPA), United Kingdom Pictures of Toxoplasmosis Medical Image Database Video interview with Professor Robert Sapolsky on Toxoplasmosis and its effect on human behavior (24:27 min) "Toxoplasmosis". MedlinePlus. U.S. National Library of Medicine.
Trachoma
Trachoma is an infectious disease caused by the bacterium Chlamydia trachomatis. The infection causes a roughening of the inner surface of the eyelids. This roughening can lead to pain in the eyes, breakdown of the outer surface or cornea of the eyes, and eventual blindness. Untreated, repeated trachoma infections can result in a form of permanent blindness when the eyelids turn inward. The bacteria that cause the disease can be spread by both direct and indirect contact with an affected person's eyes or nose. Indirect contact includes through clothing or flies that have come into contact with an affected person's eyes or nose. Children spread the disease more often than adults. Poor sanitation, crowded living conditions, and not enough clean water and toilets also increase spread. Efforts to prevent the disease include improving access to clean water and treatment with antibiotics to decrease the number of people infected with the bacterium. This may include treating, all at once, whole groups of people in whom the disease is known to be common. Washing, by itself, is not enough to prevent disease, but may be useful with other measures. Treatment options include oral azithromycin and topical tetracycline. Azithromycin is preferred because it can be used as a single oral dose. After scarring of the eyelid has occurred, surgery may be required to correct the position of the eyelashes and prevent blindness. Globally, about 80 million people have an active infection. In some areas, infections may be present in as many as 60–90% of children. Among adults, it more commonly affects women than men – likely due to their closer contact with children. The disease is the cause of decreased vision in 2.2 million people, of whom 1.2 million are completely blind. Trachoma is a public health problem in 44 countries across Africa, Asia, and Central and South America, with 136.9 million people at risk. It results in US$8 billion of economic losses a year. It belongs to a group of diseases known as neglected tropical diseases. Signs and symptoms The bacterium has an incubation period of 6 to 12 days, after which the affected individual experiences symptoms of conjunctivitis, or irritation similar to "pink eye". Blinding endemic trachoma results from multiple episodes of reinfection that maintain the intense inflammation in the conjunctiva. Without reinfection, the inflammation gradually subsides. The conjunctival inflammation is called "active trachoma" and usually is seen in children, especially preschool children. It is characterized by white lumps on the undersurface of the upper eyelid (conjunctival follicles or lymphoid germinal centres) and by nonspecific inflammation and thickening often associated with papillae. Follicles may also appear at the junction of the cornea and the sclera (limbal follicles). Active trachoma is often irritating and produces a watery discharge. Bacterial secondary infection may occur and cause a purulent discharge. The later structural changes of trachoma are referred to as "cicatricial trachoma". These include scarring in the eyelid (tarsal conjunctiva) that leads to distortion of the eyelid with buckling of the lid (tarsus) so the lashes rub on the eye (trichiasis). These lashes can lead to corneal opacities and scarring and then to blindness. Linear scars present in the sulcus subtarsalis are called Arlt's lines (named after Carl Ferdinand von Arlt). In addition, blood vessels and scar tissue can invade the upper cornea (pannus).
Resolved limbal follicles may leave small gaps in the pannus (Herbert's pits). Most commonly, children with active trachoma do not present with any symptoms, as the low-grade irritation and ocular discharge are simply accepted as normal, but further symptoms may include: Eye discharge Swollen eyelids Trichiasis (misdirected eyelashes) Swelling of lymph nodes in front of the ears Sensitivity to bright lights Increased heart rate Further ear, nose, and throat complications. The most important complication is corneal ulcer, which occurs due to rubbing by concretions or trichiasis with superimposed bacterial infection. Cause Trachoma is caused by Chlamydia trachomatis, serotypes (serovars) A, B, and C. It is spread by direct contact with eye, nose, and throat secretions from affected individuals, or contact with fomites (inanimate objects that carry infectious agents), such as towels or washcloths that have had similar contact with these secretions. Flies can also be a route of mechanical transmission. Untreated, repeated trachoma infections result in entropion (the inward turning of the eyelids), which may result in blindness due to damage to the cornea. Children are the most susceptible to infection due to their tendency to get dirty easily, but the blinding effects or more severe symptoms are often not felt until adulthood. Blinding endemic trachoma occurs in areas with poor personal and family hygiene. Many factors are indirectly linked to the presence of trachoma, including lack of water, absence of latrines or toilets, poverty in general, flies, close proximity to cattle, and crowding. The final common pathway, though, seems to be the presence of dirty faces in children, facilitating the frequent exchange of infected ocular discharge from one child's face to another. Most transmission of trachoma occurs within the family. Diagnosis McCallan's classification McCallan in 1908 divided the clinical course of trachoma into four stages. WHO classification The World Health Organization recommends a simplified grading system for trachoma. The Simplified WHO Grading System is summarized below: Trachomatous inflammation, follicular (TF)—Five or more follicles of >0.5 mm on the upper tarsal conjunctiva Trachomatous inflammation, intense (TI)—Papillary hypertrophy and inflammatory thickening of the upper tarsal conjunctiva obscuring more than half the deep tarsal vessels Trachomatous scarring (TS)—Presence of scarring in the tarsal conjunctiva Trachomatous trichiasis (TT)—At least one ingrown eyelash touching the globe, or evidence of epilation (eyelash removal) Corneal opacity (CO)—Corneal opacity blurring part of the pupil margin Prevention Although trachoma was eliminated from much of the developed world in the 20th century (Australia being a notable exception), this disease persists in many parts of the developing world, particularly in communities without adequate access to water and sanitation. Environmental measures Environmental improvement: Modifications in water use, fly control, latrine use, health education, and proximity to domesticated animals have all been proposed to reduce transmission of C. trachomatis. These changes pose numerous challenges for implementation. These environmental changes are thought to reduce the transmission of ocular infection chiefly by improving facial cleanliness; particular attention is therefore required for environmental factors that make it difficult to keep faces clean.
A systematic review examining the effectiveness of environmental sanitary measures on the prevalence of active trachoma in endemic areas showed that use of insecticide spray resulted in significant reductions of trachoma and fly density in some studies. Health education also resulted in reductions of active trachoma when implemented. Improved water supply did not result in a reduction of trachoma incidence. Antibiotics WHO guidelines recommend that a region should receive community-based, mass antibiotic treatment when the prevalence of active trachoma among one- to nine-year-old children is greater than 10%. Subsequent annual treatment should be administered for three years, at which time the prevalence should be reassessed. Annual treatment should continue until the prevalence drops below 5%. At lower prevalences, antibiotic treatment should be family-based. Management Antibiotics Antibiotic options are azithromycin (a single oral dose of 20 mg/kg) or topical tetracycline (1% eye ointment twice a day for six weeks). Azithromycin is preferred because it is used as a single oral dose. Although it is expensive, it is generally used as part of the international donation program organized by Pfizer. Azithromycin can be used in children from the age of six months and in pregnancy. For community-based antibiotic treatment, some evidence suggests that oral azithromycin is more effective than topical tetracycline, but no consistent evidence supports either oral or topical antibiotics as being more effective. Antibiotic treatment reduces the risk of active trachoma in individuals infected with Chlamydia trachomatis. Surgery For individuals with trichiasis, a bilamellar tarsal rotation procedure is warranted to direct the lashes away from the globe. Evidence suggests that use of a lid clamp and absorbable sutures results in fewer lid contour abnormalities and less granuloma formation after surgery. Early intervention is beneficial, as the rate of recurrence is higher in more advanced disease. Lifestyle measures The WHO-recommended SAFE strategy includes: Surgery to correct advanced stages of the disease Antibiotics to treat active infection, using azithromycin Facial cleanliness to reduce disease transmission Environmental change to increase access to clean water and improved sanitation Children with visible nasal discharge, discharge from the eyes, or flies on their faces are at least twice as likely to have active trachoma as children with clean faces. Intensive community-based health education programs to promote face-washing can reduce the rates of active trachoma, especially intense trachoma. If an individual is already infected, face washing, especially for children, is encouraged to prevent reinfection. Some evidence shows that washing the face combined with topical tetracycline might be more effective in reducing severe trachoma than topical tetracycline alone. The same trial found no statistical benefit of eye washing alone or in combination with tetracycline eye drops in reducing follicular trachoma amongst children. Prognosis If not treated properly with oral antibiotics, the symptoms may escalate and cause blindness, which is the result of ulceration and consequent scarring of the cornea. Surgery may also be necessary to fix eyelid deformities. Without intervention, trachoma keeps families in a cycle of poverty, as the disease and its long-term effects are passed from one generation to the next.
Epidemiology As of 2011, about 21 million people are actively affected by trachoma, with around 2.2 million people being permanently blind or having severe visual impairment from trachoma. An additional 7.3 million people are reported to have trichiasis. 51 countries are currently classified as endemic for blinding trachoma. Africa is considered the worst-affected area, with over 85% of all known active cases of trachoma; within the continent, South Sudan and Ethiopia have the highest prevalence. In many of these communities, women are three times more likely than men to be blinded by the disease, due to their roles as caregivers in the family. Approximately 158 million people are living in areas where trachoma is common. An additional 229 million live where trachoma could potentially occur. Australia is the only developed country that has trachoma. In 2008, trachoma was found in half of Australia's very remote communities. Elimination In 1996, the WHO launched its Alliance for the Global Elimination of Trachoma by 2020, and in 2006, the WHO officially set 2020 as the target to eliminate trachoma as a public-health problem. The International Coalition for Trachoma Control has produced maps and a strategic plan called 2020 INSight that lays out actions and milestones to achieve global elimination of blinding trachoma by 2020. The program recommends the SAFE protocol for blindness prevention: Surgery for trichiasis, Antibiotics to clear infection, Facial cleanliness, and Environmental improvement to reduce transmission. This includes sanitation infrastructure to reduce the open presence of human feces that can breed flies. As of 2018, Cambodia, Ghana, Iran, Laos, Mexico, Nepal, Morocco, and Oman have been certified as having eliminated trachoma as a public-health problem; China, Gambia, Iraq, and Myanmar make that claim, but have not sought certification. Eradication of the bacterium that causes the disease is seen as impractical; the WHO definition of "eliminated as a public-health problem" means less than 5% of children have any symptoms, and less than 0.1% of adults have vision loss. Having already donated more doses (about 700 million since 2002) of the drug than it has sold during the same time period, the drug company Pfizer has agreed to donate azithromycin until 2025, if necessary, for elimination of the disease. The campaign unexpectedly found that distribution of azithromycin to very poor children reduced their early death rate by up to 25%. History The disease is one of the earliest known eye afflictions, having been identified in Egypt as early as 15 BC. Its presence was also recorded in ancient China and Mesopotamia. Trachoma became a problem as people moved into crowded settlements or towns where hygiene was poor. It became a particular problem in Europe in the 19th century. After the Egyptian Campaign (1798–1802) and the Napoleonic Wars (1798–1815), trachoma was rampant in the army barracks of Europe and spread to those living in towns as troops returned home. Stringent control measures were introduced, and by the early 20th century, trachoma was essentially controlled in Europe, although cases were reported until the 1950s. Today, most victims of trachoma live in underdeveloped and poverty-stricken countries in Africa, the Middle East, and Asia. In the United States, the Centers for Disease Control says, "No national or international surveillance [for trachoma] exists. Blindness due to trachoma has been eliminated from the United States.
The last cases were found among Native American populations and in Appalachia, and those in the boxing, wrestling, and sawmill industries (prolonged exposure to combinations of sweat and sawdust often led to the disease). In the late 19th and early 20th centuries, trachoma was the main reason for an immigrant coming through Ellis Island to be deported." In 1913, President Woodrow Wilson signed an act designating funds for the eradication of the disease. Immigrants who attempted to enter the U.S. through Ellis Island, New York, had to be checked for trachoma. During this time, treatment for the disease was by topical application of copper sulfate. By the late 1930s, a number of ophthalmologists reported success in treating trachoma with sulfonamide antibiotics. In 1948, Vincent Tabone (who was later to become the President of Malta) was entrusted with the supervision of a campaign in Malta to treat trachoma using sulfonamide tablets and drops. Due to improved sanitation and overall living conditions, trachoma virtually disappeared from the industrialized world by the 1950s, though it continues to plague the developing world to this day. Epidemiological studies were conducted in 1956–63 by the Trachoma Control Pilot Project in India under the Indian Council for Medical Research. This potentially blinding disease remains endemic in the poorest regions of Africa, Asia, and the Middle East and in some parts of Latin America and Australia. Currently, 8 million people are visually impaired as a result of trachoma, and 41 million have an active infection. Of the 54 countries that the WHO cited as still having blinding trachoma occurring, Australia is the only developed country—Australian Aboriginal people who live in remote communities with inadequate sanitation are still blinded by this infectious eye disease. India's Health and Family Welfare Minister JP Nadda declared India free of infective trachoma in 2017. Etymology The term is derived from New Latin trāchōma, from Greek τράχωμα trākhōma, from τραχύς trākhus "rough". Economics The economic burden of trachoma is huge, particularly with regard to covering treatment costs and productivity losses as a result of increased visual impairment and, in some cases, permanent blindness. The global cost of trachoma is estimated at between US$2.9 and 5.3 billion each year; including the cost of trichiasis treatment raises the estimated overall cost to about US$8 billion. References External links CDC Disease Info trachoma Celia W. Dugger (31 March 2006), "Preventable Disease Blinds Poor in Third World", The New York Times Photographs of trachoma patients Trachoma Atlas International Trachoma Initiative
Travelers' diarrhea
Travelers' diarrhea (TD) is a stomach and intestinal infection. TD is defined as the passage of unformed stool (one or more by some definitions, three or more by others) while traveling. It may be accompanied by abdominal cramps, nausea, fever, headache and bloating. Occasionally bloody diarrhea may occur. Most travelers recover within three to four days with little or no treatment. About 10% of people may have symptoms for a week. Bacteria are responsible for more than half of cases, typically via foodborne illness and waterborne diseases. The bacteria enterotoxigenic Escherichia coli (ETEC) are typically the most common, except in Southeast Asia, where Campylobacter is more prominent. About 10 to 20 percent of cases are due to norovirus. Protozoa such as Giardia may cause longer-term disease. The risk is greatest in the first two weeks of travel and among young adults. People affected are more often from the developed world. Recommendations for prevention include eating only properly cleaned and cooked food, drinking bottled water, and frequent hand washing. The oral cholera vaccine, while effective for cholera, is of questionable use for travelers' diarrhea. Preventive antibiotics are generally discouraged. Primary treatment includes rehydration and replacing lost salts (oral rehydration therapy). Antibiotics are recommended for significant or persistent symptoms, and can be taken with loperamide to decrease diarrhea. Hospitalization is required in less than 3 percent of cases. Estimates of the percentage of people affected range from 20 to 50 percent among travelers to the developing world. TD is particularly common among people traveling to Asia (except for Japan and Singapore), the Middle East, Africa, Mexico, and Central and South America. The risk is moderate in Southern Europe, Russia, and China. TD has been linked to later irritable bowel syndrome and Guillain–Barré syndrome. It has colloquially been known by a number of names, including "Montezuma's revenge," the "Nile runs" and "Delhi belly". Signs and symptoms The onset of TD usually occurs within the first week of travel, but may occur at any time while traveling, and even after returning home, depending on the incubation period of the infectious agent. Bacterial TD typically begins abruptly, but Cryptosporidium may incubate for seven days, and Giardia for 14 days or more, before symptoms develop. Typically, a traveler experiences four to five loose or watery bowel movements each day. Other commonly associated symptoms are abdominal cramping, bloating, fever, and malaise. Appetite may decrease significantly. Though unpleasant, most cases of TD are mild, and resolve in a few days without medical intervention. Blood or mucus in the diarrhea, significant abdominal pain, or high fever suggests a more serious cause, such as cholera, characterized by a rapid onset of weakness and torrents of watery diarrhea with flecks of mucus (described as "rice water" stools). Medical care should be sought in such cases; dehydration is a serious consequence of cholera, and may trigger serious sequelae—including, in rare instances, death—as rapidly as 24 hours after onset if not addressed promptly. Causes Infectious agents are the primary cause of travelers' diarrhea. Bacterial enteropathogens cause about 80% of cases. Viruses and protozoans account for most of the rest. The most common causative agent isolated in countries surveyed has been enterotoxigenic Escherichia coli (ETEC). Enteroaggregative E. coli is increasingly recognized. Shigella spp.
and Salmonella spp. are other common bacterial pathogens. Campylobacter, Yersinia, Aeromonas, and Plesiomonas spp. are less frequently found. Mechanisms of action vary: some bacteria release toxins which bind to the intestinal wall and cause diarrhea; others damage the intestines themselves by their direct presence. The pathogen Brachyspira pilosicoli also appears to be responsible for many cases of chronic intermittent watery diarrhea; it is diagnosed only through colonic biopsy and the microscopic discovery of a false brush border on H&E or Warthin–Starry silver stain, a brush border that is denser and longer than that of Brachyspira aalborgi. It often goes undiagnosed, as the organism does not grow in stool culture and standard 16S PCR panel primers do not match Brachyspira sequences. While viruses are associated with less than 20% of adult cases of travelers' diarrhea, they may be responsible for nearly 70% of cases in infants and children. Diarrhea due to viral agents is unaffected by antibiotic therapy, but is usually self-limited. Protozoans such as Giardia lamblia, Cryptosporidium and Cyclospora cayetanensis can also cause diarrhea. A subtype of travelers' diarrhea afflicting hikers and campers, sometimes known as wilderness diarrhea, may have a somewhat different frequency of distribution of pathogens. Risk factors The primary source of infection is ingestion of fecally contaminated food or water. Attack rates are similar for men and women. The most important determinant of risk is the traveler's destination. High-risk destinations include developing countries in Latin America, Africa, the Middle East, and Asia. Among backpackers, additional risk factors include drinking untreated surface water and failure to maintain personal hygiene practices and clean cookware. Campsites often have very primitive (if any) sanitation facilities, making them potentially as dangerous as any developing country. Although travelers' diarrhea usually resolves within three to five days (mean duration: 3.6 days), in about 20% of cases, the illness is severe enough to require bedrest, and in 10%, the illness duration exceeds one week. For those prone to serious infections, such as bacillary dysentery, amoebic dysentery, and cholera, TD can occasionally be life-threatening. Others at higher-than-average risk include young adults, immunosuppressed persons, persons with inflammatory bowel disease or diabetes, and those taking H2 blockers or antacids. Immunity Travelers often get diarrhea from eating and drinking foods and beverages that have no adverse effects on local residents. This is due to immunity that develops with constant, repeated exposure to pathogenic organisms. The extent and duration of exposure necessary to acquire immunity has not been determined; it may vary with each individual organism. A study among expatriates in Nepal suggests that immunity may take up to seven years to develop—presumably in adults who avoid deliberate pathogen exposure. Conversely, immunity acquired by American students while living in Mexico disappeared, in one study, as quickly as eight weeks after cessation of exposure. Prevention Sanitation Recommendations include avoidance of questionable foods and drinks, on the assumption that TD is fundamentally a sanitation failure, leading to bacterial contamination of drinking water and food.
While the effectiveness of this strategy has been questioned, given that travelers have little or no control over sanitation in hotels and restaurants, and little evidence supports the contention that food vigilance reduces the risk of contracting TD, guidelines continue to recommend basic, common-sense precautions when making food and beverage choices: Maintain good hygiene and use only safe water for drinking and brushing teeth. Safe beverages include bottled water, bottled carbonated beverages, and water boiled or appropriately treated by the traveler (as described below). Caution should be exercised with tea, coffee, and other hot beverages that may be only heated, not boiled. In restaurants, insist that bottled water be unsealed in your presence; reports have surfaced of locals filling empty bottles with untreated tap water and reselling them as purified water. When in doubt, a bottled carbonated beverage is the safest choice, since it is difficult to simulate carbonation when refilling a used bottle. Avoid ice, which may not have been made with safe water. Avoid green salads, because the lettuce and other uncooked ingredients are unlikely to have been washed with safe water. Avoid eating raw fruits and vegetables unless cleaned and peeled personally. If handled properly, thoroughly cooked fresh and packaged foods are usually safe. Raw or undercooked meat and seafood should be avoided. Unpasteurized milk, dairy products, mayonnaise, and pastry icing are associated with increased risk for TD, as are foods and beverages purchased from street vendors and other establishments where unhygienic conditions may be present. Water Although safe bottled water is now widely available in most remote destinations, travelers can treat their own water if necessary, or as an extra precaution. Techniques include boiling, filtering, chemical treatment, and ultraviolet light; boiling is by far the most effective of these methods. Boiling rapidly kills all active bacteria, viruses, and protozoa. Prolonged boiling is usually unnecessary; most microorganisms are killed within seconds at water temperatures above 55–70 °C (130–160 °F). The second-most effective method is to combine filtration and chemical disinfection. Filters eliminate most bacteria and protozoa, but not viruses. Chemical treatment with halogens—chlorine bleach, tincture of iodine, or commercial tablets—has low-to-moderate effectiveness against protozoa such as Giardia, but works well against bacteria and viruses. UV light is effective against both viruses and cellular organisms, but only works in clear water, and it is ineffective unless the manufacturer's instructions are carefully followed for maximum water depth/distance from the UV source and for dose/exposure time. Other claimed advantages include short treatment time, elimination of the need for boiling, no taste alteration, and decreased long-term cost compared with bottled water. The effectiveness of UV devices is reduced when water is muddy or turbid; as UV is a type of light, any suspended particles create shadows that hide microorganisms from UV exposure. Medications Bismuth subsalicylate four times daily reduces rates of travelers' diarrhea. Though many travelers find a four-times-per-day regimen inconvenient, lower doses have not been shown to be effective. Potential side effects include black tongue, black stools, nausea, constipation, and ringing in the ears.
Bismuth subsalicylate should not be taken by those with aspirin allergy, kidney disease, or gout, nor concurrently with certain antibiotics such as the quinolones, and should not be taken continuously for more than three weeks. Some countries do not recommend it due to the risk of rare but serious side effects. A hyperimmune bovine colostrum to be taken by mouth is marketed in Australia for prevention of ETEC-induced TD. As yet, no studies show efficacy under actual travel conditions. Though effective, antibiotics are not recommended for prevention of TD in most situations because of the risk of allergy or adverse reactions to the antibiotics, and because intake of preventive antibiotics may decrease the effectiveness of such drugs should a serious infection develop subsequently. Antibiotics can also cause vaginal yeast infections, or overgrowth of the bacterium Clostridium difficile, leading to pseudomembranous colitis and its associated severe, unrelenting diarrhea. Antibiotics may be warranted in special situations where benefits outweigh the above risks, such as immunocompromised travelers, chronic intestinal disorders, a prior history of repeated disabling bouts of TD, or scenarios in which the onset of diarrhea might prove particularly troublesome. Options for prophylactic treatment include the quinolone antibiotics (such as ciprofloxacin), azithromycin, and trimethoprim/sulfamethoxazole, though the latter has proved less effective in recent years. Rifaximin may also be useful. Quinolone antibiotics may bind to metallic cations such as bismuth, and should not be taken concurrently with bismuth subsalicylate. Trimethoprim/sulfamethoxazole should not be taken by anyone with a history of sulfa allergy. Vaccination The oral cholera vaccine, while effective for prevention of cholera, is of questionable use for prevention of TD. A 2008 review found tentative evidence of benefit. A 2015 review stated it may be reasonable for those at high risk of complications from TD. Several vaccine candidates targeting ETEC or Shigella are in various stages of development. Probiotics One 2007 review found that probiotics may be safe and effective for prevention of TD, while another review found no benefit. A 2009 review concluded that more study is needed, as the evidence to date is mixed. Treatment Most cases of TD are mild and resolve in a few days without treatment, but severe or protracted cases may result in significant fluid loss and dangerous electrolyte imbalance. Dehydration due to diarrhea can also alter the effectiveness of medicinal and contraceptive drugs. Adequate fluid intake (oral rehydration therapy) is therefore a high priority. Commercial rehydration drinks are widely available; alternatively, purified water or other clear liquids are recommended, along with salty crackers or oral rehydration salts (available in stores and pharmacies in most countries) to replenish lost electrolytes. Carbonated water or soda, left open to allow dissipation of the carbonation, is useful when nothing else is available. In severe or protracted cases, the oversight of a medical professional is advised. Antibiotics If diarrhea becomes severe (typically defined as three or more loose stools in an eight-hour period), especially if associated with nausea, vomiting, abdominal cramps, fever, or blood in stools, medical treatment should be sought. Such patients may benefit from antimicrobial therapy.
A 2000 literature review found that antibiotic treatment shortens the duration and severity of TD; most reported side effects were minor, or resolved on stopping the antibiotic. The antibiotic recommended varies based upon the destination of travel. Trimethoprim–sulfamethoxazole and doxycycline are no longer recommended because of high levels of resistance to these agents. Antibiotics are typically given for three to five days, but single doses of azithromycin or levofloxacin have been used. Rifaximin and rifamycin are approved in the U.S. for treatment of TD caused by ETEC. If diarrhea persists despite therapy, travelers should be evaluated for bacterial strains resistant to the prescribed antibiotic, possible viral or parasitic infections, bacterial or amoebic dysentery, Giardia, helminths, or cholera. Antimotility agents Antimotility drugs such as loperamide and diphenoxylate reduce the symptoms of diarrhea by slowing transit time in the gut. They may be taken to slow the frequency of stools, but not enough to stop bowel movements completely, which delays expulsion of the causative organisms from the intestines. They should be avoided in patients with fever, bloody diarrhea, and possible inflammatory diarrhea. Adverse reactions may include nausea, vomiting, abdominal pain, hives or rash, and loss of appetite. Antimotility agents should not, as a rule, be taken by children under age two. Epidemiology An estimated 10 million people—20 to 50% of international travelers—develop TD each year. It is more common in the developing world, where rates exceed 60%, but has been reported in some form in virtually every travel destination in the world. Society and culture Moctezuma's revenge is a colloquial term for travelers' diarrhea contracted in Mexico. The name refers to Moctezuma II (1466–1520), the Tlatoani (ruler) of the Aztec civilization who was overthrown by the Spanish conquistador Hernán Cortés in the early 16th century, thereby bringing large portions of what is now Mexico and Central America under the rule of the Spanish crown. Wilderness diarrhea Wilderness diarrhea, also called wilderness-acquired diarrhea (WAD) or backcountry diarrhea, refers to diarrhea among backpackers, hikers, campers and other outdoor recreationalists in wilderness or backcountry situations, either at home or abroad. It is caused by the same fecal microorganisms as other forms of travelers' diarrhea, usually bacterial or viral. Since wilderness campsites seldom provide access to sanitation facilities, the infection risk is similar to that of any developing country. Water treatment, good hygiene, and dish washing have all been shown to reduce the incidence of WAD. See also Diarrhea References This article incorporates public domain material from websites or documents of the Centers for Disease Control and Prevention. External links "Travelers' Diarrhea". Centers for Disease Control and Prevention.
Trichinosis
Trichinosis, also known as trichinellosis, is a parasitic disease caused by roundworms of the Trichinella type. During the initial infection, invasion of the intestines can result in diarrhea, abdominal pain, and vomiting. Migration of larvae to muscle, which occurs about a week after being infected, can cause swelling of the face, inflammation of the whites of the eyes, fever, muscle pains, and a rash. Minor infection may be without symptoms. Complications may include inflammation of heart muscle, central nervous system involvement, and inflammation of the lungs. Trichinosis is mainly spread when undercooked meat containing Trichinella cysts is eaten. In North America this is most often bear, but infection can also occur from pork, boar, and dog meat. Several species of Trichinella can cause disease, with T. spiralis being the most common. After being eaten, the larvae are released from their cysts in the stomach. They then invade the wall of the small intestine, where they develop into adult worms. After one week, the females release new larvae that migrate to voluntarily controlled muscles, where they form cysts. The diagnosis is usually based on symptoms and confirmed by finding specific antibodies in the blood, or larvae on tissue biopsy. The best way to prevent trichinosis is to fully cook meat. A food thermometer can verify that the temperature inside the meat is high enough. Infection is typically treated with antiparasitic medication such as albendazole or mebendazole. Rapid treatment may kill adult worms and thereby stop further worsening of symptoms. Both medications are considered safe, but have been associated with side effects such as bone marrow suppression. Their use during pregnancy or in children under the age of 2 years is poorly studied, but appears to be safe. Treatment with steroids is sometimes also required in severe cases. Without treatment, symptoms typically resolve within three months. Worldwide, about 10,000 infections occur a year. At least 55 countries, including the United States, China, Argentina, and Russia, have had recently documented cases. While the disease occurs in the tropics, it is less common there. Rates of trichinosis in the United States have decreased from about 400 cases per year in the 1940s to 20 or fewer per year in the 2000s. The risk of death from infection is low. Signs and symptoms The great majority of trichinosis infections have either minor or no symptoms and no complications. The two main phases of the infection are enteral (affecting the intestines) and parenteral (outside the intestines). The symptoms vary depending on the phase, species of Trichinella, quantity of encysted larvae ingested, age, sex, and host immunity. Enteral phase A large burden of adult worms in the intestines promotes symptoms such as nausea, heartburn, dyspepsia, and diarrhea from two to seven days after infection, while small worm burdens generally are asymptomatic. Eosinophilia presents early and increases rapidly. Parenteral phase The severity of symptoms caused by larval migration from the intestines depends on the number of larvae produced. As the larvae migrate through tissue and vessels, the body's inflammatory response results in edema, muscle pain, fever, and weakness. A classic sign of trichinosis is periorbital edema, swelling around the eyes, which may be caused by vasculitis.
Splinter hemorrhage in the nails is also a common symptom. Very rarely, worms entering the central nervous system (CNS) can cause enough damage to produce serious neurological deficits (such as ataxia or respiratory paralysis); the CNS is compromised by trichinosis in 10–24% of reported cases of cerebral venous sinus thrombosis, a very rare form of stroke (three or four cases per million annual incidence in adults). Trichinosis can be fatal depending on the severity of the infection; death can occur 4–6 weeks after the infection, and is usually caused by myocarditis, encephalitis, or pneumonia. Cause The classical agent is T. spiralis, found worldwide in many carnivorous and omnivorous animals, both domestic and sylvatic (wild), but seven primarily sylvatic species of Trichinella are also now recognized: Species and characteristics T. spiralis is most adapted to swine, most pathogenic in humans, and is cosmopolitan in distribution. T. britovi is the second-most common species to infect humans; it is distributed throughout Europe, Asia, and northern and western Africa, usually in wild carnivores, crocodiles, birds, wild boar, and domesticated pigs. T. murrelli also infects humans, especially from black bear meat; it is distributed among wild carnivores in North America. T. nativa, which has a high resistance to freezing, is found in the Arctic and subarctic regions; reservoir hosts include polar bears, Arctic foxes, walruses, and other wild game. T. nelsoni, found in East African predators and scavengers, has been documented to cause a few human cases. T. papuae infects both mammals and reptiles, including crocodiles, humans, and wild and domestic pigs; this species, found in Papua New Guinea and Thailand, is also nonencapsulated. T. pseudospiralis infects birds and mammals, and has demonstrated infection in humans; it is a nonencapsulated species. T. zimbabwensis can infect mammals, and possibly humans; this nonencapsulated species was detected in crocodiles in Africa. Taxonomy Kingdom: Animalia Phylum: Nematoda Class: Adenophorea Order: Trichurida Family: Trichinellidae Genus: Trichinella Lifecycle The typical lifecycle for T. spiralis involves humans, pigs, and rodents. A pig becomes infected when it eats infectious cysts in raw meat, often porcine carrion or a rat (sylvatic cycle). A human becomes infected by consuming raw or undercooked infected pork (domestic cycle). In the stomach, the cysts from infected undercooked meat are acted on by pepsin and hydrochloric acid, which help release the larvae from the cysts. The larvae then migrate to the small intestine, and burrow into the intestinal mucosa, where they molt four times before becoming adults. Thirty to 34 hours after the cysts were originally ingested, the adults mate, and within five days produce larvae. Adult worms can only reproduce for a limited time, because the immune system eventually expels them from the small intestine. The larvae then use their piercing mouthpart, called the "stylet", to pass through the intestinal mucosa and enter the lymphatic vessels, and then the bloodstream. The larvae travel by capillaries to various organs, such as the retina, myocardium, or lymph nodes; however, only larvae that migrate to skeletal muscle cells survive and encyst. The larval host cell becomes a nurse cell, in which the larva will be encapsulated, potentially for the life of the host, waiting for the host to be eaten.
The development of a capillary network around the nurse cell completes encystation of the larva. Trichinosis is not soil-transmitted, as the parasite does not lay eggs, nor can it survive long outside a host. Diagnosis Diagnosis of trichinosis is confirmed by a combination of exposure history, clinical diagnosis, and laboratory testing. Exposure history An epidemiological investigation can be done to determine a patient's exposure to raw infected meat. Often, an infection arises from home-preparation of contaminated meat, in which case microscopy of the meat may be used to determine the infection. Exposure determination does not have to be directly from a laboratory-confirmed infected animal. Indirect exposure criteria include the consumption of products from a laboratory-confirmed infected animal, or sharing of a common exposure with a laboratory-confirmed infected human. Clinical diagnosis Clinical presentation of the common trichinosis symptoms may also suggest infection. These symptoms include eye puffiness, splinter hemorrhage, nonspecific gastroenteritis, and muscle pain. The case definition for trichinosis at the European Centre for Disease Prevention and Control states, "at least three of the following six: fever, muscle soreness and pain, gastrointestinal symptoms, facial edema, eosinophilia, and subconjunctival, subungual, and retinal hemorrhages." Laboratory testing Blood tests and microscopy can be used to aid in the diagnosis of trichinosis. Blood tests include a complete blood count for eosinophilia, creatine phosphokinase activity, and various immunoassays such as ELISA for larval antigens. Prevention Legislation Laws and rules for food producers may improve food safety for consumers, such as the rules established by the European Commission for inspections, rodent control, and improved hygiene. A similar protocol exists in the United States, in the USDA guidelines for farm and slaughterhouse responsibilities in inspecting pork. Education and training Public education about the dangers of consuming raw and undercooked meat, especially pork, may reduce infection rates. Hunters are also an at-risk population due to their contact with and consumption of wild game, including bear. As such, many states, such as New York, require the completion of a course in such matters before a hunting license can be obtained. Meat testing Testing methods are available for both individual carcasses and monitoring of herds. The artificial digestion method is usually used for testing individual carcasses, while testing for specific antibodies is usually used for herd monitoring. Food preparation Larvae may be killed by the heating or irradiation of raw meat. Freezing is normally only effective for T. spiralis, since other species, such as T. nativa, are freeze-resistant and can survive long-term freezing. All meat (including pork) can be safely prepared by cooking to an internal temperature of 165 °F (74 °C) or higher for 15 seconds or more. Wild game: Wild game meat must be cooked thoroughly (see meat preparation above). Freezing wild game does not kill all trichinosis larval worms, because the worm species that typically infests wild game can resist freezing. Pork: Freezing cuts of pork less than 6 inches thick for 20 days at 5 °F (−15 °C) or three days at −4 °F (−20 °C) kills T. spiralis larval worms, but this will not kill other trichinosis larval worm species, such as T.
Pork can be safely cooked to a slightly lower temperature, provided that the internal meat temperature is at least as hot for at least as long as listed in the USDA table below. Nonetheless, allowing a margin of error for variation in internal temperature within a particular cut of pork, which may have bones that affect temperature uniformity, is prudent. In addition, kitchen thermometers have measurement error that must be considered. Pork may be cooked for significantly longer and at a higher uniform internal temperature than listed below to be safe. Unsafe and unreliable methods of cooking meat include the use of microwave ovens, curing, drying, and smoking, as these methods are difficult to standardize and control. Pig farming Incidence of infection can be reduced by:
Keeping pigs in clean pens, with floors that can be washed (such as concrete)
Not allowing hogs to eat carcasses of other animals, including rats, which may be infected with Trichinella
Cleaning meat grinders thoroughly when preparing ground meats
Control and destruction of meat containing trichinae, e.g., removal and proper disposal of porcine diaphragms prior to public sale of meat
The US Centers for Disease Control and Prevention make the following recommendation: "Curing (salting), drying, smoking, or microwaving meat does not consistently kill infective worms." However, under controlled commercial food processing conditions, some of these methods are considered effective by the USDA. The USDA Animal and Plant Health Inspection Service (APHIS) is responsible for the regulations concerning the importation of swine from foreign countries. The Foreign Origin Meat and Meat Products, Swine section covers swine meat (cooked, cured and dried, and fresh). APHIS developed the National Trichinae Certification Program; this is a voluntary "preharvest" program for U.S. swine producers "that will provide documentation of swine management practices" to reduce the incidence of Trichinella in swine. The CDC reports that 0.013% of U.S. swine are infected with Trichinella. Treatment As with most diseases, earlier treatment is better and decreases the risk of developing severe disease. If larvae do encyst in skeletal muscle cells, they can remain infectious for months to years. Primary treatment Early administration of anthelmintics, such as mebendazole or albendazole, decreases the likelihood of larval encystation, particularly if given within three days of infection. However, most cases are diagnosed after this time. In humans, mebendazole (200–400 mg three times a day for three days) or albendazole (400 mg twice a day for 8–14 days) is given to treat trichinosis. These drugs prevent newly hatched larvae from developing, but should not be given to pregnant women or children under two years of age. Medical references from the 1940s described no specific treatment for trichinosis at the time, but intravenous injection of calcium salts was found to be useful in managing symptoms related to severe toxemia from the infection. Secondary treatment After infection, steroids, such as prednisone, may be used to relieve muscle pain associated with larval migration. Vaccine research Researchers trying to develop a vaccine for Trichinella have tried using either "larval extracts, excretory–secretory antigen, DNA, or recombinant antigen protein." Currently, no marketable vaccine is available for trichinosis, but experimental mouse studies have suggested possibilities.
In one study, microwaved Trichinella larvae were used to immunize mice (orally or intraperitoneally), which were subsequently infected. Depending on the dosage and frequency of immunization, results ranged from a decreased larval count to complete protection from trichinosis. Another study used extracts and excretory–secretory products from first-stage larvae to produce an oral vaccine. To prevent gastric acids from dissolving the antigens before reaching the small intestine, scientists encapsulated the antigens in microcapsules. This vaccine significantly increased CD4+ cell levels, and increased antigen-specific serum IgG1 and IgA, resulting in a statistically significant reduction in the average number of adult worms in the small intestines of mice. The significance of this approach is that, if the white blood cells in the small intestine have been exposed to Trichinella antigens (through vaccination), when an individual does get infected, the immune system will respond to expel the worms from the small intestine fast enough to prevent the female worms from releasing their larvae. A DNA vaccine tested on mice "induced a muscle larva burden reduction in BALB/c mice by 29% in response to T. spiralis infection". Epidemiology About 11 million humans are infected with Trichinella; T. spiralis is the species responsible for most of these infections. Infection was once very common, but the disease is now rare in the developed world, although two known outbreaks occurred in 2015. In the first outbreak, around 40 people were infected in Liguria, Italy, during a New Year's Eve celebration. The second outbreak, in France, was associated with pork sausages from Corsica, which were eaten raw, affecting 14 people in total. The incidence of trichinosis in the U.S. has decreased dramatically in the past century, from an average of 400 cases per year in the mid-20th century down to an annual average of 20 cases per year (2008–10). The number of cases has decreased because of legislation prohibiting the feeding of raw meat garbage to hogs, increased commercial and home freezing of pork, and public awareness of the danger of eating raw or undercooked pork products. China reports around 10,000 cases every year and is therefore the country with the highest number of cases. In China, between 1964 and 1998, over 20,000 people became infected with trichinosis, and more than 200 people died. Trichinosis is common in developing countries where meat fed to pigs is raw or undercooked, but infections also arise in developed countries in Europe where raw or undercooked pork, wild boar and horse meat may be consumed as delicacies. In the developing world, most infections are associated with undercooked pork. For example, in Thailand, between 200 and 600 cases are reported annually around the Thai New Year. This is mostly attributable to a particular delicacy, larb, which calls for undercooked pork as part of the recipe. In parts of Eastern Europe, the World Health Organization reports, some swine herds have trichinosis infection rates above 50%, with correspondingly large numbers of human infections. United States Historically, pork products were thought to have the most risk of infecting humans with T. spiralis. However, a trichinosis surveillance conducted between 1997 and 2001 showed a higher percentage of cases caused by consumption of wild game (the sylvatic transmission cycle). This is thought to be due to the Federal Swine Health Protection Act (Public Law 96-468) that was passed by Congress in 1980.
Prior to this act, swine were fed garbage that could potentially be infected by T. spiralis. This act was put in place to prevent Trichinella-contaminated food from being given to swine. Additionally, other requirements were put in place, such as rodent control, limiting commercial swine contact with wildlife, maintaining good hygiene, and removing dead pigs from pens immediately. Between 2002 and 2007, 11 trichinosis cases were reported to the CDC each year on average in the United States, and 2008–10 averaged 20 cases per year; these were mostly the result of consuming undercooked game (sylvatic transmission) or home-reared pigs (domestic transmission). Religious groups The kashrut and halal dietary laws of Judaism and Islam prohibit eating pork. In the 19th century, when the association between trichinosis and undercooked pork was first established, this association was suggested to be the reason for the prohibition, reminiscent of the earlier opinion of the medieval Jewish philosopher Maimonides that food forbidden by Jewish law was "unwholesome". This theory was controversial and eventually fell out of favor. Reemergence The disappearance of the pathogen from domestic pigs has led to a relaxation of legislation and control efforts by veterinary public health systems. Trichinosis has lately been thought of as a re-emerging zoonosis, driven by the increased distribution of meat products, political changes, a changing climate, and increasing sylvatic transmission. Major sociopolitical changes can produce conditions that favor the resurgence of Trichinella infections in swine and, consequently, in humans. For instance, "the overthrow of the social and political structures in the 1990s" in Romania led to an increase in the incidence rate of trichinosis. History As early as 1835, trichinosis was known to have been caused by a parasite, but the mechanism of infection was unclear at the time. A decade later, American scientist Joseph Leidy pinpointed undercooked meat as the primary vector for the parasite, and two decades afterwards, this hypothesis was fully accepted by the scientific community. Parasite The circumstances surrounding the first observation and identification of T. spiralis are controversial, due to a lack of records. In 1835, James Paget, a first-year medical student, first observed the larval form of T. spiralis while witnessing an autopsy at St. Bartholomew's Hospital in London. Paget took special interest in the presentation of muscle with white flecks, described as a "sandy diaphragm". Although Paget is most likely the first person to have noticed and recorded these findings, the parasite was named and published in a report by his professor, Richard Owen, who is now credited with the discovery of the T. spiralis larval form. Lifecycle A series of experiments conducted between 1850 and 1870 by the German researchers Rudolf Virchow, Rudolf Leuckart, and Friedrich Albert von Zenker, which involved feeding infected meat to a dog and performing the subsequent necropsy, led to the discovery of the lifecycle of Trichinella. Through these experiments, Virchow was able to describe the development and infectivity of T. spiralis. Research The International Commission on Trichinellosis (ICT) was formed in Budapest in 1958. Its mission is to exchange information on the epidemiology, biology, pathophysiology, immunology, and clinical aspects of trichinosis in humans and animals. Prevention is a primary goal.
Since the creation of the ICT, its members (more than 110 from 46 countries) have regularly gathered and worked together during meetings held every four years: the International Conference on Trichinellosis. See also List of parasites (human) Nurse cell References External links International Commission on trichinellosis web pages CDC Department of Parasitic Diseases – Trichinosis Jokelainen P, Näreaho A, Hälli O, Heinonen M, Sukura A (June 2012). "Farmed wild boars exposed to Toxoplasma gondii and Trichinella spp". Vet. Parasitol. 187 (1–2): 323–27. doi:10.1016/j.vetpar.2011.12.026. PMID 22244535.
Trichomoniasis
Trichomoniasis (trich) is an infectious disease caused by the parasite Trichomonas vaginalis. About 70% of affected people do not have symptoms when infected. When symptoms occur, they typically begin 5 to 28 days after exposure. Symptoms can include itching in the genital area, a bad-smelling thin vaginal discharge, burning with urination, and pain with sex. Having trichomoniasis increases the risk of getting HIV/AIDS. It may also cause complications during pregnancy. Trichomoniasis is a sexually transmitted infection (STI) which is most often spread through vaginal, oral, or anal sex. It can also spread through genital touching. People who are infected may spread the disease even when symptoms are not present. Diagnosis is by finding the parasite in the vaginal fluid using a microscope, culturing the vaginal fluid or urine, or testing for the parasite's DNA. If trichomoniasis is present, other STIs should be tested for. Methods of prevention include not having sex, using condoms, not douching, and being tested for STIs before having sex with a new partner. Although not caused by a bacterium, trichomoniasis can be cured with certain antibiotics (metronidazole, tinidazole, secnidazole). Sexual partners should also be treated. About 20% of people get infected again within three months of treatment. There were about 122 million new cases of trichomoniasis in 2015. In the United States, about 2 million women are affected. It occurs more often in women than men. Trichomonas vaginalis was first identified in 1836 by Alfred Donné. It was first recognized as causing this disease in 1916. Signs and symptoms Most people infected with Trichomonas vaginalis do not have any symptoms and can remain undetected for years. Symptoms experienced include pain, burning or itching in the penis, urethra (urethritis), or vagina (vaginitis). Discomfort for both sexes may increase during intercourse and urination. For women there may also be a yellow-green, itchy, frothy, foul-smelling ("fishy" smell) vaginal discharge. In rare cases, lower abdominal pain can occur. Symptoms usually appear within 5 to 28 days of exposure. Sometimes trichomoniasis can be confused with chlamydia because the symptoms are similar. Complications Trichomoniasis is linked to several serious complications. Trichomoniasis is associated with increased risk of transmission and infection of HIV. Trichomoniasis may cause a woman to deliver a low-birth-weight or premature infant. The role of Trichomonas infection in causing cervical cancer is unclear, although trichomonas infection may be associated with co-infection with high-risk strains of HPV. T. vaginalis infection in males has been found to cause asymptomatic urethritis and prostatitis. In the prostate, it may create chronic inflammation that may eventually lead to prostate cancer. Causes The human genital tract is the only reservoir for this species. Trichomonas is transmitted through sexual or genital contact. The single-celled protozoan produces mechanical stress on host cells and then ingests cell fragments after cell death. Genetic sequence A draft sequence of the Trichomonas genome was published on January 12, 2007, in the journal Science, confirming that the genome has at least 26,000 genes, a similar number to the human genome. An additional approximately 34,000 unconfirmed genes, including thousands that are part of potentially transposable elements, bring the gene content to well over 60,000. Diagnosis There are three main ways to test for trichomoniasis. The first is known as saline microscopy.
This is the most commonly used method and requires an endocervical, vaginal, or penile swab specimen for examination under a microscope. The presence of one or multiple trichomonads constitutes a positive result. This method is cheap but has a low sensitivity (60–70%), often due to an inadequate sample, resulting in false negatives. The second diagnostic method is culture, which has historically been the "gold standard" in infectious disease diagnosis. Trichomonas vaginalis culture tests are relatively cheap; however, sensitivity is still somewhat low (70–89%). The third method comprises the nucleic acid amplification tests (NAATs), which are more sensitive. These tests are more costly than microscopy and culture, and are highly sensitive (80–90%). Prevention Use of male condoms or female condoms may help prevent the spread of trichomoniasis, although careful studies have never been done that focus on how to prevent this infection. Infection with trichomoniasis through water is unlikely because Trichomonas vaginalis dies in water after 45–60 minutes, in thermal water after 30 minutes to 3 hours and in diluted urine after 5–6 hours. Currently there are no routine standard screening requirements for the general U.S. population receiving family planning or STI testing. The Centers for Disease Control and Prevention (CDC) recommends trichomoniasis testing for females with vaginal discharge; testing can also be considered for females at higher risk of infection or of HIV-positive serostatus. The advent of new, highly specific and sensitive trichomoniasis tests presents opportunities for new screening protocols for both men and women. Careful planning, discussion, and research are required to determine the cost-efficiency and most beneficial use of these new tests for the diagnosis and treatment of trichomoniasis in the U.S., which can lead to better prevention efforts. A number of strategies have been found to improve follow-up for STI testing, including email and text messaging as reminders of appointments. Screening Evidence from randomized controlled trials of screening pregnant women who do not have symptoms of trichomoniasis, and treating those who test positive, has not consistently shown a reduced risk of preterm birth. Further studies are needed to verify this result and determine the best method of screening. In the US, screening of pregnant women without any symptoms is only recommended in those with HIV, as Trichomonas infection is associated with increased risk of transmitting HIV to the fetus. Treatment Treatment for both pregnant and non-pregnant women is usually with metronidazole, by mouth once. Caution should be used in pregnancy, especially in the first trimester. Sexual partners, even if they have no symptoms, should also be treated. A single oral dose of nitroimidazole is sufficient to kill the parasites. For 95–97% of cases, infection is resolved after one dose of metronidazole. Studies suggest that 4–5% of trichomonas cases are resistant to metronidazole, which may account for some "repeat" cases. Without treatment, trichomoniasis can persist for months to years in women, and is thought to improve without treatment in men. Women living with HIV infection have better cure rates if treated for seven days rather than with one dose. Topical treatments are less effective than oral antibiotics due to Skene's gland and other genitourinary structures acting as a reservoir. Epidemiology There were about 58 million cases of trichomoniasis in 2013.
It is more common in women (2.7%) than in men (1.4%). It is the most common non-viral STI in the U.S., with an estimated 3.7 million prevalent cases and 1.1 million new cases per year. It is estimated that 3% of the general U.S. population is infected, and 7.5–32% of moderate-to-high risk (including incarcerated) populations. References External links Trichomoniasis at Centers for Disease Control and Prevention Vaginitis/Vaginal infection fact sheet from the National Institute of Allergies and Infections. The first version of this article was taken from this public domain resource. eMedicine Health Trichomoniasis Archived 2008-05-22 at the Wayback Machine
Tricyclic antidepressant overdose
Tricyclic antidepressant overdose is poisoning caused by an excessive dose of a medication of the tricyclic antidepressant (TCA) type. Symptoms may include elevated body temperature, blurred vision, dilated pupils, sleepiness, confusion, seizures, rapid heart rate, and cardiac arrest. If symptoms have not occurred within six hours of exposure, they are unlikely to occur. TCA overdose may occur by accident or purposefully in an attempt to cause death. The toxic dose depends on the specific TCA. Most are non-toxic at less than 5 mg/kg except for desipramine, nortriptyline, and trimipramine, which are generally non-toxic at less than 2.5 mg/kg. In small children one or two pills can be fatal. An electrocardiogram (ECG) should be included in the assessment when there is concern about an overdose. In overdose, activated charcoal is often recommended. People should not be forced to vomit. In those who have a wide QRS complex (> 100 ms), sodium bicarbonate is recommended. If seizures occur, benzodiazepines should be given. In those with low blood pressure, intravenous fluids and norepinephrine may be used. The use of intravenous lipid emulsion may also be tried. In the early 2000s TCAs were one of the most common causes of poisoning. In the United States in 2004 there were more than 12,000 cases. In the United Kingdom they resulted in about 270 deaths a year. An overdose from TCAs was first reported in 1959. Signs and symptoms The peripheral autonomic nervous system, central nervous system and the heart are the main systems affected following overdose. Initial or mild symptoms typically develop within 2 hours and include tachycardia, drowsiness, a dry mouth, nausea and vomiting, urinary retention, confusion, agitation, and headache. More severe complications include hypotension, cardiac rhythm disturbances, hallucinations, and seizures. Electrocardiogram (ECG) abnormalities are frequent and a wide variety of cardiac dysrhythmias can occur, the most common being sinus tachycardia and intraventricular conduction delay resulting in prolongation of the QRS complex and the PR/QT intervals. Seizures, cardiac dysrhythmias, and apnea are the most important life-threatening complications. Cause Tricyclics have a narrow therapeutic index, i.e., the therapeutic dose is close to the toxic dose. Factors that increase the risk of toxicity include advancing age, cardiac status, and concomitant use of other drugs. However, serum drug levels are not useful for evaluating risk of arrhythmia or seizure in tricyclic overdose. Pathophysiology Most of the toxic effects of TCAs are caused by four major pharmacological actions: TCAs have anticholinergic effects, cause excessive blockade of norepinephrine reuptake at the preganglionic synapse, produce direct alpha-adrenergic blockade and, importantly, block sodium membrane channels with slowing of membrane depolarization, thus having quinidine-like effects on the myocardium. Diagnosis A specific blood test to verify toxicity is not typically available. An electrocardiogram (ECG) should be included in the assessment when there is concern about an overdose. Treatment People with symptoms are usually monitored in an intensive care unit for a minimum of 12 hours, with close attention paid to maintenance of the airways, along with monitoring of blood pressure, arterial pH, and continuous ECG monitoring. Supportive therapy is given if necessary, including respiratory assistance and maintenance of body temperature.
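The dose thresholds and the ECG criterion described above can be combined into a simple worked example. The following Python sketch is illustrative only and not clinical guidance; the function names and structure are assumptions introduced here for illustration, encoding the stated rules that most TCAs raise concern at 5 mg/kg or more (2.5 mg/kg for desipramine, nortriptyline, and trimipramine) and that a QRS complex wider than 100 ms is an indication for sodium bicarbonate.

```python
# Illustrative sketch only -- not clinical guidance. Encodes the dose
# thresholds and QRS rule stated above; names are hypothetical.

# TCAs with the lower (2.5 mg/kg) toxicity threshold; all other TCAs
# are generally non-toxic below 5 mg/kg.
LOW_THRESHOLD_TCAS = {"desipramine", "nortriptyline", "trimipramine"}

def exceeds_toxic_threshold(drug: str, ingested_mg: float, weight_kg: float) -> bool:
    """Return True if the ingested dose reaches the stated mg/kg threshold."""
    threshold = 2.5 if drug.lower() in LOW_THRESHOLD_TCAS else 5.0
    return ingested_mg / weight_kg >= threshold

def bicarbonate_indicated(qrs_ms: float) -> bool:
    """Sodium bicarbonate is recommended for a wide QRS complex (> 100 ms)."""
    return qrs_ms > 100

# Example: a 70 kg adult who has ingested 500 mg of amitriptyline
print(exceeds_toxic_threshold("amitriptyline", 500, 70))  # True (about 7.1 mg/kg)
print(bicarbonate_indicated(112))                         # True
```

As the text notes, an ECG should be part of any assessment; a calculation like this only restates the published thresholds and cannot substitute for clinical evaluation.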
Once a person has had a normal ECG for more than 24 hours they are generally medically clear. Decontamination Initial treatment of an acute overdose includes gastric decontamination. This is achieved by giving activated charcoal, either by mouth or via a nasogastric tube, which adsorbs the drug in the gastrointestinal tract. Activated charcoal is most useful if given within 1 to 2 hours of ingestion. Other decontamination methods such as stomach pumps, ipecac-induced emesis, or whole bowel irrigation are generally not recommended in TCA poisoning. Stomach pumps may be considered within an hour of ingestion, but evidence to support the practice is poor. Medication Administration of intravenous sodium bicarbonate as an antidote has been shown to be an effective treatment for resolving the metabolic acidosis and cardiovascular complications of TCA poisoning. If sodium bicarbonate therapy fails to improve cardiac symptoms, conventional antidysrhythmic drugs or magnesium can be used to reverse any cardiac abnormalities. However, no benefit has been shown from Class 1 antiarrhythmic drugs; it appears they worsen the sodium channel blockade, slow conduction velocity, and depress contractility, and should be avoided in TCA poisoning. Low blood pressure is initially treated with fluids along with bicarbonate to reverse metabolic acidosis (if present); if the blood pressure remains low despite fluids, then further measures such as the administration of epinephrine, norepinephrine, or dopamine can be used to increase blood pressure. Another potentially severe symptom is seizures: these often resolve without treatment, but administration of a benzodiazepine or other anticonvulsive may be required for persistent muscular overactivity. There is no role for physostigmine in the treatment of tricyclic toxicity, as it may increase cardiac toxicity and cause seizures. In cases of severe TCA overdose that are refractory to conventional therapy, intravenous lipid emulsion therapy has been reported to improve signs and symptoms in moribund patients with toxicities involving several types of lipophilic substances; therefore, lipids may have a role in treating severe cases of refractory TCA overdose. Dialysis Tricyclic antidepressants are highly protein bound and have a large volume of distribution; therefore removal of these compounds from the blood with hemodialysis, hemoperfusion or other techniques is unlikely to be of any significant benefit. Epidemiology Studies in the 1990s in Australia and the United Kingdom showed that between 8 and 12% of drug overdoses followed TCA ingestion. TCAs may be involved in up to 33% of all fatal poisonings, second only to analgesics. Another study reported that 95% of deaths from antidepressants in England and Wales between 1993 and 1997 were associated with tricyclic antidepressants, particularly dothiepin and amitriptyline. It was determined there were 5.3 deaths per 100,000 prescriptions. Sodium channel blockers such as Dilantin (phenytoin) should not be used in the treatment of TCA overdose, as the Na+ blockade will prolong the QT interval. References External links
Trigeminal neuralgia
Trigeminal neuralgia (TN or TGN), also called Fothergill Disease, Tic Douloureux, or Trifacial Neuralgia, is a long-term pain disorder that affects the trigeminal nerve, the nerve responsible for sensation in the face and motor functions such as biting and chewing. It is a form of neuropathic pain. There are two main types: typical and atypical trigeminal neuralgia. The typical form results in episodes of severe, sudden, shock-like pain in one side of the face that lasts for seconds to a few minutes. Groups of these episodes can occur over a few hours. The atypical form results in a constant burning pain that is less severe. Episodes may be triggered by any touch to the face. Both forms may occur in the same person. It is regarded as one of the most painful disorders known to medicine, and often results in depression. The exact cause is unknown, but it is believed to involve loss of the myelin of the trigeminal nerve. This might occur due to compression from a blood vessel as the nerve exits the brain stem, multiple sclerosis, stroke, or trauma. Less common causes include a tumor or arteriovenous malformation. It is a type of nerve pain. Diagnosis is typically based on the symptoms, after ruling out other possible causes such as postherpetic neuralgia. Treatment includes medication or surgery. The anticonvulsant carbamazepine or oxcarbazepine is usually the initial treatment, and is effective in about 90% of people. Side effects are frequently experienced, and they necessitate drug withdrawal in as many as 23% of patients. Other options include lamotrigine, baclofen, gabapentin, amitriptyline and pimozide. Opioids are not usually effective in the typical form. In those who do not improve or become resistant to other measures, a number of types of surgery may be tried. It is estimated that 1 in 8,000 people per year develop trigeminal neuralgia. It usually begins in people over 50 years old, but can occur at any age. Women are more commonly affected than men. The condition was first described in detail in 1773 by John Fothergill. Signs and symptoms This disorder is characterized by episodes of severe facial pain along the trigeminal nerve divisions. The trigeminal nerve is a paired cranial nerve that has three major branches: the ophthalmic nerve (V1), the maxillary nerve (V2), and the mandibular nerve (V3). One, two, or all three branches of the nerve may be affected. Trigeminal neuralgia most commonly involves the middle branch (the maxillary nerve or V2) and lower branch (mandibular nerve or V3) of the trigeminal nerve. An individual attack usually lasts from a few seconds to several minutes or hours, but these can repeat for hours with very short intervals between attacks. In other instances, only 4–10 attacks are experienced daily. The episodes of intense pain may occur paroxysmally. People often describe a trigger area on the face so sensitive that touching it, or even air currents, can trigger an episode; however, in many people, the pain is generated spontaneously without any apparent stimulation. It affects lifestyle, as it can be triggered by common activities such as eating, talking, shaving and brushing teeth. The wind, chewing, and talking can aggravate the condition in many patients. The attacks are said by those affected to feel like stabbing electric shocks, burning, sharp, pressing, crushing, exploding or shooting pain that becomes intractable. The pain also tends to occur in cycles, with remissions lasting months or even years.
Pain attacks are known to worsen in frequency or severity over time in some people. Pain may migrate to other branches over time, but in some people remains very stable. Bilateral (occurring on both sides) trigeminal neuralgia is very rare, except for trigeminal neuralgia caused by multiple sclerosis (MS). This normally indicates problems with both trigeminal nerves, since one nerve serves the left side of the face and the other serves the right side. Occasional reports of bilateral trigeminal neuralgia reflect successive episodes of unilateral (only one side) pain switching the side of the face, rather than pain occurring simultaneously on both sides. Rapid spreading of the pain, bilateral involvement or simultaneous participation of other major nerve trunks (such as Painful Tic Convulsif of nerves V & VII, or occurrence of symptoms in the V and IX nerves) may suggest a systemic cause. Systemic causes could include multiple sclerosis or expanding cranial tumors. The severity of the pain makes it difficult to wash the face, shave, and perform good oral hygiene. The pain has a significant impact on activities of daily living, especially as those affected live in fear of when they are going to get their next attack of pain and how severe it will be. It can lead to severe depression and anxiety. However, not all people will have the symptoms described above, and there are variants of TN. One of these is atypical trigeminal neuralgia ("trigeminal neuralgia, type 2", or trigeminal neuralgia with concomitant pain), based on a recent classification of facial pain. In these instances there is also a more prolonged, lower-severity background pain that can be present for over 50% of the time and is described more as a burning or prickling, rather than a shock. Trigeminal pain can also occur after an attack of herpes zoster, and post-herpetic neuralgia has the same manifestations as in other parts of the body. Herpes zoster oticus typically presents with inability to move many facial muscles, pain in the ear, taste loss on the front of the tongue, dry eyes and mouth, and a vesicular rash. Less than 1% of varicella zoster infections involve the facial nerve and result in this occurring. Trigeminal deafferentation pain (TDP), also termed anesthesia dolorosa or, colloquially, phantom face pain, results from intentional damage to a trigeminal nerve following attempts to surgically fix a nerve problem. This pain is usually constant, with a burning sensation and numbness. TDP is very difficult to treat, as further surgeries are usually ineffective and possibly detrimental to the person. Causes The trigeminal nerve is a mixed cranial nerve responsible for sensory data such as tactition (pressure), thermoception (temperature), and nociception (pain) originating from the face above the jawline; it is also responsible for the motor function of the muscles of mastication, the muscles involved in chewing but not facial expression. Several theories exist to explain the possible causes of this pain syndrome. It was once believed that the nerve was compressed in the opening from the inside to the outside of the skull, but leading research indicates that it is an enlarged or lengthened blood vessel – most commonly the superior cerebellar artery – compressing or throbbing against the microvasculature of the trigeminal nerve near its connection with the pons. Such a compression can injure the nerve's protective myelin sheath and cause erratic and hyperactive functioning of the nerve.
This can lead to pain attacks at the slightest stimulation of any area served by the nerve, as well as hinder the nerve's ability to shut off the pain signals after the stimulation ends. This type of injury may rarely be caused by an aneurysm (an outpouching of a blood vessel); by an AVM (arteriovenous malformation); by a tumor, such as an arachnoid cyst or meningioma in the cerebellopontine angle; or by a traumatic event, such as a car accident. Short-term peripheral compression is often painless. Persistent compression results in local demyelination with no loss of axon potential continuity. Chronic nerve entrapment results in demyelination primarily, with progressive axonal degeneration subsequently. It is "therefore widely accepted that trigeminal neuralgia is associated with demyelination of axons in the Gasserian ganglion, the dorsal root, or both." It has been suggested that this compression may be related to an aberrant branch of the superior cerebellar artery that lies on the trigeminal nerve. Further causes, besides an aneurysm, multiple sclerosis or cerebellopontine angle tumor, include a posterior fossa tumor, any other expanding lesion, or even brainstem diseases from strokes. Trigeminal neuralgia is found in 3–4% of people with multiple sclerosis, according to data from seven studies. It has been theorized that this is due to damage to the spinal trigeminal complex. Trigeminal pain has a similar presentation in patients with and without MS. Postherpetic neuralgia, which occurs after shingles, may cause similar symptoms if the trigeminal nerve is damaged, a condition called Ramsay Hunt syndrome type 2. When there is no apparent structural cause, the syndrome is called idiopathic. Diagnosis Trigeminal neuralgia is diagnosed via the results of neurological and physical tests, as well as the individual's medical history. Management As with many conditions without clear physical or laboratory diagnosis, TN is sometimes misdiagnosed. A person with TN will sometimes seek the help of numerous clinicians before a firm diagnosis is made. There is evidence that points towards the need to quickly diagnose and treat TN: it is thought that the longer a patient has TN, the harder it may be to reverse the neural pathways associated with the pain. The differential diagnosis includes temporomandibular disorder. Since triggering may be caused by movements of the tongue or facial muscles, TN must be differentiated from masticatory pain that has the clinical characteristics of deep somatic rather than neuropathic pain. Masticatory pain will not be arrested by a conventional mandibular local anesthetic block. One quick test a dentist might perform is a conventional inferior dental local anaesthetic block, if the pain is in this branch, as it will not arrest masticatory pain but will arrest TN. Medical The anticonvulsant carbamazepine is the first-line treatment; second-line medications include baclofen, lamotrigine, oxcarbazepine, phenytoin, topiramate, gabapentin and pregabalin. Uncontrolled trials have suggested that clonazepam and lidocaine may be effective. Antidepressant medications, such as amitriptyline, have shown good efficacy in treating trigeminal neuralgia, especially if combined with an anticonvulsant drug such as pregabalin. There is some evidence that duloxetine, an antidepressant, can also be used in some cases of neuropathic pain, especially in patients with major depressive disorder. However, it should by no means be considered a first-line therapy and should only be tried on specialist advice.
There is controversy around the use of opiates such as morphine and oxycodone for the treatment of TN, with varying evidence on their effectiveness for neuropathic pain. Generally, opioids are considered ineffective against TN and thus should not be prescribed. Surgical Microvascular decompression provides freedom from pain in about 75% of patients presenting with drug-resistant trigeminal neuralgia. While there may be pain relief after surgery, there is also a risk of adverse effects, such as facial numbness. Percutaneous radiofrequency thermorhizotomy may also be effective, as may stereotactic radiosurgery; however, the effectiveness decreases with time. Surgical procedures can be separated into non-destructive and destructive: Non-destructive Microvascular decompression – this involves a small incision behind the ear and some bone removal from the area. An incision through the meninges is made to expose the nerve. Any vascular compressions of the nerve are carefully moved and a sponge-like pad is placed between the compression and nerve, stopping unwanted pulsation and allowing myelin sheath healing. Destructive All destructive procedures cause facial numbness in addition to pain relief. Percutaneous techniques all involve a needle or catheter entering the face up to the origin where the nerve splits into three divisions, and then purposely damaging this area to produce numbness and to stop pain signals. These techniques are proven effective, especially in those in whom other interventions have failed or in those who are medically unfit for surgery, such as the elderly.
Balloon compression – inflation of a balloon at this point, causing damage and stopping pain signals.
Glycerol injection – deposition of a corrosive liquid called glycerol at this point causes damage to the nerve to hinder pain signals.
Radiofrequency thermocoagulation rhizotomy – application of a heated needle to damage the nerve at this point.
Stereotactic radiosurgery is a form of radiation therapy that focuses high-power energy on a small area of the body. Support Psychological and social support has been found to play a key role in the management of chronic illnesses and chronic pain conditions, such as trigeminal neuralgia. Chronic pain can cause constant frustration to an individual as well as to those around them. History Trigeminal neuralgia was first described by physician John Fothergill and treated surgically by John Murray Carnochan, both of whom were graduates of the University of Edinburgh Medical School. Historically TN has been called "suicide disease", a usage dating from studies by the pioneering neurosurgeon Harvey Cushing involving 123 cases of TN between 1896 and 1912. The name reflects the intense pain, higher rates of suicidal ideation in patients with severe migraines, and links to higher rates of depression, anxiety, and sleep disorders; the pain spreads over the face and down the neck and can be triggered by even the slightest breath of wind across the face. Society and culture Some individuals of note with TN include:
Four-time British Prime Minister William Gladstone, who is believed to have had the disease.
Entrepreneur and author Melissa Seymour, who was diagnosed with TN in 2009 and underwent microvascular decompression surgery in a well-documented case covered by magazines and newspapers which helped to raise public awareness of the illness in Australia. Seymour was subsequently made a Patron of the Trigeminal Neuralgia Association of Australia.
Salman Khan, an Indian film star, was diagnosed with TN in 2011. He underwent surgery in the US.
All-Ireland winning Gaelic footballer Christy Toye was diagnosed with the condition in 2013. He spent five months in his bedroom at home, returned for the 2014 season and lined out in another All-Ireland final with his team.
Jim Fitzpatrick – former Member of Parliament (MP) for Poplar and Limehouse – disclosed he had trigeminal neuralgia before undergoing neurosurgery. He has openly discussed his condition at parliamentary meetings and is a prominent figure in the TNA UK charity.
Jefferson Davis – President of the Confederate States of America
Charles Sanders Peirce – American philosopher, scientist and father of pragmatism
Gloria Steinem – American feminist, journalist, and social and political activist
Anneli van Rooyen, Afrikaans singer-songwriter popular during the 1980s and 1990s, was diagnosed with atypical trigeminal neuralgia in 2004. During surgical therapy performed in 2007 to alleviate the condition, Van Rooyen sustained permanent nerve damage, resulting in her near-complete retirement from performing.
H.R., singer of hardcore punk band Bad Brains
Aneeta Prem, British author, human rights campaigner, magistrate and the founder and president of Freedom Charity. Aneeta's experience of bilateral TN began in 2010, with severe pain and resulting sleep deprivation. Her condition remained undiagnosed until 2017. MVD surgery to ameliorate the pain on the right-hand side was performed at UCLH in December 2019.
Travis Barker, drummer of rock band Blink-182
See also Cluster headache Trigeminal trophic syndrome References External links Trigeminal neuralgia at Curlie Trigeminal Neuralgia at NHS Choices
Tropical sprue
Tropical sprue is a malabsorption disease commonly found in tropical regions, marked by abnormal flattening of the villi and inflammation of the lining of the small intestine. It differs significantly from coeliac sprue. It appears to be a more severe form of environmental enteropathy. Signs and symptoms The illness usually starts with an attack of acute diarrhoea, fever and malaise, following which, after a variable period, the patient settles into the chronic phase of diarrhoea, steatorrhoea, weight loss, anorexia, malaise, and nutritional deficiencies. The symptoms of tropical sprue are:
Diarrhoea
Steatorrhoea or fatty stool (often foul-smelling and whitish in colour)
Indigestion
Cramps
Weight loss and malnutrition
Fatigue
Left untreated, nutrient and vitamin deficiencies may develop in patients with tropical sprue. These deficiencies may have these symptoms:
Vitamin A deficiency: hyperkeratosis or skin scales
Vitamin B12 and folic acid deficiencies: anaemia, numbness, and tingling sensation
Vitamin D and calcium deficiencies: spasm, bone pain, muscle weakness
Vitamin K deficiency: bruises
Cause The cause of tropical sprue is not known. It may be caused by persistent bacterial, viral, amoebal, or parasitic infections. Folic acid deficiency, effects of malabsorbed fat on intestinal motility, and persistent small intestinal bacterial overgrowth may combine to cause the disorder. A link between small intestinal bacterial overgrowth and tropical sprue has been proposed to be involved in the aetiology of post-infectious irritable bowel syndrome (IBS). Intestinal immunologic dysfunction, including deficiencies in secretory immunoglobulin A (IgA), may predispose people to malabsorption and bacterial colonization, so tropical sprue may be triggered in susceptible individuals following an acute enteric infection. Diagnosis Diagnosis of tropical sprue can be complicated because many diseases have similar symptoms. The following investigation results are suggestive:
Abnormal flattening of villi and inflammation of the lining of the small intestine, observed during an endoscopic procedure
Presence of inflammatory cells (most often lymphocytes) in the biopsy of small intestine tissue
Low levels of vitamins A, B12, E, D, and K, as well as serum albumin, calcium, and folate, revealed by a blood test
Excess fat in the feces (steatorrhoea)
Thickened small bowel folds seen on imaging
Tropical sprue is largely limited to within about 30 degrees north and south of the equator. Recent travel to this region is a key factor in diagnosing this disease in residents of countries outside of that geographical region. Other conditions which can resemble tropical sprue need to be differentiated. Coeliac disease (also known as coeliac sprue or gluten-sensitive enteropathy) has similar symptoms to tropical sprue, with the flattening of the villi and small intestine inflammation, and is caused by an autoimmune disorder in genetically susceptible individuals triggered by ingested gluten. Malabsorption can also be caused by protozoan infections, tuberculosis, HIV/AIDS, immunodeficiency, chronic pancreatitis and inflammatory bowel disease. Environmental enteropathy is a less severe, subclinical condition similar to tropical sprue. Prevention Preventive measures for visitors to tropical areas where the condition exists include steps to reduce the likelihood of gastroenteritis.
These may comprise using only bottled water for drinking, brushing teeth, and washing food, and avoiding fruits washed with tap water (or consuming only peeled fruits, such as bananas and oranges). Basic sanitation is necessary to reduce fecal-oral contamination and the impact of environmental enteropathy in the developing world. Treatment Once diagnosed, tropical sprue can be treated by a course of the antibiotic tetracycline or sulphamethoxazole/trimethoprim (co-trimoxazole) for 3 to 6 months. Supplementation of vitamins B12 and folic acid improves appetite and leads to a gain in weight. Prognosis The prognosis for tropical sprue may be excellent after treatment. It usually does not recur in people who get it during travel to affected regions. The recurrence rate for natives is about 20%, but another study showed changes can persist for several years. Epidemiology Tropical sprue is common in the Caribbean, Central and South America, and India and southeast Asia. In the Caribbean, it appeared to be more common in Puerto Rico and Haiti. Epidemics in southern India have occurred. History The disease was first described by William Hillary in 1759 in Barbados. Tropical sprue was responsible for one-sixth of all casualties sustained by the Allied forces in India and Southeast Asia during World War II. The use of folic acid and vitamin B12 in the treatment of tropical sprue was promoted in the late 1940s by Dr. Tom Spies of the University of Alabama while conducting his research in Cuba and Puerto Rico. References
Tuberculosis management
Tuberculosis management describes the techniques and procedures utilized in treating tuberculosis (TB). The medical standard for active TB is a short-course treatment involving a combination of isoniazid, rifampicin (also known as rifampin), pyrazinamide, and ethambutol for the first two months. During this initial period, isoniazid is taken alongside pyridoxal phosphate to prevent peripheral neuropathy. Isoniazid is then taken together with rifampicin alone for the remaining four months of treatment. A patient is considered free of all living TB bacteria after six months. Latent tuberculosis or latent tuberculosis infection (LTBI) is treated with three to nine months of isoniazid alone; this long-term treatment often risks the development of hepatotoxicity. A combination of isoniazid plus rifampicin for a period of three to four months has been shown to be an equally effective method for treating LTBI while mitigating the risk of hepatotoxicity. Treatment of LTBI is essential in preventing the spread of active TB. Drugs First line All first-line anti-tuberculous drug names have semistandardized three-letter and single-letter abbreviations: ethambutol is EMB or E, isoniazid is INH or H, pyrazinamide is PZA or Z, rifampicin is RMP or R, streptomycin is SM or S. First-line anti-tuberculous drug names are often remembered with the mnemonic "RIPE," referring to the use of a rifamycin (like rifampin), isoniazid, pyrazinamide, and ethambutol. US practice uses abbreviations and names that are not internationally convened: rifampicin is called rifampin and abbreviated RIF; streptomycin is abbreviated STM. Other abbreviations have been widely used (for example, the notations RIF, RFP, and RMP have all been widely used for rifampicin, and the combination regimens have notations such as IRPE, HRZE, RIPE, and IREP that are variously synonyms or near-synonyms, depending on dosage schedules), but for clarity, the semistandardized abbreviations used above are used in the rest of this article. In this system, which the World Health Organization (WHO) supports, "RIPE" is "RHZE". (Both have mnemonic potential, as tuberculosis is named after tubercles (small tubers), and a tuber can be ripe and can be a rhizome.) Drug regimens are similarly abbreviated in a semistandardised manner. The drugs are listed using their single-letter abbreviations (in the order given above, which is roughly the order of introduction into clinical practice). A prefix denotes the number of months the treatment should be given for; a subscript denotes intermittent dosing (so 3 means three times a week) and no subscript means daily dosing. Most regimens have an initial high-intensity phase, followed by a continuation phase (also called a consolidation phase or eradication phase): the high-intensity phase is given first, then the continuation phase, the two phases divided by a slash. So, 2HREZ/4HR3 means isoniazid, rifampicin, ethambutol, pyrazinamide daily for two months, followed by four months of isoniazid and rifampicin given three times a week. In the US only, streptomycin is not considered a first-line drug by ATS/IDSA/CDC because of high rates of resistance. The WHO have made no such recommendation. Second line The second-line drugs (WHO groups 2, 3 and 4) are only used to treat disease that is resistant to first-line therapy (i.e., for extensively drug-resistant tuberculosis (XDR-TB) or multidrug-resistant tuberculosis (MDR-TB)).
A drug may be classed as second-line instead of first-line for one of three possible reasons: it may be less effective than the first-line drugs (e.g., p-aminosalicylic acid); it may have toxic side-effects (e.g., cycloserine); or it may be effective but unavailable in many developing countries (e.g., fluoroquinolones):
aminoglycosides (WHO group 2): e.g., amikacin (AMK), kanamycin (KM)
polypeptides (WHO group 2): e.g., capreomycin, viomycin, enviomycin
fluoroquinolones (WHO group 3): e.g., ciprofloxacin (CIP), levofloxacin, moxifloxacin (MXF)
thioamides (WHO group 4): e.g., ethionamide, prothionamide
cycloserine (WHO group 4)
terizidone (WHO group 5)
Third line Third-line drugs (WHO group 5) include drugs that may be useful, but have doubtful or unproven efficacy:
rifabutin
macrolides: e.g., clarithromycin (CLR)
linezolid (LZD)
thioacetazone (T)
thioridazine
arginine
vitamin D
bedaquiline
These drugs are listed here either because they are not very effective (e.g., clarithromycin) or because their efficacy has not been proven (e.g., linezolid, R207910). Rifabutin is effective, but is not included on the WHO list because, for most developing countries, it is impractically expensive. Standard regimen Rationale and evidence Tuberculosis has been treated with combination therapy for over fifty years. Drugs are not used singly (except in latent TB or chemoprophylaxis), and regimens that use only single drugs result in the rapid development of resistance and treatment failure. The rationale for using multiple drugs to treat TB is based on simple probability. The rates of spontaneous mutations that confer resistance to an individual drug are well known: 1 mutation for every 10⁷ cell divisions for EMB, 1 for every 10⁸ divisions for STM and INH, and 1 for every 10¹⁰ divisions for RMP. Patients with extensive pulmonary TB have approximately 10¹² bacteria in their body, and therefore will probably be harboring approximately 10⁵ EMB-resistant bacteria, 10⁴ STM-resistant bacteria, 10⁴ INH-resistant bacteria and 10² RMP-resistant bacteria. Resistance mutations appear spontaneously and independently, so the chance of a patient harbouring a bacterium that is spontaneously resistant to both INH and RMP is 1 in 10⁸ × 1 in 10¹⁰ = 1 in 10¹⁸, and the chance of harbouring a bacterium that is spontaneously resistant to all four drugs is 1 in 10³³. This is, of course, an oversimplification, but it is a useful way of explaining combination therapy. There are other theoretical reasons for supporting combination therapy. The different drugs in the regimen have different modes of action. INH is bactericidal against replicating bacteria. EMB is bacteriostatic at low doses, but is used in TB treatment at higher, bactericidal doses. RMP is bactericidal and has a sterilizing effect. PZA is only weakly bactericidal, but is very effective against bacteria located in acidic environments, inside macrophages, or in areas of acute inflammation. All TB regimens in use were 18 months or longer until the appearance of rifampicin. In 1953, the standard UK regimen was 3SPH/15PH or 3SPH/15SH2. Between 1965 and 1970, EMB replaced PAS. RMP began to be used to treat TB in 1968 and the BTS study in the 1970s showed that 2HRE/7HR was efficacious. In 1984, a BTS study showed that 2HRZ/4HR was efficacious, with a relapse rate of less than 3% after two years.
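The mutation-rate arithmetic in the rationale above can be checked with a short calculation. The Python sketch below simply reproduces the numbers as stated (resistance mutation rates per cell division and a bacterial burden of 10¹²); like the prose, it assumes mutations are independent and is an oversimplification rather than a pharmacological model.

```python
# Worked version of the combination-therapy probability argument above.
# Assumes, as the text does, that resistance mutations are independent.

mutation_rate = {"EMB": 1e-7, "STM": 1e-8, "INH": 1e-8, "RMP": 1e-10}
burden = 1e12  # approximate bacterial load in extensive pulmonary TB

# Expected number of bacteria spontaneously resistant to each single drug
for drug, rate in mutation_rate.items():
    print(f"{drug}: ~{burden * rate:.0e} resistant bacteria")
# EMB: ~1e+05, STM: ~1e+04, INH: ~1e+04, RMP: ~1e+02

# Probability that any one bacterium is resistant to both INH and RMP
p_inh_rmp = mutation_rate["INH"] * mutation_rate["RMP"]  # 1e-18

# Probability of simultaneous resistance to all four drugs
p_all_four = 1.0
for rate in mutation_rate.values():
    p_all_four *= rate  # 1e-33

# Expected dual-resistant organisms in a burden of 1e12: effectively zero
print(f"expected INH+RMP-resistant bacteria: {burden * p_inh_rmp:.0e}")  # 1e-06
```

This makes the case for combination therapy concrete: a burden of 10¹² organisms is large enough to contain single-drug-resistant mutants, but the expected number of organisms resistant to two or more drugs at once is far below one.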
In 1995, with the recognition that INH resistance was increasing, the British Thoracic Society recommended adding EMB or STM to the regimen: 2HREZ/4HR or 2SHRZ/4HR, which are the regimens currently recommended. The WHO also recommend a six-month continuation phase of HR if the patient is still culture positive after 2 months of treatment (approximately 15% of patients with fully sensitive TB) and for those patients who have extensive bilateral cavitation at the start of treatment. Monitoring, DOTS, and DOTS-Plus DOTS stands for "Directly Observed Treatment, Short-course" and is a major plank in the World Health Organization (WHO) Global Plan to Stop TB. The DOTS strategy focuses on five main points of action. The first element of DOTS involves creating sustainable financing and short- and long-term plans, provided by the government, dedicated to eliminating tuberculosis. The World Health Organization helps to mobilize funding to reduce the poverty that contributes to tuberculosis. The second component of the DOTS strategy is case detection, which involves improving the accuracy of laboratory tests for bacteriology and improving communication from labs to doctors and patients. In practice, this means that the laboratories that detect and test for bacteriology must be accurate and must communicate their findings to doctors and patients. The third strategy is to provide standard treatment and patient support. Adequate treatment means providing the pharmaceutical drugs that help eliminate tuberculosis, together with follow-up check-ups to ensure that tuberculosis does not continue to disrupt a patient's life. There are many cultural barriers, as many patients might continue to work under unsanitary living conditions or not have enough money to pay for the treatments. Programs that provide stipends and incentives to allow citizens to seek treatment are also necessary. The fourth element of the DOTS approach is to have a management program that ensures a sustainable long-term supply of reliable antibiotics. Lastly, the fifth component is to record and monitor treatment plans to ensure that the DOTS approach is effective. The DOTS approach not only aims to provide structure for tuberculosis programs, but also to ensure that citizens diagnosed with tuberculosis adhere to protocols which will prevent future bacterial infections. These include government commitment to control TB, diagnosis based on sputum-smear microscopy tests done on patients who actively report TB symptoms, direct observation short-course chemotherapy treatments, a definite supply of drugs, and standardized reporting and recording of cases and treatment outcomes. The WHO advises that all TB patients should have at least the first two months of their therapy observed (and preferably the whole of it observed): this means an independent observer watching patients swallow their anti-TB therapy. The independent observer is often not a healthcare worker and may be a shopkeeper or a tribal elder or similar senior person within that society. DOTS is used with intermittent dosing (thrice weekly or 2HREZ/4HR3). Twice-weekly dosing is effective but not recommended by the World Health Organization (WHO), because there is no margin for error (accidentally omitting one dose per week results in once-weekly dosing, which is ineffective). Treatment with properly implemented DOTS has a success rate exceeding 95% and prevents the emergence of further multi-drug-resistant strains of tuberculosis.
Administering DOTS decreases the possibility of tuberculosis recurring, resulting in a reduction in unsuccessful treatments. This is in part because areas without the DOTS strategy generally provide lower standards of care. Areas with DOTS administration help lower the number of patients seeking help from other facilities, where they are treated with unknown treatments resulting in unknown outcomes. However, if the DOTS program is not implemented, or is implemented incorrectly, positive results are unlikely. For the program to work efficiently and accurately, health providers must be fully engaged, links must be built between public and private practitioners, health services must be available to all, and global support must be provided to countries trying to reach their TB prevention and treatment aims. Some researchers suggest that, because the DOTS framework has been so successful in the treatment of tuberculosis in sub-Saharan Africa, DOTS should be expanded to non-communicable diseases such as diabetes mellitus, hypertension, and epilepsy. DOTS-Plus strategy The WHO extended the DOTS programme in 1998 to include the treatment of MDR-TB (called "DOTS-Plus"). Implementation of DOTS-Plus requires the capacity to perform drug-susceptibility testing (not routinely available even in developed countries) and the availability of second-line agents, in addition to all the requirements for DOTS. DOTS-Plus is therefore much more resource-expensive than DOTS, and requires much greater commitment from countries wishing to implement it. Community engagement is a new approach that is being initiated alongside the DOTS individualized treatment. By creating a community in which health workers give support to patients and hospital staff, the DOTS-Plus model also incorporates structured psychological support to help patients complete treatment. Treatment under the new strategy lasts a total of 18–24 months. Monthly surveillance until cultures convert to negative is recommended for DOTS-Plus, but not for DOTS. If cultures are positive or symptoms do not resolve after three months of treatment, it is necessary to re-evaluate the patient for drug-resistant disease or nonadherence to the drug regimen. If cultures do not convert to negative despite three months of therapy, some physicians may consider admitting the patient to hospital so as to closely monitor therapy. Extra-pulmonary tuberculosis Tuberculosis not affecting the lungs is called extra-pulmonary tuberculosis. Disease of the central nervous system is specifically excluded from this classification. The United Kingdom and World Health Organization (WHO) recommendation is 2HREZ/4HR; the US recommendation is 2HREZ/7HR. There is good evidence from randomised controlled trials to say that in tuberculous lymphadenitis and in TB of the spine, the six-month regimen is equivalent to the nine-month regimen; the US recommendation is therefore not supported by the evidence. Up to 25% of patients with TB of the lymph nodes (TB lymphadenitis) will get worse on treatment before they get better, and this usually happens in the first few months of treatment. A few weeks after starting treatment, lymph nodes often start to enlarge, and previously solid lymph nodes may soften and develop into tuberculous cervical lymphadenitis. This should not be interpreted as failure of therapy and is a common reason for patients (and their physicians) to panic unnecessarily.
With patience, two to three months into treatment the lymph nodes start to shrink again, and re-aspiration or re-biopsy of the lymph nodes is unnecessary: if repeat microbiological studies are ordered, they will show the continued presence of viable bacteria with the same sensitivity pattern, which further adds to the confusion: physicians inexperienced in the treatment of TB will then often add second-line drugs in the belief that the treatment is not working. In these situations, all that is required is reassurance. Steroids may be useful in resolving the swelling, especially if it is painful, but they are not necessary. Additional antibiotics are unnecessary and the treatment regimen does not need to be lengthened.

There is no evidence that the six-month regimen is inadequate in treating abdominal TB, and there is no additional benefit from a nine-month regimen in preventing relapse. However, larger-scale studies are needed to confirm this conclusion.

Tuberculosis of the central nervous system
Central nervous system tuberculosis takes two major forms: tuberculous meningitis and tuberculoma. Tuberculosis may affect the central nervous system (meninges, brain or spinal cord), in which case it is called TB meningitis, TB cerebritis, or TB myelitis respectively; the standard treatment is 12 months of drugs (2HREZ/10HR) and steroids are mandatory.

Diagnosis is difficult, as CSF culture is positive in less than half of cases, and therefore a large proportion of cases are treated on the basis of clinical suspicion alone. PCR of CSF does not significantly improve the microbiology yield; culture remains the most sensitive method, and a minimum of 5 mL (preferably 20 mL) of CSF should be sent for analysis. TB cerebritis (or TB of the brain) may require brain biopsy to make the diagnosis, because the CSF is commonly normal: this is not always available, and even when it is, some clinicians would debate whether it is justified putting a patient through such an invasive and potentially dangerous procedure when a trial of anti-TB therapy may yield the same answer; probably the only justification for brain biopsy is when drug-resistant TB is suspected.

It is possible that shorter durations of therapy (e.g., six months) may be sufficient to treat TB meningitis, but no clinical trial has addressed this issue. The CSF of patients with treated TB meningitis is commonly abnormal even at 12 months; the rate of resolution of the abnormality bears no correlation with clinical progress or outcome, and is not an indication for extending or repeating treatment; repeated sampling of CSF by lumbar puncture to monitor treatment progress should therefore not be done.

Although TB meningitis and TB cerebritis are classified together, the experience of many clinicians is that their progression and response to treatment are not the same. TB meningitis usually responds well to treatment, but TB cerebritis may require prolonged treatment (up to two years), and the steroid course needed is often also prolonged (up to six months).
Unlike TB meningitis, TB cerebritis often requires repeated CT or MRI imaging of the brain to monitor progress.

Central nervous system TB may be secondary to blood-borne spread, so some experts advocate the routine sampling of CSF in patients with miliary TB.

The anti-TB drugs that are most useful for the treatment of central nervous system TB are (CSF penetration in parentheses):
INH (100%)
RMP (10–20%)
EMB (25–50%, inflamed meninges only)
PZA (100%)
STM (20%, inflamed meninges only)
LZD (20%)
Cycloserine (80–100%)
Ethionamide (100%)
PAS (10–50%, inflamed meninges only)

The use of steroids is routine in TB meningitis (see section below). There is evidence from one poorly designed trial that aspirin may be beneficial, but further work is required before this can be recommended routinely.

Steroids
The usefulness of corticosteroids (e.g., prednisolone or dexamethasone) in the treatment of TB is proven for TB meningitis and TB pericarditis. The dose for TB meningitis is dexamethasone 8 to 12 mg daily, tapered off over six weeks (those who prefer more precise dosing should refer to Thwaites et al., 2004). The dose for pericarditis is prednisolone 60 mg daily, tapered off over four to eight weeks.

Steroids may be of temporary benefit in pleurisy, extremely advanced TB, and TB in children:
Pleurisy: prednisolone 20 to 40 mg daily, tapered off over 4 to 8 weeks
Extremely advanced TB: 40 to 60 mg daily, tapered off over 4 to 8 weeks
TB in children: 2 to 5 mg/kg/day for one week, 1 mg/kg/day the next week, then tapered off over 5 weeks

Steroids may be of benefit in peritonitis, miliary disease, tuberculous osteomyelitis, laryngeal TB, lymphadenitis and genitourinary disease, but the evidence is scant and the routine use of steroids cannot be recommended. Steroid treatment in these patients should be considered on a case-by-case basis by the attending physician. The long-term impact of pleural TB on respiratory function is unknown, and should be quantified before further clinical trials of corticosteroids in pleural TB are considered.

Thalidomide may be of benefit in TB meningitis, and has been used in cases where patients have failed to respond to steroid treatment.
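The steroid regimens above specify only a starting dose and a taper duration; the shape of the taper is left to the clinician. Purely for illustration, a linear weekly step-down (one common pattern, not one prescribed by the text) can be sketched in Python:

```python
def linear_taper(start_mg_per_day, weeks):
    """Evenly decreasing weekly doses, ending at zero after the last week.

    The text above gives only the starting dose and the taper duration,
    so the linear step-down here is an illustrative assumption.
    """
    step = start_mg_per_day / weeks
    return [round(start_mg_per_day - step * week, 1) for week in range(weeks)]

# e.g., prednisolone 60 mg daily for pericarditis, tapered over 6 weeks:
print(linear_taper(60, 6))  # [60.0, 50.0, 40.0, 30.0, 20.0, 10.0]
```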
Non-compliance
Patients who take their TB treatment in an irregular and unreliable way are at greatly increased risk of treatment failure, relapse and the development of drug-resistant TB strains.

There are a variety of reasons why patients fail to take their medication. The symptoms of TB commonly resolve within a few weeks of starting TB treatment, and many patients then lose the motivation to continue taking their medication. Regular follow-up is important to check on compliance and to identify any problems patients are having with their medication. Patients need to be told of the importance of taking their tablets regularly, and the importance of completing treatment, because of the risk of relapse or drug resistance developing otherwise.

One of the main complaints is the bulkiness of the tablets. The main offender is PZA (the tablets being the size of horse tablets). PZA syrup may be offered as a substitute, or, if the size of the tablets is truly an issue and liquid preparations are not available, then PZA can be omitted altogether. If PZA is omitted, the patient should be warned that this results in a significant increase in the duration of treatment (details of regimens omitting PZA are given below).

The other complaint is that the medicines must be taken on an empty stomach to facilitate absorption. This can be difficult for patients to follow (for example, shift workers who take their meals at irregular times) and may mean the patient waking up an hour earlier than usual every day just to take medication. The rules are actually less stringent than many physicians and pharmacists realise: the issue is that the absorption of RMP is reduced if taken with fat, but it is unaffected by carbohydrate, protein, or antacids. So the patient can in fact have his or her medication with food, as long as the meal does not contain fat or oils (e.g., a cup of black coffee, or toast with jam and no butter). Taking the medicines with food also helps ease the nausea that many patients feel when taking the medicines on an empty stomach. The effect of food on the absorption of INH is not clear: two studies have shown reduced absorption with food, but one study showed no difference. There is a small effect of food on the absorption of PZA and of EMB that is probably not clinically important.

It is possible to test urine for isoniazid and rifampicin levels to check for compliance. The interpretation of urine analysis is based on the fact that isoniazid has a longer half-life than rifampicin:
urine positive for isoniazid and rifampicin – patient probably fully compliant
urine positive for isoniazid only – patient has taken his medication in the last few days preceding the clinic appointment, but had not yet taken a dose that day
urine positive for rifampicin only – patient has omitted to take his medication the preceding few days, but did take it just before coming to clinic
urine negative for both isoniazid and rifampicin – patient has not taken either medicine for a number of days

In countries where doctors are unable to compel patients to take their treatment (e.g., the UK), some say that urine testing only results in unhelpful confrontations with patients and does not help increase compliance. In countries where legal measures can be taken to force patients to take their medication (e.g., the US), urine testing can be a useful adjunct in assuring compliance. RMP colours the urine and all bodily secretions (tears, sweat, etc.) an orange-pink colour, and this can be a useful proxy if urine testing is not available (although this colour fades approximately six to eight hours after each dose).
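The urine-testing scheme above is effectively a four-row decision table keyed on two booleans. A minimal Python sketch (the function name is hypothetical):

```python
def interpret_urine(inh_positive, rmp_positive):
    """Interpret compliance testing; rests on isoniazid having a longer
    urinary half-life than rifampicin, as described above."""
    if inh_positive and rmp_positive:
        return "probably fully compliant"
    if inh_positive:
        return ("took medication in the days before the appointment, "
                "but has not yet taken a dose today")
    if rmp_positive:
        return ("omitted medication in the preceding days, "
                "but took a dose just before coming to clinic")
    return "has not taken either medicine for a number of days"

print(interpret_urine(inh_positive=True, rmp_positive=False))
```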
In a study of cases of extra-pulmonary TB (EPTB), researchers at the University of the Philippines Manila found that the similarity of EPTB symptoms to those of other diseases results in delayed identification of the disease and late provision of medication. This ultimately contributes to increasing mortality and incidence rates of EPTB.

The World Health Organization (WHO) recommends prescription of fixed-dose combination drugs to improve adherence to treatment, by reducing the number of tablets that need to be taken, and possibly also reducing prescribing errors. A Cochrane review, published in 2016, found moderate-quality evidence that "there is probably little or no difference in fixed-dose combination drugs compared to single-drug formulations".

Treatment adherence strategies
As stated above, non-adherence to anti-tuberculosis treatment can result in treatment failure or the development of drug-resistant tuberculosis. Therefore, overall treatment strategies should focus on promoting adherence. WHO and the Centers for Disease Control and Prevention (CDC) recommend a multi-faceted, patient-centered approach to care. Public health and private sector practitioners can promote TB treatment adherence by allowing patients to be active partners in making their own treatment decisions; improving patients' knowledge and understanding of tuberculosis disease, treatment and potential spread; and by discussing expected interim and long-term outcomes with patients. The CDC also recommends the use of incentives and enablers. Incentives are monetary rewards for healthy behavior (e.g., transport or food vouchers), while enablers function to remove economic burdens impeding healthcare access (e.g., grouping clinic visits, providing after-hours clinic visits, or home visits). However, more research is needed to determine whether incentives and enablers have a significant effect on long-term treatment adherence for TB. Smartphones are considered to have potential to improve compliance.

Individuals with tuberculosis may also benefit from the emotional support of peers and survivors. Advocacy organizations and patient support groups such as STOP TB, TB Alert, Treatment Action Group (TAG) and others work to connect TB survivors.

Adverse effects
For information on adverse effects of individual anti-TB drugs, please refer to the individual articles for each drug. The relative incidence of major adverse effects has been carefully described:
INH 0.49 per hundred patient-months
RMP 0.43
EMB 0.07
PZA 1.48
All drugs 2.47

This works out to an 8.6% risk that any one patient will need to have his drug therapy changed during the course of standard short-course therapy (2HREZ/4HR). The people identified as most at risk of major adverse side effects in this study were those over the age of 60, females, HIV-positive patients, and Asians.

It can be extremely difficult to identify which drug is responsible for which side effect, but the relative frequency of each is known. The offending drugs are given in decreasing order of frequency:
Thrombocytopenia: rifampicin (RMP)
Neuropathy: isoniazid (INH)
Vertigo: streptomycin (STM)
Hepatitis: pyrazinamide (PZA), RMP, INH
Rash: PZA, RMP, ethambutol (EMB)

Thrombocytopenia is only caused by RMP, and no test dosing need be done. Regimens omitting RMP are discussed below. Please refer to the entry on rifampicin for further details.

The most frequent cause of neuropathy is INH. The peripheral neuropathy of INH is always a pure sensory neuropathy, and finding a motor component to the peripheral neuropathy should always prompt a search for an alternative cause. Once a peripheral neuropathy has occurred, INH must be stopped and pyridoxine should be given at a dose of 50 mg thrice daily. Simply adding high-dose pyridoxine to the regimen once neuropathy has occurred will not stop the neuropathy from progressing. Patients at risk of peripheral neuropathy from other causes (diabetes mellitus, alcoholism, renal failure, malnutrition, pregnancy, etc.) should all be given pyridoxine 10 mg daily at the start of treatment. Please refer to the entry on isoniazid for details on other neurological side effects of INH.

Rashes are most frequently due to PZA, but can occur with any of the TB drugs. Test dosing, using the same regimen as detailed below for hepatitis, may be necessary to determine which drug is responsible.

Itching
RMP commonly causes itching without a rash in the first two weeks of treatment: treatment should not be stopped, and the patient should be advised that the itch usually resolves on its own. Short courses of sedative antihistamines such as chlorpheniramine may be useful in alleviating the itch.
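The 8.6% figure quoted above is consistent with applying each drug's incidence rate over its months of exposure in 2HREZ/4HR (all four drugs for two months, then INH and RMP for four more). This back-of-the-envelope reconstruction is an assumption about how the figure arises, not a method stated in the original study:

```python
# Major adverse-effect rates, per 100 patient-months (figures quoted above).
rates = {"INH": 0.49, "RMP": 0.43, "EMB": 0.07, "PZA": 1.48}

# Months of exposure in 2HREZ/4HR: INH and RMP for all 6 months,
# EMB and PZA for the initial 2 months only.
exposure_months = {"INH": 6, "RMP": 6, "EMB": 2, "PZA": 2}

# Rate (per 100 patient-months) x months gives risk per 100 patients, i.e. %.
risk_percent = sum(rates[d] * exposure_months[d] for d in rates)
print(f"{risk_percent:.1f}%")  # 8.6%
```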
Fever during treatment can be due to a number of causes. It can occur as a natural effect of tuberculosis (in which case it should resolve within three weeks of starting treatment). Fever can be a result of drug resistance (but in that case the organism must be resistant to two or more of the drugs). Fever may be due to a superadded infection or an additional diagnosis (patients with TB are not exempt from getting influenza and other illnesses during the course of treatment). In a few patients, the fever is due to drug allergy.
The clinician must also consider the possibility that the diagnosis of TB is wrong. If the patient has been on treatment for more than two weeks, and if the fever had initially settled and then come back, it is reasonable to stop all TB medication for 72 hours. If the fever persists despite stopping all TB medication, then the fever is not due to the drugs. If the fever disappears off treatment, then the drugs need to be tested individually to determine the cause. The same scheme as is used for test dosing for drug-induced hepatitis (described below) may be used. The drug most frequently implicated as causing a drug fever is RMP: details are given in the entry on rifampicin.

Drug-induced hepatitis
Drug-induced hepatitis from TB treatment has a mortality rate of around 5%. Three drugs can induce hepatitis: PZA, INH and RMP (in decreasing order of frequency). It is not possible to distinguish between these three causes based purely on signs and symptoms. Test dosing must be carried out to determine which drug is responsible (this is discussed in detail below).

Liver function tests (LFTs) should be checked at the start of treatment, but, if normal, need not be checked again; the patient need only be warned of the symptoms of hepatitis. Some clinicians insist on regular monitoring of LFTs while on treatment, and in this instance, tests need only be done two weeks after starting treatment and then every two months thereafter, unless any problems are detected.

Elevations in bilirubin must be expected with RMP treatment (RMP blocks bilirubin excretion) and usually resolve after 10 days (liver enzyme production increases to compensate). Isolated elevations in bilirubin can be safely ignored.

Elevations in liver transaminases (ALT and AST) are common in the first three weeks of treatment. If the patient is asymptomatic and the elevation is not excessive, then no action need be taken; some experts suggest a cut-off of four times the upper limit of normal, but there is no evidence to support this particular number over any other. Some experts consider that treatment should only be stopped if jaundice becomes clinically evident.

If clinically significant hepatitis occurs while on TB treatment, then all the drugs should be stopped until the liver transaminases return to normal. If the patient is so ill that TB treatment cannot be stopped, then STM and EMB should be given until the liver transaminases return to normal (these two drugs are not associated with hepatitis).

Fulminant hepatitis can occur in the course of TB treatment, but is fortunately rare; emergency liver transplantation may be necessary, and deaths do occur.

Test dosing for drug-induced hepatitis
Drugs should be re-introduced individually. This cannot be done in an outpatient setting, and must be done under close observation. A nurse must be present to take the patient's pulse and blood pressure at 15-minute intervals for a minimum of four hours after each test dose is given (most problems will occur within six hours of test dosing, if they are going to occur at all). Patients can become very suddenly unwell, and access to intensive care facilities must be available.
The drugs should be given in this order:
Day 1: INH at 1/3 or 1/4 dose
Day 2: INH at 1/2 dose
Day 3: INH at full dose
Day 4: RMP at 1/3 or 1/4 dose
Day 5: RMP at 1/2 dose
Day 6: RMP at full dose
Day 7: EMB at 1/3 or 1/4 dose
Day 8: EMB at 1/2 dose
Day 9: EMB at full dose

No more than one test dose per day should be given, and all other drugs should be stopped while test dosing is being done. So on day 4, for example, the patient receives only RMP and no other drugs. If the patient completes the nine days of test dosing, then it is reasonable to assume that PZA caused the hepatitis, and no PZA test dosing need be done. The rationale for this order is that the two most important drugs for treating TB are INH and RMP, so these are tested first; PZA is the drug most likely to cause hepatitis and is also the drug that can be most easily omitted; and EMB is useful when the sensitivity pattern of the TB organism is not known and can be omitted if the organism is known to be sensitive to INH. Regimens omitting each of the standard drugs are listed below.

The order in which the drugs are tested can be varied according to the following considerations:
The most useful drugs (INH and RMP) should be tested first, because the absence of these drugs from a treatment regimen severely impairs its efficacy.
The drugs most likely to be causing the reaction should be tested as late as possible (and possibly need not be tested at all). This avoids rechallenging patients with a drug to which they have already had a (possibly) dangerous adverse reaction.

A similar scheme may be used for other adverse effects (such as fever and rash), using similar principles.
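The nine-day escalation above can be generated mechanically; a minimal Python sketch:

```python
# Reproduce the escalating test-dose schedule described above:
# each drug is reintroduced alone over three days before the next.
FRACTIONS = ["1/3 or 1/4 dose", "1/2 dose", "full dose"]
ORDER = ["INH", "RMP", "EMB"]  # PZA is assumed responsible if all nine days pass

day = 1
for drug in ORDER:
    for fraction in FRACTIONS:
        print(f"Day {day}: {drug} at {fraction}")
        day += 1
```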
Dysbiosis caused by HRZE antibiotic treatment
Tuberculosis treatment results in changes to the structure of the gut microbiome, both during and after treatment, in mice and humans. It is currently unknown what the long-term effects of this dysbiosis are on systemic immunity.

Deviations from the standard regimen
There is evidence supporting some deviations from the standard regimen when treating pulmonary TB:
Sputum culture-positive patients who are smear-negative at the start of treatment do well with only 4 months of treatment (this has not been validated for HIV-positive patients); sputum culture-negative patients do well on only 3 months of treatment (possibly because some of these patients never had TB at all). It is unwise to treat patients for only three or four months, but all TB physicians will have patients who stop their treatment early (for whatever reason), and it can be reassuring to know that sometimes retreatment is unnecessary.
Elderly patients who are already taking a large number of tablets may be offered 9HR, omitting PZA, which is the bulkiest part of the regimen.
It may not always be necessary to treat with four drugs from the beginning. An example might be a close contact of a patient known to have a fully sensitive strain of tuberculosis: in this case, it is acceptable to use 2HRZ/4HR (omitting EMB and STM) in the expectation that their strain will be INH-susceptible also. Indeed, this was previously the recommended standard regimen in many countries until the early 1990s, when isoniazid-resistance rates increased.
TB involving the brain or spinal cord (meningitis, encephalitis, etc.) is currently treated with 2HREZ/10HR (12 months of treatment in total), but there is no evidence that this is superior to 2HREZ/4HR: there is no difference in relapse rates between those treated for 6 months and those treated for a longer period. However, more well-designed studies are needed to answer this question.

Regimens omitting isoniazid
Isoniazid resistance accounts for 6.9% of isolates in the UK (2010). Worldwide, it is the most common type of resistance encountered, hence the current recommendation of using HREZ at the beginning of treatment until sensitivities are known. It is useful to know of currently reported outbreaks (like the current outbreak of INH-resistant TB in London). If patients are discovered to be infected with an isoniazid-resistant strain of TB after completing 2 months of HREZ, then they should be changed to RE for a further 10 months; the same applies if the patient is intolerant to isoniazid (although 2REZ/7RE may be acceptable if the patient is well supervised). The US recommendation is 6RZE, with the option of adding a quinolone such as moxifloxacin. The level of evidence for all these regimens is poor, and there is little to recommend one over the other.

Regimens omitting rifampicin
The UK prevalence of rifampicin (RMP) resistance is 1.4%. It is rare for TB strains to be resistant to RMP without also being resistant to INH, which means that rifampicin resistance usually implies resistance to INH as well (that is, MDR-TB). However, RMP intolerance is not uncommon (hepatitis or thrombocytopaenia being the most common reasons for stopping rifampicin). Of the first-line drugs, rifampicin is also the most expensive, and in the poorest countries regimens omitting rifampicin are therefore often used. Rifampicin is the most potent sterilising drug available for the treatment of tuberculosis, and all treatment regimens that omit rifampicin are significantly longer than the standard regimen. The UK recommendation is 18HE or 12HEZ. The US recommendation is 9 to 12HEZ, with the option of adding a quinolone (for example, MXF).

Regimens omitting pyrazinamide
PZA is a common cause of rash, hepatitis and painful arthralgia in the HREZ regimen, and can be safely stopped in those patients who are intolerant to it. Isolated PZA resistance is uncommon in M. tuberculosis, but M. bovis is innately resistant to PZA. PZA is not crucial to the treatment of fully sensitive TB, and its main value is in shortening the total treatment duration from nine months to six. An alternative regimen is 2HRE/7HR, for which there is excellent clinical trial evidence. The 1994 US CDC guidelines for tuberculosis erroneously cite Slutkin as evidence that a nine-month regimen using only isoniazid and rifampicin is acceptable, but almost all of the patients in that study received ethambutol for the first two to three months (although this is not obvious from the abstract of that article). This mistake was rectified in the 2003 guidelines. This regimen (2HRE/7HR) is the first-line regimen used to treat M. bovis, since M. bovis is intrinsically resistant to pyrazinamide.

Regimens omitting ethambutol
EMB intolerance or resistance is rare. If a patient is truly intolerant or is infected with TB that is resistant to EMB, then 2HRZ/4HR is an acceptable regimen. The main motivation for including EMB in the initial two months is the increasing rate of INH resistance.

Tuberculosis and other conditions
Liver disease
People with alcoholic liver disease are at an increased risk of tuberculosis.
The incidence of tuberculous peritonitis is particularly high in patients with cirrhosis of the liver. There are broadly two categories of treatment:
A) Cirrhotic patients with essentially normal baseline liver function tests (Child's A cirrhosis). Such patients may be treated with the standard four-drug regimen for 2 months, followed by 2 drugs for the remaining 4 months (6 months of treatment in total).
B) Cirrhotic patients with altered baseline liver function tests (Child's B and C). According to 2010 WHO guidelines, depending on the severity of the disease and the degree of decompensation, the following regimens can be used, varying the number of hepatotoxic drugs. One or two hepatotoxic drugs may be used in moderately severe disease (e.g., Child's B cirrhosis), whereas hepatotoxic drugs are completely avoided in decompensated Child's C cirrhosis:
Two hepatotoxic drugs: 9 months of isoniazid, rifampicin and ethambutol (until or unless isoniazid susceptibility is documented); or 2 months of isoniazid, rifampicin, ethambutol and streptomycin, followed by 6 months of isoniazid and rifampicin
One hepatotoxic drug: 2 months of isoniazid, ethambutol and streptomycin, followed by 10 months of isoniazid and ethambutol
No hepatotoxic drugs: 18–24 months of streptomycin, ethambutol and quinolones

Patients with liver disease should have their liver function tests monitored regularly throughout TB treatment. Drug-induced hepatitis is discussed in a separate section above.

Pregnancy
Pregnancy itself is not a risk factor for TB. Rifampicin makes hormonal contraception less effective, so additional precautions for birth control need to be taken during tuberculosis treatment. Untreated TB in pregnancy is associated with an increased risk of miscarriage and major fetal abnormality, and treatment of pregnant women is therefore recommended. The US guidelines recommend omitting PZA when treating TB in pregnancy; the UK and WHO guidelines make no such recommendation, and PZA is commonly used in pregnancy. There is extensive experience with the treatment of pregnant women with TB, and no toxic effect of PZA in pregnancy has ever been found. High doses of RMP (much higher than used in humans) cause neural tube defects in animals, but no such effect has ever been found in humans. There may be an increased risk of hepatitis in pregnancy and during the puerperium. It is prudent to advise all women of child-bearing age to avoid getting pregnant until TB treatment is completed.

Aminoglycosides (STM, capreomycin, amikacin) should be used with caution in pregnancy, because they may cause deafness in the unborn child. The attending physician must weigh the benefits of treating the mother against the potential harm to the baby, and good outcomes have been reported in children whose mothers were treated with aminoglycosides. Experience in Peru shows that treatment for MDR-TB is not a reason to recommend termination of pregnancy, and that good outcomes are possible.

Kidney disease
People with kidney failure have a 10- to 30-fold increase in the risk of getting TB. People with kidney disease who are being given immunosuppressive medications or are being considered for transplant should be considered for treatment of latent tuberculosis if appropriate. Aminoglycosides (STM, capreomycin and amikacin) should be avoided in patients with mild to severe kidney problems, because of the increased risk of damage to the kidneys.
If the use of aminoglycosides cannot be avoided (e.g., in treating drug-resistant TB), then serum levels must be closely monitored and the patient warned to report any side effects (deafness in particular). If a person has end-stage kidney disease and has no useful remaining kidney function, then aminoglycosides can be used, but only if drug levels can be easily measured (often only amikacin levels can be measured).

In mild kidney impairment, no change needs to be made in dosing any of the other drugs routinely used in the treatment of TB. In severe chronic kidney disease (GFR < 30), the EMB dose should be halved (or avoided altogether). The PZA dose is 20 mg/kg/day (UK recommendation) or three-quarters the normal dose (US recommendation), but not much published evidence is available to support this. When using 2HRZ/4HR in patients on dialysis, the drugs should be given daily during the initial high-intensity phase. In the continuation phase, the drugs should be given at the end of each haemodialysis session, and no dose should be taken on non-dialysis days.

HIV
In patients with HIV, treatment for HIV should be delayed until TB treatment is completed, if possible. The current UK guidance (provided by the British HIV Association) is:
CD4 count over 200: delay HIV treatment until the six months of TB treatment are complete.
CD4 count 100 to 200: delay HIV treatment until the initial two-month intensive phase of therapy is complete.
CD4 count less than 100: the situation is unclear, and patients should be enrolled in clinical trials examining this question. There is evidence that if these patients are managed by a specialist in both TB and HIV, then outcomes are not compromised for either disease.

If HIV treatment has to be started while a patient is still on TB treatment, then the advice of a specialist HIV pharmacist should be sought. In general, there are no significant interactions with the NRTIs. Nevirapine should not be used with rifampicin. Efavirenz may be used, but the dose depends on the patient's weight (600 mg daily if weight is less than 50 kg; 800 mg daily if weight is greater than 50 kg). Efavirenz levels should be checked early after starting treatment (unfortunately, this is not a service routinely offered in the US, but it is readily available in the UK). The protease inhibitors should be avoided if at all possible: patients on rifamycins and protease inhibitors have an increased risk of treatment failure or relapse.

The World Health Organization (WHO) warns against using thioacetazone in patients with HIV, because of the 23% risk of potentially fatal exfoliative dermatitis.

According to the CAPRISA 003 (SAPiT) study, mortality in patients who were started on anti-retrovirals during TB treatment was 56% lower than in those started after TB treatment was completed (hazard ratio 0.44 (95% CI: 0.25 to 0.79); p = 0.003).

Epilepsy
INH may be associated with an increased risk of seizures. Pyridoxine 10 mg daily should be given to all epileptics taking INH. There is no evidence that INH causes seizures in patients who are not epileptic. TB treatment involves numerous drug interactions with anti-epileptic drugs, and serum drug levels should be closely monitored. There are serious interactions between rifampicin and carbamazepine, rifampicin and phenytoin, and rifampicin and sodium valproate. The advice of a pharmacist should always be sought.
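The renal adjustments described earlier in this section reduce to a handful of rules; a minimal Python sketch (illustrative only, not a prescribing tool; standard doses are passed in as parameters rather than asserted here):

```python
def adjust_for_renal_function(drug, standard_dose_mg, gfr):
    """Apply the adjustments described above for severe CKD (GFR < 30)."""
    if gfr >= 30:
        return standard_dose_mg          # mild impairment: no change needed
    if drug == "EMB":
        return standard_dose_mg / 2      # halve (or avoid altogether)
    if drug == "PZA":
        return standard_dose_mg * 0.75   # US recommendation (UK: 20 mg/kg/day)
    return standard_dose_mg

def give_dose_today(phase, dialysis_today):
    """Dialysis scheduling for 2HRZ/4HR as described above: daily dosing in
    the intensive phase; in the continuation phase, dose only at the end of
    each haemodialysis session, never on non-dialysis days."""
    return True if phase == "intensive" else dialysis_today
```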
COVID-19
TB and COVID-19 have been called a "cursed duet" and need immediate attention. TB should be considered a risk factor for severe COVID-19 disease, and patients with TB should be prioritised for COVID-19 preventative efforts, including vaccination.

Drug-resistance
Definitions
Multi-drug resistant tuberculosis (MDR-TB) is defined as TB that is resistant at least to INH and RMP. Isolates that are multi-resistant to any other combination of anti-TB drugs but not to INH and RMP are not classed as MDR-TB. As of October 2006, "extensively drug-resistant tuberculosis" (XDR-TB) is defined as MDR-TB that is resistant to quinolones and also to any one of kanamycin, capreomycin, or amikacin. The old case definition of XDR-TB was MDR-TB that is also resistant to three or more of the six classes of second-line drugs. This definition should no longer be used, but is included here because many older publications refer to it.

The principles of treatment for MDR-TB and for XDR-TB are the same. The main difference is that XDR-TB is associated with a much higher mortality rate than MDR-TB, because of the reduced number of effective treatment options. The epidemiology of XDR-TB is currently not well studied, but it is believed that XDR-TB does not transmit easily in healthy populations, though it is capable of causing epidemics in populations which are already stricken by HIV and therefore more susceptible to TB infection.

Epidemiology of drug-resistant TB
A 1997 survey of 35 countries found rates of drug resistance above 2% in about a third of the countries surveyed. The highest rates of drug-resistant TB were in the former USSR, the Baltic states, Argentina, India and China, and were associated with poor or failing national tuberculosis control programmes. Likewise, the appearance of high rates of MDR-TB in New York City in the early 1990s was associated with the dismantling of public health programmes by the Reagan administration.

Paul Farmer points out that the more expensive a treatment, the harder it is for poor countries to get. Farmer sees this as verging on denial of basic human rights. Africa is low in quality of treatment partly because many African cultures lack the concept of time essential to the schedule of administration.

MDR-TB can develop in the course of the treatment of fully sensitive TB, and this is always the result of patients missing doses or failing to complete a course of treatment. Thankfully, MDR-TB strains appear to be less fit and less transmissible. It has been known for many years that INH-resistant TB is less virulent in guinea pigs, and the epidemiological evidence is that MDR strains of TB do not dominate naturally. A study in Los Angeles found that only 6% of cases of MDR-TB were clustered. This should not be a cause for complacency: it must be remembered that MDR-TB has a mortality rate comparable to that of lung cancer. It must also be remembered that people who have weakened immune systems (because of diseases such as HIV or because of drugs) are more susceptible to catching TB.

Children represent a susceptible population, with increasing rates of MDR- and XDR-TB. Since diagnosis in pediatric patients is difficult, a large number of cases are not properly reported. Cases of pediatric XDR-TB have been reported in most countries, including the United States.

In 2006 an outbreak of XDR-TB in South Africa was first reported as a cluster of 53 patients in a rural hospital in KwaZulu-Natal, with all but one dying.
What was particularly worrying was that the mean survival from sputum specimen collection to death was only 16 days, and that the majority of patients had never previously received treatment for tuberculosis. This is the epidemic for which the acronym XDR-TB was first used; although TB strains fulfilling the current definition have been identified retrospectively, this was the largest group of linked cases ever found. Since the initial report in September 2006, cases have been reported in most provinces in South Africa. As of 16 March 2007, there were 314 cases reported, with 215 deaths. It is clear that the spread of this strain of TB is closely associated with a high prevalence of HIV and poor infection control; in other countries where XDR-TB strains have arisen, drug resistance has arisen from mismanagement of cases or poor patient compliance with drug treatment, instead of being transmitted from person to person. This strain of TB does not respond to any of the drugs currently available in South Africa for first- or second-line treatment. It is now clear that the problem has been around for much longer than health department officials have suggested, and is far more extensive. By 23 November 2006, 303 cases of XDR-TB had been reported, of which 263 were in KwaZulu-Natal. Serious thought has been given to isolation procedures that may deny some patients their human rights, but which may be necessary to prevent further spread of this strain of TB.

Treatment of MDR-TB
The treatment and prognosis of MDR-TB are much more akin to those for cancer than to those for infection. MDR-TB has a mortality rate of up to 80%, which depends on a number of factors, including:
how many drugs the organism is resistant to (the fewer the better)
how many drugs the patient is given (patients treated with five or more drugs do better)
whether an injectable drug is given or not (it should be given for the first three months at least)
the expertise and experience of the physician responsible
how co-operative the patient is with treatment (treatment is arduous and long, and requires persistence and determination on the part of the patient)
whether the patient is HIV-positive or not (HIV co-infection is associated with increased mortality)

Treatment courses are a minimum of 18 months and may last years; treatment may require surgery, and death rates remain high despite optimal treatment. That said, good outcomes are still possible: treatment courses that are at least 18 months long and which have a directly observed component can increase cure rates to 69%.

The treatment of MDR-TB must be undertaken by a physician experienced in the treatment of MDR-TB. Mortality and morbidity in patients treated in non-specialist centres are significantly higher than in patients treated in specialist centres. In addition to the obvious risks (e.g., known exposure to a patient with MDR-TB), risk factors for MDR-TB include male sex, HIV infection, previous incarceration, failed TB treatment, failure to respond to standard TB treatment, and relapse following standard TB treatment.

A large proportion of people with MDR-TB are unable to access treatment due to what Paul Farmer describes as an "Outcome Gap".
The majority of people with MDR-TB live in "resource-poor settings" and are denied treatment because international organizations have refused to make technologies available to countries who cannot afford to pay for treatment, the reasoning being that second-line drugs are too expensive, and that treatment for MDR-TB is therefore not sustainable in impoverished nations. Farmer argues that this is social injustice, and that we cannot allow people to die simply because they are faced with circumstances where they cannot afford "effective therapy".

Treatment of MDR-TB must be done on the basis of sensitivity testing: it is impossible to treat such patients without this information. If treating a patient with suspected MDR-TB, the patient should be started on SHREZ+MXF+cycloserine pending the result of laboratory sensitivity testing. A gene probe for rpoB is available in some countries, and this serves as a useful marker for MDR-TB, because isolated RMP resistance is rare (except when patients have a history of being treated with rifampicin alone). If the results of a gene probe (rpoB) are known to be positive, then it is reasonable to omit RMP and to use SHEZ+MXF+cycloserine. The reason for maintaining the patient on INH despite the suspicion of MDR-TB is that INH is so potent in treating TB that it is foolish to omit it until there is microbiological proof that it is ineffective. There are also probes available for isoniazid resistance (katG and mabA-inhA), but these are less widely available.

When sensitivities are known and the isolate is confirmed as resistant to both INH and RMP, five drugs should be chosen in the following order (based on known sensitivities):
an aminoglycoside (e.g., amikacin, kanamycin) or polypeptide antibiotic (e.g., capreomycin)
PZA
EMB
a fluoroquinolone: moxifloxacin is preferred (ciprofloxacin should no longer be used)
rifabutin
cycloserine
a thioamide: prothionamide or ethionamide
PAS
a macrolide: e.g., clarithromycin
linezolid
high-dose INH (if low-level resistance)
interferon-γ
thioridazine
meropenem and clavulanic acid

Drugs are placed nearer the top of the list because they are more effective and less toxic; drugs are placed nearer the bottom of the list because they are less effective, more toxic, or more difficult to obtain. Resistance to one drug within a class generally means resistance to all drugs within that class, but a notable exception is rifabutin: rifampicin resistance does not always mean rifabutin resistance, and the laboratory should be asked to test for it. It is only possible to use one drug within each drug class. If it is difficult to find five drugs to treat with, then the clinician can request that high-level INH resistance be looked for. If the strain has only low-level INH resistance (resistant at 0.2 mg/L INH, but sensitive at 1.0 mg/L INH), then high-dose INH can be used as part of the regimen. When counting drugs, PZA and interferon count as zero; that is to say, when adding PZA to a four-drug regimen, you must still choose another drug to make five. It is not possible to use more than one injectable (STM, capreomycin or amikacin), because the toxic effects of these drugs are additive: if possible, the aminoglycoside should be given daily for a minimum of three months (and perhaps thrice weekly thereafter). Ciprofloxacin should not be used in the treatment of tuberculosis if other fluoroquinolones are available.
There is no intermittent regimen validated for use in MDR-TB, but clinical experience is that giving injectable drugs for five days a week (because there is no one available to give the drug at weekends) does not seem to result in inferior outcomes. Directly observed therapy certainly helps to improve outcomes in MDR-TB and should be considered an integral part of the treatment of MDR-TB. Response to treatment must be assessed by repeated sputum cultures (monthly if possible). Treatment for MDR-TB must be given for a minimum of 18 months, and cannot be stopped until the patient has been culture-negative for a minimum of nine months. It is not unusual for patients with MDR-TB to be on treatment for two years or more.

Patients with MDR-TB should be isolated in negative-pressure rooms, if possible. Patients with MDR-TB should not be accommodated on the same ward as immunosuppressed patients (HIV-infected patients, or patients on immunosuppressive drugs). Careful monitoring of compliance with treatment is crucial to the management of MDR-TB (and some physicians insist on hospitalisation if only for this reason). Some physicians will insist that these patients be isolated until their sputum is smear-negative, or even culture-negative (which may take many months, or even years). Keeping these patients in hospital for weeks (or months) on end may be a practical or physical impossibility, and the final decision depends on the clinical judgement of the physician treating that patient. The attending physician should make full use of therapeutic drug monitoring (particularly of the aminoglycosides), both to monitor compliance and to avoid toxic effects.

Some supplements may be useful as adjuncts in the treatment of tuberculosis, but, for the purposes of counting drugs for MDR-TB, they count as zero (if you already have four drugs in the regimen, it may be beneficial to add arginine or vitamin D or both, but you still need another drug to make five):
arginine: some clinical evidence (peanuts are a good source)
vitamin D: some in-vitro evidence; see Vitamin D and tuberculosis treatment

The drugs listed below have been used in desperation, and it is uncertain whether they are effective at all. They are used when it is not possible to find five drugs from the list above:
imipenem
co-amoxiclav
clofazimine
prochlorperazine
metronidazole

On 28 December 2012 the US Food and Drug Administration (FDA) approved bedaquiline (marketed as Sirturo by Johnson & Johnson) to treat multi-drug-resistant tuberculosis, the first new treatment in 40 years. Sirturo is to be used in a combination therapy for patients who have failed standard treatment and have no other options. Sirturo is an adenosine triphosphate synthase (ATP synthase) inhibitor.

The following drug is an experimental compound that is not commercially available, but which may be obtained from the manufacturer as part of a clinical trial or on a compassionate basis. Its efficacy and safety are unknown:
Pretomanid (manufactured by Novartis, developed in partnership with TB Alliance)

There is increasing evidence for the role of surgery (lobectomy or pneumonectomy) in the treatment of MDR-TB, although whether it should be performed early or late is not yet clearly defined. See Modern surgical management below.
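Returning to the drug-counting rules for MDR-TB regimens set out above (five drugs needed; PZA, interferon and supplements count as zero; at most one drug per class; at most one injectable), a minimal Python sketch (the class groupings are condensed from the text; illustrative only, not a clinical tool):

```python
# Condensed class groupings from the list above; unclassed drugs stand alone.
CLASS_OF = {
    "amikacin": "injectable", "kanamycin": "injectable",
    "capreomycin": "injectable", "streptomycin": "injectable",
    "moxifloxacin": "fluoroquinolone", "ciprofloxacin": "fluoroquinolone",
    "prothionamide": "thioamide", "ethionamide": "thioamide",
    "clarithromycin": "macrolide",
}

# Agents that never count toward the five effective drugs.
COUNTS_AS_ZERO = {"pyrazinamide", "interferon-gamma", "arginine", "vitamin D"}

def effective_drug_count(regimen):
    """Count drugs toward the five needed: at most one per class,
    and some agents count as zero, as described above."""
    seen_classes = set()
    count = 0
    for drug in regimen:
        if drug in COUNTS_AS_ZERO:
            continue
        cls = CLASS_OF.get(drug, drug)
        if cls in seen_classes:
            continue  # a second drug from the same class adds nothing
        seen_classes.add(cls)
        count += 1
    return count

# Adding PZA to a four-drug regimen still leaves only four effective drugs.
print(effective_drug_count(
    ["amikacin", "pyrazinamide", "ethambutol", "moxifloxacin", "cycloserine"]
))  # -> 4
```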
Management in Asia
The Asia-Pacific region carries 58% of the global tuberculosis burden, including multi-drug-resistant tuberculosis. Southeast Asia has a high burden of tuberculosis as a result of inefficient and inadequate health infrastructure. According to the World Health Organization, many Asian countries have high numbers of tuberculosis cases, but their governments will not invest in new technology to treat their patients.

Philippines
From 2005 to 2009, the IPHO-Maguindanao, a governmental organization in Maguindanao, Philippines, partnered with the Catholic Relief Services (CRS) to increase tuberculosis awareness. CRS implemented a USAID-assisted project to fund tuberculosis testing. Additionally, they launched an "Advocacy, Communication, and Self-Mobilization" project featuring workshops to encourage testing in communities. Citizens attending religious sermons were able to distribute information about tuberculosis and inform their communities on where to seek treatment and how to adhere to treatment protocols. The DOTS-Plus strategy, designed to deliver care from within familiar local institutions, was successful at conveying information about tuberculosis prevention and treatment.

India
In 1906, India opened its first open-air sanatorium for the treatment and isolation of TB patients. However, the World Health Organization reviewed the national programme in India, which lacked the funding and treatment regimens needed to report accurate tuberculosis case management. By 1945, there were successful immunization screenings, thanks to campaigns that helped spread messages about the prevention of disease. The World Health Organization later declared tuberculosis a global emergency (in 1993) and recommended that countries adopt the DOTS strategy.

Bangladesh, Cambodia, Thailand
In Bangladesh, Cambodia, and Indonesia, there is diagnostic testing for latent tuberculosis in children below 5 years of age. The IGRA (Interferon Gamma Release Assay) approach is used in these countries. IGRA testing and diagnosis are whole-blood tests in which fresh blood samples are mixed with antigens and controls; a person infected with tuberculosis will have interferon-gamma in the bloodstream when the sample is mixed with the antigen. It is a highly accurate but expensive test, and is technologically complex for immunocompromised patients. These developing countries were unable to get rid of tuberculosis effectively because their national health policies did not cover screening and testing for tuberculosis. There were also no programs in place to educate citizens and provide training for healthcare workers. Without the mobilization of sufficient resources and the backing of sustainable government funding, these developing countries failed to adequately provide the treatment and resources necessary to combat tuberculosis.

Vietnam
According to the WHO, Vietnam ranks 13th on the list of 22 countries with the highest tuberculosis burden in the world. Nearly 400 new cases of TB and 55 deaths occur each day in Vietnam. In 1989, the Ministry of Health in Vietnam addressed the tuberculosis burden by establishing the National Institute of Tuberculosis and Lung Diseases, and implemented the DOTS strategy as a national priority. Vietnam's health service system consists of four levels: the central level, headed by the Ministry of Health (MOH); provincial health services; district health services; and commune health centers. These departments worked with the National Institute of Tuberculosis and Lung Diseases to ensure that there were treatment and prevention plans for the long-term reduction of tuberculosis.
In 2002, Vietnam also implemented a communication plan to provide accurate educational information in response to barriers and misperceptions about tuberculosis treatment. The government worked with the World Health Organization, the Centers for Disease Control and Prevention, and local medical non-profits such as Friends for International Tuberculosis Relief to provide information about the causes of TB, sources of infection, how it is transmitted, symptoms, treatment, and prevention. The National Tuberculosis Control Program works closely with the primary health care system at the central, provincial, district, and commune levels, which has proven to be an important factor in its success.

Tuberculosis non-profits in Asia
Friends for International TB Relief is a small non-governmental organization whose mission is to help prevent tuberculosis and its spread. FIT not only diagnoses patients, but also provides preventative tuberculosis detection, piloting a comprehensive patient-centered TB program that aims to stop TB transmission and reduce suffering. The organization focuses on island screening due to the high level of risk and burden the population faces. Through its method of search, treat, prevent, and integrative sustainability, FIT is working closely with most of the population on the island (roughly 2022 patients), and has partnered with the Ho Chi Minh City Public Health Association on a pilot that provides active community outreach, patient-centric care and stakeholder engagement.

Located in Ha Noi, the National Institute of Tuberculosis and Lung Diseases is responsible for the direction and management of TB control activities at the central level. The institute supports the MOH in developing TB-related strategies, and in handling management and professional guidelines for the system. The provincial-level centers diagnose, treat, and manage patients, implement TB policies issued by the NTP, and develop action plans under the guidelines of the Provincial Health Bureau and the provincial TB control committees. The districts are capable of detecting TB and treating patients. All districts have physicians specializing in TB, laboratories, and X-ray equipment, and have either a TB department or a TB-communicable diseases department in the district hospital. The district level is also responsible for implementing and monitoring the NTP, and for the supervision and management of TB programs in the communes. The commune level provides treatment as prescribed by the district level, administers drugs, and vaccinates children. In TB control, village health workers play critically important roles in identifying suspected TB patients, conducting counseling for examination and tests, paying home visits to patients undergoing treatment, and reporting problems in monthly meetings with the CHC.

TB Alliance is a non-governmental organization located in South Africa that was founded in the early 2000s. The NGO is a leading non-profit for global tuberculosis research and the development of new TB vaccines. To advance TB development, TB Alliance creates partnerships with private, public, academic, and philanthropic sectors to develop products for underserved communities. In 2019, TB Alliance became the first not-for-profit organization to develop and register an anti-TB drug. TB Alliance also works closely alongside the World Health Organization (WHO), the US FDA, and the European Medicines Agency (EMA) to endorse regulatory policies and affordable treatments.
FHI 360 is an international tuberculosis non-profit organization funded by USAID to treat and support patients in Myanmar, China, and Thailand. The organization developed an app called DOTsync for healthcare staff to administer antibiotics and monitor the side effects of patients. This is important for eliminating tuberculosis, because it allows healthcare workers to conduct follow-up check-ups with patients and so ensure that tuberculosis treatments are effective.

Operation ASHA is a TB non-profit organization founded in 2006. Located in India and Cambodia, Operation ASHA focuses on the development of "e-Compliance", a verification and SMS text-messaging system in which patients use their fingerprints to access their medical records and are reminded daily via text when to take their medication. According to Operation ASHA, the e-Compliance treatment success rate is 85%.

Treatment failure
Patients who fail treatment must be distinguished from patients who relapse. Patients who responded to treatment and appeared to be cured after completing a course of TB treatment are not classed as treatment failures, but as relapses, and are discussed in a separate section below.

Patients are said to have failed treatment if they fail to respond to treatment (cough and sputum production persisting throughout the whole of treatment), or if they experience only a transient response to treatment (the patient gets better at first, but then gets worse again, all the while on treatment). It is very uncommon for patients not to respond to TB treatment at all (even transiently), because this implies resistance at baseline to all of the drugs in the regimen. Patients who fail to get any response at all while on treatment should first of all be questioned very closely about whether or not they have been taking their medicines, and perhaps even be admitted to hospital to be observed taking their treatment. Blood or urine samples may be taken to check for malabsorption of TB drugs. If it can be shown that they are fully compliant with their medication, then the probability that they have another diagnosis (perhaps in addition to the diagnosis of TB) is very high. These patients should have their diagnosis carefully reviewed and specimens obtained for TB culture and sensitivity testing. Patients who get better and then get worse again should likewise be questioned very closely about adherence to treatment. If adherence is confirmed, then they should be investigated for resistant TB (including MDR-TB), even if a specimen has already been obtained for microbiology before commencing treatment.

Prescription or dispensing errors will account for a proportion of patients who fail to respond to treatment. Immune defects are a rare cause of non-response. In a tiny proportion of patients, treatment failure is a reflection of extreme biological variation, and no cause is found.

Treatment relapse
Patients are said to relapse if they improve while on treatment, but become ill again after stopping treatment. Patients who experience only a transient improvement while on treatment, or who never respond to treatment, are said to have failed treatment and are discussed above.

There is a small relapse rate associated with all treatment regimens, even if the treatment has been taken religiously with 100% compliance (the standard regimen 2HREZ/4HR has a relapse rate of 2 to 3% under trial conditions). The majority of relapses occur within 6 months of finishing treatment.
Patients who are more likely to relapse are those who took their medication in an unreliable and irregular fashion. The probability of resistance is higher in those patients who relapse, and every effort must be made to obtain a specimen that can be cultured for sensitivities. That said, most patients who relapse do so with a fully sensitive strain, and it is possible that these patients have not relapsed but have instead been re-infected; these patients can be re-treated with the same regimen as before (no drugs need to be added to the regimen and the duration need not be any longer).

The WHO recommends a regimen of 2SHREZ/6HRE when microbiology is not available (as in the majority of countries where TB is highly endemic). This regimen was designed to provide optimal treatment for fully sensitive TB (the most common finding in patients who have relapsed) as well as to cover the possibility of INH-resistant TB (the most common form of resistance found).

Because of the lifelong risk of relapse, all patients should be warned of the symptoms of TB relapse upon finishing treatment, and given strict instructions to return to their doctor if symptoms recur.

Public health and health policy
As of 2010, India has more reported cases of TB than any other country. This is in part due to severe mismanagement of the diagnosis and treatment of TB within the private health care sector of India, which serves about 50% of the population. There are therefore calls for the private sector to engage in the public Revised National Tuberculosis Control Program, which has proved effective in reducing TB amongst patients receiving health care through the government. Additionally, a study by Maurya et al. conducted in 2013 shows evidence that there is a burden of multidrug-resistant tuberculosis in India and that change is needed in testing, surveillance, monitoring and management. During the COVID-19 pandemic, 80% fewer TB cases were reported daily in April 2020 in India, reducing the diagnosis and treatment of TB.

Trial of therapy
In areas where TB is highly endemic, it is not unusual to encounter a patient with a fever in whom no source of infection is found. The physician may then, after extensive investigation has excluded all other diseases, resort to a trial of TB treatment. The regimen used is HEZ for a minimum of three weeks; RMP and STM are omitted from the regimen because they are broad-spectrum antibiotics, whereas the other three first-line drugs treat only mycobacterial infection. Resolution of the fever after three weeks of treatment is good evidence for occult TB, and the patient should then be changed to conventional TB treatment (2HREZ/4HR). If the fever does not resolve after three weeks of treatment, then it is reasonable to conclude that the patient has another cause for his fever. This approach is not recommended by the WHO and most national guidelines.

Surgical treatment
Surgery has played an important part in the management of tuberculosis since the 1930s.

Historical surgical management
The first successful treatments for tuberculosis were all surgical. They were based on the observation that healed tuberculous cavities were all closed. Surgical management was therefore directed at closing open cavities to encourage healing. These procedures were all used in the pre-antibiotic era. There exists a myth that surgeons believed the purpose was to deprive the organism of oxygen; it was, however, well known that the organism survives anaerobic conditions.
Although these procedures may be considered barbaric by the standards of the 21st century, it must be remembered that these treatments represented a potential cure for a disease that at the time had a mortality at least as bad as that of lung cancer in the 2000s. Recurrent or persistent pneumothorax The simplest and earliest procedure was to introduce air into the pleural space so as to collapse the affected lung and therefore the open cavity. There was always spontaneous resolution of the pneumothorax, so the procedure had to be repeated every few weeks. Phrenic nerve crush The phrenic nerve (which supplies the diaphragm) was cut or crushed so as to permanently paralyse the diaphragm on that side. The paralysed diaphragm would then rise up and the lung on that side would collapse, thus closing the cavity. Thoracoplasty When the cavity was located in the apex of the lung, thoracoplasty could be performed. Six to eight ribs were broken and pushed into the thoracic cavity to collapse the lung beneath. This was a disfiguring operation, but it avoided the need for repeated procedures. In the Novosibirsk TB Research Institute (Russia), osteoplastic thoracoplasty (a variant of extrapleural thoracoplasty) has been used for the last 50 years for patients with complicated cavitary forms of TB for whom lung resection is contraindicated. Plombage Plombage reduced the need for a disfiguring operation. It involved inserting porcelain balls into the thoracic cavity to collapse the lung underneath. Surgical resections of infected lungs were rarely attempted during the 1930s and 1940s, due to the extremely high perioperative mortality rate. Modern surgical management In modern times, the surgical treatment of tuberculosis is confined to the management of multi-drug-resistant TB. A patient with MDR-TB who remains culture positive after many months of treatment may be referred for lobectomy or pneumonectomy with the aim of cutting out the infected tissue. The optimal timing for surgery has not been defined, and surgery still confers significant morbidity. The centre with the largest experience in the US is the National Jewish Medical and Research Center in Denver, Colorado. From 1983 to 2000, they performed 180 operations in 172 patients; of these, 98 were lobectomies and 82 were pneumonectomies. They report a 3.3% operative mortality, with an additional 6.8% dying following the operation; 12% experienced significant morbidity (particularly extreme breathlessness). Of 91 patients who were culture positive before surgery, only 4 were culture positive after surgery. Some complications of treated tuberculosis, such as recurrent hemoptysis, destroyed or bronchiectatic lungs and empyema (a collection of pus in the pleural cavity), are also amenable to surgical therapy. In extrapulmonary TB, surgery is often needed to make a diagnosis (rather than to effect a cure): surgical excision of lymph nodes, drainage of abscesses, tissue biopsy, etc. are all examples of this. Samples taken for TB culture should be sent to the laboratory in a sterile pot with no additive (not even water or saline) and must arrive in the laboratory as soon as possible. Where facilities for liquid culture are available, specimens from sterile sites may be inoculated directly following the procedure: this may improve the yield. In spinal TB, surgery is indicated for spinal instability (when there is extensive bony destruction) or when the spinal cord is threatened.
Therapeutic drainage of tuberculous abscesses or collections is not routinely indicated, as they will usually resolve with adequate medical treatment. In TB meningitis, hydrocephalus is a potential complication and may necessitate the insertion of a ventricular shunt or drain. Nutrition It is well known that malnutrition is a strong risk factor for becoming unwell with TB, that TB is itself a risk factor for malnutrition, and that malnourished patients with TB (BMI less than 18.5) are at an increased risk of death even with appropriate antibiotic therapy. Knowledge about the association between malnutrition and TB is prevalent in some cultures, and may reduce diagnostic delay and improve adherence to treatment. Although blood levels of some micronutrients may be low in people starting treatment for active tuberculosis, a Cochrane review of thirty-five included trials concluded that there is insufficient research to know whether the routine provision of free food or energy supplements improves tuberculosis treatment outcomes. However, nutritional supplementation probably improves weight gain in some settings. Vitamin D and tuberculosis epidemiology Vitamin D deficiency is a risk factor for tuberculosis, and vitamin D deficiency appears to impair the bodys ability to fight tuberculosis, but there is no clinical evidence to show that treating vitamin D deficiency prevents tuberculosis, although the available evidence suggests that it ought to. Reduced levels of vitamin D may explain the increased susceptibility of African-Americans to tuberculosis, and may also explain why phototherapy is effective for lupus vulgaris (tuberculosis of the skin), a finding which won Niels Finsen the Nobel Prize in 1903, because skin exposed to sunlight naturally produces more vitamin D. Concerns that tuberculosis treatment itself decreases vitamin D levels appear not to be borne out in clinical practice. Genetic differences in the vitamin D receptor in West African, Gujarati and Chinese populations have been noted to affect susceptibility to tuberculosis, but there are no data available in any population showing that vitamin D supplementation (that is, giving extra vitamin D to people with normal vitamin D levels) has any effect on susceptibility to TB. Vitamin D and tuberculosis treatment Giving vitamin D to TB patients who are vitamin D deficient may be beneficial in a proportion of patients. Taken as a group, vitamin D supplementation appears to have no benefit when using sputum culture conversion as an endpoint, and giving vitamin D supplements to TB patients who have normal vitamin D levels does not provide any benefit from the point of view of TB. In a subset of patients with the tt genotype of the TaqI vitamin D receptor who are vitamin D deficient, vitamin D supplementation appears to hasten sputum culture conversion. There are no studies of vitamin D using the gold-standard outcome of relapse, so the true benefit of vitamin D is not at present known. It was noted as early as the mid-19th century that cod liver oil (which is rich in vitamin D) improved patients with tuberculosis, and the mechanism for this is probably an enhancement of immune responses to tuberculosis. The addition of vitamin D appears to enhance the ability of monocytes and macrophages to kill M. tuberculosis in vitro, as well as ameliorating potentially harmful effects of the human immune system.
Another reason vitamin D may be useful in treating mycobacterial infections such as tuberculosis is its influence on both pro- and anti-inflammatory cytokines: vitamin D appears to exert an anti-inflammatory effect in tuberculosis. Other adjuvants Arginine has some clinical evidence as an adjuvant. Mycobacterium vaccae preparations have completed Phase III trials: the injectable Vaccae(TM) from Anhui Zhifei Longcom Biologic Pharmacy Co., Ltd. and the oral tablet Tubivac (V7) from Immunitor LLC. Latent tuberculosis The treatment of latent tuberculosis infection (LTBI) is essential to controlling and eliminating TB by reducing the risk that TB infection will progress to disease. The terms "preventive therapy" and "chemoprophylaxis" have been used for decades and are preferred in the UK because the medication is given to people who have no active disease and are currently well: the reason for treatment is primarily to prevent them from becoming unwell. The term "latent tuberculosis treatment" is preferred in the US because the medication does not actually prevent infection: it prevents an existing silent infection from becoming active. The feeling in the US is that the term "treatment of LTBI" promotes wider implementation by convincing people that they are receiving treatment for disease. There are no convincing reasons to prefer one term over the other. It is essential that assessment to rule out active TB is carried out before treatment for LTBI is started. To give LTBI treatment to someone with active TB is a serious error: the TB will not be adequately treated and there is a risk of developing drug-resistant strains of TB. There are several treatment regimens available: 9H—Isoniazid for 9 months is the gold standard and is 93% effective. 6H—Isoniazid for 6 months (69% effective) might be adopted by a local TB program based on cost-effectiveness and patient compliance. This is the regimen currently recommended in the UK for routine use. The US guidance excludes this regimen from use in children or persons with radiographic evidence of prior tuberculosis (old fibrotic lesions). 6H2 to 9H2—Twice-weekly dosing of the above two regimens is an alternative if administered under directly observed therapy (DOT). 4R—Rifampicin for 4 months is an alternative for those who are unable to take isoniazid or who have had known exposure to isoniazid-resistant TB. 3HR—Isoniazid and rifampicin may be given for 3 months. 2RZ—The 2-month regimen of rifampicin and pyrazinamide is no longer recommended for treatment of LTBI because of the greatly increased risk of drug-induced hepatitis and death. 3RPT/INH—A 3-month (12-dose) regimen of weekly rifapentine and isoniazid. Evidence for treatment effectiveness: A 2000 Cochrane review of 11 double-blinded, randomized controlled trials covering 73,375 patients examined six- and 12-month courses of isoniazid (INH) for the treatment of latent tuberculosis. HIV-positive patients and patients currently or previously treated for tuberculosis were excluded. A short worked sketch of the relative-risk calculation follows.
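The sketch below is in Python and uses hypothetical counts, not the review's actual data; it simply shows how a relative risk (RR) and a Wald-type 95% confidence interval are derived from trial counts:

import math

def relative_risk(events_treated, n_treated, events_control, n_control, z=1.96):
    # Risk of the event (e.g. progression to active TB) in each arm
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    rr = risk_treated / risk_control
    # Standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1/events_treated - 1/n_treated + 1/events_control - 1/n_control)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical example: 40 of 5,000 treated patients progress to active TB,
# versus 100 of 5,000 untreated patients
rr, lo, hi = relative_risk(40, 5000, 100, 5000)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # RR = 0.40, 95% CI 0.28 to 0.58

The counts above are invented for illustration; the review's pooled estimate combines many trials with more sophisticated meta-analytic weighting.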
The main result was a relative risk (RR) of 0.40 (95% confidence interval (CI) 0.31 to 0.52) for development of active tuberculosis over two years or longer for patients treated with INH, with no significant difference between treatment courses of six or 12 months (RR 0.44, 95% CI 0.27 to 0.73 for six months, and 0.38, 95% CI 0.28 to 0.50 for 12 months). A 2013 systematic review published by the Cochrane Collaboration compared rifamycins (monotherapy and combination therapy) with INH monotherapy as an alternative for preventing active TB in HIV-negative populations. The evidence suggested that shorter rifampicin regimens (3 or 4 months) had higher treatment completion rates and fewer adverse events when compared with INH. However, the overall quality of evidence as per GRADE criteria was low to moderate. Another meta-analysis came to a similar conclusion, namely that rifamycin-containing regimens taken for 3 months or longer had a better profile in preventing TB reactivation. Research There is some evidence from animal and clinical studies suggesting that moxifloxacin-containing regimens as short as four months may be as effective as six months of conventional therapy. Bayer is currently running a phase II clinical trial in collaboration with the TB Alliance to evaluate shorter treatment regimens for TB; encouragingly, Bayer has also promised that, if the trials are successful, it will make moxifloxacin affordable and accessible in countries that need it. Another approach for anti-TB drug development, which does not rely on antibiotics, consists of targeting NAD+ synthase, an essential enzyme in tuberculosis bacteria but not in humans. Low-level laser therapy for treating tuberculosis is not supported by reliable evidence. History Streptomycin and para-aminosalicylic acid were developed by the mid-1940s. In 1960, Edinburgh City Hospital physician Sir John Crofton addressed the Royal College of Physicians in London with a lecture titled "Tuberculosis Undefeated", and proposed that "the disease could be conquered, once and for all". With his colleagues at Edinburgh, he recognised that germs that developed even a mild resistance to a single drug were significant. His team showed that when treating new cases of TB, strict compliance with a combination of three therapies, the so-called triple therapy (streptomycin, para-aminosalicylic acid and isoniazid), could provide a complete cure. It became known as the Edinburgh method and remained the standard treatment for at least 15 years. In the 1970s it was recognised that combining isoniazid and rifampin could reduce the duration of treatment from 18 to nine months, and in the 1980s the duration of treatment was further shortened by adding pyrazinamide. National and international guidelines
See also ATC code J04 Drugs for treatment of TB, Mantoux test, Heaf test, TB Alliance, Tuberculosis management in the era before antituberculosis drugs, History of tuberculosis, Tuberculosis treatment in Colorado Springs (historical). References This article incorporates public domain material from websites or documents of the Centers for Disease Control and Prevention.
Typhoid fever
Typhoid fever, also known as typhoid, is a disease caused by Salmonella serotype Typhi bacteria. Symptoms vary from mild to severe, and usually begin six to 30 days after exposure. Often there is a gradual onset of a high fever over several days. This is commonly accompanied by weakness, abdominal pain, constipation, headaches, and mild vomiting. Some people develop a skin rash with rose-colored spots. In severe cases, people may experience confusion. Without treatment, symptoms may last weeks or months. Diarrhea may be severe, but is uncommon. Other people may carry the bacterium without being affected, but they are still able to spread the disease. Typhoid fever is a type of enteric fever, along with paratyphoid fever. S. enterica Typhi is believed to infect and replicate only within humans. Typhoid is caused by the bacterium Salmonella enterica subsp. enterica serovar Typhi growing in the intestines, Peyers patches, mesenteric lymph nodes, spleen, liver, gallbladder, bone marrow and blood. Typhoid is spread by eating or drinking food or water contaminated with the feces of an infected person. Risk factors include limited access to clean drinking water and poor sanitation. Those who have not yet been exposed to the pathogen and ingest contaminated drinking water or food are most at risk for developing symptoms. Only humans can be infected; there are no known animal reservoirs. Diagnosis is by culturing and identifying S. enterica Typhi from patient samples or by detecting an immune response to the pathogen in blood samples. Recently, new advances in large-scale data collection and analysis have allowed researchers to develop better diagnostics, such as detecting changing abundances of small molecules in the blood that may specifically indicate typhoid fever. Diagnostic tools in regions where typhoid is most prevalent are quite limited in their accuracy and specificity, and the time required for a proper diagnosis, the increasing spread of antibiotic resistance, and the cost of testing are also hardships for under-resourced healthcare systems. A typhoid vaccine can prevent about 40% to 90% of cases during the first two years. The vaccine may have some effect for up to seven years. For those at high risk or people traveling to areas where the disease is common, vaccination is recommended. Other efforts to prevent the disease include providing clean drinking water, good sanitation, and handwashing. Until an infection is confirmed as cleared, the infected person should not prepare food for others. Typhoid is treated with antibiotics such as azithromycin, fluoroquinolones, or third-generation cephalosporins. Resistance to these antibiotics has been developing, which has made treatment more difficult. In 2015, 12.5 million new typhoid cases were reported. The disease is most common in India. Children are most commonly affected. Typhoid decreased in the developed world in the 1940s as a result of improved sanitation and the use of antibiotics. Every year about 400 cases are reported in the U.S. and an estimated 6,000 people have typhoid. In 2015, it resulted in about 149,000 deaths worldwide – down from 181,000 in 1990. Without treatment, the risk of death may be as high as 20%. With treatment, it is between 1% and 4%. Typhus is a different disease. Owing to their similar symptoms, they were not recognized as distinct diseases until the 1800s. "Typhoid" means "resembling typhus".
Signs and symptoms Classically, the progression of untreated typhoid fever has three distinct stages, each lasting about a week. Over the course of these stages, the patient becomes exhausted and emaciated. In the first week, the body temperature rises slowly, and fever fluctuations are seen with relative bradycardia (Faget sign), malaise, headache, and cough. A bloody nose (epistaxis) is seen in a quarter of cases, and abdominal pain is also possible. A decrease in the number of circulating white blood cells (leukopenia) occurs with eosinopenia and relative lymphocytosis; blood cultures are positive for S. enterica subsp. enterica serovar Typhi. The Widal test is usually negative. In the second week, the person is often too tired to get up, with high fever in plateau around 40 °C (104 °F) and bradycardia (sphygmothermic dissociation or Faget sign), classically with a dicrotic pulse wave. Delirium can occur, where the patient is often calm, but sometimes becomes agitated. This delirium has given typhoid the nickname "nervous fever". Rose spots appear on the lower chest and abdomen in around a third of patients. Rhonchi (rattling breathing sounds) are heard in the bases of the lungs. The abdomen is distended and painful in the right lower quadrant, where a rumbling sound can be heard. Diarrhea can occur in this stage, but constipation is also common. The spleen and liver are enlarged (hepatosplenomegaly) and tender, and liver transaminases are elevated. The Widal test is strongly positive, with anti-O and anti-H antibodies. Blood cultures are sometimes still positive. In the third week of typhoid fever, a number of complications can occur: The fever is still very high and oscillates very little over 24 hours. Dehydration ensues along with malnutrition, and the patient is delirious. A third of affected people develop a macular rash on the trunk. Intestinal haemorrhage due to bleeding in congested Peyers patches occurs; this can be very serious, but is usually not fatal. Intestinal perforation in the distal ileum is a very serious complication and often fatal. It may occur without alarming symptoms until septicaemia or diffuse peritonitis sets in. Respiratory diseases such as pneumonia and acute bronchitis can develop. Encephalitis and neuropsychiatric symptoms (described as "muttering delirium" or "coma vigil"), with picking at bedclothes or imaginary objects, may occur. Metastatic abscesses, cholecystitis, endocarditis, and osteitis are possible. Low platelet count (thrombocytopenia) is sometimes seen. Causes Bacteria The Gram-negative bacterium that causes typhoid fever is Salmonella enterica subsp. enterica serovar Typhi. Based on the MLST subtyping scheme, the two main sequence types of S. Typhi are ST1 and ST2, which are widespread globally. Global phylogeographical analysis showed dominance of haplotype 58 (H58), which probably originated in India during the late 1980s and is now spreading through the world with multi-drug resistance. A more detailed genotyping scheme was reported in 2016 and is now being used widely. This scheme reclassified the nomenclature of H58 to genotype 4.3.1. Transmission Unlike other strains of Salmonella, no animal carriers of typhoid are known. Humans are the only known carriers of the bacterium. S. enterica subsp. enterica serovar Typhi is spread by the fecal-oral route from people who are infected and from asymptomatic carriers of the bacterium. An asymptomatic human carrier is someone who is still excreting typhoid bacteria in their stool a year after the acute stage of the infection.
Diagnosis Diagnosis is made by blood, bone marrow, or stool cultures and with the Widal test (demonstration of antibodies against the Salmonella antigens O-somatic and H-flagellar). In epidemics and less wealthy countries, after excluding malaria, dysentery, or pneumonia, a therapeutic trial with chloramphenicol is generally undertaken while awaiting the results of the Widal test and blood and stool cultures. Widal test The Widal test is used to identify specific antibodies in the serum of people with typhoid by using antigen-antibody interactions. In this test, the serum is mixed with a dead bacterial suspension of Salmonella carrying specific antigens. If the patients serum contains antibodies against those antigens, they attach to them, forming clumps. If clumping does not occur, the test is negative. The Widal test is time-consuming and prone to significant false positives. It may also be falsely negative in recently infected people. But unlike the Typhidot test, the Widal test quantifies the specimen with titres. Rapid diagnostic tests Rapid diagnostic tests such as Tubex, Typhidot, and Test-It have shown moderate diagnostic accuracy. Typhidot Typhidot is based on the presence of specific IgM and IgG antibodies to a specific 50 kDa OMP antigen. This test is carried out on a cellulose nitrate membrane where a specific S. typhi outer membrane protein is attached as fixed test lines. It separately identifies IgM and IgG antibodies. IgM shows recent infection; IgG signifies remote infection. The sample pad of this kit contains colloidal gold-anti-human IgG or gold-anti-human IgM. If the sample contains IgG and IgM antibodies against those antigens, they will react and turn red. The Typhidot test becomes positive within 2–3 days of infection. Two colored bands indicate a positive test. A single control band indicates a negative test. A single first fixed line or no band at all indicates an invalid test (this read-out logic is sketched in code below). Typhidots biggest limitation is that it is not quantitative, just positive or negative. Tubex test The Tubex test contains two types of particles: brown magnetic particles coated with antigen and blue indicator particles coated with O9 antibody. During the test, if antibodies are present in the serum, they will attach to the brown magnetic particles and settle at the base, while the blue indicator particles remain in the solution, producing a blue color, which means the test is positive. If the serum does not have an antibody in it, the blue particles attach to the brown particles and settle at the bottom, producing a colorless solution, which means the test is negative. Prevention Sanitation and hygiene are important to prevent typhoid. It can spread only in environments where human feces can come into contact with food or drinking water. Careful food preparation and washing of hands are crucial to prevent typhoid. Industrialization contributed greatly to the elimination of typhoid fever, as it eliminated the public-health hazards associated with having horse manure in public streets, which led to a large number of flies, which are vectors of many pathogens, including Salmonella spp. According to statistics from the U.S. Centers for Disease Control and Prevention, the chlorination of drinking water has led to dramatic decreases in the transmission of typhoid fever.
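Returning to the Typhidot read-out rules described above, the following minimal sketch (Python; band names are simplified, and this is no substitute for a kit's actual instructions) maps observed bands to a qualitative result:

def interpret_typhidot(control_band, igm_band, igg_band):
    # The control band must appear for the test to be valid
    if not control_band:
        return "invalid"
    # IgM suggests recent infection; IgG suggests remote infection
    if igm_band and igg_band:
        return "positive (recent and remote infection markers)"
    if igm_band:
        return "positive (recent infection)"
    if igg_band:
        return "positive (remote infection)"
    return "negative"

print(interpret_typhidot(control_band=True, igm_band=True, igg_band=False))
# positive (recent infection)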
Vaccination Two typhoid vaccines are licensed for use for the prevention of typhoid: the live, oral Ty21a vaccine (sold as Vivotif by Crucell Switzerland AG) and the injectable typhoid polysaccharide vaccine (sold as Typhim Vi by Sanofi Pasteur and Typherix by GlaxoSmithKline). Both are efficacious and recommended for travelers to areas where typhoid is endemic. Boosters are recommended every five years for the oral vaccine and every two years for the injectable form. An older, killed whole-cell vaccine is still used in countries where the newer preparations are not available, but this vaccine is no longer recommended for use because it has more side effects (mainly pain and inflammation at the site of the injection). To help decrease rates of typhoid fever in developing nations, the World Health Organization (WHO) endorsed the use of a vaccination program starting in 1999. Vaccination has proven effective at controlling outbreaks in high-incidence areas and is also very cost-effective: prices are normally less than US$1 per dose. Because the price is low, poverty-stricken communities are more willing to take advantage of the vaccinations. Although vaccination programs for typhoid have proven effective, they alone cannot eliminate typhoid fever. Combining vaccines with public-health efforts is the only proven way to control this disease. Since the 1990s, the WHO has recommended two typhoid fever vaccines. The ViPS vaccine is given by injection, and the Ty21a by capsules. Only people over age two are recommended to be vaccinated with the ViPS vaccine, and it requires revaccination after 2–3 years, with 55%–72% efficacy. The Ty21a vaccine is recommended for people five and older, lasting 5–7 years with 51%–67% efficacy. The two vaccines have proved safe and effective for epidemic disease control in multiple regions. A version of the vaccine combined with a hepatitis A vaccine is also available. Results of a phase 3 trial of typhoid conjugate vaccine (TCV) in December 2019 reported 81% fewer cases among children. Treatment Oral rehydration therapy The rediscovery of oral rehydration therapy in the 1960s provided a simple way to prevent many of the deaths from diarrheal diseases in general. Antibiotics Where resistance is uncommon, the treatment of choice is a fluoroquinolone such as ciprofloxacin. Otherwise, a third-generation cephalosporin such as ceftriaxone or cefotaxime is the first choice. Cefixime is a suitable oral alternative. Properly treated, typhoid fever is not fatal in most cases. Antibiotics such as ampicillin, chloramphenicol, trimethoprim-sulfamethoxazole, amoxicillin, and ciprofloxacin have been commonly used to treat it. Treatment with antibiotics reduces the case-fatality rate to about 1%. Without treatment, some patients develop sustained fever, bradycardia, hepatosplenomegaly, abdominal symptoms, and occasionally pneumonia. In white-skinned patients, pink spots, which fade on pressure, appear on the skin of the trunk in up to 20% of cases. In the third week, untreated cases may develop gastrointestinal and cerebral complications, which may prove fatal in 10%–20% of cases. The highest case fatality rates are reported in children under 4. Around 2%–5% of those who contract typhoid fever become chronic carriers, as bacteria persist in the biliary tract after symptoms have resolved. Surgery Surgery is usually indicated if intestinal perforation occurs.
One study found a 30-day mortality rate of 9% (8/88) and surgical site infections in 67% (59/88), with the disease burden borne predominantly by low-resource countries. For surgical treatment, most surgeons prefer simple closure of the perforation with drainage of the peritoneum. Small-bowel resection is indicated for patients with multiple perforations. If antibiotic treatment fails to eradicate the hepatobiliary carriage, the gallbladder should be resected. Cholecystectomy is sometimes successful, especially in patients with gallstones, but is not always successful in eradicating the carrier state because of persisting hepatic infection. Resistance As resistance to ampicillin, chloramphenicol, trimethoprim-sulfamethoxazole, and streptomycin is now common, these agents are no longer used as first-line treatment of typhoid fever. Typhoid resistant to these agents is known as multidrug-resistant typhoid. Ciprofloxacin resistance is an increasing problem, especially in the Indian subcontinent and Southeast Asia. Many centres are shifting from ciprofloxacin to ceftriaxone as the first line for treating suspected typhoid originating in South America, India, Pakistan, Bangladesh, Thailand, or Vietnam. Also, it has been suggested that azithromycin is better at treating resistant typhoid than both fluoroquinolone drugs and ceftriaxone. Azithromycin can be taken by mouth and is less expensive than ceftriaxone, which is given by injection. A separate problem exists with laboratory testing for reduced susceptibility to ciprofloxacin; current recommendations are that isolates should be tested simultaneously against ciprofloxacin (CIP) and against nalidixic acid (NAL), that isolates sensitive to both CIP and NAL should be reported as "sensitive to ciprofloxacin", and that isolates sensitive to CIP but not to NAL should be reported as "reduced sensitivity to ciprofloxacin". But an analysis of 271 isolates found that around 18% of isolates with a reduced susceptibility to fluoroquinolones, the class to which CIP belongs (MIC 0.125–1.0 mg/L), would not be detected by this method. Epidemiology In 2000, typhoid fever caused an estimated 21.7 million illnesses and 217,000 deaths. It occurs most often in children and young adults between 5 and 19 years old. In 2013, it resulted in about 161,000 deaths – down from 181,000 in 1990. Infants, children, and adolescents in south-central and Southeast Asia have the highest rates of typhoid. Outbreaks are also often reported in sub-Saharan Africa and Southeast Asia. In 2000, more than 90% of morbidity and mortality due to typhoid fever occurred in Asia. In the U.S., about 400 cases occur each year, 75% of which are acquired while traveling internationally. Before the antibiotic era, the case fatality rate of typhoid fever was 10%–20%. Today, with prompt treatment, it is less than 1%, but 3%–5% of people who are infected develop a chronic infection in the gall bladder. Since S. enterica subsp. enterica serovar Typhi is human-restricted, these chronic carriers become the crucial reservoir for further spread of the disease; carriage can persist for decades, further complicating identification and treatment. Recently, genome-level study of S. enterica subsp. enterica serovar Typhi isolates associated with a large outbreak and a carrier has provided new insight into the pathogenesis of the pathogen. In industrialized nations, water sanitation and food handling improvements have reduced the number of typhoid cases.
Developing nations, such as those in parts of Asia and Africa, have the highest rates. These areas lack access to clean water, proper sanitation systems, and proper health-care facilities. In these areas, such access to basic public-health needs is not expected in the near future. In 2004–2005 an outbreak in the Democratic Republic of Congo resulted in more than 42,000 cases and 214 deaths. Since November 2016, Pakistan has had an outbreak of extensively drug-resistant (XDR) typhoid fever. In Europe, a report based on data for 2017 retrieved from The European Surveillance System (TESSy) on the distribution of confirmed typhoid and paratyphoid fever cases found that 22 EU/EEA countries reported a total of 1,098 cases, 90.9% of which were travel-related, mainly acquired during travel to South Asia. History Early descriptions The plague of Athens, during the Peloponnesian War, was most likely an outbreak of typhoid fever. During the war, Athenians retreated into a walled-in city to escape attack from the Spartans. This massive influx of humans into a concentrated space overwhelmed the water supply and waste infrastructure, likely leading to unsanitary conditions as fresh water became harder to obtain and waste became more difficult to collect and remove beyond the city walls. In 2006, examination of remains from a mass burial site in Athens dating to around the time of the plague (~430 B.C.) detected fragments of DNA similar to modern-day S. Typhi DNA, whereas Yersinia pestis (plague), Rickettsia prowazekii (typhus), Mycobacterium tuberculosis, cowpox virus, and Bartonella henselae were not detected in any of the remains tested. It is possible that the Roman emperor Augustus Caesar had either a liver abscess or typhoid fever, and survived by using ice baths and cold compresses as a means of treatment for his fever. There is a statue of Antonius Musa, the Greek physician who treated his fever. Definition and evidence of transmission The French doctors Pierre-Fidele Bretonneau and Pierre-Charles-Alexandre Louis are credited with describing typhoid fever as a specific disease, unique from typhus. Both doctors performed autopsies on individuals who died in Paris due to fever, and indicated that many had lesions on the Peyers patches which correlated with distinct symptoms before death. British medics were skeptical of the differentiation between typhoid and typhus because both were endemic to Britain at that time. However, in France only typhoid was circulating in the population. Pierre-Charles-Alexandre Louis also performed case studies and statistical analysis to demonstrate that typhoid was contagious, and that persons who already had the disease seemed to be protected. Afterward, several American doctors confirmed these findings, and then Sir William Jenner convinced any remaining skeptics that typhoid is a specific disease recognizable by lesions in the Peyers patches, by examining sixty-six autopsies of fever patients and concluding that the symptoms of headaches, diarrhea, rash spots, and abdominal pain were present only in those patients found to have intestinal lesions after death. This solidified the association of the disease with the intestinal tract and gave the first clue to the route of transmission. In 1847 William Budd learned of an epidemic of typhoid fever in Clifton and found that 13 of 34 residents had contracted the disease, all of whom drew their drinking water from the same well.
Notably, this observation came two years before John Snow identified contaminated water as the route of transmission in a cholera outbreak. Budd later became health officer of Bristol and ensured a clean water supply, and documented further evidence of typhoid as a water-borne illness throughout his career. Cause Polish scientist Tadeusz Browicz described a short bacillus in the organs and feces of typhoid victims in 1874. Browicz was able to isolate and grow the bacilli, but did not go as far as to suggest or prove that they caused the disease. In April 1880, three months prior to Eberths publication, Edwin Klebs described short and filamentous bacilli in the Peyers patches of typhoid victims. The bacteriums role in disease was speculated upon but not confirmed. In 1880, Karl Joseph Eberth described a bacillus that he suspected was the cause of typhoid. Eberth is given credit for discovering the bacterium definitively by successfully isolating the same bacterium from 18 of 40 typhoid victims and failing to find it in any "control" victims of other diseases. In 1884, pathologist Georg Theodor August Gaffky (1850–1918) confirmed Eberths findings. Gaffky isolated the same bacterium as Eberth from the spleen of a typhoid victim, and was able to grow the bacterium on solid media. The organism was given names such as Eberths bacillus, Eberthella Typhi, and Gaffky-Eberth bacillus. Today, the bacillus that causes typhoid fever goes by the scientific name Salmonella enterica serovar Typhi. Chlorination of water Most developed countries had declining rates of typhoid fever throughout the first half of the 20th century due to vaccinations and advances in public sanitation and hygiene. In 1893 attempts were made to chlorinate the water supply in Hamburg, Germany, and in 1897 Maidstone, England, was the first town to have its entire water supply chlorinated. In 1905, following an outbreak of typhoid fever, the City of Lincoln, England, instituted permanent water chlorination. The first permanent disinfection of drinking water in the US was made in 1908 to the Jersey City, New Jersey, water supply. Credit for the decision to build the chlorination system has been given to John L. Leal. The chlorination facility was designed by George W. Fuller. Outbreaks in traveling military groups led to the creation of the Lyster bag in 1915: a bag with a faucet that can be hung from a tree or pole, filled with water, and treated with a chlorination tablet dropped into the water. The Lyster bag was essential for the survival of American soldiers in the Vietnam War. Direct transmission and carriers There were several occurrences of milk delivery men spreading typhoid fever throughout the communities they served. Although typhoid is not spread through milk itself, there were several examples of milk distributors in many locations watering their milk down with contaminated water, or cleaning the glass bottles the milk was placed in with contaminated water. Boston had two such cases around the turn of the 20th century. In 1899 there were 24 cases of typhoid traced to a single milkman, whose wife had died of typhoid fever a week before the outbreak. In 1908, J.J. Fallon, who was also a milkman, died of typhoid fever. Following his death and confirmation of the typhoid fever diagnosis, the city conducted an investigation of typhoid symptoms and cases along his route and found evidence of a significant outbreak.
A month after the outbreak was first reported, the Boston Globe published a short statement declaring the outbreak over, stating "[a]t Jamaica Plain there is a slight increase, the total being 272 cases. Throughout the city there is a total of 348 cases." There was at least one death reported during this outbreak: Mrs. Sophia S. Engstrom, aged 46. Typhoid continued to ravage the Jamaica Plain neighborhood in particular throughout 1908, and several more people were reported dead due to typhoid fever, although these cases were not explicitly linked to the outbreak. The Jamaica Plain neighborhood at that time was home to many working-class and poor immigrants, mostly from Ireland. The most notorious carrier of typhoid fever, but by no means the most destructive, was Mary Mallon, known as Typhoid Mary. Although other cases of human-to-human spread of typhoid were known at the time, the concept of an asymptomatic carrier who was able to transmit disease had only been hypothesized, not yet identified or proven. Mary Mallon became the first known example of an asymptomatic carrier of an infectious disease, making typhoid fever the first disease known to be transmissible through asymptomatic hosts. The cases and deaths caused by Mallon occurred mainly in upper-class families in New York City. At the time of Mallons tenure as a personal cook for upper-class families, New York City reported 3,000 to 4,500 cases of typhoid fever annually. In the summer of 1906, two daughters of a wealthy family and maids working in their home became ill with typhoid fever. After investigating their home water sources and ruling out water contamination, the family hired civil engineer George Soper to conduct an investigation of the possible source of typhoid fever in the home. Soper described himself as an "epidemic fighter". His investigation ruled out many sources of food, and led him to question whether the cook the family had hired just prior to their household outbreak, Mallon, was the source. Since she had already left and begun employment elsewhere, he proceeded to track her down in order to obtain a stool sample. When he was finally able to meet Mallon in person, he described her by saying "Mary had a good figure and might have been called athletic had she not been a little too heavy." In accounts of Sopers pursuit of Mallon, his only remorse appears to be that he was not given enough credit for his relentless pursuit and publication of her personal identifying information, stating that the media "rob[s] me of whatever credit belongs to the discovery of the first typhoid fever carrier to be found in America." Ultimately, 51 cases and 3 deaths were suspected to be caused by Mallon. In 1924 the city of Portland, Oregon, experienced an outbreak of typhoid fever consisting of 26 cases and 5 deaths, all deaths due to intestinal hemorrhage. All cases were attributed to a single milk farm worker, who was shedding large amounts of the typhoid pathogen in his urine. Misidentification of the disease, due to inaccurate Widal test results, delayed identification of the carrier and proper treatment.
Ultimately, it took four samplings of different secretions from all of the dairy workers to successfully identify the carrier. Upon discovery, the dairy worker was forcibly quarantined for seven weeks, and regular samples were taken; most of the time the stool samples yielded no typhoid, while the urine often yielded the pathogen. Pharmaceutical treatment decreased the amount of bacteria secreted; however, the infection was never fully cleared from the urine, and the carrier was released "under orders never again to engage in the handling of foods for human consumption." At the time of release, the authors noted "for more than fifty years he has earned his living chiefly by milking cows and knows little of other forms of labor, it must be expected that the closest surveillance will be necessary to make certain that he does not again engage in this occupation." Overall, in the early 20th century the medical profession began to identify carriers of the disease, and evidence of transmission independent of water contamination. In a 1933 American Medical Association publication, physicians treatment of asymptomatic carriers is best summarized by the opening line "Carriers of typhoid bacilli are a menace". Within the same publication, the first official estimate of typhoid carriers is given (2 to 5% of all typhoid patients), and a distinction is drawn between temporary and chronic carriers. The authors further estimate that there are four to five chronic female carriers to every one male carrier, although they offered no data to explain this assertion of a gender difference in the rate of typhoid carriers. As for treatment, the authors suggest: "When recognized, carriers must be instructed as to the disposal of excreta as well as to the importance of personal cleanliness. They should be forbidden to handle food or drink intended for others, and their movements and whereabouts must be reported to the public health officers". Today, typhoid carriers exist all over the world, but the highest incidence of asymptomatic infection is likely to occur in South/Southeast Asian and Sub-Saharan countries. The Los Angeles County department of public health tracks typhoid carriers and reports the number of carriers identified within the county yearly; between 2006 and 2016, zero to four new cases of typhoid carriers were identified per year. Cases of typhoid fever must be reported within one working day of identification. As of 2018, chronic typhoid carriers must sign a "Carrier Agreement" and are required to test for typhoid shedding twice yearly, ideally every 6 months. Carriers may be released from their agreements upon fulfilling "release" requirements, based on completion of a personalized treatment plan designed with medical professionals. Fecal or gallbladder carrier release requirements: 6 consecutive negative feces and urine specimens submitted at 1-month or greater intervals beginning at least 7 days after completion of therapy. Urinary or kidney carrier release requirements: 6 consecutive negative urine specimens submitted at 1-month or greater intervals beginning at least 7 days after completion of therapy (this timing rule is sketched in code below). As of 2016 the male:female ratio of carriers in Los Angeles County was 3:1. Due to the nature of asymptomatic cases, many questions remain about how individuals are able to tolerate infection for long periods of time, how to identify such cases, and efficient options for treatment.
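The carrier release requirements above are essentially a scheduling rule. The following minimal sketch (Python) computes the earliest possible specimen dates under a literal reading of those criteria; the 30-day interval stands in for "one month", and a positive specimen, which would restart the count, is not modeled:

from datetime import date, timedelta

def earliest_specimen_schedule(therapy_end, specimens=6, interval_days=30, lead_in_days=7):
    # First specimen no sooner than 7 days after completion of therapy;
    # subsequent specimens at 1-month (here, 30-day) or greater intervals
    first = therapy_end + timedelta(days=lead_in_days)
    return [first + timedelta(days=i * interval_days) for i in range(specimens)]

for d in earliest_specimen_schedule(date(2018, 1, 1)):
    print(d.isoformat())
# 2018-01-08, 2018-02-07, 2018-03-09, 2018-04-08, 2018-05-08, 2018-06-07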
Researchers are currently working to understand asymptomatic infection with Salmonella species by studying infections in laboratory animals, which will ultimately lead to improved prevention and treatment options for typhoid carriers. In 2002, Dr. John Gunn described the ability of Salmonella sp. to form biofilms on gallstones in mice, providing a model for studying carriage in the gallbladder. Dr. Denise Monack and Dr. Stanley Falkow described a mouse model of asymptomatic intestinal and systemic infection in 2004, and Dr. Monack went on to demonstrate that a sub-population of superspreaders is responsible for the majority of transmission to new hosts, following the 80/20 rule of disease transmission, and that the intestinal microbiota likely plays a role in transmission. Dr. Monacks mouse model allows long-term carriage of Salmonella in the mesenteric lymph nodes, spleen and liver. Vaccine development British bacteriologist Almroth Edward Wright first developed an effective typhoid vaccine at the Army Medical School in Netley, Hampshire. It was introduced in 1896 and used successfully by the British during the Second Boer War in South Africa. At that time, typhoid often killed more soldiers at war than were lost due to enemy combat. Wright further developed his vaccine at a newly opened research department at St Marys Hospital Medical School in London from 1902, where he established a method for measuring protective substances (opsonin) in human blood. Wrights version of the typhoid vaccine was produced by growing the bacterium at body temperature in broth, then heating the bacteria to 60 °C to "heat inactivate" the pathogen, killing it while keeping the surface antigens intact. The heat-killed bacteria were then injected into a patient. To show evidence of the vaccines efficacy, Wright then collected serum samples from patients several weeks post-vaccination and tested their serums ability to agglutinate live typhoid bacteria. A "positive" result was represented by clumping of bacteria, indicating that the body was producing anti-serum (now called antibodies) against the pathogen. Citing the example of the Second Boer War, during which many soldiers died from easily preventable diseases, Wright convinced the British Army that 10 million vaccine doses should be produced for the troops being sent to the Western Front, thereby saving up to half a million lives during World War I. The British Army was the only combatant at the outbreak of the war to have its troops fully immunized against the bacterium. For the first time, their casualties due to combat exceeded those from disease. In 1909, Frederick F. Russell, a U.S. Army physician, adopted Wrights typhoid vaccine for use with the Army, and two years later, his vaccination program became the first in which an entire army was immunized. It eliminated typhoid as a significant cause of morbidity and mortality in the U.S. military. Typhoid vaccination for members of the American military became mandatory in 1911. Before the vaccine, the rate of typhoid fever in the military was 14,000 or greater per 100,000 soldiers. By World War I, the rate of typhoid in American soldiers was 37 per 100,000. During the Second World War, the United States Army authorized the use of a trivalent vaccine containing heat-inactivated Typhoid, Paratyphi A and Paratyphi B pathogens. In 1934, discovery of the Vi capsular antigen by Arthur Felix and Miss S. R. Margaret Pitt enabled development of the safer Vi antigen vaccine, which is widely in use today.
Arthur Felix and Margaret Pitt also isolated the strain Ty2, which became the parent strain of Ty21a, the strain used as a live-attenuated vaccine for typhoid fever today. Antibiotics and resistance Chloramphenicol was isolated from Streptomyces by Dr. David Gotlieb during the 1940s. In 1948 American army doctors tested its efficacy in treating typhoid patients in Kuala Lumpur, Malaysia. Individuals who received a full course of treatment cleared the infection, whereas patients given a lower dose had a relapse. Asymptomatic carriers continued to shed bacilli despite chloramphenicol treatment; only ill patients were improved by chloramphenicol. Resistance to chloramphenicol became frequent in Southeast Asia by the 1950s, and today chloramphenicol is used only as a last resort due to the high prevalence of resistance. Terminology The disease has been referred to by various names, often associated with symptoms, such as gastric fever, enteric fever, abdominal typhus, infantile remittent fever, slow fever, nervous fever, pythogenic fever, drain fever and low fever. Notable people Emperor Augustus of Rome (suspected based on the historical record but not confirmed) survived. Albert, Prince Consort, husband of Queen Victoria of the United Kingdom, died on 14 December 1861, 24 days after the first record of "feeling horribly ill", after suffering loss of appetite, insomnia, fever, chills, profuse sweating, vomiting, rash spots, delusions, inability to recognize family members, a worsening rash on the abdomen, a change in tongue color, and finally a state of extreme fatigue. The attending physician William Jenner, an expert on typhoid fever at the time, diagnosed him. Edward VII of the UK, son of Queen Victoria, while still Prince of Wales, had a near-fatal case of typhoid fever. Tsar Nicholas II of Russia survived an illness circa 1900–1901. William Henry Harrison, the 9th President of the United States of America, died 32 days into his term, in 1841. This is the shortest term served by a United States President. Wilbur Wright, co-inventor of the airplane with his brother Orville, died from typhoid in 1912, 36 years before Orville. Stephen A. Douglas, political opponent of Abraham Lincoln in 1858 and 1860, died of typhoid on June 3, 1861. Ignacio Zaragoza, Mexican general and politician, died at the age of 33 of typhoid fever on September 8, 1862. William Wallace Lincoln, the son of US president Abraham and Mary Todd Lincoln, died of typhoid in 1862. Martha Bulloch Roosevelt, mother of president Theodore Roosevelt and paternal grandmother of Eleanor Roosevelt, died of typhoid fever in 1884. Mary Mallon, "Typhoid Mary" (see the history section above for further details). Leland Stanford Jr., son of American tycoon and politician A. Leland Stanford and eponym of Leland Stanford Junior University, died of typhoid fever in 1884 at the age of 15. Three of Louis Pasteurs five children died of typhoid fever. Gerard Manley Hopkins, English poet, died of typhoid fever in 1889. Lizzie van Zyl, South African child inmate of the Bloemfontein concentration camp during the Second Boer War, died of typhoid fever in 1901. Dr HJH Tup Scott, captain of the 1886 Australian cricket team that toured England, died of typhoid in 1910. Arnold Bennett, English novelist, died in 1932 of typhoid, two months after drinking a glass of water in a Paris hotel to prove it was safe. Hakaru Hashimoto, Japanese medical scientist, died of typhoid fever in 1934.
Outbreaks Plague of Athens (suspected). "Burning Fever" outbreak among indigenous Americans: between 1607 and 1624, 85% of the population at the James River died from a typhoid epidemic. The World Health Organization estimates the death toll was over 6,000 during this time. Maidstone, Kent outbreak in 1897–1898: 1,847 patients were recorded to have typhoid fever. This outbreak is notable because it was the first time a typhoid vaccine was deployed during a civilian outbreak. Almroth Edward Wrights vaccine was offered to 200 healthcare providers, and of the 84 individuals who received the vaccine none developed typhoid, whereas 4 who had not been vaccinated became ill. The American army in the Spanish-American War: government records estimate over 21,000 troops had typhoid, resulting in 2,200 deaths. In 1902, guests at mayoral banquets in Southampton and Winchester, England, became ill and four died, including the Dean of Winchester, after consuming oysters. The infection was due to oysters sourced from Emsworth, where the oyster beds had been contaminated with raw sewage. Jamaica Plain neighborhood, Boston, in 1908: linked to milk delivery (see the history section above for further details). Outbreak among upper-class New Yorkers who employed Mary Mallon: 51 cases and 3 deaths from 1907 to 1915. Aberdeen, Scotland, in summer 1964: traced back to contaminated canned beef sourced from Argentina and sold in markets. More than 500 patients were quarantined in the hospital for a minimum of four weeks, and the outbreak was contained without any deaths. Dushanbe, Tajikistan, in 1996–1997: 10,677 cases reported, 108 deaths. Kinshasa, Democratic Republic of the Congo, in 2004: 43,000 cases and over 200 deaths. A prospective study of specimens collected in the same region between 2007 and 2011 revealed that about one third of samples obtained from patients were resistant to multiple antibiotics. Kampala, Uganda, in 2015: 10,230 cases reported. See also Typhus fever
Weight loss
Weight loss, in the context of medicine, health, or physical fitness, refers to a reduction of the total body mass, through a loss of fluid, body fat (adipose tissue), or lean mass (namely bone mineral deposits, muscle, tendon, and other connective tissue). Weight loss can either occur unintentionally because of malnourishment or an underlying disease, or from a conscious effort to improve an actual or perceived overweight or obese state. "Unexplained" weight loss that is not caused by a reduction in calorific intake or by exercise is called cachexia and may be a symptom of a serious medical condition. Intentional Intentional weight loss is the loss of total body mass as a result of efforts to improve fitness and health, or to change appearance through slimming. Weight loss is the main treatment for obesity, and there is substantial evidence that a 7-10% weight loss can prevent progression from prediabetes to type 2 diabetes, and that a 5-15% weight loss can help manage cardiometabolic health in diabetic people. Weight loss in individuals who are overweight or obese can reduce health risks, increase fitness, and may delay the onset of diabetes. It could reduce pain and increase movement in people with osteoarthritis of the knee. Weight loss can lead to a reduction in hypertension (high blood pressure); however, whether this reduces hypertension-related harm is unclear. Weight loss is achieved by adopting a lifestyle in which fewer calories are consumed than are expended. Depression, stress or boredom may contribute to weight increase, and in these cases, individuals are advised to seek medical help. A 2010 study found that dieters who got a full nights sleep lost more than twice as much fat as sleep-deprived dieters. Although it has been hypothesized that supplementation of vitamin D may help, studies do not support this. The majority of dieters regain weight over the long term. According to the UK National Health Service and the Dietary Guidelines for Americans, those who achieve and manage a healthy weight do so most successfully by being careful to consume just enough calories to meet their needs, and by being physically active. For weight loss to be permanent, changes in diet and lifestyle must be permanent as well. There is evidence that counseling or exercise alone do not result in weight loss, whereas dieting alone results in meaningful long-term weight loss, and a combination of dieting and exercise provides the best results. Meal replacements, orlistat, a very-low-calorie diet, and primary care intensive medical interventions can also support meaningful weight loss. Techniques Diet and exercise The least intrusive weight loss methods, and those most often recommended, are adjustments to eating patterns and increased physical activity, generally in the form of exercise. The World Health Organization recommends that people combine a reduction of processed foods high in saturated fats, sugar and salt, and reduced caloric intake, with an increase in physical activity. Both long-term exercise programs and anti-obesity medications reduce abdominal fat volume. Self-monitoring of diet, exercise, and weight are beneficial strategies for weight loss, particularly early in weight loss programs. Research indicates that those who log their foods about three times per day and about 20 times per month are more likely to achieve clinically significant weight loss. Weight loss depends on maintaining a negative energy balance, not on the type of macronutrients (such as carbohydrate) consumed.
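As a rough illustration of this energy-balance arithmetic, the sketch below (Python) uses the commonly quoted approximation of roughly 7,700 kcal per kilogram of body fat; this figure is a simplification, varies between individuals, and real-world loss slows as energy expenditure adapts:

KCAL_PER_KG_FAT = 7700  # widely quoted approximation; actual value varies

def weeks_to_lose(kg_target, daily_deficit_kcal):
    # Weeks of a steady daily calorie deficit needed for a target fat loss,
    # under a simple linear model with no metabolic adaptation
    return (kg_target * KCAL_PER_KG_FAT) / (daily_deficit_kcal * 7)

print(f"{weeks_to_lose(5, 500):.0f} weeks")  # about 11 weeks for 5 kg at a 500 kcal/day deficit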
High-protein diets have shown greater efficacy in the short term for people eating ad libitum, due to increased thermogenesis and satiety. Medications Other methods of weight loss include use of anti-obesity drugs that decrease appetite, block fat absorption, or reduce stomach volume. Obesity has been resistant to drug-based therapies, with a 2021 review stating that existing medications are "often delivering insufficient efficacy and dubious safety". Bariatric surgery Bariatric surgery may be indicated in cases of severe obesity. Two common bariatric surgical procedures are gastric bypass and gastric banding. Both can be effective at limiting the intake of food energy by reducing the size of the stomach, but as with any surgical procedure both come with their own risks that should be considered in consultation with a physician. Weight loss industry There is a substantial market for products which claim to make weight loss easier, quicker, cheaper, more reliable, or less painful. These include books, DVDs, CDs, creams, lotions, pills, rings and earrings, body wraps, body belts and other materials, fitness centers, clinics, personal coaches, weight loss groups, and food products and supplements. Dietary supplements, though widely used, are not considered a healthy option for weight loss, and have no clinical evidence of efficacy. Herbal products have not been shown to be effective. In 2008, between US$33 billion and $55 billion was spent annually in the US on weight-loss products and services, including medical procedures and pharmaceuticals, with weight-loss centers taking between 6 and 12 percent of total annual expenditure. Over $1.6 billion per year was spent on weight-loss supplements. About 70 percent of Americans dieting attempts are of a self-help nature. In Western Europe, sales of weight-loss products, excluding prescription medications, topped €1.25 billion (£900 million/$1.4 billion) in 2009. The scientific soundness of commercial diets from commercial weight management organizations varies widely; many were historically not evidence-based, and the evidence supporting their use remains limited, partly because of high attrition rates. Commercial diets result in modest weight loss in the long term, with similar results regardless of the brand, and similarly to non-commercial diets and standard care. Comprehensive diet programs, providing counseling and targets for calorie intake, are more efficient than dieting without guidance ("self-help"), although the evidence is very limited. The National Institute for Health and Care Excellence devised a set of essential criteria to be met by commercial weight management organizations to be approved. Unintentional Characteristics Unintentional weight loss may result from loss of body fats, loss of body fluids, muscle atrophy, or a combination of these. It is generally regarded as a medical problem when at least 10% of a persons body weight has been lost in six months, or 5% in the last month (this rule of thumb is sketched in code below). Another criterion used for assessing weight that is too low is the body mass index (BMI). However, even lesser amounts of weight loss can be a cause for serious concern in a frail elderly person. Unintentional weight loss can occur because of an inadequately nutritious diet relative to a persons energy needs (generally called malnutrition).
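The 10%-in-six-months / 5%-in-one-month screening rule of thumb mentioned above can be stated compactly in code. This is a minimal illustrative sketch (Python); the function name and example weights are invented for illustration:

def significant_weight_loss(current_kg, one_month_ago_kg, six_months_ago_kg):
    # Fractional loss over each look-back window
    six_month_loss = (six_months_ago_kg - current_kg) / six_months_ago_kg
    one_month_loss = (one_month_ago_kg - current_kg) / one_month_ago_kg
    # Flag >= 10% lost over six months, or >= 5% lost over the last month
    return six_month_loss >= 0.10 or one_month_loss >= 0.05

print(significant_weight_loss(63.0, 66.0, 70.0))  # True: 10% lost over six months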
Besides malnutrition, disease processes, changes in metabolism, hormonal changes, medications or other treatments, disease- or treatment-related dietary changes, or reduced appetite associated with a disease or treatment can also cause unintentional weight loss. Poor nutrient utilization can lead to weight loss, and can be caused by fistulae in the gastrointestinal tract, diarrhea, drug-nutrient interaction, enzyme depletion and muscle atrophy. Continuing weight loss may deteriorate into wasting, a vaguely defined condition called cachexia. Cachexia differs from starvation in part because it involves a systemic inflammatory response. It is associated with poorer outcomes. In the advanced stages of progressive disease, metabolism can change so that patients lose weight even when they are getting what is normally regarded as adequate nutrition and the body cannot compensate. This leads to a condition called anorexia cachexia syndrome (ACS), and additional nutrition or supplementation is unlikely to help. Symptoms of weight loss from ACS include severe weight loss from muscle rather than body fat, loss of appetite and feeling full after eating small amounts, nausea, anemia, weakness and fatigue. Serious weight loss may reduce quality of life, impair treatment effectiveness or recovery, worsen disease processes and be a risk factor for high mortality rates. Malnutrition can affect every function of the human body, from the cells to the most complex body functions, including: immune response; wound healing; muscle strength (including respiratory muscles); renal capacity and depletion leading to water and electrolyte disturbances; thermoregulation; and menstruation. Malnutrition can lead to vitamin and other deficiencies and to inactivity, which in turn may predispose to other problems, such as pressure sores. Unintentional weight loss can be the characteristic leading to diagnosis of diseases such as cancer and type 1 diabetes. In the UK, up to 5% of the general population is underweight, but the proportion is more than 10% among those with lung or gastrointestinal diseases or who have recently had surgery. According to data in the UK using the Malnutrition Universal Screening Tool (MUST), which incorporates unintentional weight loss, more than 10% of the population over the age of 65 is at risk of malnutrition. A high proportion (10–60%) of hospital patients are also at risk, along with a similar proportion in care homes. Causes Disease-related Disease-related malnutrition can be considered in four categories. Weight loss issues related to specific diseases include the following: As chronic obstructive pulmonary disease (COPD) advances, about 35% of patients experience severe weight loss called pulmonary cachexia, including diminished muscle mass. Around 25% experience moderate to severe weight loss, and most others have some weight loss. Greater weight loss is associated with poorer prognosis. Theories about contributing factors include appetite loss related to reduced activity, additional energy required for breathing, and the difficulty of eating with dyspnea (labored breathing). Cancer is a very common and sometimes fatal cause of unexplained (idiopathic) weight loss; about one-third of unintentional weight loss cases are secondary to malignancy. Cancers to suspect in patients with unexplained weight loss include gastrointestinal, prostate, hepatobiliary (hepatocellular carcinoma, pancreatic cancer), ovarian, hematologic or lung malignancies. People with HIV often experience weight loss, and it is associated with poorer outcomes.
Wasting syndrome is an AIDS-defining condition. Gastrointestinal disorders are another common cause of unexplained weight loss; in fact, they are the most common non-cancerous cause of idiopathic weight loss. Possible gastrointestinal etiologies of unexplained weight loss include: celiac disease, peptic ulcer disease, inflammatory bowel disease (Crohn's disease and ulcerative colitis), pancreatitis, gastritis, diarrhea, chronic mesenteric ischemia and many other GI conditions. Infection. Some infectious diseases can cause weight loss. Fungal illnesses, endocarditis, many parasitic diseases, AIDS, and some other subacute or occult infections may cause weight loss. Renal disease. Patients who have uremia often have poor or absent appetite, vomiting and nausea. This can cause weight loss. Cardiac disease. Cardiovascular disease, especially congestive heart failure, may cause unexplained weight loss. Connective tissue disease. Oral, taste or dental problems (including infections) can reduce nutrient intake, leading to weight loss. Therapy-related Medical treatment can directly or indirectly cause weight loss, impairing treatment effectiveness and recovery, which can lead to further weight loss in a vicious cycle. Many patients will be in pain and have a loss of appetite after surgery. Part of the body's response to surgery is to direct energy to wound healing, which increases the body's overall energy requirements. Surgery affects nutritional status indirectly, particularly during the recovery period, as it can interfere with wound healing and other aspects of recovery. Surgery directly affects nutritional status if a procedure permanently alters the digestive system. Enteral nutrition (tube feeding) is often needed. However, a policy of nil by mouth for all gastrointestinal surgery has not been shown to be beneficial, with some weak evidence suggesting it might hinder recovery. Early post-operative nutrition is a part of Enhanced Recovery After Surgery protocols. These protocols also include carbohydrate loading in the 24 hours before surgery, but earlier nutritional interventions have not been shown to have a significant impact. Social conditions Social conditions such as poverty, social isolation and inability to get or prepare preferred foods can cause unintentional weight loss, and this may be particularly common in older people. Nutrient intake can also be affected by culture, family and belief systems. Ill-fitting dentures and other dental or oral health problems can also affect adequacy of nutrition. Loss of hope, status or social contact and spiritual distress can cause depression, which may be associated with reduced nutrition, as can fatigue. Myths Some popular beliefs attached to weight loss have been shown to either have less effect on weight loss than commonly believed or to be actively unhealthy. According to Harvard Health, the idea of metabolic rate being the "key to weight" is "part truth and part myth", as while metabolism does affect weight loss, external forces such as diet and exercise have an equal effect. They also commented that the idea of changing one's rate of metabolism is under debate. Diet plans in fitness magazines are also often believed to be effective, but may actually be harmful: by limiting the daily intake of important calories and nutrients they can be detrimental depending on the person, and they are even capable of driving individuals away from weight loss.
Health effects Obesity increases health risks, including diabetes, cancer, cardiovascular disease, high blood pressure, and non-alcoholic fatty liver disease, to name a few. Reduction of obesity lowers those risks. A 1-kg loss of body weight has been associated with an approximate 1-mm Hg drop in blood pressure; on this estimate, for example, a 5-kg loss would correspond to a reduction of roughly 5 mm Hg. Intentional weight loss is associated with cognitive performance improvements in overweight and obese individuals. See also Anorexia Cigarette smoking for weight loss Dieting Physical exercise Weight gain References External links Weight loss at Curlie Health benefits of losing weight, by IQWiG at PubMed Health Weight-control Information Network, U.S. National Institutes of Health Nutrition in cancer care, by NCI at PubMed Health Unintentional weight loss
Urethritis
Urethritis is the inflammation of the urethra. The most common symptoms include painful or difficult urination and urethral discharge. It is a commonly treatable condition usually caused by infection with bacteria. This bacterial infection is often sexually transmitted, but not in every instance; it can be idiopathic, for example. Some cases of urethritis are asymptomatic as well. Symptoms and signs Symptoms vary based on the cause of the disease. For infectious causes of urethritis, symptoms may start a few weeks to several months after infection. Non-infectious causes of urethritis commonly show symptoms after a few days. Common symptoms include painful urination, a continuous urge to urinate, itching, and urethral discharge. Additional symptoms vary based on assigned sex at birth. AMAB (assigned male at birth) individuals may experience blood in the urine or semen, itching, tenderness, or swelling of the penis, enlarged lymph nodes in the groin area, and/or pain with intercourse or ejaculation. AFAB (assigned female at birth) individuals may experience abdominal pain, pelvic pain, pain with intercourse, or vaginal discharge. Non-gonococcal urethritis typically does not have noticeable symptoms in AFAB individuals; however, the infection can spread to parts of the reproductive system. Complications Serious yet rare complications associated with Neisseria gonorrhoeae may include penile edema, abscessed tissue surrounding the urethra, urethral strictures (scarring), and penile lymphangitis. If left untreated, the bacteria that cause non-gonococcal urethritis can lead to various complications. In individuals assigned male at birth, complications can include epididymitis, reactive arthritis, conjunctivitis, skin lesions, and discharge. In individuals assigned female at birth, complications can include pelvic inflammatory disease, chronic pelvic pain, vaginitis, mucopurulent cervicitis, and miscarriages. Causes The disease is classified as either gonococcal urethritis, caused by Neisseria gonorrhoeae, or non-gonococcal urethritis (NGU), most commonly caused by Chlamydia trachomatis, which accounts for 20-50% of routinely tested cases. NGU, sometimes called nonspecific urethritis (NSU), has both infectious and noninfectious causes. Other causes include: Mycoplasma genitalium: second most common cause, accounting for 15-20% of non-gonococcal urethritis Trichomonas vaginalis: accounts for 2-13% of cases in the US; infection is asymptomatic in most cases Adenoviridae Uropathogenic Escherichia coli (UPEC) Herpes simplex virus Cytomegalovirus Reactive arthritis: urethritis is part of the triad of reactive arthritis, which includes arthritis, urethritis, and conjunctivitis. Ureaplasma urealyticum Methicillin-resistant Staphylococcus aureus Group B streptococcus Irritation of the genital area: for example catheter-induced, physical activity, tight clothing or soaps Fungal urethritis in immunosuppressed individuals Menopause Diagnosis Urethritis is usually diagnosed through collecting history on the individual and through a physical examination. In AFAB individuals, urethritis can be diagnosed with a number of tests including: urine test, blood test, vaginal culture, cystoscopy, or a nucleic acid test.
AFAB individuals will also have abdominal and pelvic exams to check for urethral discharge and tenderness of the lower abdomen or urethra. In AMAB individuals, urethritis is diagnosed by at least one of the following: mucopurulent or purulent urethral discharge on examination, ≥ 2 white blood cells per oil immersion field on a Gram stain of a urethral swab, or positive leukocyte esterase and/or ≥ 10 white blood cells per high power field of the first-void urine. Men who meet the criteria for urethritis commonly get nucleic acid amplification testing for Chlamydia trachomatis and Neisseria gonorrhoeae to determine the type of urethritis. AMAB individuals will have an exam of the abdomen, bladder area, penis, and scrotum. Additionally, a digital rectal examination of the prostate may be used if rectal pain is reported or if the individual is of older age. Prevention Primary prevention can be accomplished by the reduction of modifiable risk factors that increase the likelihood of developing urethritis. These factors include, but are not limited to, sexual intercourse (particularly unprotected intercourse) and genital irritation from contact with tight clothing, physical activity, and various irritants such as soap, lotion and spermicides. Bacterial infections leading to gonococcal and non-gonococcal urethritis can be prevented by: sexual abstinence; use of barrier contraception, such as condoms; pre-exposure vaccination (HPV and hepatitis B vaccines); and reducing the number of sexual partners. Chlorhexidine is an antibacterial agent that covers a wide spectrum of gram-positive and gram-negative bacteria. Rinsing with 15 ml of a 0.12% or 10 ml of a 0.2% chlorhexidine solution (each delivering roughly 18-20 mg of chlorhexidine) for 30 seconds produced large and prolonged reductions in salivary bacterial counts within 7 hours of its use. One hypothesis in 2010 proposed the potential use of chlorhexidine rinsing before oral sex as a prevention strategy for recurrent non-gonococcal urethritis caused by bacteria entering the urethra from the oral cavity following "insertive oral intercourse", particularly in men. However, clinical studies have yet to be carried out to test this hypothesis. Treatment Antimicrobials are generally the drugs of choice for gonococcal and non-gonococcal infections. In 2015 the CDC suggested that dual therapy, consisting of two antimicrobials with different mechanisms of action, would be an effective treatment strategy for urethritis and could also potentially slow antibiotic resistance. A variety of drugs may be prescribed based on the cause of urethritis: Gonococcal urethritis (caused by N. gonorrhoeae): The CDC recommends administering an injected dose of ceftriaxone 250 mg intramuscularly and an oral dose of azithromycin 1 g simultaneously. Cefixime 400 mg as an oral single dose can be used as an alternative if ceftriaxone is not available. Non-gonococcal urethritis (caused by Chlamydia trachomatis): The CDC recommends administering an oral single dose of azithromycin 1 g or a 7-day course of doxycycline 100 mg orally twice daily. Alternative treatments can also be used when the above options are not available: erythromycin base 500 mg orally four times daily for 7 days; erythromycin ethylsuccinate 800 mg orally four times daily for 7 days; levofloxacin 500 mg orally once daily for 7 days; or ofloxacin 300 mg orally twice daily for 7 days. Treatment for both gonococcal and non-gonococcal urethritis is suggested to be given under direct observation in a clinic or healthcare facility in order to maximize compliance and effectiveness.
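As a compact restatement of the first-line regimens quoted above, the sketch below gathers them into a single lookup table. This is a documentation aid only, not clinical guidance; the regimens are those attributed to the CDC in the text above, while the dictionary and function names are hypothetical:

```python
# Summary of the first-line regimens described above (as attributed to the CDC).
# Documentation sketch only; not clinical guidance.
FIRST_LINE_REGIMENS = {
    "gonococcal": [
        "ceftriaxone 250 mg intramuscularly, single dose",
        "azithromycin 1 g orally, single dose (given simultaneously)",
    ],
    "non-gonococcal (chlamydial)": [
        "azithromycin 1 g orally, single dose",
        "or doxycycline 100 mg orally twice daily for 7 days",
    ],
}

def first_line(etiology: str) -> list[str]:
    """Return the first-line regimen summarized above for the given etiology."""
    return FIRST_LINE_REGIMENS[etiology]

for cause in FIRST_LINE_REGIMENS:
    print(cause, "->", "; ".join(first_line(cause)))
```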
For non-medication management, proper perineal hygiene should be stressed. This includes avoiding use of vaginal deodorant sprays and proper wiping after urination and bowel movements. Sexual intercourse should be avoided for at least 7 days after completion of treatment (and until symptoms resolve, if present). Past and current sexual partners should also be assessed and treated. Individuals displaying persistence or recurrence of symptoms should be advised to return for possible re-evaluation. Although there is no standard definition, persistent urethritis is defined as urethritis that has failed to display improvement within the first week of initial therapy. Additionally, recurrent urethritis is defined as urethritis reappearing within 6 weeks after a previous episode of non-gonococcal urethritis. If recurrent symptoms are supported by microscopic evidence of urethritis, then re-treatment is appropriate. The following treatment recommendations are limited and based on clinical experience, expert opinions and guidelines for recurrent or persistent non-gonococcal urethritis: If doxycycline was prescribed as initial therapy, give azithromycin 500 mg or 1 g for the first day, then azithromycin 250 mg once daily for 4 days, plus metronidazole 400-500 mg twice daily for 5 days. If azithromycin was prescribed as initial therapy, give doxycycline 100 mg twice daily for 7 days, plus metronidazole 400-500 mg twice daily for 5-7 days. Moxifloxacin 400 mg orally once daily for 7-14 days can be given, with caution, if macrolide-resistant M. genitalium infection is demonstrated. Appropriate treatment for these individuals may require further referral to a urologist if symptoms persist after initial treatment. Epidemiology Urethritis is one of the most common sexually transmitted infections found in men. Gonorrhea and chlamydia are the main pathogens causing urethritis. Health organizations break down the rate of urethritis based on its etiology. The estimated global prevalence of gonorrhoea is 0.9% in women and 0.7% in men. An estimated 87 million new infections of gonorrhoea occurred in 2016. Low-income countries have the highest prevalence of gonorrhoea. Gonorrhea is more commonly seen in males than in females, and infection rates are higher in adolescents and young adults. The estimated global prevalence of chlamydia, which is the most common cause of non-gonococcal urethritis, is 3.8% in women and 2.7% in men. An estimated 127 million new chlamydia cases occurred in 2016. Upper-middle income countries had the highest prevalence of chlamydia. The rate of chlamydia is around two times higher in females than in males. Rates are also higher among adolescents and young adults. References == External links ==
Sodium bicarbonate
Sodium bicarbonate (IUPAC name: sodium hydrogencarbonate), commonly known as baking soda or bicarbonate of soda, is a chemical compound with the formula NaHCO3. It is a salt composed of a sodium cation (Na+) and a bicarbonate anion (HCO3−). Sodium bicarbonate is a white solid that is crystalline, but often appears as a fine powder. It has a slightly salty, alkaline taste resembling that of washing soda (sodium carbonate). The natural mineral form is nahcolite. It is a component of the mineral natron and is found dissolved in many mineral springs. Nomenclature Because it has long been known and widely used, the salt has many different names such as baking soda, bread soda, cooking soda, and bicarbonate of soda, and can often be found near baking powder in stores. The term baking soda is more common in the United States, while bicarbonate of soda is more common in Australia, the United Kingdom and Ireland; in many northern/central European countries it is called Natron. Abbreviated colloquial forms such as sodium bicarb, bicarb soda, bicarbonate, and bicarb are common. The word saleratus, from Latin sal æratus (meaning "aerated salt"), was widely used in the 19th century for both sodium bicarbonate and potassium bicarbonate. Its E number food additive code is E500. The prefix bi in bicarbonate comes from an outdated naming system predating molecular knowledge, in reference to the two molar equivalents of carbon dioxide (known as carbonic acid in the ancient chemistry language) that potassium hydrocarbonate/bicarbonate releases upon decomposition to (di)potassium carbonate and to potassium oxide (potash). The modern chemical formulas of these compounds now express their precise chemical compositions, which were unknown when the name bi-carbonate of potash was coined (see also: bicarbonate). Uses Cooking Leavening In cooking, baking soda is primarily used in baking as a leavening agent. When it reacts with acid, carbon dioxide is released, which causes expansion of the batter and forms the characteristic texture and grain in cakes, quick breads, soda bread, and other baked and fried foods. The acid–base reaction can be generically represented as follows: NaHCO3 + H+ → Na+ + CO2 + H2O Acidic materials that induce this reaction include hydrogen phosphates, cream of tartar, lemon juice, yogurt, buttermilk, cocoa, and vinegar. Baking soda may be used together with sourdough, which is acidic, making a lighter product with a less acidic taste. Heat can also by itself cause sodium bicarbonate to act as a raising agent in baking because of thermal decomposition, releasing carbon dioxide at temperatures above 80 °C (176 °F), as follows: 2 NaHCO3 → Na2CO3 + H2O + CO2 When used this way on its own, without the presence of an acidic component (whether in the batter or by the use of a baking powder containing acid), only half the available CO2 is released (one CO2 molecule is formed for every two equivalents of NaHCO3). Additionally, in the absence of acid, thermal decomposition of sodium bicarbonate also produces sodium carbonate, which is strongly alkaline and gives the baked product a bitter, "soapy" taste and a yellow color. Since the reaction occurs slowly at room temperature, mixtures (cake batter, etc.) can be allowed to stand without rising until they are heated in the oven. Baking powder Baking powder, also sold for cooking, contains around 30% sodium bicarbonate, along with various acidic ingredients which are activated by the addition of water, without the need for additional acids in the cooking medium.
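To put numbers on the two leavening routes described above, a short worked calculation (an illustration using the standard molar mass of NaHCO3, about 84 g/mol; the figures are derived here, not taken from a source):

```latex
\text{Acid route:}\quad \mathrm{NaHCO_3 + H^+ \rightarrow Na^+ + CO_2 + H_2O}
\;\Rightarrow\; 84~\mathrm{g~NaHCO_3} \to 1~\mathrm{mol~CO_2}

\text{Thermal route:}\quad \mathrm{2\,NaHCO_3 \rightarrow Na_2CO_3 + H_2O + CO_2}
\;\Rightarrow\; 168~\mathrm{g~NaHCO_3} \to 1~\mathrm{mol~CO_2}
```

Per gram of sodium bicarbonate, the thermal route therefore yields half the carbon dioxide of the acid route, which is the quantitative sense in which "only half the available CO2 is released" without an acid; pairing the bicarbonate with an acidic ingredient, as baking powder does, recovers the full yield.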
Many forms of baking powder contain sodium bicarbonate combined with calcium acid phosphate, sodium aluminium phosphate, or cream of tartar. Baking soda is alkaline; the acid used in baking powder avoids a metallic taste when the chemical change during baking creates sodium carbonate. Pyrotechnics Sodium bicarbonate is one of the main components of the common "black snake" firework. The effect is caused by the thermal decomposition, which produces carbon dioxide gas to produce a long snake-like ash as a combustion product of the other main component, sucrose. Sodium bicarbonate is also used to delay combustion reactions by releasing CO2 and H2O when heated, both of which are flame retardants. Mild disinfectant It has weak disinfectant properties, and it may be an effective fungicide against some organisms. Because baking soda will absorb musty smells, it has become a reliable method for used book sellers when making books less malodorous. Fire extinguisher Sodium bicarbonate can be used to extinguish small grease or electrical fires by being thrown over the fire, as heating of sodium bicarbonate releases carbon dioxide. However, it should not be applied to fires in deep fryers; the sudden release of gas may cause the grease to splatter. Sodium bicarbonate is used in BC dry chemical fire extinguishers as an alternative to the more corrosive monoammonium phosphate in ABC extinguishers. The alkaline nature of sodium bicarbonate makes it the only dry chemical agent, besides Purple-K, that was used in large-scale fire suppression systems installed in commercial kitchens. Because it can act as an alkali, the agent has a mild saponification effect on hot grease, which forms a smothering, soapy foam. Neutralization of acids Sodium bicarbonate reacts spontaneously with acids, releasing CO2 gas as a reaction product. It is commonly used to neutralize unwanted acid solutions or acid spills in chemical laboratories. It is not appropriate to use sodium bicarbonate to neutralize base, even though it is amphoteric, reacting with both acids and bases. Agriculture Sodium bicarbonate, when applied to leaves, can prevent the growth of fungi; however, it does not kill the fungus. Excessive amounts of sodium bicarbonate can cause discolouration of fruits (two percent solution) and chlorosis (one percent solution). Medical uses and health Sodium bicarbonate mixed with water can be used as an antacid to treat acid indigestion and heartburn. Its reaction with stomach acid produces salt, water, and carbon dioxide: NaHCO3 + HCl → NaCl + H2O + CO2(g) A mixture of sodium bicarbonate and polyethylene glycol such as PegLyte, dissolved in water and taken orally, is an effective gastrointestinal lavage preparation and laxative prior to gastrointestinal surgery, gastroscopy, etc. Intravenous sodium bicarbonate in an aqueous solution is sometimes used for cases of acidosis, or when insufficient sodium or bicarbonate ions are in the blood. In cases of respiratory acidosis, the infused bicarbonate ion drives the carbonic acid/bicarbonate buffer of plasma to the left, and thus raises the pH. For this reason, sodium bicarbonate is used in medically supervised cardiopulmonary resuscitation. Infusion of bicarbonate is indicated only when the blood pH is markedly low (< 7.1–7.0). HCO3− is used for treatment of hyperkalemia, as it will drive K+ back into cells during periods of acidosis. Since sodium bicarbonate can cause alkalosis, it is sometimes used to treat aspirin overdoses.
Aspirin requires an acidic environment for proper absorption, and a basic environment will diminish aspirin absorption in cases of overdose. Sodium bicarbonate has also been used in the treatment of tricyclic antidepressant overdose. It can also be applied topically as a paste, with three parts baking soda to one part water, to relieve some kinds of insect bites and stings (as well as accompanying swelling). Some alternative practitioners, such as Tullio Simoncini, have promoted baking soda as a cancer cure, which the American Cancer Society has warned against due to both its unproven effectiveness and potential danger in use. Edzard Ernst has called the promotion of sodium bicarbonate as a cancer cure "one of the more sickening alternative cancer scams I have seen for a long time". Sodium bicarbonate can be added to local anesthetics, to speed up the onset of their effects and make their injection less painful. It is also a component of Moffett's solution, used in nasal surgery. It has been proposed that acidic diets weaken bones. One systematic meta-analysis of the research shows no such effect. Another also finds that there is no evidence that alkaline diets improve bone health, but suggests that there "may be some value" to alkaline diets for other reasons. Antacid (such as baking soda) solutions have been prepared and used by protesters to alleviate the effects of exposure to tear gas during protests. Similarly to its use in baking, sodium bicarbonate is used together with a mild acid such as tartaric acid as the excipient in effervescent tablets: when such a tablet is dropped in a glass of water, the carbonate leaves the reaction medium as carbon dioxide gas (HCO3− + H+ → H2O + CO2↑ or, more precisely, HCO3− + H3O+ → 2 H2O + CO2↑). This makes the tablet disintegrate, leaving the medication suspended and/or dissolved in the water together with the resulting salt (in this example, sodium tartrate). Personal hygiene Sodium bicarbonate is also used as an ingredient in some mouthwashes. It has anticaries and abrasive properties. It works as a mechanical cleanser on the teeth and gums, neutralizes the production of acid in the mouth, and also acts as an antiseptic to help prevent infections. Sodium bicarbonate in combination with other ingredients can be used to make a dry or wet deodorant. Sodium bicarbonate may be used as a buffering agent, combined with table salt, when creating a solution for nasal irrigation. It is used in eye hygiene to treat blepharitis. This is done by addition of a teaspoon of sodium bicarbonate to cool water that was recently boiled, followed by gentle scrubbing of the eyelash base with a cotton swab dipped in the solution. Veterinary uses Sodium bicarbonate is used as a cattle feed supplement, in particular as a buffering agent for the rumen. Cleaning agent Sodium bicarbonate is used in a process for removing paint and corrosion called sodablasting. As a blasting medium, sodium bicarbonate is used to remove surface contamination from softer and less resilient substrates such as aluminium, copper or timber, which could be damaged by silica sand abrasive media. A manufacturer recommends a paste made from baking soda with minimal water as a gentle scouring powder, which is useful in removing surface rust, as the rust forms a water-soluble compound when in a concentrated alkaline solution; cold water should be used, as hot-water solutions can corrode steel.
Sodium bicarbonate attacks the thin protective oxide layer that forms on aluminium, making it unsuitable for cleaning this metal. A solution in warm water will remove the tarnish from silver when the silver is in contact with a piece of aluminium foil. Baking soda is commonly added to washing machines as a replacement for water softener and to remove odors from clothes. It is also almost as effective as sodium hydroxide in removing heavy tea and coffee stains from cups, when diluted with warm water. During the Manhattan Project to develop the nuclear bomb in the early 1940s, the chemical toxicity of uranium was an issue. Uranium oxides were found to stick very well to cotton cloth, and did not wash out with soap or laundry detergent. However, the uranium would wash out with a 2% solution of sodium bicarbonate. Clothing can become contaminated with toxic dust of depleted uranium (DU), which is very dense, hence used for counterweights in a civilian context, and in armour-piercing projectiles. DU is not removed by normal laundering; washing with about 6 ounces (170 g) of baking soda in 2 gallons (7.5 L) of water (roughly a 2% solution, comparable to that used in the Manhattan Project) will help to wash it out. Odor control It is often claimed that baking soda is an effective odor remover, and it is often recommended that an open box be kept in the refrigerator to absorb odor. This idea was promoted by the leading U.S. brand of baking soda, Arm & Hammer, in an advertising campaign starting in 1972. Though this campaign is considered a classic of marketing, leading within a year to more than half of American refrigerators containing a box of baking soda, there is little evidence that it is in fact effective in this application. Chemistry Sodium bicarbonate is an amphoteric compound. Aqueous solutions are mildly alkaline due to the formation of carbonic acid and hydroxide ion: HCO3− + H2O → H2CO3 + OH− Sodium bicarbonate can often be used as a safer alternative to sodium hydroxide, and as such can be used as a wash to remove any acidic impurities from a "crude" liquid, producing a purer sample. Reaction of sodium bicarbonate and an acid produces a salt and carbonic acid, which readily decomposes to carbon dioxide and water: NaHCO3 + HCl → NaCl + H2CO3 H2CO3 → H2O + CO2(g) Sodium bicarbonate reacts with acetic acid (found in vinegar), producing sodium acetate, water, and carbon dioxide: NaHCO3 + CH3COOH → CH3COONa + H2O + CO2(g) Sodium bicarbonate reacts with bases such as sodium hydroxide to form carbonates: NaHCO3 + NaOH → Na2CO3 + H2O Thermal decomposition At temperatures from 80–100 °C (176–212 °F), sodium bicarbonate gradually decomposes into sodium carbonate, water, and carbon dioxide. The conversion is faster at 200 °C (392 °F): 2 NaHCO3 → Na2CO3 + H2O + CO2 Most bicarbonates undergo this dehydration reaction. Further heating converts the carbonate into the oxide (above 850 °C/1,560 °F): Na2CO3 → Na2O + CO2 These conversions are relevant to the use of NaHCO3 as a fire-suppression agent ("BC powder") in some dry-powder fire extinguishers. Stability and shelf life If kept cool (room temperature) and dry (an airtight container is recommended to keep out moist air), sodium bicarbonate can be kept without a significant amount of decomposition for at least two or three years. History The word natron has been in use in many languages throughout modern times (in the forms of anatron, natrum and natron) and originated (like Spanish, French and English natron, as well as sodium) via Arabic naṭrūn (or anatrūn; cf.
the Lower Egyptian "Natrontal" (Wadi El Natrun), where a mixture of sodium carbonate and sodium hydrogen carbonate was used for the dehydration of mummies) from Greek nítron (νίτρον) (Herodotus; Attic lítron (λίτρον)), which can be traced back to ancient Egyptian ntr. The Greek nítron (soda, saltpeter) was also used in Latin (sal) nitrum and in German Salniter (the source of the words nitrogen, nitrate, etc.). In 1791, French chemist Nicolas Leblanc produced sodium carbonate, also known as soda ash. The pharmacist Valentin Rose the Younger is credited with the discovery of sodium bicarbonate in 1801 in Berlin. In 1846, two American bakers, John Dwight and Austin Church, established the first factory in the United States to produce baking soda from sodium carbonate and carbon dioxide. Saleratus, potassium or sodium bicarbonate, is mentioned in the novel Captains Courageous by Rudyard Kipling as being used extensively in the 1800s in commercial fishing to prevent freshly caught fish from spoiling. In 1919, US Senator Lee Overman declared that bicarbonate of soda could cure the Spanish flu. In the midst of the debate on 26 January 1919, he interrupted the discussion to announce the discovery of a cure. "I want to say, for the benefit of those who are making this investigation," he reported, "that I was told by a judge of a superior court in the mountain country of North Carolina they have discovered a remedy for this disease." The purported cure implied a critique of modern science and an appreciation for the simple wisdom of simple people. "They say that common baking soda will cure the disease," he continued, "that they have cured it with it, that they have no deaths up there at all; they use common baking soda, which cures the disease." Production Sodium bicarbonate is produced industrially from sodium carbonate: Na2CO3 + CO2 + H2O → 2 NaHCO3 It is produced on the scale of about 100,000 tonnes/year (as of 2001), with a worldwide production capacity of 2.4 million tonnes per year (as of 2002). Commercial quantities of baking soda are also produced by a similar method: soda ash, mined in the form of the ore trona, is dissolved in water and treated with carbon dioxide. Sodium bicarbonate precipitates as a solid from this solution. In the Solvay process, sodium bicarbonate is an intermediate in the reaction of sodium chloride, ammonia, and carbon dioxide. The product, however, shows low purity (75%). NaCl + CO2 + NH3 + H2O → NaHCO3 + NH4Cl Although of no practical value, NaHCO3 may be obtained by the reaction of carbon dioxide with an aqueous solution of sodium hydroxide: CO2 + NaOH → NaHCO3 Mining Naturally occurring deposits of nahcolite (NaHCO3) are found in the Eocene-age (55.8–33.9 Mya) Green River Formation, Piceance Basin in Colorado. Nahcolite was deposited as beds during periods of high evaporation in the basin. It is commercially mined using common underground mining techniques such as bore, drum, and longwall mining, in a fashion very similar to coal mining. It is also produced by solution mining, pumping heated water through nahcolite beds and crystallizing the dissolved nahcolite through a cooling crystallization process. In popular culture Sodium bicarbonate, as "bicarbonate of soda", was a frequent source of punch lines for Groucho Marx in Marx Brothers movies. In Duck Soup, Marx plays the leader of a nation at war.
In one scene, he receives a message from the battlefield that his general is reporting a gas attack, and Groucho tells his aide: "Tell him to take a teaspoonful of bicarbonate of soda and a half a glass of water." In A Night at the Opera, Groucho's character addresses the opening night crowd at an opera by saying of the lead tenor: "Signor Lassparri comes from a very famous family. His mother was a well-known bass singer. His father was the first man to stuff spaghetti with bicarbonate of soda, thus causing and curing indigestion at the same time." In the Joseph L. Mankiewicz classic All About Eve, the Max Fabian character (Gregory Ratoff) has an extended scene with Margo Channing (Bette Davis) in which, suffering from heartburn, he requests and then drinks bicarbonate of soda, eliciting a prominent burp. Channing promises to always keep a box of bicarb with Max's name on it. See also References Bibliography External links International Chemical Safety Card 1044
Bacterial vaginosis
Bacterial vaginosis (BV) is a disease of the vagina caused by excessive growth of bacteria. Common symptoms include increased vaginal discharge that often smells like fish. The discharge is usually white or gray in color. Burning with urination may occur. Itching is uncommon. Occasionally, there may be no symptoms. Having BV approximately doubles the risk of infection by a number of sexually transmitted infections, including HIV/AIDS. It also increases the risk of early delivery among pregnant women. BV is caused by an imbalance of the naturally occurring bacteria in the vagina. There is a change in the most common type of bacteria and a hundred- to thousand-fold increase in the total number of bacteria present. Typically, bacteria other than Lactobacilli become more common. Risk factors include douching, new or multiple sex partners, antibiotics, and using an intrauterine device, among others. However, it is not considered a sexually transmitted infection and, unlike gonorrhoea and chlamydia, sexual partners are not treated. Diagnosis is suspected based on the symptoms, and may be verified by testing the vaginal discharge and finding a higher than normal vaginal pH, and large numbers of bacteria. BV is often confused with a vaginal yeast infection or infection with Trichomonas. Usually treatment is with an antibiotic, such as clindamycin or metronidazole. These medications may also be used in the second or third trimesters of pregnancy. However, the condition often recurs following treatment. Probiotics may help prevent recurrence. It is unclear if the use of probiotics or antibiotics affects pregnancy outcomes. BV is the most common vaginal infection in women of reproductive age. The percentage of women affected at any given time varies between 5% and 70%. BV is most common in parts of Africa and least common in Asia and Europe. In the United States about 30% of women between the ages of 14 and 49 are affected. Rates vary considerably between ethnic groups within a country. While BV-like symptoms have been described for much of recorded history, the first clearly documented case occurred in 1894. Signs and symptoms Although about 50% of women with BV are asymptomatic, common symptoms include increased vaginal discharge that usually smells like fish. The discharge is often white or gray in color. There may be burning with urination. Occasionally, there may be no symptoms. The discharge coats the walls of the vagina, and is usually without significant irritation, pain, or erythema (redness), although mild itching can sometimes occur. By contrast, the normal vaginal discharge will vary in consistency and amount throughout the menstrual cycle and is at its clearest at ovulation, about two weeks before the period starts. Some practitioners claim that BV can be asymptomatic in almost half of affected women, though others argue that this is often a misdiagnosis. Complications Although previously considered a mere nuisance infection, untreated bacterial vaginosis may cause increased susceptibility to sexually transmitted infections, including HIV, and pregnancy complications. It has been shown that HIV-infected women with bacterial vaginosis (BV) are more likely to transmit HIV to their sexual partners than those without BV. There is evidence of an association between BV and increased rates of sexually transmitted infections such as HIV/AIDS. BV is associated with up to a six-fold increase in HIV shedding. BV is a risk factor for viral shedding and herpes simplex virus type 2 infection.
BV may increase the risk of infection with or reactivation of human papillomavirus (HPV). In addition, bacterial vaginosis, whether pre-existing or acquired, may increase the risk of pregnancy complications, most notably premature birth or miscarriage. Pregnant women with BV have a higher risk of chorioamnionitis, miscarriage, preterm birth, premature rupture of membranes, and postpartum endometritis. Women with BV who are treated with in vitro fertilization have a lower implantation rate and higher rates of early pregnancy loss. Causes Healthy vaginal microbiota consists of species that neither cause symptoms nor infections, nor negatively affect pregnancy. It is dominated mainly by Lactobacillus species. BV is defined by the disequilibrium in the vaginal microbiota, with decline in the number of lactobacilli. While the infection involves a number of bacteria, it is believed that most infections start with Gardnerella vaginalis creating a biofilm, which allows other opportunistic bacteria to thrive. One of the main risks for developing BV is douching, which alters the vaginal microbiota and predisposes women to developing BV. Douching is strongly discouraged by the U.S. Department of Health and Human Services and various medical authorities, for this and other reasons. BV is a risk factor for pelvic inflammatory disease, HIV, sexually transmitted infections (STIs), and reproductive and obstetric disorders or negative outcomes. Although BV can be associated with sexual activity, there is no clear evidence of sexual transmission. It is possible for sexually inactive persons to develop bacterial vaginosis. Also, subclinical iron deficiency may correlate with bacterial vaginosis in early pregnancy. A longitudinal study published in February 2006, in the American Journal of Obstetrics and Gynecology, showed that a link between psychosocial stress and bacterial vaginosis persisted even when other risk factors were taken into account. Exposure to the spermicide nonoxynol-9 does not affect the risk of developing bacterial vaginosis. Diagnosis To make a diagnosis of bacterial vaginosis, a swab from inside the vagina should be obtained. These swabs can be tested for: Gram stain, which shows the depletion of lactobacilli and overgrowth of Gardnerella vaginalis bacteria. Bacterial vaginosis is usually confirmed by a Gram stain of vaginal secretions. A characteristic "fishy" odor on wet mount. This test, called the whiff test, is performed by adding a small amount of potassium hydroxide to a microscope slide containing the vaginal discharge. A characteristic fishy odor is considered a positive whiff test and is suggestive of bacterial vaginosis. Loss of acidity. To control bacterial growth, the vagina is normally slightly acidic with a pH of 3.8–4.2. A swab of the discharge is put onto litmus paper to check its acidity. A pH greater than 4.5 is considered alkaline and is suggestive of bacterial vaginosis. The presence of clue cells on wet mount. Similar to the whiff test, the test for clue cells is performed by placing a drop of sodium chloride solution on a slide containing vaginal discharge. If present, clue cells can be visualized under a microscope. They are so named because they give a clue to the reason behind the discharge. These are epithelial cells that are coated with bacteria. Differential diagnosis for bacterial vaginosis includes the following: Normal vaginal discharge. Candidiasis (thrush, or a yeast infection). Trichomoniasis, an infection caused by Trichomonas vaginalis.
Aerobic vaginitis. The Centers for Disease Control and Prevention (CDC) defines STIs as "a variety of clinical syndromes and infections caused by pathogens that can be acquired and transmitted through sexual activity." But the CDC does not specifically identify BV as a sexually transmitted infection. Amsel criteria In clinical practice, BV can be diagnosed using the Amsel criteria: Thin, white, yellow, homogeneous discharge Clue cells on microscopy pH of vaginal fluid >4.5 Release of a fishy odor on adding alkali—10% potassium hydroxide (KOH) solution. At least three of the four criteria should be present for a confirmed diagnosis. A modification of the Amsel criteria accepts the presence of two instead of three factors and is considered equally diagnostic. Gram stain An alternative is to use a Gram-stained vaginal smear, with the Hay/Ison criteria or the Nugent criteria. The Hay/Ison criteria are defined as follows: Grade 1 (Normal): Lactobacillus morphotypes predominate. Grade 2 (Intermediate): Some lactobacilli present, but Gardnerella or Mobiluncus morphotypes also present. Grade 3 (Bacterial Vaginosis): Predominantly Gardnerella and/or Mobiluncus morphotypes. Few or absent lactobacilli. (Hay et al., 1994) Gardnerella vaginalis is the main culprit in BV. Gardnerella vaginalis is a short, Gram-variable rod (coccobacillus). Hence, the presence of clue cells and Gram-variable coccobacilli is indicative or diagnostic of bacterial vaginosis. Nugent score The Nugent score is now rarely used by physicians due to the time it takes to read the slides and the requirement for a trained microscopist. A score of 0-10 is generated from combining three other scores. The scores are as follows: 0–3 is considered negative for BV; 4–6 is considered intermediate; 7+ is considered indicative of BV. At least 10–20 high power (1000× oil immersion) fields are counted and an average determined. DNA hybridization testing with Affirm VPIII was compared to the Gram stain using the Nugent criteria. The Affirm VPIII test may be used for the rapid diagnosis of BV in symptomatic women, but it uses expensive proprietary equipment to read results and does not detect other pathogens that cause BV, including Prevotella spp, Bacteroides spp, and Mobiluncus spp. The cervicovaginal microbiome measured using 16S rRNA sequencing has the capacity to increase the throughput of the Nugent score and has been demonstrated to be directly comparable to clinical Nugent score measurement. Screening Screening during pregnancy is not recommended in the United States as of 2020. Prevention Some steps suggested to lower the risk include: not douching, avoiding sex, or limiting the number of sex partners. One review concluded that probiotics may help prevent recurrence. Another review found that, while there is tentative evidence, it is not strong enough to recommend their use for this purpose. Early evidence suggested that antibiotic treatment of male partners could re-establish the normal microbiota of the male urogenital tract and prevent the recurrence of infection. However, a 2016 Cochrane review found high-quality evidence that treating the sexual partners of women with bacterial vaginosis had no effect on symptoms, clinical outcomes, or recurrence in the affected women. It also found that such treatment may lead treated sexual partners to report increased adverse events. Treatment Antibiotics Treatment is typically with the antibiotics metronidazole or clindamycin. They can be either given by mouth or applied inside the vagina with similar efficacy.
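The Amsel rule ("at least three of four", or two under the modified version) and the Nugent score bands described above amount to simple decision rules; the sketch below restates them schematically. It is an illustration only, not a diagnostic tool, and the function names are hypothetical:

```python
# Schematic restatement of the Amsel and Nugent decision rules described above.
# Illustration only; not a clinical or diagnostic tool.

def amsel_positive(discharge: bool, clue_cells: bool, ph_above_4_5: bool,
                   positive_whiff: bool, modified: bool = False) -> bool:
    """At least 3 of 4 criteria (2 of 4 under the modified criteria)."""
    criteria_met = sum([discharge, clue_cells, ph_above_4_5, positive_whiff])
    return criteria_met >= (2 if modified else 3)

def nugent_category(score: int) -> str:
    """Map a Nugent score (0-10) to the bands quoted above."""
    if not 0 <= score <= 10:
        raise ValueError("Nugent score ranges from 0 to 10")
    if score <= 3:
        return "negative for BV"
    if score <= 6:
        return "intermediate"
    return "indicative of BV"

print(amsel_positive(True, True, True, False))  # True: three criteria met
print(nugent_category(8))                       # indicative of BV
```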
Antibiotic treatment is not always effective: about 10% to 15% of people do not improve with the first course of antibiotics, and recurrence rates of up to 80% have been documented. Recurrence rates are increased with sexual activity with the same pre-/post-treatment partner and inconsistent condom use, although estrogen-containing contraceptives decrease recurrence. When clindamycin is given to pregnant women symptomatic with BV before 22 weeks of gestation, the risk of pre-term birth before 37 weeks of gestation is lower. Other antibiotics that may work include macrolides, lincosamides, nitroimidazoles, and penicillins. Bacterial vaginosis is not considered a sexually transmitted infection, and treatment of a male sexual partner of a woman with bacterial vaginosis is not recommended. Probiotics A 2009 Cochrane review found tentative but insufficient evidence for probiotics as a treatment for BV. A 2014 review reached the same conclusion. A 2013 review found some evidence supporting the use of probiotics during pregnancy. The preferred probiotics for BV are those containing high doses of lactobacilli (around 10⁹ CFU) given in the vagina. Intravaginal administration is preferred to taking them by mouth. Prolonged repetitive courses of treatment appear to be more promising than short courses. The lack of effectiveness of commercially available Lactobacillus probiotics may be because most do not actually contain vaginal lactobacilli strains. LACTIN-V is a live biopharmaceutical medication containing the vaginally important Lactobacillus crispatus which is under development for the treatment of bacterial vaginosis and recurrent urinary tract infections. It has shown initial effectiveness in considerably reducing recurrence of bacterial vaginosis following antibiotic treatment. LACTIN-V is not yet Food and Drug Administration (FDA)-approved or commercially available. Antiseptics Topical antiseptics, for example dequalinium chloride, policresulen, hexetidine or povidone-iodine vaginal suppositories, may be applied if the risk of ascending infections is low (outside of pregnancy and in immunocompetent people without histories of upper genital tract infections). One study found that vaginal irrigations with hydrogen peroxide (3%) resulted in a slight improvement, but this was much less than with the use of oral metronidazole. Intravaginal boric acid in conjunction with other medications may be helpful in the treatment of recurrent BV. TOL-463, a formulation of boric acid enhanced with ethylenediaminetetraacetic acid (EDTA), is under development as an intravaginal medication for the treatment of BV and has shown preliminary effectiveness. Epidemiology BV is the most common infection of the vagina in women of reproductive age. The percentage of women affected at any given time varies between 5% and 70%. BV is most common in parts of Africa, and least common in Asia and Europe. In the United States, about 30% of those between the ages of 14 and 49 are affected. Rates vary considerably between ethnic groups within a country. References == External links ==
Nonallergic rhinitis
Nonallergic rhinitis is inflammation of the inner part of the nose that is not caused by an allergy. Nonallergic rhinitis involves symptoms including chronic sneezing or having a congested, drippy nose without an identified allergic reaction. Other common terms for nonallergic rhinitis are vasomotor rhinitis and perennial rhinitis. The prevalence of nonallergic rhinitis in otolaryngology practice is 40%. Allergic rhinitis is more common than nonallergic rhinitis; however, both conditions have similar presentation, manifestation and treatment. Nasal itching and paroxysmal sneezing are usually associated with nonallergic rhinitis in comparison to allergic rhinitis. Types Rhinitis medicamentosa – rebound nasal congestion suspected to be brought on by extended use of topical decongestants and certain oral medications that constrict blood vessels in the nose. Treatment includes withdrawal of nasal drops, short courses of systemic steroid therapy and, in some cases, surgical reduction of the turbinates if they have become hypertrophied. Rhinitis of pregnancy – pregnant women may develop persistent rhinitis due to hormonal changes. The nasal mucosa becomes edematous and blocks the airway. Some may develop secondary infection and even sinusitis in such cases. Care should be taken while prescribing drugs. Generally, local measures such as limited use of nasal drops, topical steroids and limited surgery (cryosurgery) to the turbinates are sufficient to relieve the symptoms. The safety of newer antihistamines for the developing fetus is not established, and they should be avoided. Honeymoon rhinitis – this usually follows sexual excitement, leading to nasal stuffiness. The condition appears to be genetically determined and caused by the presence in the nose of erectile tissue which may become engorged during sexual arousal, as a side effect of the signals from the autonomic nervous system that trigger changes in the genitals of both men and women. A related condition called sexually induced sneezing also exists, where people sneeze, sometimes uncontrollably, when engaging in or even thinking about sexual activity. A phenomenon presumably related to honeymoon rhinitis is the frequent side effect of nasal congestion during the use of Viagra or related phosphodiesterase type 5 inhibitors. Gustatory rhinitis – spicy and pungent food may in some people produce rhinorrhea, nasal stuffiness, lacrimation, sweating and flushing of the face. It can be relieved by ipratropium bromide nasal spray (an anticholinergic), taken a few minutes before a meal. Non-air flow rhinitis – it is seen in patients with laryngectomy, tracheostomy and choanal atresia. The nose is not used for air flow, and the turbinates become swollen due to loss of vasomotor control. In choanal atresia there is an additional factor of infection due to stagnation of discharge in the nasal cavity, which should otherwise drain freely into the nasopharynx. Photic sneeze reflex is a reflex condition that causes sneezing in response to looking at bright lights. Presentation Paroxysmal sneezing in the morning, especially while getting out of bed. Excessive rhinorrhea – watery discharge from the nose, especially when the patient bends forward. Nasal obstruction – bilateral nasal stuffiness that alternates from one side to the other; this is more marked at night, when the dependent side of the nose is often blocked. Postnasal drip. Complications Nonallergic rhinitis cases may subsequently develop polyps, turbinate hypertrophy and sinusitis.
Pathophysiology The nasal mucosa has a rich blood supply and has venous sinusoids or "lakes" surrounded by smooth muscle fibers. These smooth muscle fibers act as sphincters and control the filling and emptying of the sinusoids. Sympathetic stimulation causes vasoconstriction and shrinkage of the mucosa, which leads to decongestion of the nose. Parasympathetic stimulation causes not only excessive secretion from the nasal glands but also vasodilatation and engorgement, which lead to rhinorrhea and congestion of the nose. The autonomic nervous system, which supplies the nasal mucosa, is under the control of the hypothalamus. Diagnosis Nose examination: The mucosa is usually boggy and edematous with clear mucoid secretions. The turbinates are congested and hypertrophic. Pharynx examination: Mucosal injection and lymphoid hyperplasia involving the tonsils, adenoids and base of tongue may be seen. Investigations Absolute eosinophil count, nasal smear, skin and in vitro allergy tests to rule out allergic rhinitis, acoustic rhinometry for measuring nasal patency, smell testing, CT scan in cases of sinus disease and MRI in case of mass lesions. Classification Treatment Medical The avoidance of inciting factors such as sudden changes in temperature, humidity, or blasts of air or dust is helpful. Intranasal application of antihistamines, corticosteroids, or anticholinergics may also be used for vasomotor rhinitis. Intranasal cromolyn sodium may be used in patients older than two years. A Cochrane review concluded that it is unclear whether intranasal corticosteroids, when compared with a placebo, reduce patient-reported disease severity in people with nonallergic/vasomotor rhinitis, due to the low certainty of the evidence available from clinical trials. However, intranasal corticosteroids probably increase the risk of nosebleeds. Astelin (azelastine) "is indicated for symptomatic treatment of vasomotor rhinitis including rhinorrhea, nasal congestion, and post nasal drip in adults and children 12 years of age and older." Surgical Reduction of hypertrophied turbinates, correction of nasal septum deviation, removal of polyps, and sectioning of the parasympathetic secretomotor fibers to the nose (vidian neurectomy) for controlling refractory excessive rhinorrhea. See also Snatiation == References ==
Ventriculitis
Ventriculitis is the inflammation of the ventricles in the brain. The ventricles are responsible for containing and circulating cerebrospinal fluid throughout the brain. Ventriculitis is caused by infection of the ventricles, leading to swelling and inflammation. This is especially prevalent in patients with external ventricular drains and intraventricular stents. Ventriculitis can cause a wide variety of short-term symptoms and long-term side effects, ranging from headaches and dizziness to unconsciousness and death if not treated early. It is treated with an appropriate combination of antibiotics in order to rid the patient of the underlying infection. Much of the current research involving ventriculitis focuses on defining the disease and what causes it, which will allow for much more advancement in the subject. There is also a lot of attention being paid to possible treatments and prevention methods to help make this disease even less prevalent and dangerous. Signs and symptoms There is a great deal of variety in the symptoms associated with ventriculitis. The symptoms vary based on a number of different factors, including the severity of inflammation, the underlying cause, and the individual patient. Patients often present with headaches, painful cranial pressure, and neck pain early in the progression of the disease. Patients with a more advanced infection have been known to complain of many neurological effects such as dizziness, vertigo, confusion, and slurred speech. Very advanced cases can lead to mental instability, nausea, vomiting, rigors, and temporary loss of consciousness. Many patients with ventriculitis also experience some degree of hydrocephalus, which is the buildup of cerebrospinal fluid due to the inability of the ventricles to reabsorb and correctly circulate the fluid. Brain abscess is another common disorder resulting from the inflammation. If left untreated, ventriculitis can lead to serious inhibition of mental function and even death. The symptoms vary greatly, in part, because of the underlying or causing infection. While the inflammation can cause a number of effects such as those mentioned previously, the base infection could cause other symptoms that don't necessarily have to do with the ventriculitis itself. One of the challenges doctors face in diagnosing ventriculitis is distinguishing indicative symptoms, in spite of the wide variety of possible presentations of the disease. A great deal of emphasis is being put on research into better and faster ways to diagnose ventriculitis without the delay inherent in microbiological testing of the cerebrospinal fluid. The progression of the disease is also largely dependent on the nature of the specific case. Depending on the underlying infection, the way it entered the brain, and the type and timing of treatment, the infection may spread or withdraw on the order of months or days. Ventriculitis is a very serious condition and should be treated early to ensure as little lasting damage as possible. Cause Ventriculitis is caused by an infection of the ventricles, causing an immune response in the lining, which in turn leads to inflammation. The ventriculitis is, in truth, a complication of the initial infection or abnormality. The underlying infection can come in the form of a number of different bacteria or viruses.
The data seem to point to staphylococci as the leading bacterial cause, present in about 90% of cases, but generally, what is of more concern is the way the infection entered the ventricles. The brain in its natural state is well protected from infection: the blood–brain barrier serves to keep pathogens from entering the sensitive areas of the brain. However, when those natural defenses are bypassed in the hospital setting, the brain is suddenly exposed to a host of potentially harmful bacteria and viruses. Patients who have had invasive brain surgery or procedures are considered to be the most at risk for experiencing ventriculitis. Two procedures in particular have been studied extensively due to their high rates of post-operative ventriculitis. The first group consists of patients who have had an external ventricular drain implanted to allow physicians to reduce the intracranial pressure they experience. The duration that the drain remains implanted varies by necessity; however, the longer the drain is in, the more likely an infection will occur. The second group consists of patients who have an implanted intracranial stent. Both groups of patients have a much higher rate of ventriculitis than the general populace, though firm figures are scarce owing to the poor definition of ventriculitis and frequent misdiagnosis. Nearly 25% of patients with an external ventricular drain experience infection-based meningitis or ventriculitis. Diagnosis Ventriculitis is commonly diagnosed using a variety of tests or procedures. When a physician suspects that a patient has ventriculitis, the first step is typically to ascertain the presence of the inflammation using computed tomography (CT) or magnetic resonance imaging (MRI) technology to "take a picture" of the brain. The scans allow physicians to check for "intraventricular debris and pus, abnormal periventricular and subependymal signal intensity, and enhancement of the ventricular lining," all of which indicate the likelihood of ventriculitis. MRI has been reported as highly effective and sensitive in detecting such indicators, even at an early stage. After determining whether a patient shows signs of ventriculitis, the doctor may choose to pursue a more specific and useful diagnosis to find the cause. This is done by obtaining a sample of cerebrospinal fluid, most commonly via a procedure called a lumbar puncture or spinal tap. For patients with an implanted external ventricular drain, cerebrospinal fluid can be collected from the drain's output. After the sample of fluid is obtained, a battery of tests featuring Gram staining will be performed to identify any offending pathogen or infectious agent. The tests will also determine any resistance the pathogen may have to antibiotics. By identifying the viral or bacterial cause of the ventriculitis, doctors are better able to treat the inflammation and infection effectively. This procedure is fairly effective, but is rarely able to isolate anaerobic organisms that may be causing the inflammation, giving cause for further research and procedural development. Though they present with similar symptoms and often occur in tandem, meningitis and ventriculitis are two different diseases, so physicians must be able to distinguish between the two. Meningitis is the inflammation of the protective lining of the central nervous system, the meninges.
Because of the similar pathologies and causes of the two types of inflammation, they are difficult to differentiate using chemical testing, but they show very different visual effects on both MRI and CT scans, hence the scans' use to validate that the patient does, in fact, have ventriculitis and not another, similar condition such as meningitis. Treatment Treatment of ventriculitis is critical. If left untreated, it can lead to severe brain damage and even death in some cases. Currently, the only commonly employed treatments of ventriculitis involve an antibiotic regimen targeting the underlying infection causing the inflammation. Typically, the physician will order the patient to be placed on broad-spectrum antibiotics to manage the symptoms and control the infection while the cerebrospinal fluid samples are analyzed. When a specific bacterial or viral cause is found, the doctor will change the treatment accordingly. There is some debate as to the most effective antibiotics and the best ways to introduce the drugs (e.g. intravenously, orally, etc.); however, it is agreed that drug effectiveness is limited by the difficulty of getting the drugs into the cerebrospinal fluid non-invasively. Should intracranial pressure reach unsafe levels, the patient may need to have cerebrospinal fluid drained. Implanted external ventricular drains are one of the more common ways to manage and monitor intracranial pressure; however, there are several risks involved with such an invasive procedure, including the risk of further infection. A great deal of research is focused on the prevention of ventriculitis. It is crucial that any procedure involving exposing the brain is performed with the utmost care, as infections in the brain are very dangerous and potentially deadly. When patients undergo such procedures, they are often monitored closely over the next several days to ensure that no infections arise, and any instance of even a small headache is treated very seriously. It is also necessary to monitor the intracranial pressure of the patients often enough to observe significant changes that could indicate the presence of an infection and ensuing ventriculitis. It is important not to measure the pressure too often, however, as the measurement itself could lead to infection. Current research Due to the poor definition of ventriculitis, there is still a great deal that is not known about this dangerous condition. While other, similar conditions, such as meningitis or encephalitis, have been thoroughly researched, ventriculitis is a very loose grouping of conditions characterized by the fact that the lining of the ventricles is inflamed. Because no solid definition has been accepted across the medical community, research on the subject has been slow to progress. Most current research into ventriculitis has focused on the main points of causation, demographic information, and the effectiveness of treatments and prevention methods. Causation One of the key areas of research for ventriculitis is discovering and defining exactly what causes it. There are many bacterial and viral infections that can cause inflammation of the ventricles, but researchers are trying to define which are the most common pathogens, the risk levels associated with various medical operations and procedures, and why the symptoms vary so much on a case-by-case basis. Answering these questions will allow doctors not only to better understand ventriculitis, but to better treat and prevent it as well.
Demographics Currently, there is very little understanding of who is at increased risk for ventriculitis, other than those who have undergone neurosurgery or procedures involving brain exposure. Even then, current clinical practices can't predict which patients will be affected. In order to predict which populations should be focused on, researchers must gather more case information about who is diagnosed with ventriculitis and how they present. In essence, the medical community must compile as many details as possible from each case so that more generalized conclusions may be drawn. Treatment and prevention So little is currently known about how ventriculitis should be defined and whom it affects that even less can be known about prevention methods. While treatment is fairly standard for any infection to some degree, prevention is a different matter. One popular approach is the use of prophylactic antibiotics, administered during insertion of external ventricular drains or ventricular stents in the hope of preventing infection. The results of these studies have been largely inconclusive due to a lack of standardized protocols, showing no significant benefit to using antibiotics as a preventive measure. References == External links ==
Vitamin D deficiency
Vitamin D deficiency or hypovitaminosis D is a vitamin D level that is below normal. It most commonly occurs in people when they have inadequate exposure to sunlight, particularly sunlight with adequate ultraviolet B rays (UVB). Vitamin D deficiency can also be caused by inadequate nutritional intake of vitamin D; disorders that limit vitamin D absorption; and disorders that impair the conversion of vitamin D to active metabolites, including certain liver, kidney, and hereditary disorders. Deficiency impairs bone mineralization, leading to bone-softening diseases, such as rickets in children. It can also worsen osteomalacia and osteoporosis in adults, increasing the risk of bone fractures. Muscle weakness is also a common symptom of vitamin D deficiency, further increasing the risk of falls and bone fractures in adults. Vitamin D deficiency is associated with the development of schizophrenia. Vitamin D can be synthesized in the skin upon exposure to UVB from sunlight. Oily fish, such as salmon, herring, and mackerel, are also sources of vitamin D, as are mushrooms. Milk is often fortified with vitamin D, and sometimes bread, juices, and other dairy products are fortified as well. Many multivitamins contain vitamin D in different amounts. Classifications Vitamin D deficiency is typically diagnosed by measuring the concentration of 25-hydroxyvitamin D in the blood, which is the most accurate measure of stores of vitamin D in the body. One nanogram per millilitre (1 ng/mL) is equivalent to 2.5 nanomoles per litre (2.5 nmol/L). Severe deficiency: <12 ng/mL = <30 nmol/L. Deficiency: <20 ng/mL = <50 nmol/L. Insufficient: 20–29 ng/mL = 50–75 nmol/L. Normal: 30–50 ng/mL = 75–125 nmol/L. Vitamin D levels falling within this normal range prevent clinical manifestations of vitamin D insufficiency as well as vitamin D toxicity. Signs and symptoms In most cases, vitamin D deficiency is asymptomatic. It may only be detected on blood tests, but it is the cause of some bone diseases and is associated with other conditions: Complications Rickets, a childhood disease characterized by impeded growth and deformity of the long bones. The earliest sign of vitamin D deficiency is craniotabes, abnormal softening or thinning of the skull. Osteomalacia, a bone-thinning disorder that occurs exclusively in adults and is characterized by proximal muscle weakness and bone fragility. Women with vitamin D deficiency who have been through multiple pregnancies are at elevated risk of osteomalacia. Osteoporosis, a condition characterized by reduced bone mineral density and increased bone fragility. Increased risk of fracture. Muscle aches, weakness, and twitching (fasciculations), due to reduced blood calcium (hypocalcemia). Periodontitis, local inflammatory bone loss that can result in tooth loss. Pre-eclampsia: There has been an association between vitamin D deficiency and the development of pre-eclampsia in pregnancy. The exact relationship of these conditions is not well understood. Maternal vitamin D deficiency may affect the baby, causing overt bone disease from before birth and impairment of bone quality after birth. Respiratory infections and COVID-19: Vitamin D deficiency may increase the risk of severe acute respiratory infections and COPD. Emerging studies have suggested a link between vitamin D deficiency and COVID-19 symptoms.
A review has shown that vitamin D deficiency is not associated with a higher chance of having COVID-19 but is associated with greater severity of the disease, including 80% increases in the rates of hospitalization and mortality. Schizophrenia: Vitamin D deficiency is associated with the development of schizophrenia. People with schizophrenia generally have lower levels of vitamin D. The environmental risk factors of seasonality of birth, latitude, and migration linked to schizophrenia all implicate vitamin D deficiency, as do other health conditions such as maternal obesity. Vitamin D is essential for the normal development of the nervous system. Maternal vitamin D deficiency can cause prenatal neurodevelopmental defects, which influence neurotransmission, altering brain rhythms and the metabolism of dopamine. Vitamin D receptors, CYP27B1, and CYP24A1 are found in various regions of the brain, showing that vitamin D is a neuroactive, neurosteroid hormone essential for the development of the brain and normal function. Inflammation, a causative factor in schizophrenia, is normally suppressed by vitamin D. Risk factors Those most likely to be affected by vitamin D deficiency are people with little exposure to sunlight. Certain climates, dress habits, avoidance of sun exposure and the use of too much sunscreen protection can all limit the production of vitamin D. Age Elderly people have a higher risk of vitamin D deficiency due to a combination of several risk factors, including decreased sunlight exposure, decreased intake of vitamin D in the diet, and decreased skin thickness, which further reduces the synthesis of vitamin D from sunlight. Fat percentage Since vitamin D3 (cholecalciferol) and vitamin D2 (ergocalciferol) are fat-soluble, humans and other animals with a skeleton need some stored fat. Without fat, the animal will have a hard time absorbing vitamin D2 and vitamin D3, and the lower the fat percentage, the greater the risk of vitamin deficiency, as in some athletes who strive to get as lean as possible. Malnutrition Although rickets and osteomalacia are now rare in Britain, osteomalacia outbreaks in some immigrant communities included women with seemingly adequate daylight outdoor exposure wearing typical Western clothing. Having darker skin and reduced exposure to sunshine did not produce rickets unless the diet deviated from a Western omnivore pattern characterized by high intakes of meat, fish, and eggs and low intakes of high-extraction cereals. In sunny countries where rickets occurs among older toddlers and children, rickets has been attributed to low dietary calcium intakes. This is characteristic of cereal-based diets with limited access to dairy products. Rickets was formerly a major public health problem among the US population; in Denver, almost two-thirds of 500 children had mild rickets in the late 1920s. An increase in the proportion of animal protein in the 20th-century American diet, coupled with increased consumption of milk fortified with relatively small quantities of vitamin D, coincided with a dramatic decline in the number of rickets cases. One study of children in a hospital in Uganda, however, showed no significant difference in vitamin D levels of malnourished children compared to non-malnourished children. Because both groups were at risk due to darker skin pigmentation, both groups had vitamin D deficiency. Nutritional status did not appear to play a role in this study.
Obesity There is an increased risk of vitamin D deficiency in people who are considered overweight or obese based on their body mass index (BMI) measurement. The relationship between these conditions is not well understood. Different factors could contribute to this relationship, particularly diet and sunlight exposure. Alternatively, vitamin D is fat-soluble, so excess amounts can be stored in fat tissue and used during winter, when sun exposure is limited. Sun exposure The use of sunscreen with a sun protection factor of 8 can theoretically inhibit more than 95% of vitamin D production in the skin. In practice, however, sunscreen is applied so as to have a negligible effect on vitamin D status. The vitamin D status of those in Australia and New Zealand is unlikely to have been affected by campaigns advocating sunscreen. Instead, wearing clothing is more effective at reducing the amount of skin exposed to UVB and reducing natural vitamin D synthesis. Clothing that covers a large portion of the skin, when worn on a consistent and regular basis, such as the burqa, is correlated with lower vitamin D levels and an increased prevalence of vitamin D deficiency. Regions far from the equator have a high seasonal variation in the amount and intensity of sunlight. In the UK, the prevalence of low vitamin D status in children and adolescents is found to be higher in winter than in summer. Lifestyle factors such as indoor versus outdoor work and time spent in outdoor recreation play an important role. Additionally, vitamin D deficiency has been associated with urbanisation, in terms of both air pollution, which blocks UV light, and an increase in the number of people working indoors. The elderly are generally exposed to less UV light due to hospitalisation, immobility, institutionalisation, and being housebound, leading to decreased levels of vitamin D. Darker skin color Because melanin provides natural sun protection, dark-skinned people are more susceptible to vitamin D deficiency. Three to five times greater sun exposure is necessary for naturally darker-skinned people to produce the same amount of vitamin D as those with white skin. Malabsorption Rates of vitamin D deficiency are higher among people with untreated celiac disease, inflammatory bowel disease, exocrine pancreatic insufficiency from cystic fibrosis, and short bowel syndrome, all of which can produce problems of malabsorption. Vitamin D deficiency is also more common after surgical procedures that reduce absorption from the intestine, including weight loss procedures. Critical illness Vitamin D deficiency is associated with increased mortality in critical illness. People who take vitamin D supplements before being admitted for intensive care are less likely to die than those who do not. Additionally, vitamin D levels decline during stays in intensive care. Vitamin D3 (cholecalciferol) or calcitriol given orally may reduce the mortality rate without significant adverse effects. Breastfeeding Exclusively breastfed infants need a vitamin D supplement, especially if they have dark skin or minimal sun exposure. The American Academy of Pediatrics recommends that all breastfed infants receive 400 international units (IU) per day of oral vitamin D. Pathophysiology Decreased exposure of the skin to sunlight is a common cause of vitamin D deficiency. People with darker skin pigment, with increased amounts of melanin, may have decreased production of vitamin D.
Melanin absorbs ultraviolet B radiation from the sun and reduces vitamin D production. Sunscreen can also reduce vitamin D production. Medications may speed up the metabolism of vitamin D, causing a deficiency. The liver is required to transform vitamin D into 25-hydroxyvitamin D. This is an inactive metabolite of vitamin D but is a necessary precursor (building block) to create the active form of vitamin D. The kidneys are responsible for converting 25-hydroxyvitamin D to 1,25-dihydroxyvitamin D. This is the active form of vitamin D in the body. Kidney disease reduces 1,25-dihydroxyvitamin D formation, leading to a deficiency of the effects of vitamin D. Intestinal conditions that result in malabsorption of nutrients may also contribute to vitamin D deficiency by decreasing the amount of vitamin D absorbed via the diet. In addition, a vitamin D deficiency may lead to decreased absorption of calcium by the intestines, resulting in increased production of osteoclasts that may break down a person's bone matrix. In states of hypocalcemia, calcium will leave the bones and may give rise to secondary hyperparathyroidism, which is a response by the body to increase serum calcium levels. The body does this by increasing reabsorption of calcium by the kidneys and continuing to take calcium away from the bones. If prolonged, this may lead to osteoporosis in adults and rickets in children. Diagnosis The serum concentration of calcifediol, also called 25-hydroxyvitamin D (abbreviated 25(OH)D), is typically used to determine vitamin D status. Most vitamin D is converted to 25(OH)D in the serum, giving an accurate picture of vitamin D status. The level of serum 1,25(OH)D is not usually used to determine vitamin D status because it is often regulated by other hormones in the body, such as parathyroid hormone. Levels of 1,25(OH)D can remain normal even when a person is vitamin D deficient. The serum level of 25(OH)D is the laboratory test ordered to indicate whether or not a person has vitamin D deficiency or insufficiency. It is also considered reasonable to treat at-risk persons with vitamin D supplementation without checking the level of 25(OH)D in the serum, as vitamin D toxicity has only rarely been reported to occur. Levels of 25(OH)D that are consistently above 200 nanograms per milliliter (ng/mL) (or 500 nanomoles per liter, nmol/L) are potentially toxic. Vitamin D toxicity usually results from taking supplements in excess. Hypercalcemia is often the cause of symptoms, and levels of 25(OH)D above 150 ng/mL (375 nmol/L) are usually found, although in some cases 25(OH)D levels may appear to be normal. Periodic measurement of serum calcium in individuals receiving large doses of vitamin D is recommended. Screening The official recommendation from the United States Preventive Services Task Force is that, for persons who do not fall within an at-risk population and are asymptomatic, there is not enough evidence to prove that there is any benefit in screening for vitamin D deficiency. Treatment UVB exposure Vitamin D overdose is impossible from UV exposure: the skin reaches an equilibrium where the vitamin degrades as fast as it is created. Sun tanning Light therapy Exposure to photons (light) at specific wavelengths of narrowband UVB enables the body to produce vitamin D to treat vitamin D deficiency. Supplement In the United States and Canada as of 2016, the amount of vitamin D recommended is 400 IU per day for children, 600 IU per day for adults, and 800 IU per day for people over age 70.
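These intakes are quoted in international units (IU); for vitamin D, 40 IU correspond to 1 microgram (the History section below notes that 400 IU equals 10 micrograms). As a purely illustrative aid, not dosing guidance, the short sketch below encodes that conversion together with the intake figures just quoted; the exact age boundaries used for "children" and "adults" are assumptions made for the example.

```python
# Illustrative sketch only, not medical or dosing advice.
# Encodes the IU <-> microgram equivalence for vitamin D (40 IU = 1 µg)
# and the US/Canada recommended daily intakes quoted in the text above.

IU_PER_MICROGRAM = 40  # 1 µg of vitamin D = 40 IU, so 400 IU = 10 µg

def iu_to_micrograms(iu: float) -> float:
    """Convert a vitamin D quantity from IU to micrograms."""
    return iu / IU_PER_MICROGRAM

def recommended_daily_iu(age_years: float) -> int:
    """Recommended intake per the text: 400 IU for children, 600 IU for
    adults, and 800 IU over age 70. The cut-off of 18 for adulthood is
    an assumption for this example, not part of the source text."""
    if age_years < 18:
        return 400
    if age_years <= 70:
        return 600
    return 800

for age in (8, 35, 75):
    iu = recommended_daily_iu(age)
    print(f"age {age}: {iu} IU/day = {iu_to_micrograms(iu):.0f} µg/day")
```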
The Canadian Paediatric Society recommends that pregnant or breastfeeding women consider taking 2,000 IU/day, that all babies who are exclusively breastfed receive a supplement of 400 IU/day, and that babies living north of 55°N get 800 IU/day from October to April. Treating vitamin D deficiency depends on the severity of the deficit. Treatment involves an initial high-dosage phase until the required serum levels are reached, followed by maintenance of the acquired levels. The lower the 25(OH)D serum concentration is before treatment, the higher the dosage needed to quickly reach an acceptable serum level. The initial high-dosage treatment can be given on a daily or weekly basis or can be given in the form of one or several single doses (also known as stoss therapy, from the German word Stoß, meaning "push"). Therapy prescriptions vary, and there is no consensus yet on how best to arrive at an optimum serum level. While there is evidence that vitamin D3 raises 25(OH)D blood levels more effectively than vitamin D2, other evidence indicates that D2 and D3 are equal for maintaining 25(OH)D status. Initial phase Daily, weekly, or monthly dose For treating rickets, the American Academy of Pediatrics (AAP) has recommended that pediatric patients receive an initial two- to three-month course of "high-dose" vitamin D therapy. In this regimen, the daily dose of cholecalciferol is 1,000 IU for newborns, 1,000 to 5,000 IU for 1- to 12-month-old infants, and 5,000 IU for patients over 1 year of age. For adults, other dosages have been called for. A review of 2008/2009 recommended dosages of 1,000 IU cholecalciferol per 10 ng/mL required serum increase, to be given daily over two to three months. In another proposed cholecalciferol loading-dose guideline for vitamin D-deficient adults, a weekly dosage is given, up to a total amount that is proportional to the required serum increase (up to the level of 75 nmol/L) and, within certain limits, to body weight. According to new data and practices relevant to vitamin D levels in the general population in France, to establish optimal vitamin D status and the frequency of intermittent supplement dosing, patients with or at high risk for osteoporosis and vitamin D deficiency should start supplementation with a loading phase consisting of 50,000 IU weekly of vitamin D for 8 weeks in patients with levels <20 ng/mL and 50,000 IU weekly for 4 weeks in patients with levels between 20 and 30 ng/mL. Subsequently, long-term supplementation should be prescribed as 50,000 IU monthly. Should pharmaceutical forms suitable for daily supplementation become available, patients displaying good treatment adherence could take a daily dose determined based on the 25(OH)D level. To date, there are no consistent data suggesting the ideal regimen of supplementation with vitamin D, and the question of the ideal time between doses is still debated. Ish-Shalom et al. performed a study in elderly women to compare the efficacy and safety of a daily dose of 1,500 IU to a weekly dose of 10,500 IU and to a dose of 45,000 IU given every 28 days for two months. They concluded that supplementation with vitamin D can be equally achieved with daily, weekly, or monthly dosing frequencies. Another study comparing daily, weekly, and monthly supplementation of vitamin D in deficient patients was published by Takacs et al. They reported equal efficacy of 1,000 IU taken daily, 7,000 IU taken weekly, and 30,000 IU taken monthly.
Nevertheless, these findings differ from the report by Chel et al., in which a daily dose was more effective than a monthly dose. In that study, the compliance calculation could be questionable, since only random samples of the returned medications were counted. In a study by De Niet et al., 60 subjects with vitamin D deficiency were randomized to receive 2,000 IU vitamin D3 daily or 50,000 IU monthly. They reported a similar efficacy of the two dosing frequencies, with the monthly dose providing a more rapid normalization of vitamin D levels. Single-dose therapy Alternatively, a single-dose therapy is used, for instance, if there are concerns regarding the patient's compliance. The single-dose therapy can be given as an injection but is normally given in the form of an oral medication. Vitamin D doses and meals The presence of a meal and the fat content of that meal may also be important. Because vitamin D is fat-soluble, it is hypothesized that absorption would be improved if patients are instructed to take their supplement with a meal. Raimundo et al. performed several studies confirming that a high-fat meal increased the absorption of vitamin D3 as measured by serum 25(OH)D. A clinical report indicated that serum 25(OH)D levels increased by an average of 57% over a 2-month to 3-month period in 17 clinic patients after they were instructed to take their usual dose of vitamin D with the largest meal of the day. Another study, conducted in 152 healthy men and women, concluded that diets rich in monounsaturated fatty acids may improve, and those rich in polyunsaturated fatty acids may reduce, the effectiveness of vitamin D3 supplements. In another study, performed by Cavalier E. et al., 88 subjects received a single oral dose of 50,000 IU of vitamin D3 solubilized in an oily solution as two ampoules each containing 25,000 IU (D-CURE®, Laboratories SMB SA, Brussels, Belgium), with or without a standardized high-fat breakfast. No significant difference between fasting and fed conditions was observed. Maintenance phase Once the desired serum level has been achieved, be it by a high daily, weekly, or monthly dose or by a single-dose therapy, the AAP recommendation calls for maintenance supplementation of 400 IU for all age groups, with this dosage being doubled for premature infants; dark-skinned infants and children; children who reside in areas of limited sun exposure (>37.5° latitude); obese patients; and those on certain medications. Special cases To maintain blood levels of calcium, therapeutic vitamin D doses are sometimes administered (up to 100,000 IU or 2.5 mg daily) to patients who have had their parathyroid glands removed (most commonly kidney dialysis patients who have had tertiary hyperparathyroidism, but also patients with primary hyperparathyroidism) or who have hypoparathyroidism. Patients with chronic liver disease or intestinal malabsorption disorders may also require larger doses of vitamin D (up to 40,000 IU or 1 mg (1,000 micrograms) daily). Co-supplementation with vitamin K The combination of vitamin D and vitamin K supplements has been shown in trials to improve bone quality. As high intake of vitamin D is a cause of raised calcium levels (hypercalcemia), the addition of vitamin K may be beneficial in helping to prevent vascular calcification, particularly in people with chronic kidney disease. Bioavailability Not all vitamin D3 deficiencies can be effectively supplemented or treated with vitamin D3 on its own.
Older people and those who have fatty liver or metabolic syndrome have a reduced ability to absorb vitamin D3. In addition, in overweight or obese persons, excessive adipose tissue can sequester D3 from the circulation and reduce its access to other tissues. With age or in obesity, metabolic activation of D3 may be reduced by liver steatosis or by microbiome imbalance. For vitamin D3 to perform its hormonal roles, it is converted into its biologically active metabolite, calcifediol, also known as 25-hydroxyvitamin D3, an activation occurring by a hydroxylation reaction in the liver via the cytochrome P450 system, and in the gut microbiome. Epidemiology The estimated percentage of the population with a vitamin D deficiency varies based on the threshold used to define a deficiency. Recommendations for 25(OH)D serum levels vary across authorities, and probably vary based on factors like age; calculations for the epidemiology of vitamin D deficiency depend on the recommended level used. A 2011 Institute of Medicine (IOM) report set the sufficiency level at 20 ng/mL (50 nmol/L), while in the same year the Endocrine Society defined sufficient serum levels at 30 ng/mL, and others have set the level as high as 60 ng/mL. As of 2011, most reference labs used the 30 ng/mL standard. Applying the IOM standard to NHANES data on serum levels, 22% of the US population was deficient for the period from 1988 to 1994, and 36% was deficient for the period between 2001 and 2004; applying the Endocrine Society standard, 55% of the US population was deficient between 1988 and 1994, and 77% was deficient for the period between 2001 and 2004. In 2011, the Centers for Disease Control and Prevention applied the IOM standard to NHANES data on serum levels collected between 2001 and 2006, and determined that 32% of Americans were deficient during that period (8% at risk of deficiency, and 24% at risk of inadequacy). History The role of diet in the development of rickets was determined by Edward Mellanby between 1918 and 1920. In 1921, Elmer McCollum identified an antirachitic substance found in certain fats that could prevent rickets. Because the newly discovered substance was the fourth vitamin identified, it was called vitamin D. The 1928 Nobel Prize in Chemistry was awarded to Adolf Windaus, who discovered the steroid 7-dehydrocholesterol, the precursor of vitamin D. Prior to the fortification of milk products with vitamin D, rickets was a major public health problem. In the United States, milk has been fortified with 10 micrograms (400 IU) of vitamin D per quart since the 1930s, leading to a dramatic decline in the number of rickets cases. Research Some evidence suggests vitamin D deficiency may be associated with a worse outcome for some cancers, but evidence is insufficient to recommend that vitamin D be prescribed for people with cancer. Taking vitamin D supplements has no significant effect on cancer risk. Vitamin D3, however, appears to decrease the risk of death from cancer, though concerns about the quality of the data exist. Vitamin D deficiency is thought to play a role in the pathogenesis of non-alcoholic fatty liver disease. Evidence suggests that vitamin D deficiency may be associated with impaired immune function. Those with vitamin D deficiency may have trouble fighting off certain types of infections. It has also been thought to correlate with cardiovascular disease, type 1 diabetes, type 2 diabetes, and some cancers. Review studies have also found associations between vitamin D deficiency and pre-eclampsia.
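The unit conversion and status cut-offs given in the Classifications section above lend themselves to a compact worked example. The sketch below is illustrative only, not a diagnostic tool: it restates the published thresholds and the 1 ng/mL = 2.5 nmol/L conversion, and its handling of values above 50 ng/mL is an assumption, since the article defines "normal" as 30–50 ng/mL without naming the band above it.

```python
# Illustrative sketch of the serum 25(OH)D cut-offs quoted in the
# Classifications section (1 ng/mL = 2.5 nmol/L). Not a diagnostic
# tool; as the Epidemiology section notes, thresholds vary between
# authorities.

NMOL_PER_NG = 2.5  # 1 ng/mL of 25(OH)D = 2.5 nmol/L

def ng_to_nmol(ng_per_ml: float) -> float:
    """Convert a 25(OH)D concentration from ng/mL to nmol/L."""
    return ng_per_ml * NMOL_PER_NG

def classify_25ohd(ng_per_ml: float) -> str:
    """Map a 25(OH)D level to the status bands quoted in the text."""
    if ng_per_ml < 12:
        return "severe deficiency"
    if ng_per_ml < 20:
        return "deficiency"
    if ng_per_ml < 30:
        return "insufficient"
    if ng_per_ml <= 50:
        return "normal"
    return "above the quoted normal range"  # assumption: band not named in text

for level in (8, 15, 25, 40, 60):
    print(f"{level} ng/mL = {ng_to_nmol(level):.0f} nmol/L: {classify_25ohd(level)}")
```

Applied to the toxicity figures quoted in the Diagnosis section, the same conversion gives 150 ng/mL = 375 nmol/L and 200 ng/mL = 500 nmol/L, matching the values stated there.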
See also Hypervitaminosis D Vitamin D deficiency in Australia References == External links ==
Vitamin E deficiency
Vitamin E deficiency in humans is a very rare condition, occurring as a consequence of abnormalities in dietary fat absorption or metabolism rather than from a diet low in vitamin E. Collectively, the EARs, RDAs, AIs and ULs for vitamin E and other essential nutrients are referred to as Dietary Reference Intakes (DRIs). Vitamin E deficiency can cause nerve problems because of poor conduction of electrical impulses along nerves, resulting from changes in nerve membrane structure and function. Signs and symptoms Signs of vitamin E deficiency include the following: Neuromuscular problems – such as spinocerebellar ataxia and myopathies. Neurological problems – may include dysarthria, absence of deep tendon reflexes, loss of the ability to sense vibration and detect where body parts are in three-dimensional space, and a positive Babinski sign. Hemolytic anemia – due to oxidative damage to red blood cells. Retinopathy. Impairment of the immune response. Causes Vitamin E deficiency is rare. There are no records of it arising from a simple lack of vitamin E in a person's diet, but it can arise from physiological abnormalities. It occurs in people in the following situations: Premature, very low birth weight infants – birth weights less than 1,500 grams (3.3 pounds). Rare disorders of fat metabolism – There is a rare genetic condition termed isolated vitamin E deficiency, or ataxia with isolated vitamin E deficiency, caused by mutations in the gene for the tocopherol transfer protein. These individuals have an extremely poor capacity to absorb vitamin E and develop neurological complications that are reversed by high doses of vitamin E. Fat malabsorption – Some dietary fat is needed for the absorption of vitamin E from the gastrointestinal tract. Anyone diagnosed with cystic fibrosis, individuals who have had part or all of their stomach removed or who have had a gastric bypass, and individuals with malabsorptive problems such as Crohn's disease, liver disease or exocrine pancreatic insufficiency may not absorb fat (people who cannot absorb fat often pass greasy stools or have chronic diarrhea and bloating). Abetalipoproteinemia is a rare inherited disorder of fat metabolism that results in poor absorption of dietary fat and vitamin E. The vitamin E deficiency associated with this disease causes problems such as poor transmission of nerve impulses and muscle weakness. Diagnosis The U.S. Institute of Medicine defines deficiency as a serum concentration of less than 12 μmol/L. The symptoms can be enough for a diagnosis to be formed. Treatment Treatment is oral vitamin E supplementation. See also Familial isolated vitamin E deficiency Abetalipoproteinemia Tocopherol References == External links ==
Waldenström macroglobulinemia
Waldenström macroglobulinemia is a type of cancer affecting two types of B cells: lymphoplasmacytoid cells and plasma cells. Both cell types are white blood cells. It is characterized by high levels of a circulating antibody, immunoglobulin M (IgM), which is made and secreted by the cells involved in the disease. Waldenström macroglobulinemia is an "indolent lymphoma" (i.e., one that tends to grow and spread slowly) and a type of lymphoproliferative disease which shares clinical characteristics with the indolent non-Hodgkin lymphomas. It is commonly classified as a form of plasma cell dyscrasia, similar to other plasma cell dyscrasias that, for example, lead to multiple myeloma. Waldenström macroglobulinemia is commonly preceded by two clinically asymptomatic but progressively more pre-malignant phases, IgM monoclonal gammopathy of undetermined significance and smoldering Waldenström macroglobulinemia. The Waldenström macroglobulinemia spectrum of dysplasias differs from other spectrums of plasma cell dyscrasias in that it involves not only aberrant plasma cells but also aberrant lymphoplasmacytoid cells, and in that it involves IgM, while other plasma dyscrasias involve other antibody isoforms. Waldenström macroglobulinemia is a rare disease, with only about 1,500 cases per year in the United States. It occurs more frequently in older adults. While the disease is incurable, it is treatable. Because of its indolent nature, many patients are able to lead active lives, and when treatment is required, may experience years of symptom-free remission. Signs and symptoms Signs and symptoms of Waldenström macroglobulinemia include weakness, fatigue, weight loss, and chronic oozing of blood from the nose and gums. Peripheral neuropathy occurs in 10% of patients. Enlargement of the lymph nodes, spleen, and/or liver is present in 30–40% of cases. Other possible signs and symptoms include blurring or loss of vision, headache, and (rarely) stroke or coma. Causes Waldenström macroglobulinemia is characterized by an uncontrolled clonal proliferation of terminally differentiated B lymphocytes. The most commonly associated mutations, based on whole-genome sequencing of 30 patients, are a somatic mutation in MYD88 (90% of patients) and a somatic mutation in CXCR4 (27% of patients). CXCR4 mutations cause the symptomatic hyperviscosity syndrome and high bone marrow activity characteristic of the disease. However, CXCR4 mutation is not associated with splenomegaly, high platelet counts, or a different response to therapy, calling into question the relevance of CXCR4 in treating patients. An association has been demonstrated with the locus 6p21.3 on chromosome 6. There is a two- to threefold increased risk of Waldenström macroglobulinemia in people with a personal history of autoimmune diseases with autoantibodies, and a particularly elevated risk associated with liver inflammation, human immunodeficiency virus, and rickettsiosis. There are genetic factors, with first-degree relatives of Waldenström macroglobulinemia patients shown to have a highly increased risk of also developing the disease. There is also evidence to suggest that environmental factors, including exposure to farming, pesticides, wood dust, and organic solvents, may influence the development of Waldenström macroglobulinemia. Genetics Although believed to be a sporadic disease, studies have shown increased susceptibility within families, indicating a genetic component. A mutation in the gene MYD88 has been found to occur frequently in patients.
Waldenström macroglobulinemia cells show only minimal changes in cytogenetic and gene expression studies. Their miRNA signature, however, differs from that of their normal counterparts. It is therefore believed that epigenetic modifications play a crucial role in the disease. Comparative genomic hybridization identified the following chromosomal abnormalities: deletions of 6q23 and 13q14, and gains of 3q13–q28, 6p and 18q. FGFR3 is overexpressed. The following signalling pathways have been implicated: CD154/CD40; Akt; ubiquitination, p53 activation, and cytochrome c release; NF-κB; WNT/beta-catenin; mTOR; ERK; MAPK; and Bcl-2. The protein Src tyrosine kinase is overexpressed in Waldenström macroglobulinemia cells compared with control B cells. Inhibition of Src arrests the cell cycle at phase G1 and has little effect on the survival of Waldenström macroglobulinemia or normal cells. MicroRNAs involved in Waldenström macroglobulinemia include increased expression of miRNAs -363*, -206, -494, -155, -184, and -542-3p, and decreased expression of miRNA-9*. MicroRNA-155 regulates the proliferation and growth of Waldenström macroglobulinemia cells in vitro and in vivo, by inhibiting the MAPK/ERK, PI3/AKT, and NF-κB pathways. In Waldenström macroglobulinemia cells, histone deacetylases and histone-modifying genes are de-regulated. Bone marrow tumour cells express the following antigen targets: CD20 (98.3%), CD22 (88.3%), CD40 (83.3%), CD52 (77.4%), IgM (83.3%), MUC1 core protein (57.8%), and 1D10 (50%). Pathophysiology Symptoms including blurring or loss of vision, headache, and (rarely) stroke or coma are due to the effects of the IgM paraprotein, which may cause autoimmune phenomena or cryoglobulinemia. Other symptoms of Waldenström macroglobulinemia are due to hyperviscosity syndrome, which is present in 6–20% of patients. This is attributed to the IgM monoclonal protein increasing the viscosity of the blood: the molecules form aggregates with each other, bind water through their carbohydrate component, and interact with blood cells. Diagnosis A diagnosis of Waldenström macroglobulinemia depends on a significant monoclonal IgM spike evident in blood tests and malignant cells consistent with the disease in bone marrow biopsy samples. Blood tests show the level of IgM in the blood and the presence of proteins, or tumor markers, that are the key signs of Waldenström macroglobulinemia. A bone marrow biopsy provides a sample of bone marrow, usually from the lower back of the pelvic bone. The sample is extracted through a needle and examined under a microscope. A pathologist identifies the particular lymphocytes that indicate Waldenström macroglobulinemia. Flow cytometry may be used to examine markers on the cell surface or inside the lymphocytes. Additional tests such as a computed tomography (CT or CAT) scan may be used to evaluate the chest, abdomen, and pelvis, particularly swelling of the lymph nodes, liver, and spleen. A skeletal survey can help distinguish between Waldenström macroglobulinemia and multiple myeloma. Anemia occurs in about 80% of patients with Waldenström macroglobulinemia. A low white blood cell count and low platelet count in the blood may be observed. A low level of neutrophils (a specific type of white blood cell) may also be found in some individuals with Waldenström macroglobulinemia. Chemistry tests include lactate dehydrogenase (LDH) levels, uric acid levels, erythrocyte sedimentation rate (ESR), kidney and liver function, total protein levels, and an albumin-to-globulin ratio. The ESR and uric acid level may be elevated.
Creatinine is occasionally elevated, and electrolytes are occasionally abnormal. A high blood calcium level is noted in approximately 4% of patients. The LDH level is frequently elevated, indicating the extent of Waldenström macroglobulinemia–related tissue involvement. Rheumatoid factor, cryoglobulins, direct antiglobulin test and cold agglutinin titre results can be positive. Beta-2 microglobulin and C-reactive protein test results are not specific for Waldenström macroglobulinemia. Beta-2 microglobulin is elevated in proportion to tumor mass. Coagulation abnormalities may be present. Prothrombin time, activated partial thromboplastin time, thrombin time, and fibrinogen tests should be performed. Platelet aggregation studies are optional. Serum protein electrophoresis results indicate evidence of a monoclonal spike but cannot establish the spike as IgM. An M component with beta-to-gamma mobility is highly suggestive of Waldenström macroglobulinemia. Immunoelectrophoresis and immunofixation studies help identify the type of immunoglobulin, the clonality of the light chain, and the monoclonality and quantitation of the paraprotein. High-resolution electrophoresis and serum and urine immunofixation are recommended to help identify and characterize the monoclonal IgM paraprotein. The light chain of the monoclonal protein is usually the kappa light chain. At times, patients with Waldenström macroglobulinemia may exhibit more than one M protein. Plasma viscosity must be measured. Results from characterization studies of urinary immunoglobulins indicate that light chains (Bence Jones protein), usually of the kappa type, are found in the urine. Urine collections should be concentrated. Bence Jones proteinuria is observed in approximately 40% of patients and exceeds 1 g/d in approximately 3% of patients. Patients with findings of peripheral neuropathy should have nerve conduction studies and anti-myelin-associated glycoprotein serology. Criteria for the diagnosis of Waldenström macroglobulinemia include: IgM monoclonal gammopathy that excludes chronic lymphocytic leukemia and mantle cell lymphoma; and evidence of anemia, constitutional symptoms, hyperviscosity, swollen lymph nodes, or enlargement of the liver and spleen that can be attributed to an underlying lymphoproliferative disorder. Treatment There is no single accepted treatment for Waldenström macroglobulinemia. There is marked variation in clinical outcome due to gaps in knowledge of the disease's molecular basis. Objective response rates are high (>80%), but complete response rates are low (0–15%). The medication ibrutinib targets the MYD88 L265P mutation–induced activation of Bruton's tyrosine kinase. In a cohort study of previously treated patients, ibrutinib induced responses in 91% of patients, and at 2 years, 69% of patients had no progression of disease and 95% were alive. Based on this study, the Food and Drug Administration approved ibrutinib for use in Waldenström macroglobulinemia in 2015. There are different treatment flowcharts: Treon and mSMART. Patients with Waldenström macroglobulinemia are at higher risk of developing second cancers than the general population, but it is not yet clear whether treatments are contributory. Watchful waiting In the absence of symptoms, many clinicians will recommend simply monitoring the patient; Waldenström himself stated "let well do" for such patients.
These asymptomatic cases are now classified as two successively more pre-malignant phases, IgM monoclonal gammopathy of undetermined significance and smoldering Waldenström macroglobulinemia. But on occasion, the disease can be fatal, as it was to the French president Georges Pompidou, who died in office in 1974. Mohammad Reza Shah Pahlavi, the Shah of Iran, also had Waldenström macroglobulinemia, which resulted in his ill-fated trip to the United States for therapy in 1979, leading to the Iran hostage crisis. First-line Should treatment be started, it should address both the paraprotein level and the lymphocytic B-cells. In 2002, a panel at the International Workshop on Waldenström's Macroglobulinemia agreed on criteria for the initiation of therapy. They recommended starting therapy in patients with constitutional symptoms such as recurrent fever, night sweats, fatigue due to anemia, weight loss, progressive symptomatic lymphadenopathy or spleen enlargement, and anemia due to bone marrow infiltration. Complications such as hyperviscosity syndrome, symptomatic sensorimotor peripheral neuropathy, systemic amyloidosis, kidney failure, or symptomatic cryoglobulinemia were also suggested as indications for therapy. Treatment includes the monoclonal antibody rituximab, sometimes in combination with chemotherapeutic drugs such as chlorambucil, cyclophosphamide, or vincristine, or with thalidomide. Corticosteroids, such as prednisone, may also be used in combination. Plasmapheresis can be used to treat the hyperviscosity syndrome by removing the paraprotein from the blood, although it does not address the underlying disease. Ibrutinib is another agent that has been approved for use in this condition. Combination treatment with ibrutinib and rituximab showed significantly higher progression-free survival than rituximab treatment alone. Autologous bone marrow transplantation is a treatment option. Zanubrutinib is indicated for the treatment of adults with Waldenström macroglobulinemia. Salvage therapy When primary or secondary resistance invariably develops, salvage therapy is considered. Allogeneic stem cell transplantation can induce durable remissions in heavily pre-treated patients. Drug pipeline As of October 2010, there had been a total of 44 clinical trials on Waldenström macroglobulinemia, excluding transplantation treatments. Of these, 11 were performed on previously untreated patients and 14 in patients with relapsed or refractory Waldenström macroglobulinemia. A database of clinical trials investigating Waldenström macroglobulinemia is maintained by the National Institutes of Health in the US. Patient stratification Polymorphic variants (alleles) FCGR3A-48 and -158 have been associated with improved categorical responses to rituximab-based treatments. Prognosis Current medical treatments result in survival of more than 10 years for some patients; in part this is because better diagnostic testing means earlier diagnosis and treatment. Older diagnosis and treatments resulted in published reports of median survival of approximately 5 years from the time of diagnosis. Currently, median survival is 6.5 years. In rare instances, Waldenström macroglobulinemia progresses to multiple myeloma. The International Prognostic Scoring System for Waldenström's Macroglobulinemia is a predictive model used to characterise long-term outcomes.
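To make the scoring arithmetic concrete, the sketch below encodes the adverse variables and risk categories that the next paragraph spells out. It is a simplified illustration of the published model, not a clinical tool, and the field names are invented for the example.

```python
# Illustrative sketch of the International Prognostic Scoring System for
# Waldenström's Macroglobulinemia, per the criteria described below:
# five adverse variables are counted; risk is low (at most 1 adverse
# variable, excluding age), intermediate (2 adverse characteristics, or
# age > 65), or high (> 2). Not a clinical tool; field names are invented.

from dataclasses import dataclass

@dataclass
class Findings:
    age_years: int
    hemoglobin_g_dl: float
    platelets_per_l: float          # e.g. 90e9 for 90×10⁹/L
    beta2_microglobulin_mg_l: float
    monoclonal_protein_g_l: float

def ipss_wm_risk(f: Findings) -> str:
    age_adverse = f.age_years > 65
    adverse = age_adverse + sum([
        f.hemoglobin_g_dl <= 11.5,
        f.platelets_per_l <= 100e9,
        f.beta2_microglobulin_mg_l > 3.0,
        f.monoclonal_protein_g_l > 70.0,
    ])
    if adverse > 2:
        return "high"
    if adverse == 2 or age_adverse:
        return "intermediate"
    return "low"  # at most one adverse variable, and it is not age

# A 58-year-old with unremarkable values scores no adverse variables: "low".
print(ipss_wm_risk(Findings(58, 12.0, 150e9, 2.0, 40.0)))
```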
According to the model, factors predicting reduced survival are: age > 65 years; hemoglobin ≤ 11.5 g/dL; platelet count ≤ 100×10⁹/L; beta-2 microglobulin > 3 mg/L; and serum monoclonal protein concentration > 70 g/L. The risk categories are: low (≤ 1 adverse variable, except age); intermediate (2 adverse characteristics, or age > 65 years); and high (> 2 adverse characteristics). Five-year survival rates for these categories are 87%, 68% and 36%, respectively. The corresponding median survival times are 12, 8, and 3.5 years. The International Prognostic Scoring System for Waldenström's Macroglobulinemia has been shown to be reliable. It is also applicable to patients on a rituximab-based treatment regimen. An additional predictive factor is elevated serum lactate dehydrogenase (LDH). Epidemiology Of cancers involving the lymphocytes, 1% of cases are Waldenström macroglobulinemia. Waldenström macroglobulinemia is a rare disorder, with fewer than 1,500 cases occurring in the United States annually. The median age of onset is between 60 and 65 years, with some cases occurring in the late teens. History Waldenström macroglobulinemia was first described by Jan G. Waldenström (1906–1996) in 1944 in two patients with bleeding from the nose and mouth, anemia, decreased levels of fibrinogen in the blood (hypofibrinogenemia), swollen lymph nodes, neoplastic plasma cells in bone marrow, and increased viscosity of the blood due to increased levels of a class of heavy proteins called macroglobulins. For a time, Waldenström macroglobulinemia was considered to be related to multiple myeloma because of the presence of monoclonal gammopathy and infiltration of the bone marrow and other organs by plasmacytoid lymphocytes. The newer World Health Organization (WHO) classification, however, places Waldenström macroglobulinemia under the category of lymphoplasmacytic lymphomas, itself a subcategory of the indolent (low-grade) non-Hodgkin lymphomas. Since the 1990s, there have been significant advances in the understanding and treatment of Waldenström macroglobulinemia. See also List of hematologic conditions Waldenström hyperglobulinemic purpura == References ==
Wart
Warts are typically small, rough, hard growths that are similar in color to the rest of the skin. They typically do not result in other symptoms, except when on the bottom of the feet, where they may be painful. While they usually occur on the hands and feet, they can also affect other locations. One or many warts may appear. They are not cancerous. Warts are caused by infection with a type of human papillomavirus (HPV). Factors that increase the risk include use of public showers and pools, working with meat, eczema and a weak immune system. The virus is believed to enter the body through skin that has been damaged slightly. A number of types exist, including "common warts", plantar warts, "filiform warts", and genital warts. Genital warts are often sexually transmitted. Without treatment, most types of warts resolve in months to years. A number of treatments may speed resolution, including salicylic acid applied to the skin and cryotherapy. In those who are otherwise healthy, they do not typically result in significant problems. Treatment of genital warts differs from that of other types. Warts are very common, with most people being infected at some point in their lives. The estimated current rate of non-genital warts among the general population is 1–13%. They are more common among young people. Prior to widespread adoption of the HPV vaccine, the estimated rate of genital warts in sexually active women was 12%. Warts have been described at least as far back as 400 BC by Hippocrates. Types A range of types of wart have been identified, varying in shape and site affected, as well as the type of human papillomavirus involved. These include: Common wart (verruca vulgaris), a raised wart with a roughened surface, most common on hands, but can grow anywhere on the body. Sometimes known as a Palmer wart or Junior wart. Flat wart (verruca plana), a small, smooth flattened wart, flesh-coloured, which can occur in large numbers; most common on the face, neck, hands, wrists and knees. Filiform or digitate wart, a thread- or finger-like wart, most common on the face, especially near the eyelids and lips. Genital wart (venereal wart, condyloma acuminatum, verruca acuminata), a wart that occurs on the genitalia. Periungual wart, a cauliflower-like cluster of warts that occurs around the nails. Plantar wart (verruca, verruca plantaris), a hard, sometimes painful lump, often with multiple black specks in the center; usually only found on pressure points on the soles of the feet. Mosaic wart, a group of tightly clustered plantar-type warts, commonly on the hands or soles of the feet. Cause Warts are caused by the human papillomavirus (HPV). There are about 130 known types of human papillomavirus. HPV infects the squamous epithelium, usually of the skin or genitals, but each HPV type is typically only able to infect a few specific areas of the body. Many HPV types can produce a benign growth, often called a "wart" or "papilloma", in the area they infect. Many of the more common HPV and wart types are listed below. Common warts – HPV types 2 and 4 (most common); also types 1, 3, 26, 29, and 57 and others. Cancers and genital dysplasia – "high-risk" HPV types are associated with cancers, notably cervical cancer, and can also cause some vulvar, vaginal, penile, anal and some oropharyngeal cancers.
"Low-risk" types are associated with warts or other conditions.High-risk: 16, 18 (cause the most cervical cancer); also 31, 33, 35, 39, 45, 52, 58, 59, and others.Plantar warts (verruca) – HPV type 1 (most common); also types 2, 3, 4, 27, 28, and 58 and others. Anogenital warts (condylomata acuminata or venereal warts) – HPV types 6 and 11 (most common); also types 42, 44 and others.Low-risk: 6, 11 (most common); also 13, 44, 40, 43, 42, 54, 61, 72, 81, 89, and others.Verruca plana (flat warts) – HPV types 3, 10, and 28. Butchers warts – HPV type 7. Hecks disease (focal epithelial hyperplasia) – HPV types 13 and 32. Pathophysiology Common warts have a characteristic appearance under the microscope. They have thickening of the stratum corneum (hyperkeratosis), thickening of the stratum spinosum (acanthosis), thickening of the stratum granulosum, rete ridge elongation, and large blood vessels at the dermoepidermal junction. Diagnosis On dermatoscopic examination, warts will commonly have fingerlike or knoblike extensions. Prevention Gardasil 6 is an HPV vaccine aimed at preventing cervical cancers and genital warts. Gardasil is designed to prevent infection with HPV types 16, 18, 6, and 11. HPV types 16 and 18 currently cause about 70% of cervical cancer cases, and also cause some vulvar, vaginal, penile and anal cancers. HPV types 6 and 11 are responsible for 90% of documented cases of genital warts.Gardasil 9, approved in 2014 protects against HPV types 6, 11, 16, 18, 31, 33, 45, 52, and 58.HPV vaccines do not currently protect against the virus strains responsible for plantar warts (verrucae). Disinfection The virus is relatively hardy and immune to many common disinfectants. Exposure to 90% ethanol for at least 1 minute, 2% glutaraldehyde, 30% Savlon, and/or 1% sodium hypochlorite can disinfect the pathogen.The virus is resistant to drying and heat, but killed by 100 °C (212 °F) and ultraviolet radiation. Treatment There are many treatments and procedures associated with wart removal. A review of various skin wart treatments concluded that topical treatments containing salicylic acid were more effective than placebo. Cryotherapy appears to be as effective as salicylic acid, but there have been fewer trials. Medication Salicylic acid can be prescribed by a dermatologist in a higher concentration than that found in over-the-counter products. Several over-the-counter products are readily available at pharmacies and supermarkets of roughly two types: adhesive pads treated with salicylic acid, and bottled concentrated salicylic acid and lactic acid solution. Fluorouracil — Fluorouracil cream, a chemotherapy agent sometimes used to treat skin cancer, can be used on particularly resistant warts, by blocking viral DNA and RNA production and repair. Imiquimod is a topical cream that helps the bodys immune system fight the wart virus by encouraging interferon production. It has been approved by the U.S. Food and Drug Administration (FDA) for genital warts. Cantharidin, found naturally in the bodies of many members of the beetle family Meloidae, causes dermal blistering. It is used either by itself or compounded with podophyllin. Not FDA approved, but available through Canada or select US compounding pharmacies. Bleomycin — A more potent chemotherapy drug, can be injected into deep warts, destroying the viral DNA or RNA. Bleomycin is notably not US FDA approved for this purpose. Possible side effects include necrosis of the digits, nail loss and Raynaud syndrome. 
The usual treatment is one or two injections. Dinitrochlorobenzene (DNCB), like salicylic acid, is applied directly to the wart. Studies show this method is effective, with a cure rate of 80%, but DNCB must be used much more cautiously than salicylic acid: the chemical is known to cause genetic mutations, so it must be administered by a physician. The drug induces an allergic immune response, resulting in inflammation that wards off the wart-causing virus. Cidofovir is an antiviral drug which is injected into HPV lesions within the larynx (laryngeal papillomatosis) as an experimental treatment. Verrutop is a topical verruca treatment made from a combination of organic acids, inorganic acids, and metal ions. The solution causes the production of nitrites, which act to denature viral proteins and mummify the wart tissue. The difference between Verrutop and other acid treatments is that it does not damage the surrounding skin. Another over-the-counter product that can aid in wart removal is silver nitrate in the form of a caustic pencil, also available at drug stores. In a placebo-controlled study of 70 patients, silver nitrate given over nine days resulted in clearance of all warts in 43% and improvement in warts in 26% one month after treatment, compared with 11% and 14%, respectively, in the placebo group. The instructions must be followed to minimize staining of skin and clothing. Occasionally, pigmented scars may develop. Procedures Keratolysis – removal of dead surface skin cells, usually using salicylic acid, blistering agents, immune system modifiers ("immunomodulators"), or formaldehyde, often with mechanical paring of the wart with a pumice stone, blade, etc. Electrodesiccation. Cryosurgery or cryotherapy – freezing the wart (generally with liquid nitrogen), creating a blister between the wart and epidermal layer, after which the wart and the surrounding dead skin fall off. An average of 3 to 4 treatments is required for warts on thin skin; warts on calloused skin, like plantar warts, might take dozens or more treatments. Surgical curettage of the wart. Laser treatment – often with a pulse dye laser or carbon dioxide (CO2) laser. Pulse dye lasers (wavelength 582 nm) work by selective absorption by blood cells (specifically hemoglobin); CO2 lasers work by selective absorption by water molecules, vaporizing and destroying tissue and skin. Pulse dye lasers are less destructive and more likely to heal without scarring. Laser treatments can be painful and expensive (though covered by many insurance plans), but are not extensively scarring when used appropriately. CO2 lasers require local anaesthetic; pulse dye laser treatment does not need conscious sedation or local anesthetic. It takes 2 to 4 treatments, but can take many more for extreme cases. Typically, 10–14 days are required between treatments. Preventative measures are important. Infrared coagulator – an intense source of infrared light in a small beam, like a laser. It works on essentially the same principle as laser treatment and is less expensive; like the laser, it can cause blistering, pain and scarring. Intralesional immunotherapy with purified Candida, MMR, and tuberculin (PPD) protein appears safe and effective. Duct tape occlusion therapy involves placing a piece of duct tape over the wart. The mechanism of action of this technique remains unknown. Despite several trials, evidence for the efficacy of duct tape therapy is inconclusive.
Despite the mixed evidence for efficacy, the simplicity of the method and its limited side effects lead some researchers to be reluctant to dismiss it. No intervention is also an option: since most warts resolve spontaneously within a few years, watchful waiting can be recommended. Alternative medicine Daily application of the latex of Chelidonium majus (greater celandine), an acrid yellow sap, is a traditional wart remedy. According to English folk belief, touching toads causes warts; according to a German belief, touching a toad under a full moon cures warts. The most common Northern Hemisphere toads have glands protruding from their skin that superficially resemble warts, but warts are caused by a virus, and toads do not harbor it. A variety of traditional folk remedies and rituals claim to be able to remove warts. In The Adventures of Tom Sawyer, Mark Twain has his characters discuss a variety of such remedies. Tom Sawyer proposes "spunk-water" (or "stump-water", the water collecting in the hollow of a tree stump) as a remedy for warts on the hand: you put your hand into the water at midnight, recite a charm, and then "walk away quick, eleven steps, with your eyes shut, and then turn around three times and walk home without speaking to anybody. Because if you speak the charm's busted." This is offered as an alternative to Huckleberry Finn's planned remedy, which involves throwing a dead cat into a graveyard as a devil or devils come to collect a recently buried wicked person. Another remedy involved splitting a bean, drawing blood from the wart and putting it on one of the halves, and burying that half at a crossroads at midnight; the theory of operation is that the blood on the buried bean will draw away the wart. Twain is recognized as an early collector and recorder of genuine American folklore. Similar practices are recorded elsewhere. In Louisiana, one remedy for warts involves rubbing the wart with a potato, which is then buried; when the "buried potato dries up, the wart will be cured". Another remedy similar to Twain's is reported from Northern Ireland, where water from a specific well on Rathlin Island is credited with the power to cure warts. History Surviving ancient medical texts show that warts have been a documented disease since at least the time of Hippocrates, who lived c. 460 – c. 370 BC. In the book De Medicina by the Roman physician Aulus Cornelius Celsus, who lived c. 25 BC – c. 50 AD, different types of warts were described. Celsus described myrmecia, today recognized as the plantar wart, and categorized acrochordon (a skin tag) as a wart. In the 13th century, warts were described in books published by the surgeons William of Saliceto and Lanfranc of Milan. The word verruca to describe a wart was introduced by the physician Daniel Sennert, who described warts in his 1636 book Hypomnemata physicae. The cause of warts was, however, long disputed in the medical profession. In the early 18th century the physician Daniel Turner, who published the first book on dermatology, suggested that warts were caused by damaged nerves close to the skin. In the mid-18th century, the surgeon John Hunter popularized the belief that warts were caused by a bacterial syphilis infection. The surgeon Benjamin Bell documented that warts were caused by a disease entirely unrelated to syphilis, and established a causal link between warts and cancer. In the 19th century, the chief physician of Verona hospital established a link between warts and cervical cancer.
In 1874, however, the dermatologist Ferdinand Ritter von Hebra noted that, while various theories had been advanced by the medical profession, the "influences causing warts are still very obscure". In 1907 the physician Giuseppe Ciuffo was the first to demonstrate that warts are caused by a virus infection. In 1976 the virologist Harald zur Hausen was the first to discover that warts were caused by the human papillomavirus (HPV). His continued research established the evidence necessary to develop an HPV vaccine, which first became available in 2006. References External links Wart photo library, DermNet
Wernicke encephalopathy
Wernicke encephalopathy (WE), also known as Wernicke's encephalopathy or wet brain, is the presence of neurological symptoms caused by biochemical lesions of the central nervous system after exhaustion of B-vitamin reserves, in particular thiamine (vitamin B1). The condition is part of a larger group of thiamine deficiency disorders that includes beriberi, in all its forms, and alcoholic Korsakoff syndrome. When it occurs simultaneously with alcoholic Korsakoff syndrome it is known as Wernicke–Korsakoff syndrome. Classically, Wernicke encephalopathy is characterised by a triad of symptoms: ophthalmoplegia, ataxia, and confusion. Around 10% of patients exhibit all three features, and other symptoms may also be present. While it is commonly regarded as a condition peculiar to malnourished people with alcohol misuse, it can be caused by a variety of diseases. It is treated with thiamine supplementation, which can lead to improvement of the symptoms and often complete resolution, particularly in those where alcohol misuse is not the underlying cause. Often other nutrients also need to be replaced, depending on the cause. Wernicke encephalopathy may be present in the general population with a prevalence of around 2%, and is considered underdiagnosed, probably because many cases occur in patients who do not have the commonly associated symptoms. Signs and symptoms The classic triad of symptoms found in Wernicke encephalopathy is: ophthalmoplegia (later expanded to other eye movement disorders, most commonly affecting the lateral rectus muscle; lateral nystagmus is most commonly seen, although lateral rectus palsy, usually bilateral, may also occur); ataxia (later expanded to imbalance or any cerebellar signs); and confusion (later expanded to other mental changes; present in 82% of diagnosed cases). In actuality, however, only a small percentage of patients experience all three symptoms, and the full triad occurs more frequently among those who have overused alcohol. A much more diverse range of symptoms has also been found in patients with this condition, including: pupillary changes, retinal hemorrhage, papilledema, impaired vision and hearing, vision loss; hearing loss; fatigability, apathy, irritability, drowsiness, mental and/or motor slowing; dysphagia, blushing, sleep apnea, epilepsy and stupor; lactic acidosis; memory impairment, amnesia, depression, psychosis; hypothermia, polyneuropathy and hyperhidrosis. Although hypothermia is usually diagnosed at a body temperature of 35 °C (95 °F) or less, incipient cooling caused by deregulation in the central nervous system (CNS) needs to be monitored because it can promote the development of an infection. The patient may report feeling cold, followed by mild chills, cold skin, moderate pallor, tachycardia, hypertension, tremor or piloerection. External warming techniques are advised to prevent hypothermia. Among the frequently altered functions are the cardiocirculatory ones. There may be tachycardia, dyspnea, chest pain, orthostatic hypotension, and changes in heart rate and blood pressure. The lack of thiamine sometimes affects other major energy consumers, such as the myocardium, and patients may also have developed cardiomegaly. Heart failure with lactic acidosis syndrome has been observed. Cardiac abnormalities are an aspect of WE that was not included in the traditional approach, and they are not classified as a separate disease. Infections have been pointed out as one of the most frequent triggers of death in WE.
Furthermore, infections are usually present in pediatric cases. In the last stage other symptoms may occur: hyperthermia, increased muscle tone, spastic paralysis, choreic dyskinesias and coma. Because of the frequent involvement of the heart, eyes and peripheral nervous system, several authors prefer to call it Wernicke disease rather than simply encephalopathy. Early symptoms are nonspecific, and it has been stated that WE may present with nonspecific findings. In Wernicke–Korsakoff syndrome, individual symptoms are each present in about one-third of cases. Location of the lesion Depending on the location of the brain lesion, different symptoms are more frequent: brainstem tegmentum – ocular signs: pupillary changes, extraocular muscle palsy, gaze palsy and nystagmus; hypothalamus and medulla (dorsal nucleus of the vagus) – autonomic dysfunction: temperature, cardiocirculatory and respiratory disturbances; medulla (vestibular region) and cerebellum – ataxia; dorsomedial nucleus of the thalamus and mammillary bodies – amnestic syndrome for recent memory (mammillary lesions are characteristic: small petechial hemorrhages are found); diffuse cerebral dysfunction – altered cognition: global confusional state; brainstem periaqueductal gray – reduction of consciousness. Hypothalamic lesions may also affect the immune system, which is known in people who consume excessive amounts of alcohol, causing dysplasias and infections. Korsakoff syndrome Korsakoff syndrome, characterised by memory impairment, confabulation, confusion and personality changes, has a strong and recognised link with WE. A very high percentage of patients with Wernicke–Korsakoff syndrome also have peripheral neuropathy, and many people who consume excess alcohol have this neuropathy without other neurologic signs or symptoms. Korsakoff syndrome occurs much more frequently in WE due to chronic alcoholism; it is uncommon among those who do not consume excessive amounts of alcohol. Up to 80% of WE patients who misuse alcohol develop Korsakoff syndrome. In Korsakoff syndrome, atrophy of the thalamus and the mammillary bodies is usually observed, along with frontal lobe involvement. In one study, half of Wernicke–Korsakoff cases had good recovery from the amnesic state, which may take from 2 months to 10 years. Risk factors Wernicke encephalopathy has classically been thought of as a disease solely of people who drink excessive amounts of alcohol, but it is also found in the chronically undernourished, and in recent years has been discovered post bariatric surgery. Without being exhaustive, the documented causes of Wernicke encephalopathy have included: pancreatitis, liver dysfunction, chronic diarrhea, celiac disease, Crohn's disease, uremia and thyrotoxicosis; vomiting, hyperemesis gravidarum, malabsorption, and gastrointestinal surgery or diseases; incomplete parenteral nutrition and starvation/fasting; chemotherapy, renal dialysis, diuretic therapy, and stem cell/marrow transplantation; and cancer, AIDS, Creutzfeldt–Jakob disease, and febrile infections. The disease may even occur in some people with normal, or even high, blood thiamine levels, such as people with deficiencies in intracellular transport of this vitamin. Selected genetic mutations may also predispose to the disease, including presence of the X-linked transketolase-like 1 gene, SLC19A2 thiamine transporter protein mutations, and the aldehyde dehydrogenase-2 gene, which may predispose to alcohol use disorder. The APOE epsilon-4 allele, involved in Alzheimer's disease, may increase the chance of developing neurological symptoms.
Pathophysiology Thiamine deficiency and errors of thiamine metabolism are believed to be the primary cause of Wernicke encephalopathy. Thiamine, also called vitamin B1, helps to break down glucose; specifically, it acts as an essential coenzyme in the TCA cycle and the pentose phosphate shunt. Thiamine is first metabolised to its more active form, thiamine diphosphate (TDP), before it is used. The body has only 2–3 weeks of thiamine reserves, which are readily exhausted without intake, or if depletion occurs rapidly, such as in chronic inflammatory states or in diabetes. Thiamine is involved in: metabolism of carbohydrates, releasing energy; production of neurotransmitters, including glutamic acid and GABA; lipid metabolism, necessary for myelin production; and amino acid modification, probably linked to the production of taurine, which is of great cardiac importance. Neuropathology The primary neurological injury caused by thiamine deficiency in WE is threefold: oxidative damage, mitochondrial injury leading to apoptosis, and direct stimulation of a pro-apoptotic pathway. Thiamine deficiency affects both neurons and astrocytes, the glial cells of the brain. Thiamine deficiency alters the glutamate uptake of astrocytes, through changes in the expression of the astrocytic glutamate transporters EAAT1 and EAAT2, leading to excitotoxicity. Other changes include those to the GABA transporter subtype GAT-3, GFAP, glutamine synthetase, and the aquaporin 4 channel. Focal lactic acidosis also causes secondary oedema, oxidative stress, inflammation and white matter damage. Pathological anatomy Despite its name, WE is not related to Wernicke's area, a region of the brain associated with speech and language interpretation. In most patients, early lesions completely reverse with immediate and adequate supplementation. Lesions are usually symmetrical in the periventricular region, diencephalon, midbrain, hypothalamus, and cerebellar vermis. Brainstem lesions may include the cranial nerve III, IV, VI and VIII nuclei, the medial thalamic nuclei, and the dorsal nucleus of the vagus nerve. Oedema may be found in the regions surrounding the third and fourth ventricles, with petechiae and small hemorrhages also appearing. Chronic cases can present atrophy of the mammillary bodies. Endothelial proliferation, hyperplasia of capillaries, demyelination and neuronal loss can also occur. An altered blood–brain barrier may cause a perturbed response to certain drugs and foods. Diagnosis Diagnosis of Wernicke encephalopathy or disease is made clinically. Caine et al. in 1997 established criteria by which Wernicke encephalopathy can be diagnosed in any patient with just two or more of the main symptoms noted above. The sensitivity of diagnosis by the classic triad was 23%, but this increased to 85% when two or more of the four classic features were required. These criteria have been challenged because all the cases studied were people who drank excessive amounts of alcohol. Some consider it sufficient to suspect the presence of the disease with only one of the principal symptoms. Some British hospital protocols suspect WE with any one of these symptoms: confusion, decreased consciousness level (or unconsciousness, stupor or coma), memory loss, ataxia or unsteadiness, ophthalmoplegia or nystagmus, and unexplained hypotension with hypothermia. The presence of only one sign should be sufficient for treatment.
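The Caine rule described above reduces to a simple two-of-four count. Below is a minimal sketch in Python (illustrative only, not a validated clinical instrument; the four feature names paraphrase the commonly cited Caine features of dietary deficiency, eye signs, cerebellar signs, and altered mental state or mild memory impairment):

```python
# Minimal sketch of the Caine et al. (1997) two-of-four rule.
# Feature names are paraphrased assumptions, not a validated instrument.

def meets_caine_criteria(dietary_deficiency: bool,
                         oculomotor_abnormality: bool,
                         cerebellar_dysfunction: bool,
                         altered_mentation_or_memory: bool) -> bool:
    """Return True when two or more of the four features are present."""
    features = (dietary_deficiency, oculomotor_abnormality,
                cerebellar_dysfunction, altered_mentation_or_memory)
    return sum(features) >= 2

# A patient with ataxia and confusion, but no eye signs and no known
# dietary deficiency, still meets the criteria:
print(meets_caine_criteria(False, False, True, True))  # True
```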
As a much more diverse range of symptoms has been found frequently in patients, it has become necessary to search for new diagnostic criteria; however, Wernicke encephalopathy remains a clinically diagnosed condition. Neither MR imaging nor serum measurements related to thiamine are sufficient diagnostic markers in all cases. However, as described by Zuccoli et al. in several papers, the involvement of the cranial nerve nuclei and central gray matter on MRI is very specific to WE in the appropriate clinical setting. Non-recovery upon supplementation with thiamine is inconclusive. The sensitivity of MR was 53% and the specificity was 93%; reversible cytotoxic edema was considered the most characteristic lesion of WE. The location of the lesions was more frequently atypical among people without alcohol misuse, while typical contrast enhancement in the thalamus and the mammillary bodies was frequently observed in association with alcohol misuse. These abnormalities may include the following: the dorsomedial thalami, periaqueductal gray matter, mammillary bodies, tectal plate and brainstem nuclei are commonly affected; involvement is always bilateral and symmetric; the value of DWI in the diagnosis of WE is minimal; axial FLAIR MRI images represent the best diagnostic MRI sequence; and contrast material may highlight involvement of the mammillary bodies. There appears to be very little value in CT scans. Thiamine can be measured using an erythrocyte transketolase activity assay, or by measurement of in vitro thiamine diphosphate levels. Normal thiamine levels do not necessarily rule out the presence of WE, as such a patient may have difficulties in intracellular transport. Prevention There are hospital protocols for prevention, supplementing with thiamine in the presence of: a history of alcohol misuse or related seizures, a requirement for IV glucose, signs of malnutrition, poor diet, recent diarrhea or vomiting, peripheral neuropathy, intercurrent illness, delirium tremens or treatment for DTs, and others. Some experts advise that parenteral thiamine should be given to all at-risk patients in the emergency department. In clinical diagnosis it should be remembered that early symptoms are nonspecific, and it has been stated that WE may present with nonspecific findings. There is consensus to provide water-soluble vitamins and minerals after gastric operations. In some countries certain foods have been supplemented with thiamine, and this has reduced WE cases; the improvement is difficult to quantify, however, because several different actions were applied. Avoiding or moderating alcohol consumption and having adequate nutrition reduce one of the main risk factors for developing Wernicke–Korsakoff syndrome. Treatment Most symptoms will improve quickly if deficiencies are treated early. Memory disorder may be permanent. In patients suspected of WE, thiamine treatment should be started immediately. Blood should be taken immediately to test for thiamine and other vitamin and mineral levels. Following this, an immediate intravenous or intramuscular dose of thiamine should be administered two or three times daily.
Thiamine administration is usually continued until clinical improvement ceases. Considering the diversity of possible causes and the several surprising symptomatologic presentations, and because the assumed risk of thiamine toxicity is low and the therapeutic response is often dramatic from the first day, some qualified authors advocate parenteral thiamine whenever WE is suspected, both as a diagnostic resource and as treatment. The diagnosis is strongly supported by a response to parenteral thiamine, but a lack of response is not sufficient to exclude it. Parenteral thiamine administration is associated with a very small risk of anaphylaxis. People who consume excessive amounts of alcohol may have poor dietary intakes of several vitamins and impaired thiamine absorption, metabolism and storage; they may thus require higher doses. If glucose is given, such as in people with an alcohol use disorder who are also hypoglycaemic, thiamine must be given concurrently. If this is not done, the glucose will rapidly consume the remaining thiamine reserves, exacerbating the condition. The observation of edema on MR, and the finding of inflammation and macrophages in necropsied tissues, has led to the successful administration of anti-inflammatories. Other nutritional abnormalities should also be looked for, as they may be exacerbating the disease; in particular magnesium, a cofactor of transketolase, whose deficiency may induce or aggravate the disease. Other supplements may also be needed, including cobalamin, ascorbic acid, folic acid, nicotinamide, zinc and phosphorus (dicalcium phosphate), and in some cases taurine, which is especially suitable when there is cardiocirculatory impairment. Patient-guided nutrition is suggested. In patients with Wernicke–Korsakoff syndrome, even higher doses of parenteral thiamine are recommended. Concurrent toxic effects of alcohol should also be considered. Epidemiology There are no conclusive statistical studies; all figures are based on partial studies. Wernicke's lesions were observed in 0.8 to 2.8% of general-population autopsies, and in 12.5% of people with an alcohol use disorder. This figure increases to 35% of such individuals if cerebellar damage due to lack of thiamine is included. Most autopsy cases were from people with an alcohol use disorder. Autopsy series were performed in hospitals on the material available, which is unlikely to be representative of the entire population. Considering the slight affectations that precede the generation of lesions observable at necropsy, the percentage should be higher. There is evidence to indicate that Wernicke encephalopathy is underdiagnosed. For example, in one 1986 study, 80% of cases were diagnosed postmortem. It is estimated that only 5–14% of patients with WE are diagnosed in life. In a series of autopsy studies conducted in Recife, Brazil, it was found that only 7 out of 36 had consumed excessive amounts of alcohol, and only a small minority had malnutrition. In a review of 53 published case reports from 2001 to 2011, the relationship with alcohol was also about 20% (10 out of 53 cases). WE is more likely to occur in males than females. Among the minority who are diagnosed, mortality can reach 17%. The main factors triggering death are thought to be infections and liver dysfunction. History WE was first identified in 1881 by the German neurologist Carl Wernicke, although the link with thiamine was not identified until the 1930s.
A similar presentation of this disease was described by the Russian psychiatrist Sergei Korsakoff in a series of articles published between 1887 and 1891. References
Wilms tumor
Wilms tumor, also known as Wilms' tumor or nephroblastoma, is a cancer of the kidneys that typically occurs in children and only rarely in adults; it is the most common renal tumor in child patients. It is named after Max Wilms, the German surgeon (1867–1918) who first described it. Approximately 650 cases are diagnosed in the U.S. annually. The majority of cases occur in children with no associated genetic syndromes; however, a minority of children with Wilms tumor have a congenital abnormality. It is highly responsive to treatment, with about 90 percent of children being cured. Signs and symptoms Typical signs and symptoms of Wilms tumor include the following: a painless, palpable abdominal mass; loss of appetite; abdominal pain; fever; nausea and vomiting; blood in the urine (in about 20% of cases); high blood pressure in some cases (especially if there is synchronous or metachronous bilateral kidney involvement); and, rarely, a varicocele. Pathogenesis Wilms tumor has many causes, which can broadly be categorized as syndromic and non-syndromic. Syndromic causes of Wilms tumor occur as a result of alterations to genes such as the Wilms Tumor 1 (WT1) or Wilms Tumor 2 (WT2) genes, and the tumor presents with a group of other signs and symptoms. Non-syndromic Wilms tumor is not associated with other symptoms or pathologies. Many, but not all, cases of Wilms tumor develop from nephrogenic rests, which are fragments of tissue in or around the kidney that develop before birth and become cancerous after birth. In particular, cases of bilateral Wilms tumor, as well as cases of Wilms tumor derived from certain genetic syndromes such as Denys-Drash syndrome, are strongly associated with nephrogenic rests. Most nephroblastomas are on one side of the body only and are found on both sides in less than 5% of cases, although people with Denys-Drash syndrome mostly have bilateral or multiple tumors. They tend to be encapsulated and vascularized tumors that do not cross the midline of the abdomen. In cases of metastasis, it is usually to the lung. A rupture of Wilms tumor puts the patient at risk of bleeding and peritoneal dissemination of the tumor; in such cases, surgical intervention by a surgeon who is experienced in the removal of such a fragile tumor is imperative. Pathologically, a triphasic nephroblastoma comprises three elements: blastema, mesenchyme (stroma), and epithelium. Wilms tumor is a malignant tumor containing metanephric blastema, stromal and epithelial derivatives. Characteristic is the presence of abortive tubules and glomeruli surrounded by a spindled cell stroma. The stroma may include striated muscle, cartilage, bone, fat tissue, and fibrous tissue. Dysfunction is caused when the tumor compresses the normal kidney parenchyma. The mesenchymal component may include cells showing rhabdomyoid differentiation or malignancy (rhabdomyosarcomatous Wilms). Wilms tumors may be separated into two prognostic groups based on pathologic characteristics: favorable, containing the well-developed components mentioned above, and anaplastic, containing diffuse anaplasia (poorly developed cells). Molecular biology and related conditions Mutations of the WT1 gene, which is located on the short arm of chromosome 11 (11p13), are observed in approximately 20% of Wilms tumors, the majority of them being inherited from the germline, while a minority are acquired somatic mutations. In addition, at least half of the Wilms tumors with mutations in WT1 also carry acquired somatic mutations in CTNNB1, the gene encoding the proto-oncogene beta-catenin.
This latter gene is found on the short arm of chromosome 3 (3p22.1). Most cases do not have mutations in any of these genes. An association with H19 has been reported; H19 is a long noncoding RNA located on the short arm of chromosome 11 (11p15.5). Diagnosis The majority of people with Wilms tumor present with an asymptomatic abdominal mass, which is noticed by a family member or healthcare professional. Renal tumors can also be found during routine screening in children who have known predisposing clinical syndromes. The diagnostic process includes taking a medical history, a physical exam, and a series of tests including blood, urine, and imaging tests. Once Wilms tumor is suspected, an ultrasound scan is usually done first to confirm the presence of an intrarenal mass. A computed tomography or MRI scan can also be used for more detailed imaging. Finally, the diagnosis of Wilms tumor is confirmed by a tissue sample. In most cases, a biopsy is not done first because of the risk of cancer cells spreading during the procedure. Treatment in North America is nephrectomy, while in Europe it is chemotherapy followed by nephrectomy. A definitive diagnosis is obtained by pathological examination of the nephrectomy specimen. Staging Staging is a standard way to describe the extent of spread of Wilms tumors and to determine prognosis and treatment. Staging is based on anatomical findings and tumor cell pathology. According to the extent of tumor tissue at the time of initial diagnosis, five stages are considered. In Stage I Wilms tumor (43% of cases), all of the following criteria must be met: the tumor is limited to the kidney and is completely excised; the surface of the renal capsule is intact; the tumor was not ruptured or biopsied (open or needle) prior to removal; there is no involvement of extrarenal or renal sinus lymph-vascular spaces; no residual tumor is apparent beyond the margins of excision; and metastasis of tumor to lymph nodes is not identified. In Stage II (23% of cases), one or more of the following criteria must be met: the tumor extends beyond the kidney but is completely excised; no residual tumor is apparent at or beyond the margins of excision. Any of the following conditions may also exist: tumor involvement of the blood vessels of the renal sinus and/or outside the renal parenchyma, or extensive tumor involvement of renal sinus soft tissue. In Stage III (20% of cases), one or more of the following criteria must be met: inoperable primary tumor; lymph node metastasis; tumor present at surgical margins; tumor spillage involving peritoneal surfaces either before or during surgery, or transected tumor thrombus; or the tumor has been biopsied prior to removal or there is local spillage of tumor during surgery, confined to the flank. Stage IV (10% of cases) Wilms tumor is defined by the presence of hematogenous metastases (lung, liver, bone, or brain), or lymph node metastases outside the abdominopelvic region. Stage V (5% of cases) Wilms tumor is defined by bilateral renal involvement at the time of initial diagnosis. For patients with bilateral involvement, an attempt should be made to stage each side according to the above criteria (stage I to III) on the basis of the extent of disease prior to biopsy (a schematic summary of the five stages follows below).
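Because the staging rubric above amounts to a lookup from findings to a stage label and an approximate share of cases, it can be summarised as a small table. The sketch below is illustrative Python; the one-line summaries compress the full pathological criteria and are not a substitute for them:

```python
# Illustrative summary of the five stages described above; the one-line
# descriptions are simplifications of the full criteria.

WILMS_STAGES = {
    "I":   (0.43, "limited to the kidney and completely excised"),
    "II":  (0.23, "extends beyond the kidney but completely excised"),
    "III": (0.20, "residual non-hematogenous tumor confined to the abdomen"),
    "IV":  (0.10, "hematogenous or extra-abdominopelvic lymph node metastases"),
    "V":   (0.05, "bilateral renal involvement at initial diagnosis"),
}

def describe(stage: str) -> str:
    """Return a one-line description of a stage with its approximate share."""
    share, summary = WILMS_STAGES[stage]
    return f"Stage {stage} (~{share:.0%} of cases): {summary}"

print(describe("III"))
# The shares quoted in the text are rounded, so they sum to ~101%:
print(round(sum(share for share, _ in WILMS_STAGES.values()), 2))  # 1.01
```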
Treatment and prognosis The overall 5-year survival is estimated to be approximately 90%, but for individuals the prognosis is highly dependent on individual staging and treatment. Early removal tends to promote positive outcomes. Tumor-specific loss of heterozygosity (LOH) for chromosomes 1p and 16q identifies a subset of Wilms tumor patients who have a significantly increased risk of relapse and death. LOH for these chromosomal regions can now be used as an independent prognostic factor, together with disease stage, to target the intensity of treatment to the risk of treatment failure. Genome-wide copy number and LOH status can be assessed with virtual karyotyping of tumor cells (fresh or paraffin-embedded). Statistics may sometimes show more favorable outcomes for more aggressive stages than for less aggressive stages, which may be caused by more aggressive treatment and/or random variability in the study groups. Also, a stage V tumor is not necessarily worse than a stage IV tumor. In case of relapse of Wilms tumor, the 4-year survival rate for children with standard-risk disease has been estimated to be 80%. Epidemiology Wilms tumor is the most common malignant renal tumor in children. A number of rare genetic syndromes have been linked to an increased risk of developing Wilms tumor. Screening guidelines vary between countries; however, health care professionals recommend regular ultrasound screening for people with associated genetic syndromes. Wilms tumor affects approximately one person per 10,000 worldwide before the age of 15 years. People of African descent may have slightly higher rates of Wilms tumor. The peak age of Wilms tumor is 3 to 4 years, and most cases occur before the age of 10 years. A genetic predisposition to Wilms tumor in individuals with aniridia has been established, due to deletions in the p13 band on chromosome 11. History Dr. Sidney Farber, founder of the Dana–Farber Cancer Institute, and his colleagues achieved the first remissions in Wilms tumor in the 1950s. By employing the antibiotic actinomycin D in addition to surgery and radiation therapy, they boosted cure rates from 40 to 89 percent. The use of computed tomography scanning for the diagnosis of Wilms tumor began in the early 1970s, thanks to the insight of Dr. Mario Costici, an Italian physician. He discovered that determining elements for a differential diagnosis of Wilms tumor could be identified in direct radiograms and in urographic images. This possibility was a premise for starting treatment. See also Hemihypertrophy National Wilms Tumor Study Group (NWTS) Perlman syndrome Virtual Karyotype for 1p and 16q LOH References External links Wilms tumor at Curlie GeneReviews/NCBI/NIH/UW entry on Wilms Tumor Overview Information from National Cancer Institute Cancer.Net Wilms Tumor – Childhood
Wrinkle
A wrinkle, also known as a rhytid, is a fold, ridge or crease in an otherwise smooth surface, such as on skin or fabric. Skin wrinkles typically appear as a result of aging processes such as glycation, habitual sleeping positions, loss of body mass, sun damage, or temporarily, as the result of prolonged immersion in water. Age wrinkling in the skin is promoted by habitual facial expressions, aging, sun damage, smoking, poor hydration, and various other factors. In humans, it can also be prevented to some degree by avoiding excessive solar exposure and through diet (in particular through consumption of carotenoids, tocopherols and flavonoids, vitamins (A, C, D and E), essential omega-3 fatty acids, certain proteins and lactobacilli). Skin Causes for aging wrinkles Development of facial wrinkles is a kind of fibrosis of the skin. The misrepair-accumulation aging theory suggests that wrinkles develop from incorrect repairs of injured elastic fibers and collagen fibers. Repeated extensions and compressions of the skin cause repeated injuries of extracellular fibers in the derma. During the repairing process, some of the broken elastic fibers and collagen fibers are not regenerated and restored but are replaced by altered fibers. When an elastic fiber is broken in an extended state, it may be replaced by a "long" collagen fiber. Accumulation of "long" collagen fibers makes part of the skin looser and stiffer, and as a consequence a large fold of skin appears. When a "long" collagen fiber is broken in a compressed state, it may be replaced by a "short" collagen fiber. The "shorter" collagen fibers restrict the extension of the "longer" fibers and keep them permanently in a folded state. A small fold, namely a permanent wrinkle, then appears. Sleep wrinkles Sleep wrinkles are created and reinforced when the face is compressed against a pillow or bed surface in side or stomach sleeping positions during sleep. They appear in predictable locations due to the underlying superficial musculoaponeurotic system (SMAS), and are usually distinct from wrinkles of facial expression. As with wrinkles of facial expression, sleep wrinkles can deepen and become permanent over time, unless the habitual sleeping positions which cause the wrinkles are altered. Water-immersion wrinkling The wrinkles that occur in skin after prolonged exposure to water are sometimes referred to as pruney fingers or water aging. This is a temporary skin condition where the skin on the palms of the hands or the soles of the feet becomes wrinkly. The wrinkling response may have imparted an evolutionary benefit by providing improved traction in wet conditions and a better grasp of wet objects. These results were called into question by a 2014 study that failed to reproduce any improvement in the handling of wet objects with wrinkled fingertips. However, a 2020 study of gripping efficiency found that wrinkles decreased the force required to grip wet objects by 20%, supporting the traction hypothesis. Prior to a 1935 study, the common explanation was based on water absorption in the keratin-laden epithelial skin when immersed in water, causing the skin to expand and resulting in a larger surface area, forcing it to wrinkle. Usually the tips of the fingers and toes are the first to wrinkle, because of a thicker layer of keratin and an absence of hairs, which secrete the protective oil called sebum.
In the 1935 study, however, Lewis and Pickering were studying patients with palsy of the median nerve when they discovered that skin wrinkling did not occur in the areas of the patients' skin normally innervated by the damaged nerve. This suggested that the nervous system plays an essential role in wrinkling, so the phenomenon could not be explained simply by water absorption. Recent research shows that wrinkling is related to vasoconstriction. Water probably initiates the wrinkling process by altering the balance of electrolytes in the skin as it diffuses into the hands and soles via their many sweat ducts. This could alter the stability of the membranes of the many neurons that synapse on the many blood vessels underneath the skin, causing them to fire more rapidly. Increased neuronal firing causes the blood vessels to constrict, decreasing the amount of fluid underneath the skin. This decrease in fluid would cause a decrease in tension, causing the skin to become wrinkly. This insight has resulted in bedside tests for nerve damage and vasoconstriction. Wrinkling is often scored by immersing the hands for 30 minutes in water or EMLA cream, with measurement steps of 5 minutes, and counting the number of visible wrinkles over time. Not all healthy persons have finger wrinkling after immersion, so it would be safe to say that sympathetic function is preserved if finger wrinkling after immersion in water is observed, but if the fingers emerge smooth it cannot be assumed that there is a lesion to the autonomic supply or to the peripheral nerves of the hand. Animals with wrinkles Examples of wrinkles can be found in various animal species that grow loose, excess skin, particularly when they are young. Several breeds of dog, such as the Pug and the Shar Pei, have been bred to exaggerate this trait. In dogs bred for fighting, this is the result of selection for loose skin, which confers a protective advantage. Wrinkles are also associated with neoteny, as they are a trait associated with juvenile animals. Techniques for reducing the appearance of aging wrinkles Current evidence suggests that tretinoin decreases the cohesiveness of follicular epithelial cells, although the exact mode of action is unknown. Additionally, tretinoin stimulates mitotic activity and increased turnover of follicular epithelial cells. Tretinoin is better known by the brand name Retin-A. Topical glycosaminoglycan supplements can help to provide temporary restoration of enzyme balance to slow or prevent matrix breakdown and the consequent onset of wrinkle formation. Glycosaminoglycans (GAGs) are produced by the body to maintain structural integrity in tissues and to maintain fluid balance. Hyaluronic acid is a type of GAG that promotes collagen synthesis, repair, and hydration. GAGs serve as a natural moisturizer and lubricant between epidermal cells to inhibit the production of matrix metalloproteinases (MMPs). Dermal fillers are injectable products frequently used to correct wrinkles and other depressions in the skin. They are often a kind of soft-tissue filler designed to be injected into the skin to improve its appearance. The most common products are based on hyaluronic acid and calcium hydroxylapatite. Botulinum toxin is a neurotoxic protein produced by the bacterium Clostridium botulinum. Botox is a specific form of botulinum toxin manufactured by Allergan for both therapeutic and cosmetic use.
Besides its cosmetic application, Botox is used in the treatment of other conditions, including migraine headache and cervical dystonia (spasmodic torticollis), a neuromuscular disorder involving the head and neck. Dysport, manufactured by Ipsen, received FDA approval and is now used to treat cervical dystonia as well as glabellar lines in adults. In 2010, another form of botulinum toxin, one free of complexing proteins, became available to Americans: Xeomin received FDA approval for medical indications in 2010 and cosmetic indications in 2011. Botulinum toxin treats wrinkles by immobilizing the muscles that cause them. It is not appropriate for the treatment of all wrinkles; it is indicated for the treatment of glabellar lines (between the eyebrows) in adults. Any other usage is not approved by the FDA and is considered off-label use. Laser resurfacing is an FDA-cleared skin-resurfacing procedure in which lasers are used to improve the condition of the skin. Two types of laser are used to reduce the appearance of fine lines and wrinkles on the face: ablative lasers, which remove thin layers of skin, and nonablative lasers, which stimulate collagen production. Nonablative lasers are less effective than ablative ones, but they are less invasive and recovery time is short. After the procedure, people experience temporary redness, itching and swelling. See also Botulinum toxin Injectable filler References External links Skin Ageing at Medline
X-linked hypophosphatemia
X-linked hypophosphatemia (XLH) is an X-linked dominant form of rickets (or osteomalacia) that differs from most cases of dietary-deficiency rickets in that vitamin D supplementation does not cure it. It can cause bone deformity, including short stature and genu varum (bow-leggedness). It is associated with a mutation in the PHEX gene sequence (Xp22) and subsequent inactivity of the PHEX protein. PHEX mutations lead to an elevated circulating (systemic) level of the hormone FGF23, which results in renal phosphate wasting, and, locally in the extracellular matrix of bones and teeth, to an elevated level of the mineralization/calcification-inhibiting protein osteopontin. An inactivating mutation in the PHEX gene thus both increases systemic circulating FGF23 and decreases the enzymatic activity of the PHEX enzyme, which normally removes (degrades) the mineralization-inhibiting osteopontin protein; in XLH, the decreased PHEX enzyme activity leads to a local accumulation of inhibitory osteopontin in bones and teeth that blocks mineralization and, together with renal phosphate wasting, causes osteomalacia and odontomalacia. For both XLH and hypophosphatasia, inhibitor-enzyme pairs regulate mineralization in the extracellular matrix through a double-negative (inhibiting the inhibitors) activation effect, in a manner described as the Stenciling Principle. Both of these underlying mechanisms (renal phosphate wasting systemically, and mineralization-inhibitor accumulation locally) contribute to the pathophysiology of XLH and lead to soft bones and teeth (hypomineralization, osteomalacia/odontomalacia). The prevalence of the disease is 1 in 20,000. X-linked hypophosphatemia may be lumped in with autosomal dominant hypophosphatemic rickets under general terms such as hypophosphatemic rickets. Hypophosphatemic rickets is associated with at least nine other genetic mutations. Clinical management of hypophosphatemic rickets may differ depending on the specific mutation associated with an individual case, but treatments are aimed at raising phosphate levels to promote normal bone formation. Symptoms and signs The most common symptoms of XLH affect the bones and teeth, causing pain, abnormalities, and osteoarthritis. Symptoms and signs vary between children and adults and can include osteomalacia, dental abscesses, limited range of movement (enthesopathy), short stature, fatigue, fractures and pseudofractures, bone pain, craniostenosis, osteoarthritis, spinal stenosis, hearing loss and depression. Genetics XLH is associated with mutations in the PHEX gene sequence, located on the human X chromosome at Xp22.2-p22.1. The PHEX protein regulates another protein called fibroblast growth factor 23 (produced from the FGF23 gene). Fibroblast growth factor 23 normally inhibits the kidneys' ability to reabsorb phosphate into the bloodstream. Gene mutations in PHEX prevent it from correctly regulating fibroblast growth factor 23. The resulting overactivity of FGF23 reduces vitamin D 1α-hydroxylation and phosphate reabsorption by the kidneys, leading to hypophosphatemia and the related features of hereditary hypophosphatemic rickets. Also in XLH, where PHEX enzymatic activity is absent or reduced, osteopontin, a mineralization-inhibiting secreted substrate protein found in the extracellular matrix of bone, accumulates in bone (and teeth) to contribute to the osteomalacia (and odontomalacia), as shown in the mouse homolog (Hyp) of XLH and in XLH patients.
Biochemically in blood, XLH is recognized by hypophosphatemia and an inappropriately low level of calcitriol (1,25-(OH)2 vitamin D3). Patients often have bowed legs or knock knees, and usually cannot touch both knees and ankles together at the same time. The disorder is inherited in an X-linked dominant manner. This means the defective gene responsible for the disorder (PHEX) is located on the X chromosome, and only one copy of the defective gene is sufficient to cause the disorder when inherited from a parent who has the disorder. Males are normally hemizygous for the X chromosome, having only one copy. As a result, X-linked dominant disorders usually show higher expressivity in males than in females. As the X chromosome is one of the sex chromosomes (the other being the Y chromosome), X-linked inheritance is determined by the sex of the parent carrying a specific gene and can often seem complex. This is because, typically, females have two copies of the X chromosome and males have only one copy. The difference between dominant and recessive inheritance patterns also plays a role in determining the chances of a child inheriting an X-linked disorder from their parents. Diagnosis Clinical laboratory evaluation of rickets begins with assessment of serum calcium, phosphate, and alkaline phosphatase levels. In hypophosphatemic rickets, calcium levels may be within or slightly below the reference range; alkaline phosphatase levels will be significantly above the reference range. Serum phosphate levels must be evaluated carefully in the first year of life, because the concentration reference range for infants (5.0–7.5 mg/dL) is high compared with that for adults (2.7–4.5 mg/dL). Serum parathyroid hormone levels are within the reference range or slightly elevated, while calcitriol levels are low or within the lower reference range. Most importantly, urinary loss of phosphate is above the reference range. The renal tubular reabsorption of phosphate (TRP) in X-linked hypophosphatemia is around 60%, whereas normal TRP exceeds 90% at the same reduced plasma phosphate concentration. The TRP is calculated with the following formula: TRP = [1 − (phosphate clearance (CPi) / creatinine clearance (CCr))] × 100.
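The clearance ratio in the TRP formula can be computed from paired serum and urine measurements; when phosphate and creatinine are measured on the same timed urine collection, the urine flow rate cancels out of the ratio. A minimal worked sketch, assuming consistent units and illustrative variable names (not a clinical calculator):

```python
# Worked example of TRP = [1 - (CPi / CCr)] * 100.
# With both clearances computed from the same timed urine collection,
# the urine flow rate cancels, leaving the concentration ratio below.

def tubular_reabsorption_of_phosphate(urine_phosphate: float,
                                      serum_phosphate: float,
                                      urine_creatinine: float,
                                      serum_creatinine: float) -> float:
    """Return TRP as a percentage; all inputs in the same units (e.g. mg/dL)."""
    clearance_ratio = ((urine_phosphate * serum_creatinine)
                       / (serum_phosphate * urine_creatinine))
    return (1.0 - clearance_ratio) * 100.0

# Hypothetical values: a normal subject reabsorbs more than 90% ...
print(tubular_reabsorption_of_phosphate(30.0, 4.0, 120.0, 1.0))  # 93.75
# ... while in XLH the TRP is low (~60%) despite hypophosphatemia:
print(tubular_reabsorption_of_phosphate(40.0, 2.0, 60.0, 1.2))   # 60.0
```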
Treatment Treatment consists of oral phosphate and calcitriol; in the event of severe bowing, an osteotomy may be performed to correct the leg shape. The monoclonal antibody burosumab was first licensed in February 2018 by the European Medicines Agency, and was then licensed by the Food and Drug Administration in the United States of America in June 2018, as the first drug targeting the underlying cause of this condition. The leg deformity can be treated with Ilizarov frames and CAOS. It is also treated with medications including human growth hormone, calcitriol, and phosphate. Society and culture International XLH Alliance – an alliance of international patient groups for individuals affected by XLH and related disorders. Jennyfer Marques Parinos is a Paralympic bronze medalist from Brazil who has XLH; she competes under a class 9 disability. See also Autosomal dominant hypophosphatemic rickets Hypophosphatemia Tumor-induced osteomalacia References External links 00754 at CHORUS; Hypophosphatemic rickets; XLH; Hypophosphatemia, vitamin D-resistant rickets at NIH's Office of Rare Diseases
Xerophthalmia
Xerophthalmia (from Ancient Greek "xērós" (ξηρός) meaning "dry" and "ophthalmos" (οφθαλμός) meaning "eye") is a medical condition in which the eye fails to produce tears. It may be caused by vitamin A deficiency, and the term is sometimes used to describe that condition specifically, although there may be other causes. Xerophthalmia caused by severe vitamin A deficiency is characterized by pathologic dryness of the conjunctiva and cornea. The conjunctiva becomes dry, thick and wrinkled. The first symptom is poor vision at night. If untreated, xerophthalmia can lead to dry eye syndrome, corneal ulceration, and ultimately blindness as a result of corneal and retinal damage. Xerophthalmia usually implies a destructive dryness of the conjunctival epithelium due to dietary vitamin A deficiency, a rare condition in developed countries but one still causing much damage in developing countries. Other forms of dry eye are associated with aging, poor lid closure, scarring from previous injury, or autoimmune diseases such as rheumatoid arthritis and Sjögren's syndrome, and these can all cause chronic conjunctivitis. Radioiodine therapy can also induce xerophthalmia, often transiently, although in some patients late-onset or persistent xerophthalmia has been observed. The damage to the cornea in vitamin A-associated xerophthalmia is quite different from damage to the retina at the back of the globe, a type of damage which can also be due to lack of vitamin A, but which is caused by lack of other forms of vitamin A that work in the visual system. Xerophthalmia from hypovitaminosis A is specifically due to lack of the hormone-like vitamin A metabolite retinoic acid, since (along with certain growth-stunting effects) the condition can be reversed in vitamin A-deficient rats by retinoic acid supplementation (although the retinal damage continues). Since retinoic acid cannot be reduced to retinal or retinol, these effects on the cornea must be specific to retinoic acid. This is in keeping with retinoic acid's known requirement for good health in epithelial cells, such as those in the cornea. Cause The condition is not congenital and develops over the course of a few months as the lacrimal glands fail to produce tears. Other conditions involved in the progression already stated include the appearance of Bitot's spots, which are clumps of keratin debris that build up inside the conjunctiva, and night blindness, which precedes corneal ulceration and total blindness. Classification The World Health Organization classifies xerophthalmia into the following stages: XN – night blindness; X1A – conjunctival xerosis; X1B – Bitot's spots; X2 – corneal xerosis; X3A – corneal ulceration/keratomalacia involving less than one-third of the cornea; X3B – corneal ulceration/keratomalacia involving more than one-third of the cornea; XS – corneal scar due to xerophthalmia; XF – xerophthalmic fundus. Prevention Prophylaxis consists of periodic administration of vitamin A supplements. The WHO-recommended schedule, which is universally recommended, is as follows: infants 6–12 months old and any older children weighing less than 8 kg – 100,000 IU orally every 3–6 months; children over 1 year and under 6 years of age – 200,000 IU orally every 6 months; infants less than 6 months old who are not being breastfed – 50,000 IU orally, given before they reach the age of 6 months.
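The schedule above is essentially a small decision rule on age, weight, and breastfeeding status. The sketch below encodes it in Python for illustration; the function name, return strings, and handling of edge ages are assumptions, not WHO guidance:

```python
# Minimal sketch of the WHO prophylaxis schedule quoted above.
# The structure and wording are illustrative, not a clinical tool.

def vitamin_a_prophylaxis(age_months: float, weight_kg: float,
                          breastfed: bool) -> str:
    """Map a child's age, weight and feeding status to the quoted dose."""
    if age_months < 6:
        if not breastfed:
            return "50,000 IU orally, once, before 6 months of age"
        return "no routine dose in this schedule"
    if age_months <= 12 or weight_kg < 8:
        return "100,000 IU orally every 3-6 months"
    if age_months < 72:  # over 1 year and under 6 years
        return "200,000 IU orally every 6 months"
    return "outside the age range of this schedule"

print(vitamin_a_prophylaxis(age_months=18, weight_kg=10, breastfed=False))
# 200,000 IU orally every 6 months
```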
Treatment Treatment can occur in two ways: treating the symptoms and treating the deficiency. Treatment of symptoms usually includes the use of artificial tears in the form of eye drops, increasing the humidity of the environment with humidifiers, and wearing wraparound glasses when outdoors. Treatment of the deficiency can be accomplished with a vitamin A or multivitamin supplement or by eating foods rich in vitamin A. Treatment with supplements and/or diet can be successful until the disease progresses as far as corneal ulceration, at which point only extreme surgery can offer a chance of returning sight. Epidemiology Throughout southeast Asia, estimates are that more than half of children under the age of six years have subclinical vitamin A deficiency and night blindness, with progression to xerophthalmia being the leading cause of preventable childhood blindness. Estimates are that each year there are 350,000 cases of childhood blindness due to vitamin A deficiency. The causes are vitamin A deficiency during pregnancy, followed by low transfer of vitamin A during lactation and infant/child diets low in vitamin A or beta-carotene. The prevalence of pre-school-age children who are blind due to vitamin A deficiency is lower than expected from the incidence of new cases only because childhood vitamin A deficiency significantly increases all-cause mortality. See also Keratoconjunctivitis Keratoconjunctivitis sicca Keratomalacia, also caused by vitamin A deficiency References Further reading Jelliffe DB. "Xerophthalmia: A World-wide Drive for Prevention." Journal of Tropical Pediatrics 1980; 26: ii-iii.
Xerostomia
Xerostomia, also known as dry mouth, is dryness in the mouth, which may be associated with a change in the composition of saliva, or reduced salivary flow, or have no identifiable cause. This symptom is very common and is often seen as a side effect of many types of medication. It is more common in older people (mostly because this group tends to take several medications) and in persons who breathe through their mouths. Dehydration, radiotherapy involving the salivary glands, chemotherapy and several diseases can cause reduced salivation (hyposalivation), or a change in saliva consistency, and hence a complaint of xerostomia. Sometimes there is no identifiable cause, and there may occasionally be a psychogenic reason for the complaint. Definition Xerostomia is the subjective sensation of dry mouth, which is often (but not always) associated with hypofunction of the salivary glands. The term is derived from the Greek words ξηρός (xeros) meaning "dry" and στόμα (stoma) meaning "mouth". A drug or substance that increases the rate of salivary flow is termed a sialogogue. Hyposalivation is a clinical diagnosis that is made based on the history and examination, but reduced salivary flow rates have been given objective definitions (see the sketch below). Salivary gland hypofunction has been defined as any objectively demonstrable reduction in whole and/or individual gland flow rates. An unstimulated whole-saliva flow rate in a normal person is 0.3–0.4 ml per minute, and below 0.1 ml per minute is significantly abnormal. A stimulated saliva flow rate of less than 0.5 ml per gland in 5 minutes, or less than 1 ml per gland in 10 minutes, is decreased. The term subjective xerostomia is sometimes used to describe the symptom in the absence of any clinical evidence of dryness. Xerostomia may also result from a change in the composition of saliva (from serous to mucous). Salivary gland dysfunction is an umbrella term for the presence of xerostomia, salivary gland hyposalivation, and hypersalivation.
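The flow-rate definitions above can be read as a simple classifier over measured rates. The following minimal sketch applies the quoted cut-offs; the function names and labels are illustrative assumptions:

```python
# Minimal sketch of the salivary flow-rate thresholds quoted above;
# the cut-offs follow the text, the functions themselves are illustrative.

def classify_unstimulated_flow(ml_per_minute: float) -> str:
    """Classify an unstimulated whole-saliva flow rate."""
    if ml_per_minute < 0.1:
        return "significantly abnormal (hyposalivation)"
    if ml_per_minute < 0.3:
        return "below the normal 0.3-0.4 ml/min range"
    return "within or above the normal range"

def stimulated_flow_decreased(ml_per_gland: float, minutes: int) -> bool:
    """Apply the per-gland stimulated-flow cut-offs (5- or 10-minute tests)."""
    if minutes == 5:
        return ml_per_gland < 0.5
    if minutes == 10:
        return ml_per_gland < 1.0
    raise ValueError("cut-offs are quoted for 5- or 10-minute collections only")

print(classify_unstimulated_flow(0.08))           # significantly abnormal ...
print(stimulated_flow_decreased(0.4, minutes=5))  # True
```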
Intraoral halitosis – possibly due to increased activity of halitogenic biofilm on the posterior dorsal tongue (although dysgeusia may cause a complaint of nongenuine halitosis in the absence of hyposalivation). Burning mouth syndrome – a burning or tingling sensation in the mouth. Saliva that appears thick or ropey. Mucosa that appears dry. A lack of saliva pooling in the floor of the mouth during examination. Dysphagia – difficulty swallowing and chewing, especially when eating dry foods. Food may stick to the tissues during eating. The tongue may stick to the palate, causing a clicking noise during speech, or the lips may stick together. Gloves or a dental mirror may stick to the tissues. Fissured tongue with atrophy of the filiform papillae and a lobulated, erythematous appearance of the tongue. Saliva cannot be "milked" (expressed) from the parotid duct. Difficulty wearing dentures, e.g., when swallowing or speaking. There may be generalized mucosal soreness and ulceration of the areas covered by the denture. Mouth soreness and oral mucositis. Lipstick or food may stick to the teeth. A need to sip drinks frequently while talking or eating. Dry, sore, and cracked lips and angles of mouth. Thirst. However, sometimes the clinical findings do not correlate with the symptoms experienced. For example, a person with signs of hyposalivation may not complain of xerostomia. Conversely, a person who reports experiencing xerostomia may not show signs of reduced salivary secretions (subjective xerostomia). In the latter scenario, there are often other oral symptoms suggestive of oral dysesthesia ("burning mouth syndrome"). Some symptoms outside the mouth may occur together with xerostomia. These include: Xerophthalmia (dry eyes). Inability to cry. Blurred vision. Photophobia (light intolerance). Dryness of other mucosae, e.g., nasal, laryngeal, and/or genital. Burning sensation. Itching or grittiness. Dysphonia (voice changes). There may also be other systemic signs and symptoms if there is an underlying cause such as Sjögren's syndrome, for example, joint pain due to associated rheumatoid arthritis. Cause The differential of hyposalivation significantly overlaps with that of xerostomia. A reduction in saliva production to about 50% of the normal unstimulated level will usually result in the sensation of dry mouth. Altered saliva composition may also be responsible for xerostomia. Physiologic Salivary flow rate is decreased during sleep, which may lead to a transient sensation of dry mouth upon waking. This disappears with eating or drinking or with oral hygiene. When associated with halitosis, this is sometimes termed "morning breath". Dry mouth is also a common sensation during periods of anxiety, probably owing to enhanced sympathetic drive. Dehydration is known to cause hyposalivation, the result of the body trying to conserve fluid. Physiologic age-related changes in salivary gland tissues may lead to a modest reduction in salivary output and partially explain the increased prevalence of xerostomia in older people. However, polypharmacy is thought to be the major cause in this group; significant decreases in salivary flow rate are unlikely to result from aging alone. Drug induced Aside from the physiologic causes of xerostomia, iatrogenic effects of medications are the most common cause. A medication which is known to cause xerostomia may be termed xerogenic. Over 400 medications are associated with xerostomia.
Although drug-induced xerostomia is commonly reversible, the conditions for which these medications are prescribed are frequently chronic. The likelihood of xerostomia increases in relation to the total number of medications taken, whether or not the individual medications are xerogenic. The sensation of dryness usually starts shortly after starting the offending medication or after increasing the dose. Anticholinergic, sympathomimetic, or diuretic drugs are usually responsible. Sjögren's syndrome Xerostomia may be caused by autoimmune conditions which damage saliva-producing cells. Sjögren's syndrome is one such disease, and it is associated with symptoms including fatigue, myalgia and arthralgia. The disease is characterised by inflammatory changes in the moisture-producing glands throughout the body, leading to reduced secretions from the glands that produce saliva, tears and other secretions. Primary Sjögren's syndrome is the combination of dry eyes and xerostomia. Secondary Sjögren's syndrome is identical to the primary form, but occurs in combination with other connective tissue disorders such as systemic lupus erythematosus or rheumatoid arthritis. Celiac disease Xerostomia may be the only symptom of celiac disease, especially in adults, who often have no obvious digestive symptoms. Radiation therapy Radiation therapy for cancers of the head and neck (including brachytherapy for thyroid cancers) where the salivary glands are close to or within the field irradiated is another major cause of xerostomia. A radiation dose of 52 Gy is sufficient to cause severe salivary dysfunction. Radiotherapy for oral cancers usually involves up to 70 Gy of radiation, often given along with chemotherapy, which may also have a damaging effect on saliva production. This side effect is a result of radiation damage to the parasympathetic nerves. Formation of salivary gland ducts depends on the secretion of a neuropeptide from the parasympathetic nerves, while development of the end buds of the salivary gland depends on acetylcholine from the parasympathetic nerves. Sicca syndrome "Sicca" simply means dryness. Sicca syndrome is not a specific condition, and there are varying definitions, but the term can describe oral and eye dryness that is not caused by autoimmune diseases (e.g., Sjögren syndrome). Other causes Oral dryness may also be caused by mouth breathing, usually the result of partial obstruction of the upper respiratory tract, or by dehydration; causes of such fluid loss include hemorrhage, vomiting, diarrhea, and fever. Alcohol may be involved in the cause of salivary gland disease, liver disease, or dehydration. Smoking is another possible cause. Other recreational drugs such as methamphetamine, cannabis, hallucinogens, or heroin may be implicated. Hormonal disorders such as poorly controlled diabetes, as well as chronic graft-versus-host disease or low fluid intake in people undergoing hemodialysis for renal impairment, may also result in xerostomia due to dehydration. Nerve damage can be a cause of oral dryness.
An injury to the face, or surgery in the head and neck area, can damage the nerves associated with salivary flow. Xerostomia may be a consequence of infection with hepatitis C virus (HCV), and a rare cause of salivary gland dysfunction is sarcoidosis. Infection with human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) can cause a related salivary gland disease known as diffuse infiltrative lymphocytosis syndrome (DILS). Similar to taste dysfunction, xerostomia is one of the most prevalent and persistent oral symptoms associated with COVID-19. Despite this close association, xerostomia and hyposalivation tend to be overlooked in COVID-19 patients and survivors, unlike ageusia, dysgeusia and hypogeusia. Diagnostic approach A diagnosis of hyposalivation is based predominantly on the clinical signs and symptoms. The Challacombe scale may be used to classify the extent of dryness. The rate of salivary flow in an individual's mouth can also be measured. There is little correlation between symptoms and objective tests of salivary flow, such as sialometry. This test is simple and noninvasive, and involves measuring all the saliva a patient can produce during a certain time, achieved by dribbling into a container. Sialometry can yield measures of stimulated or unstimulated salivary flow. Stimulated salivary flow rate is measured using a stimulant such as 10% citric acid dropped onto the tongue, with collection of all the saliva that flows from one of the parotid papillae over five or ten minutes. Unstimulated whole saliva flow rate correlates more closely with symptoms of xerostomia than stimulated salivary flow rate. Sialography involves introduction of a radio-opaque dye such as iodine into the duct of a salivary gland. It may show blockage of a duct due to a calculus. Salivary scintiscanning using technetium is rarely used. Other medical imaging that may be involved in the investigation includes chest x-ray (to exclude sarcoidosis), ultrasonography and magnetic resonance imaging (to exclude Sjögren's syndrome or neoplasia). A minor salivary gland biopsy, usually taken from the lip, may be carried out if there is a suspicion of organic disease of the salivary glands. Blood tests and urinalysis may be involved to exclude a number of possible causes. To investigate xerophthalmia, the Schirmer test of lacrimal flow may be indicated. Slit-lamp examination may also be carried out. Treatment The successful treatment of xerostomia is difficult to achieve and often unsatisfactory. It involves finding any correctable cause and removing it if possible, but in many cases it is not possible to correct the xerostomia itself, and treatment is symptomatic, focusing also on preventing tooth decay through improved oral hygiene. Where the symptom is caused by hyposalivation secondary to underlying chronic disease, xerostomia can be considered permanent or even progressive. The management of salivary gland dysfunction may involve the use of saliva substitutes and/or saliva stimulants: Saliva substitutes – These are viscous products, applied to the oral mucosa, which can be found in the form of sprays, gels, oils, mouthwashes, mouth rinses, pastilles or viscous liquids.
This includes water, artificial salivas (mucin-based, carboxymethylcellulose-based), and other substances (milk, vegetable oil): Mucin spray: Four trials have been completed on the effects of mucin spray on xerostomia; overall, there is no strong evidence that mucin spray is more effective than a placebo in reducing the symptoms of dry mouth. Mucin lozenge: Only one trial (Gravenmade 1993) has been completed regarding the effectiveness of mucin lozenges. Although it was assessed as being at high risk of bias, it showed that mucin lozenges were ineffective compared to a placebo. Mucoadhesive disk: These disks are stuck to the palate and contain lubricating agents, flavouring agents and some antimicrobial agents. One trial (Kerr 2010) assessed their effectiveness against a placebo disk. Notably, patients from both groups (placebo and the real disk) reported an increase in subjective oral moistness. No adverse effects were reported. More research is needed in this area before conclusions can be drawn. Biotene Oral Balance gel and toothpaste: One trial (Epstein 1999) has been completed regarding the effectiveness of Biotene Oral Balance gel and toothpaste. The results showed that the Biotene products were "more effective than control and reduced dry mouth on waking". Saliva stimulants – organic acids (ascorbic acid, malic acid), chewing gum, parasympathomimetic drugs (choline esters, e.g. pilocarpine hydrochloride, cholinesterase inhibitors), and other substances (sugar-free mints, nicotinamide). Medications which stimulate saliva production have traditionally been administered as oral tablets, which the patient swallows, although some saliva stimulants can also be found in the form of toothpastes. Lozenges, which are retained in the mouth and then swallowed, are becoming increasingly popular. Lozenges are soft and gentle on the mouth, and there is a belief that prolonged contact with the oral mucosa mechanically stimulates saliva production. Pilocarpine: A study by Taweechaisupapong in 2006 showed no statistically significant improvement in oral dryness and saliva production compared to placebo when administering pilocarpine lozenges. Physostigmine gel: A study by Knosravini in 2009 showed a reduction in oral dryness and a five-fold increase in saliva production following physostigmine treatment. Chewing gum increases saliva production, but there is no strong evidence that it improves dry mouth symptoms. The Cochrane Oral Health Group concluded that there is insufficient evidence to determine whether pilocarpine or physostigmine are effective treatments for xerostomia; more research is needed. Dentirol chewing gum (xylitol): A study by Risheim in 1993 showed that when subjects had two sticks of gum up to five times daily, the gum gave subjective dry mouth symptom relief in approximately one-third of participants but no change in stimulated whole saliva (SWS). Profylin lozenge (xylitol/sorbitol): A study by Risheim in 1993 showed that when subjects had one lozenge four to eight times daily, Profylin lozenges gave subjective dry mouth symptom relief in approximately one-third of participants but no change in SWS. Saliva substitutes can improve xerostomia, but tend not to improve the other problems associated with salivary gland dysfunction. Parasympathomimetic drugs (saliva stimulants) such as pilocarpine may improve xerostomia symptoms and other problems associated with salivary gland dysfunction, but the evidence for treatment of radiation-induced xerostomia is limited.
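To tie together the objective flow-rate cutoffs quoted in the Definition and Diagnostic approach sections (unstimulated whole saliva normally 0.3–0.4 ml per minute and significantly abnormal below 0.1 ml per minute; stimulated per-gland flow decreased below 0.5 ml in 5 minutes or 1 ml in 10 minutes, both equivalent to 0.1 ml per minute), here is a minimal sketch of how a sialometry reading might be classified. The function names and output wording are hypothetical, and this is an illustration of the article's figures, not a clinical tool.

```python
# Illustrative only: classifies sialometry readings against the cutoffs
# quoted in this article. Function and variable names are hypothetical.

def classify_unstimulated_flow(ml_per_min: float) -> str:
    """Whole unstimulated saliva flow; normal is about 0.3-0.4 ml/min."""
    if ml_per_min < 0.1:
        return "significantly abnormal (hyposalivation)"
    if ml_per_min < 0.3:
        return "below the normal range"
    return "within or above the normal range"

def classify_stimulated_flow(ml_collected: float, minutes: float) -> str:
    """Per-gland stimulated flow; <0.5 ml in 5 min or <1 ml in 10 min
    (i.e. <0.1 ml/min) is considered decreased."""
    rate = ml_collected / minutes
    return "decreased" if rate < 0.1 else "not decreased"

if __name__ == "__main__":
    print(classify_unstimulated_flow(0.08))   # significantly abnormal
    print(classify_stimulated_flow(0.4, 5))   # decreased
    print(classify_stimulated_flow(1.2, 10))  # not decreased
```

Note the design point carried over from the article: because unstimulated flow correlates more closely with symptoms than stimulated flow, the two measurements are kept as separate checks rather than combined into a single score.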
Both stimulants and substitutes relieve symptoms to some extent. Salivary stimulants are probably only useful in people with some remaining detectable salivary function. A systematic review comprising 36 randomised controlled trials of treatments for dry mouth found no strong evidence that any specific topical therapy is effective. The review also states that topical therapies can be expected to provide only short-term, reversible effects, and it reported limited evidence that oxygenated glycerol triester spray was more effective than electrolyte sprays. Sugar-free chewing gum increases saliva production, but there is no strong evidence that it improves symptoms, and it is unclear whether chewing gum is more or less effective than other treatments. There is a suggestion that intraoral devices and integrated mouthcare systems may be effective in reducing symptoms, but there was a lack of strong evidence. A systematic review of the management of radiotherapy-induced xerostomia with parasympathomimetic drugs found that there was limited evidence to support the use of pilocarpine in the treatment of radiation-induced salivary gland dysfunction. It was suggested that, barring any contraindications, a trial of the drug be offered in the above group (at a dose of five mg three times per day to minimize side effects). Improvements can take up to twelve weeks. However, pilocarpine is not always successful in improving xerostomia symptoms. The review also concluded that there was little evidence to support the use of other parasympathomimetics in this group. Another systematic review showed that there is some low-quality evidence to suggest that amifostine prevents the feeling of dry mouth, or reduces the risk of moderate to severe xerostomia, in people receiving radiotherapy to the head and neck (with or without chemotherapy) in the short term (end of radiotherapy) to medium term (three months post-radiotherapy), but it is less clear whether this effect is sustained to 12 months post-radiotherapy. A 2013 review looking at non-pharmacological interventions reported a lack of evidence to support the effects of electrostimulation devices, or acupuncture, on symptoms of dry mouth. Epidemiology Xerostomia is a very common symptom. A conservative estimate of prevalence is about 20% in the general population, with increased prevalences in females (up to 30%) and the elderly (up to 50%). Estimates of the prevalence of persistent dry mouth vary between 10 and 50%. History Xerostomia was once used as a test to detect lies, relying on emotional inhibition of salivary secretions to indicate possible incrimination. See also Xerosis (dry skin) References
Yersiniosis
Yersiniosis is an infectious disease caused by a bacterium of the genus Yersinia. In the United States, most yersiniosis infections among humans are caused by Yersinia enterocolitica. Infection with Y. enterocolitica occurs most often in young children. The infection is thought to be contracted through the consumption of undercooked meat products, unpasteurized milk, or water contaminated by the bacteria. It has also sometimes been associated with handling raw chitterlings. Another bacterium of the same genus, Yersinia pestis, is the cause of plague. Signs and symptoms Infection with Y. enterocolitica can cause a variety of symptoms depending on the age of the person infected. Common symptoms in children are fever, abdominal pain, and diarrhea, which is often bloody. Symptoms typically develop 4 to 7 days after exposure and may last 1 to 3 weeks or longer. In older children and adults, right-sided abdominal pain and fever may be the predominant symptoms and may be confused with appendicitis. In a small proportion of cases, complications such as skin rash, joint pains, ileitis, erythema nodosum, and sometimes sepsis, acute arthritis or the spread of bacteria to the bloodstream (bacteremia) can occur. Diagnosis Diagnosis is made by phage typing of a bacterial culture or by detection of antibodies to the F-antigen. Treatment Treatment for gastroenteritis due to Y. enterocolitica is not needed in the majority of cases. Severe infections with systemic involvement (sepsis or bacteremia) often require aggressive antibiotic therapy; the drugs of choice are doxycycline and an aminoglycoside. Alternatives include cefotaxime, fluoroquinolones, and co-trimoxazole. References
Zinc deficiency
Zinc deficiency is defined either as insufficient zinc to meet the needs of the body, or as a serum zinc level below the normal range. However, since a decrease in the serum concentration is only detectable after long-term or severe depletion, serum zinc is not a reliable biomarker for zinc status. Common symptoms include increased rates of diarrhea. Zinc deficiency affects the skin and gastrointestinal tract; the brain and central nervous system; and the immune, skeletal, and reproductive systems. Zinc deficiency in humans is caused by reduced dietary intake, inadequate absorption, increased loss, or increased body system use. The most common cause is reduced dietary intake. In the U.S., the Recommended Dietary Allowance (RDA) is 8 mg/day for women and 11 mg/day for men. The highest concentrations of dietary zinc are found in oysters, meat, beans, and nuts. Increasing the amount of zinc in the soil, and thus in crops and animals, is an effective preventive measure. Zinc deficiency may affect up to 2 billion people worldwide. Signs and symptoms Skin, nails and hair Zinc deficiency may manifest as acne, eczema, xerosis (dry, scaling skin), seborrheic dermatitis, or alopecia (thin and sparse hair). It may also impair or possibly prevent wound healing. Mouth Zinc deficiency can manifest as non-specific oral ulceration, stomatitis, or white tongue coating. Rarely it can cause angular cheilitis (sores at the corners of the mouth). Vision, smell and taste Severe zinc deficiency may disturb the senses of smell and taste. Night blindness may be a feature of severe zinc deficiency, although most reports of night blindness and abnormal dark adaptation in humans with zinc deficiency have occurred in combination with other nutritional deficiencies (e.g. vitamin A). Immune system Impaired immune function in people with zinc deficiency can lead to the development of respiratory, gastrointestinal, or other infections, e.g., pneumonia. The levels of inflammatory cytokines (e.g., IL-1β, IL-2, IL-6, and TNF-α) in blood plasma are affected by zinc deficiency, and zinc supplementation produces a dose-dependent response in the level of these cytokines. During inflammation there is an increased cellular demand for zinc, and impaired zinc homeostasis from zinc deficiency is associated with chronic inflammation. Diarrhea Zinc deficiency contributes to an increased incidence and severity of diarrhea. Appetite Zinc deficiency may lead to loss of appetite. The use of zinc in the treatment of anorexia has been advocated since 1979 by Bakan. At least 15 clinical trials have shown that zinc improved weight gain in anorexia. A 1994 trial showed that zinc doubled the rate of body mass increase in the treatment of anorexia nervosa. Deficiency of other nutrients such as tyrosine, tryptophan and thiamine could contribute to this phenomenon of "malnutrition-induced malnutrition". Cognitive function and hedonic tone Cognitive functions, such as learning and hedonic tone, are impaired by zinc deficiency. Moderate and more severe zinc deficiencies are associated with behavioral abnormalities such as irritability, lethargy, and depression (e.g., involving anhedonia). Zinc supplementation produces a rapid and dramatic improvement in hedonic tone (i.e., general level of happiness or pleasure) under these circumstances. Zinc supplementation has been reported to improve symptoms of ADHD and depression. Psychological disorders Low plasma zinc levels have been alleged to be associated with many psychological disorders.
Schizophrenia has been linked to decreased brain zinc levels. Evidence suggests that zinc deficiency could play a role in depression. Zinc supplementation may be an effective treatment in major depression. Growth Zinc deficiency in children can cause delayed growth and has been claimed to be the cause of stunted growth in one-third of the world's population. During pregnancy Zinc deficiency during pregnancy can negatively affect both the mother and fetus. Animal studies indicate that maternal zinc deficiency can upset both the sequencing and efficiency of the birth process. An increased incidence of difficult and prolonged labor, hemorrhage, uterine dystocia and placental abruption has been documented in zinc-deficient animals. These effects may be mediated by the defective functioning of estrogen via the estrogen receptor, which contains a zinc finger protein. A review of pregnancy outcomes in women with acrodermatitis enteropathica reported that out of every seven pregnancies there was one abortion and two malformations, suggesting that the human fetus is also susceptible to the teratogenic effects of severe zinc deficiency. However, a review of zinc supplementation trials during pregnancy did not report a significant effect of zinc supplementation on neonatal survival. Zinc deficiency can interfere with many metabolic processes when it occurs during infancy and childhood, a time of rapid growth and development when nutritional needs are high. Low maternal zinc status has been associated with less attention during the neonatal period and worse motor functioning. In some studies, supplementation has been associated with motor development in very low birth weight infants and more vigorous and functional activity in infants and toddlers. Testosterone production Zinc is required to produce testosterone. Thus, zinc deficiency can lead to reduced circulating testosterone, which could lead to sexual immaturity, hypogonadism, and delayed puberty (Ananda Prasad et al.). Causes Dietary deficiency Zinc deficiency can be caused by a diet high in phytate-containing whole grains, foods grown in zinc-deficient soil, or processed foods containing little or no zinc. Conservative estimates suggest that 25% of the world's population is at risk of zinc deficiency. In the U.S., the Recommended Dietary Allowance (RDA) is 8 mg/day for women and 11 mg/day for men. The RDA for pregnancy is 11 mg/day, and for lactation 12 mg/day. For infants up to 12 months the RDA is 3 mg/day; for children ages 1–13 years the RDA increases with age from 3 to 8 mg/day. The foods with the most zinc per serving, unfortified, are meats, beans, and nuts. Recent research findings suggest that increasing atmospheric carbon dioxide concentrations will exacerbate zinc deficiency problems in populations that consume grains and legumes as staple foods. A meta-analysis of data from 143 studies comparing the nutrient content of grasses and legumes grown in ambient and elevated CO2 environments found that the edible portions of wheat, rice, peas and soybeans grown in elevated CO2 contained less zinc and iron. The global atmospheric CO2 concentration is expected to reach 550 ppm in the late 21st century. At this CO2 level the zinc content of these crops was 3.3 to 9.3% lower than that of crops grown in the present atmosphere.
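To put the 3.3–9.3% figure in perspective before turning to its population-level impact, here is a tiny worked sketch. The baseline zinc content used (wheat at roughly 30 mg of zinc per kg) is an illustrative assumption, not a value from this article.

```python
# Worked example of the 3.3-9.3% zinc reduction reported for crops
# grown at ~550 ppm CO2. The baseline content is an assumed figure,
# used for illustration only.

BASELINE_WHEAT_ZINC_MG_PER_KG = 30.0  # assumed baseline, not from the article

for pct_reduction in (3.3, 9.3):
    reduced = BASELINE_WHEAT_ZINC_MG_PER_KG * (1 - pct_reduction / 100)
    print(f"{pct_reduction}% reduction: "
          f"{BASELINE_WHEAT_ZINC_MG_PER_KG} -> {reduced:.1f} mg zinc/kg")
```

Under these assumed numbers, each kilogram of grain would deliver roughly 1 to 3 mg less zinc, a meaningful share of an 8–11 mg/day RDA for populations relying on grain staples.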
A model of the nutritional impact of these lower zinc quantities on the populations of 151 countries predicts that an additional 175 million people could face dietary zinc deficiency as the result of increasing atmospheric CO2. Inadequate absorption Acrodermatitis enteropathica is an inherited deficiency of the zinc carrier protein ZIP4 resulting in inadequate zinc absorption. It presents as growth retardation, severe diarrhea, hair loss, skin rash (most often around the genitalia and mouth) and opportunistic candidiasis and bacterial infections. Numerous small bowel diseases which cause destruction or malfunction of the gut mucosa enterocytes and generalized malabsorption are associated with zinc deficiency. Increased loss Exercise, high alcohol intake, and diarrhea all increase loss of zinc from the body. Changes in intestinal tract absorbability and permeability, due in part to viral, protozoal, or bacterial pathogens, may also encourage fecal losses of zinc. Chronic disease The mechanism of zinc deficiency in some diseases has not been well defined; it may be multifactorial. Wilson's disease, sickle cell disease, chronic kidney disease, and chronic liver disease have all been associated with zinc deficiency. It can also occur after bariatric surgery, mercury exposure, and tartrazine ingestion. Although marginal zinc deficiency is often found in depression, low zinc levels could be either a cause or a consequence of mental disorders and their symptoms. Mechanism As biosystems are unable to store zinc, regular intake is necessary. Excessively low zinc intake can lead to zinc deficiency, which can negatively impact an individual's health. The mechanisms of the clinical manifestations of zinc deficiency are best appreciated by recognizing that zinc functions in the body in three areas: catalytic, structural, and regulatory. Zinc (Zn) is only common in its +2 oxidation state, in which it typically coordinates with tetrahedral geometry. It is important in maintaining basic cellular functions such as DNA replication, RNA transcription, cell division and cell activation. However, having too much or too little zinc can cause these functions to be compromised. Zinc is a critical component of the catalytic site of hundreds of different metalloenzymes. In its structural role, zinc coordinates with certain protein domains, facilitating protein folding and producing structures such as zinc fingers. In its regulatory role, zinc is involved in the regulation of nucleoproteins and the activity of various inflammatory cells. For example, zinc regulates the expression of metallothionein, which has multiple functions, such as intracellular zinc compartmentalization and antioxidant function. Thus zinc deficiency results in disruption of hundreds of metabolic pathways, causing numerous clinical manifestations, including impaired growth and development and disruption of reproductive and immune function. Pra1 (pH-regulated antigen 1) is a Candida albicans protein that scavenges host zinc. Diagnosis Diagnosis is typically made based on clinical suspicion and a low level of zinc in the blood; any level below 70 mcg/dl (normal range 70–120 mcg/dl) is considered zinc deficiency. Zinc deficiency may also be associated with low alkaline phosphatase, since zinc acts as a cofactor for this enzyme. There is a paucity of adequate zinc biomarkers, and the most widely used indicator, plasma zinc, has poor sensitivity and specificity.
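To make the diagnostic cutoff above concrete, here is a minimal sketch of a serum zinc check. The function name and reporting format are hypothetical; the 70 mcg/dl threshold and 70–120 mcg/dl normal range are simply the values quoted in this article, and the article's caveat about the poor sensitivity of serum zinc applies in full.

```python
# Illustrative sketch of the serum zinc cutoff quoted in this article.
# Names are hypothetical; this is not a clinical tool.

NORMAL_RANGE_MCG_DL = (70.0, 120.0)  # normal serum zinc range, mcg/dl

def interpret_serum_zinc(level_mcg_dl: float) -> str:
    """Return a rough interpretation of a serum zinc level.

    Serum zinc only falls after long-term or severe depletion, so a
    'normal' result does not rule out marginal zinc deficiency.
    """
    low, high = NORMAL_RANGE_MCG_DL
    if level_mcg_dl < low:
        return "below normal range: consistent with zinc deficiency"
    if level_mcg_dl > high:
        return "above normal range"
    return "within normal range (does not exclude marginal deficiency)"

if __name__ == "__main__":
    for level in (55.0, 85.0, 130.0):
        print(f"{level} mcg/dl -> {interpret_serum_zinc(level)}")
```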
Classification Zinc deficiency can be classified as acute, as may occur during prolonged inappropriate zinc-free total parenteral nutrition; or chronic, as may occur in dietary deficiency or inadequate absorption. Prevention Five interventional strategies can be used: Adding zinc to soil, called agronomic biofortification, which both increases crop yields and provides more dietary zinc. Adding zinc to food, called food fortification. The Republic of China, India, Mexico and about 20 other countries, mostly on the east coast of sub-Saharan Africa, fortify wheat flour and/or maize flour with zinc. Adding zinc-rich foods to the diet. The foods with the highest concentration of zinc are proteins, especially animal meats, the highest being oysters. Per ounce, beef, pork, and lamb contain more zinc than fish. The dark meat of a chicken has more zinc than the light meat. Other good sources of zinc are nuts, whole grains, legumes, and yeast. Although whole grains and cereals are high in zinc, they also contain chelating phytates which bind zinc and reduce its bioavailability. Oral repletion via tablets (e.g. zinc gluconate) or liquid (e.g. zinc acetate). Oral zinc supplementation in healthy infants more than six months old has been shown to reduce the duration of any subsequent diarrheal episodes by about 11 hours. Oral repletion via multivitamin/mineral supplements containing zinc gluconate, sulfate, or acetate. It is not clear whether one form is better than another. Epidemiology Zinc deficiency affects about 2.2 billion people around the world. Severe zinc deficiency is rare, and is mainly seen in persons with acrodermatitis enteropathica, a severe defect in zinc absorption due to a congenital deficiency in the zinc carrier protein ZIP4 in the enterocyte. Mild zinc deficiency due to reduced dietary intake is common. Conservative estimates suggest that 25% of the world's population is at risk of zinc deficiency. Zinc deficiency is thought to be a leading cause of infant mortality. Providing micronutrients, including zinc, to humans is one of the four solutions to major global problems identified in the Copenhagen Consensus from an international panel of economists. History Significant historical events related to zinc deficiency began in 1869, when zinc was first discovered to be essential to the growth of the organism Aspergillus niger. In 1929 Lutz measured zinc in numerous human tissues using the dithizone technique and estimated total body zinc in a 70 kg man to be 2.2 grams. Zinc was found to be essential to the growth of rats in 1933. In 1939 beriberi patients in China were noted to have decreased zinc levels in skin and nails. In 1940 a series of autopsies found zinc to be present in all tissues examined. In 1942 a study showed most zinc excretion was via the feces. In 1950 a normal serum zinc level was first defined, and found to be 17.3–22.1 micromoles/liter. In 1956 cirrhotic patients were found to have low serum zinc levels. In 1963 zinc was determined to be essential to human growth, three enzymes requiring zinc as a cofactor were described, and a report was published of a 21-year-old Iranian man with stunted growth, infantile genitalia, and anemia, all of which were reversed by zinc supplementation. In 1972 fifteen rejected Iranian army inductees with symptoms of zinc deficiency were reported: all responded to zinc. In 1973 the first case of acrodermatitis enteropathica due to severe zinc deficiency was described.
In 1974 the National Academy of Sciences declared zinc to be an essential element for humans and established a recommended daily allowance. In 1978 the Food and Drug Administration required zinc to be included in total parenteral nutrition fluids. In the 1990s there was increasing attention on the role of zinc deficiency in childhood morbidity and mortality in developing countries. In 2002 the zinc transporter protein ZIP4 was first identified as the mechanism for absorption of zinc in the gut across the basolateral membrane of the enterocyte. By 2014 over 300 zinc-containing enzymes had been identified, as well as over 1000 zinc-containing transcription factors. Phytate was recognized as removing zinc from nutrients given to chickens and swine in 1960. That it can cause human zinc deficiency, however, was not recognized until Reinhold's work in Iran in the 1970s. This phenomenon is central to the high risk of zinc deficiency worldwide. Soils and crops Soil zinc is an essential micronutrient for crops. Almost half of the world's cereal crops are deficient in zinc, leading to poor crop yields. Many agricultural countries around the world are affected by zinc deficiency. In China, zinc deficiency occurs on around half of the agricultural soils, affecting mainly rice and maize. Areas with zinc-deficient soils are often regions with widespread zinc deficiency in humans. A basic knowledge of the dynamics of zinc in soils, understanding of the uptake and transport of zinc in crops, and characterization of the response of crops to zinc deficiency are essential steps in achieving sustainable solutions to the problem of zinc deficiency in crops and humans. Biofortification Soil and foliar application of zinc fertilizer can effectively increase grain zinc and reduce the phytate:zinc ratio in grain. People who eat bread prepared from zinc-enriched wheat show a significant increase in serum zinc. Zinc fertilization not only increases zinc content in zinc-deficient crops, it also increases crop yields. Balanced crop nutrition supplying all essential nutrients, including zinc, is a cost-effective management strategy. Even with zinc-efficient varieties, zinc fertilizers are needed when the available zinc in the topsoil becomes depleted. Plant breeding can improve the zinc uptake capacity of plants under soil conditions with low chemical availability of zinc. Breeding can also improve zinc translocation, which elevates zinc content in edible crop parts as opposed to the rest of the plant. Central Anatolia, in Turkey, was a region with zinc-deficient soils and widespread zinc deficiency in humans. In 1993, a research project found that yields could be increased 6- to 8-fold and child nutrition dramatically improved through zinc fertilization. Zinc was added to fertilizers. While the product was initially made available at the same cost, the results were so convincing that Turkish farmers significantly increased the use of the zinc-fortified fertilizer (1 percent zinc) within a few years, despite the repricing of the products to reflect the added value of the content. Nearly ten years after the identification of the zinc deficiency problem, the total amount of zinc-containing compound fertilizers produced and applied in Turkey reached a record level of 300,000 tonnes per annum. It is estimated that the economic benefit associated with the application of zinc fertilizers on zinc-deficient soils in Turkey is around US$100 million per year. Zinc deficiency in children has been dramatically reduced.
References Further reading Maret W (2013). "Chapter 14 Zinc and the Zinc Proteome". In Banci L (ed.). Metallomics and the Cell. Metal Ions in Life Sciences. Vol. 12. Springer. pp. 479–501. doi:10.1007/978-94-007-5561-1_14. ISBN 978-94-007-5560-4. ISSN 1559-0836. PMID 23595681.
Zollinger–Ellison syndrome
Zollinger–Ellison syndrome (Z-E syndrome) is a disease in which tumors cause the stomach to produce too much acid, resulting in peptic ulcers. Symptoms include abdominal pain and diarrhea. The syndrome is caused by a gastrinoma, a neuroendocrine tumor that secretes a hormone called gastrin. Too much gastrin in the blood (hypergastrinemia) results in the overproduction of gastric acid by parietal cells in the stomach. Gastrinomas most commonly arise in the duodenum, pancreas or stomach. In 75% of cases Zollinger–Ellison syndrome occurs sporadically, while in 25% of cases it occurs as part of an autosomal dominant syndrome called multiple endocrine neoplasia type 1 (MEN 1). Signs and symptoms Patients with Zollinger–Ellison syndrome may experience abdominal pain and diarrhea. The diagnosis is also suspected in patients who have severe ulceration of the stomach and small bowel, especially if they fail to respond to treatment. Chronic diarrhea, including steatorrhea (fatty stools) Pain in the esophagus, especially between and after meals at night Nausea Wheezing Vomiting blood Malnourishment Loss of appetite Malabsorption Gastrinomas may occur as single tumors or as multiple small tumors. About one-half to two-thirds of single gastrinomas are malignant tumors that most commonly spread to the liver and to lymph nodes near the pancreas and small bowel. Nearly 25 percent of patients with gastrinomas have multiple tumors as part of multiple endocrine neoplasia type 1 (MEN 1). MEN 1 patients have tumors in their pituitary gland and parathyroid glands, in addition to tumors of the pancreas. Pathophysiology Gastrin works on the parietal cells of the gastric glands, causing them to secrete more hydrogen ions into the stomach lumen. In addition, gastrin acts as a trophic factor for parietal cells, causing parietal cell hyperplasia. Normally, hydrogen ion secretion is controlled by a negative feedback loop by gastric cells to maintain a suitable pH; however, the neuroendocrine tumor present in individuals with Zollinger–Ellison syndrome is not subject to this regulation, resulting in excessively large amounts of acid secretion. Thus, there is an increase in the number of acid-secreting cells, and each of these cells produces acid at a higher rate. The increase in acidity contributes to the development of peptic ulcers in the stomach, duodenum (first portion of the small bowel) and occasionally the jejunum (second portion of the small bowel), the last of which is an atypical ulcer. Diagnosis Zollinger–Ellison syndrome may be suspected when the above symptoms prove resistant to treatment, when the symptoms are especially suggestive of the syndrome, or when endoscopy is suggestive. The diagnosis is made through several laboratory tests and imaging studies: Secretin stimulation test, which measures evoked gastrin levels. Note that the mechanism underlying this test is in contrast to the normal physiologic mechanism, whereby secretin inhibits gastrin release from G cells. Gastrinoma cells release gastrin in response to secretin stimulation, thereby providing a sensitive means of differentiation.
Fasting gastrin levels on at least three occasions Gastric acid secretion and pH (normal basal gastric acid secretion is less than 10 mEq/hour; in Zollinger–Ellison patients it is usually more than 15 mEq/hour) An increased level of chromogranin A is a common marker of neuroendocrine tumors. In addition, the source of the increased gastrin production must be determined using MRI or somatostatin receptor scintigraphy. Treatment Proton pump inhibitors (such as omeprazole and lansoprazole) and histamine H2-receptor antagonists (such as famotidine and ranitidine) are used to slow acid secretion. Once gastric acid is suppressed, symptoms normally improve. Surgery to remove peptic ulcers or tumors might also be considered. Epidemiology The condition most commonly affects people between the ages of 30 and 60. The prevalence is unknown, but estimated to be about 1 in 100,000 people. History Sporadic reports of unusual cases of peptic ulceration in the presence of pancreatic tumors occurred prior to 1955, but R. M. Zollinger and E. H. Ellison, surgeons at Ohio State University, were the first to postulate a causal relationship between these findings. The American Surgical Association meeting in Philadelphia in April 1955 heard the first public description of the syndrome, and Zollinger and Ellison subsequently published their findings in Annals of Surgery. References
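As a closing illustration of the diagnostic figures quoted above (basal acid output normally under 10 mEq/hour but typically over 15 mEq/hour in Zollinger–Ellison patients, plus fasting gastrin measured on at least three occasions), here is a minimal sketch. The function names are hypothetical; the gastrin reference upper limit is lab-specific and is deliberately left as a parameter rather than invented here. This illustrates the article's screening logic only, not a published diagnostic criterion.

```python
# Illustrative sketch of the screening figures quoted in this article.
# Names and structure are hypothetical; this is not a diagnostic tool.

from typing import Sequence

NORMAL_BAO_MEQ_PER_HR = 10.0  # normal basal acid output, per the article
ZES_TYPICAL_BAO = 15.0        # typical Zollinger-Ellison value, per the article

def basal_acid_output_suggests_zes(bao_meq_per_hr: float) -> bool:
    """Basal acid output above ~15 mEq/hour is typical of Z-E patients."""
    return bao_meq_per_hr > ZES_TYPICAL_BAO

def fasting_gastrin_elevated(levels: Sequence[float],
                             reference_upper_limit: float) -> bool:
    """The article calls for fasting gastrin on at least three occasions;
    the reference upper limit is lab-specific, so it is a parameter here."""
    return len(levels) >= 3 and all(g > reference_upper_limit for g in levels)

if __name__ == "__main__":
    print(basal_acid_output_suggests_zes(18.0))            # True
    print(fasting_gastrin_elevated([250, 310, 280], 100))  # True (3 readings)
```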
Skeletal fluorosis
Skeletal fluorosis is a bone disease caused by excessive accumulation of fluoride, leading to weakened bones. In advanced cases, skeletal fluorosis causes painful damage to bones and joints. Symptoms Symptoms mainly involve the bone structure. Due to a high fluoride concentration in the body, the bone is hardened and thus less elastic, resulting in an increased frequency of fractures. Other symptoms include thickening of the bone structure and accumulation of bone tissue, both of which contribute to impaired joint mobility. Ligaments and cartilage can become ossified. Most patients with skeletal fluorosis show side effects from the high fluoride dose, such as ruptures of the stomach lining and nausea. Fluoride can also damage the parathyroid glands, leading to hyperparathyroidism, the uncontrolled secretion of parathyroid hormone. This hormone regulates calcium concentration in the body. An elevated parathyroid hormone concentration results in a depletion of calcium in bone structures and thus a higher calcium concentration in the blood. As a result, bone flexibility decreases, making the bone more susceptible to fractures. Causes Common causes of fluorosis include inhalation of fluoride dusts or fumes by workers in industry and consumption of fluoride from drinking water (fluoride levels in excess of those considered safe). In India, especially the Nalgonda region (Telangana), a common cause of fluorosis is fluoride-rich drinking water sourced from deep-bore wells. Over half of groundwater sources in India have fluoride above recommended levels. Fluorosis can also occur as a result of volcanic activity. The 1783 eruption of the Laki volcano in Iceland is estimated to have killed about 22% of the Icelandic population, and 60% of livestock, as a result of fluorosis and sulfur dioxide gases. The 1693 eruption of Hekla also led to fatalities of livestock under similar conditions. Skeletal fluorosis phases Treatment There are currently no established treatments for skeletal fluorosis. However, it is reversible in some cases, depending on the progression of the disease. If fluoride intake is stopped, the amount in bone will decrease and be excreted via urine; however, eliminating the fluoride from the body completely is a very slow process, and improvement is typically minimal. Treatment of the side effects is also very difficult. For example, a patient with a bone fracture cannot be treated according to standard procedures, because the bone is very brittle; in this case, recovery takes a very long time and complete healing cannot be guaranteed. Further fluorosis can, however, be prevented by drinking defluoridated water. It has recently been suggested that drinking defluoridated water produced by the "calcium amended-hydroxyapatite" defluoridation method may help reverse fluorosis: water treated by this method is calcium-enriched and alkaline, whereas fluoride-contaminated water is generally low in calcium, and drinking alkaline water is thought to help eliminate toxic fluoride from the body. Epidemiology In some areas, skeletal fluorosis is endemic. While fluorosis is most severe and widespread in the world's two most populous countries – India and China – UNICEF estimates that "fluorosis is endemic in at least 25 countries across the globe.
The total number of people affected is not known, but a conservative estimate would number in the tens of millions." In India, 20 states have been identified as endemic areas, with an estimated 60 million people at risk and 6 million people disabled; about 600,000 might develop a neurological disorder as a consequence. Effects on animals The histological changes induced by fluoride in rats resemble those seen in humans. See also Dental fluorosis Fluoride poisoning Kaj Roholm References Further reading Fluorosis from drinking very large amounts of tea: Kakumanu, Naveen; Rao, Sudhaker D. (2013). "Skeletal Fluorosis Due to Excessive Tea Drinking". New England Journal of Medicine. 368 (12): 1140. doi:10.1056/NEJMicm1200995. PMID 23514291. == External links ==
Amylase
An amylase is an enzyme that catalyses the hydrolysis of starch (Latin amylum) into sugars. Amylase is present in the saliva of humans and some other mammals, where it begins the chemical process of digestion. Foods that contain large amounts of starch but little sugar, such as rice and potatoes, may acquire a slightly sweet taste as they are chewed because amylase degrades some of their starch into sugar. The pancreas and salivary gland make amylase (alpha amylase) to hydrolyse dietary starch into disaccharides and trisaccharides which are converted by other enzymes to glucose to supply the body with energy. Plants and some bacteria also produce amylase. Specific amylase proteins are designated by different Greek letters. All amylases are glycoside hydrolases and act on α-1,4-glycosidic bonds. Classification α-Amylase The α-amylases (EC 3.2.1.1) (CAS 9014-71-5) (alternative names: 1,4-α-D-glucan glucanohydrolase; glycogenase) are calcium metalloenzymes. By acting at random locations along the starch chain, α-amylase breaks down long-chain saccharides, ultimately yielding either maltotriose and maltose from amylose, or maltose, glucose and "limit dextrin" from amylopectin. They belong to glycoside hydrolase family 13. Because it can act anywhere on the substrate, α-amylase tends to be faster-acting than β-amylase. In animals, it is a major digestive enzyme, and its optimum pH is 6.7–7.0. In human physiology, both the salivary and pancreatic amylases are α-amylases. The α-amylase form is also found in plants, fungi (ascomycetes and basidiomycetes) and bacteria (Bacillus). β-Amylase Another form of amylase, β-amylase (EC 3.2.1.2) (alternative names: 1,4-α-D-glucan maltohydrolase; glycogenase; saccharogen amylase) is also synthesized by bacteria, fungi, and plants. Working from the non-reducing end, β-amylase catalyzes the hydrolysis of the second α-1,4 glycosidic bond, cleaving off two glucose units (maltose) at a time. During the ripening of fruit, β-amylase breaks starch into maltose, resulting in the sweet flavor of ripe fruit. They belong to glycoside hydrolase family 14. Both α-amylase and β-amylase are present in seeds; β-amylase is present in an inactive form prior to germination, whereas α-amylase and proteases appear once germination has begun. Many microbes also produce amylase to degrade extracellular starches. Animal tissues do not contain β-amylase, although it may be present in microorganisms contained within the digestive tract. The optimum pH for β-amylase is 4.0–5.0. γ-Amylase γ-Amylase (EC 3.2.1.3) (alternative names: glucan 1,4-α-glucosidase; amyloglucosidase; exo-1,4-α-glucosidase; glucoamylase; lysosomal α-glucosidase; 1,4-α-D-glucan glucohydrolase) will cleave α-1,6 glycosidic linkages, as well as the last α-1,4 glycosidic bond at the nonreducing end of amylose and amylopectin, yielding glucose. γ-Amylase has the most acidic optimum pH of all amylases, being most active around pH 3. They belong to a variety of different GH families, such as glycoside hydrolase family 15 in fungi, glycoside hydrolase family 31 of human MGAM, and glycoside hydrolase family 97 of bacterial forms. Uses Fermentation α- and β-amylases are important in brewing beer and liquor made from sugars derived from starch. In fermentation, yeast ingests sugars and excretes ethanol. In beer and some liquors, the sugars present at the beginning of fermentation have been produced by "mashing" grains or other starch sources (such as potatoes).
In traditional beer brewing, malted barley is mixed with hot water to create a "mash", which is held at a given temperature to allow the amylases in the malted grain to convert the barley's starch into sugars. Different temperatures optimize the activity of alpha or beta amylase, resulting in different mixtures of fermentable and unfermentable sugars. In selecting mash temperature and grain-to-water ratio, a brewer can change the alcohol content, mouthfeel, aroma, and flavor of the finished beer. In some historic methods of producing alcoholic beverages, the conversion of starch to sugar starts with the brewer chewing grain to mix it with saliva. The practice continues in home production of some traditional drinks, such as chhaang in the Himalayas, chicha in the Andes and kasiri in Brazil and Suriname. Flour additive Amylases are used in breadmaking to break down complex sugars, such as starch (found in flour), into simple sugars. Yeast then feeds on these simple sugars and converts them into the waste products of ethanol and carbon dioxide. This imparts flavor and causes the bread to rise. While amylases are found naturally in yeast cells, it takes time for the yeast to produce enough of these enzymes to break down significant quantities of starch in the bread. This is the reason for long-fermented doughs such as sourdough. Modern breadmaking techniques have included amylases (often in the form of malted barley) in bread improver, thereby making the process faster and more practical for commercial use. α-Amylase is often listed as an ingredient in commercially packaged milled flour. Bakers with long exposure to amylase-enriched flour are at risk of developing dermatitis or asthma. Molecular biology In molecular biology, the presence of amylase can serve as a method of selecting for successful integration of a reporter construct, in addition to antibiotic resistance. As reporter genes are flanked by homologous regions of the structural gene for amylase, successful integration will disrupt the amylase gene and prevent starch degradation, which is easily detectable through iodine staining. Medical uses Amylase also has medical applications in the use of pancreatic enzyme replacement therapy (PERT). It is one of the components in Sollpura (liprotamase) to help in the breakdown of saccharides into simple sugars. Other uses An inhibitor of alpha-amylase, called phaseolamin, has been tested as a potential diet aid. When used as a food additive, amylase has E number E1100, and may be derived from pig pancreas or mold fungi. Bacillary amylase is also used in clothing and dishwasher detergents to dissolve starches from fabrics and dishes. Factory workers who work with amylase for any of the above uses are at increased risk of occupational asthma. Five to nine percent of bakers have a positive skin test, and a fourth to a third of bakers with breathing problems are hypersensitive to amylase. Hyperamylasemia Blood serum amylase may be measured for purposes of medical diagnosis. A higher than normal concentration may reflect any of several medical conditions, including acute inflammation of the pancreas (it may be measured concurrently with the more specific lipase), perforated peptic ulcer, torsion of an ovarian cyst, strangulation, ileus, mesenteric ischemia, macroamylasemia and mumps. Amylase may be measured in other body fluids, including urine and peritoneal fluid. A January 2007 study from Washington University in St.
Louis suggests that saliva tests of the enzyme could be used to indicate sleep deficits, as the enzyme increases its activity in correlation with the length of time a subject has been deprived of sleep. History In 1831, Erhard Friedrich Leuchs (1800–1837) described the hydrolysis of starch by saliva, due to the presence of an enzyme in saliva, "ptyalin", an amylase. It was named after the Ancient Greek name for saliva: πτύαλον - ptyalon. The modern history of enzymes began in 1833, when French chemists Anselme Payen and Jean-François Persoz isolated an amylase complex from germinating barley and named it "diastase". It is from this term that all subsequent enzyme names tend to end in the suffix -ase. In 1862, Alexander Jakulowitsch Danilewsky (1838–1923) separated pancreatic amylase from trypsin. Evolution Salivary amylase Saccharides are a food source rich in energy. Large polymers such as starch are partially hydrolyzed in the mouth by the enzyme amylase before being cleaved further into sugars. Many mammals have seen great expansions in the copy number of the amylase gene. These duplications allow the pancreatic amylase AMY2 to re-target to the salivary glands, allowing animals to detect starch by taste and to digest starch more efficiently and in higher quantities. This has happened independently in mice, rats, dogs, pigs, and, most importantly, humans after the agricultural revolution. Following the agricultural revolution 12,000 years ago, the human diet began to shift more toward plant and animal domestication in place of hunting and gathering. Starch became a staple of the human diet. Despite the obvious benefits, early humans did not possess salivary amylase, a trend that is also seen in evolutionary relatives of humans, such as chimpanzees and bonobos, who possess either one or no copies of the gene responsible for producing salivary amylase. As in other mammals, the pancreatic alpha-amylase AMY2 was duplicated multiple times. One event allowed it to evolve salivary specificity, leading to the production of amylase in the saliva (named in humans as AMY1). The 1p21.1 region of human chromosome 1 contains many copies of these genes, variously named AMY1A, AMY1B, AMY1C, AMY2A, AMY2B, and so on. However, not all humans possess the same number of copies of the AMY1 gene. Populations known to rely more on saccharides have a higher number of AMY1 copies than human populations that, by comparison, consume little starch. The number of AMY1 gene copies in humans can range from six copies in agricultural groups such as European-Americans and Japanese (two high-starch populations) to only two to three copies in hunter-gatherer societies such as the Biaka, Datog, and Yakuts. The correlation between starch consumption and the number of AMY1 copies in a population suggests that a higher number of AMY1 copies in high-starch populations has been selected for by natural selection and represents the favorable phenotype for those individuals. Therefore, it is most likely that possessing more copies of AMY1 in a high-starch population increases fitness and produces healthier, fitter offspring. This fact is especially apparent when comparing geographically close populations with different eating habits that possess a different number of copies of the AMY1 gene. Such is the case for some Asian populations that have been shown to possess few AMY1 copies relative to some agricultural populations in Asia.
This offers strong evidence that natural selection has acted on this gene, as opposed to the possibility that the gene has spread through genetic drift. Variation in amylase copy number in dogs mirrors that in human populations, suggesting they acquired the extra copies as they followed humans around. Unlike humans, whose amylase levels depend on the starch content of the diet, wild animals eating a broad range of foods tend to have more copies of amylase. This may relate mainly to the detection of starch rather than its digestion. == References ==
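The classification section above can be compressed into a small lookup structure. The Python sketch below restates the EC numbers, cleavage behavior, products, and pH optima given in this article; the dictionary layout, field names, and helper function are illustrative choices rather than anything from the source.

# Summary of the three amylase classes described above.
# Values are taken from the article text; the structure itself is illustrative.
AMYLASES = {
    "alpha": {
        "ec_number": "3.2.1.1",
        "action": "cleaves internal alpha-1,4 bonds at random positions",
        "main_products": ["maltotriose", "maltose", "glucose", "limit dextrin"],
        "optimum_ph": (6.7, 7.0),  # animal digestive form
    },
    "beta": {
        "ec_number": "3.2.1.2",
        "action": "cleaves the second alpha-1,4 bond from the non-reducing end",
        "main_products": ["maltose"],
        "optimum_ph": (4.0, 5.0),
    },
    "gamma": {
        "ec_number": "3.2.1.3",
        "action": "cleaves alpha-1,6 bonds and the last alpha-1,4 bond at the non-reducing end",
        "main_products": ["glucose"],
        "optimum_ph": (3.0, 3.0),  # most active around pH 3
    },
}

def most_acid_tolerant() -> str:
    """Return the class with the lowest pH optimum (gamma, per the article)."""
    return min(AMYLASES, key=lambda name: AMYLASES[name]["optimum_ph"][0])

print(most_acid_tolerant())  # gamma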
Ectopic calcification
Ectopic calcification is a pathologic deposition of calcium salts in tissues or bone growth in soft tissues. This can be a symptom of hyperphosphatemia. Formation of osseous tissue in soft tissues such as the lungs, eyes, arteries, or other organs is known as ectopic calcification, dystrophic calcification, or ectopic ossification. Causes Absorption of calcium salts normally occurs in bony tissues and is facilitated by parathyroid hormone and vitamin D. However, increased amounts of parathyroid hormone in the blood result in the deposition of calcium in soft tissues. This can be an indication of hyperparathyroidism, arteriosclerosis, or trauma to tissues. Calcification of muscle can occur after traumatic injury and is known as myositis ossificans. It can be recognized by muscle tenderness and loss of stretch in the affected area. To reduce the risk of calcification after an injury, the measures commonly known as "RICE" (rest, ice, compression, and elevation) are initiated. Diagnosis Typically, the diagnosis of extra-skeletal ectopic calcification is quite straightforward. On physical examination, a suspected area with calcified deposits palpates as hard and rough. To confirm, the calcified tissues can be seen on an x-ray. Prognosis Ectopic ossification of the heart valves is an indicator of future heart problems, hyperparathyroidism, and necrosis of tissues. See also Subepidermal calcified nodule == References ==
Bullous keratopathy
Bullous keratopathy, also known as pseudophakic bullous keratopathy (PBK), is a pathological condition in which small vesicles, or bullae, are formed in the cornea due to endothelial dysfunction. In a healthy cornea, endothelial cells keep the tissue from absorbing excess fluid, pumping it back into the aqueous humor. When damaged, for example by Fuchs' dystrophy or by trauma during cataract removal, endothelial cells die or are injured. Corneal endothelial cells normally do not undergo mitotic cell division, and cell loss results in permanent loss of function. When endothelial cell counts drop too low, the pump begins to fail and fluid moves anteriorly into the stroma and epithelium. The excess fluid precipitates swelling of the cornea. As fluid accumulates between the basal epithelial cells, blister-like formations (bullae) develop, and they undergo painful ruptures that release their fluid content to the surface. These characteristic malformations disrupt vision and cause pain. Symptoms The disease begins with vesicles that coalesce. There is severe, progressing edema, and rupture may occur in 24 hours or less. Treatment Treatment can include hyperosmotic eye drops to reduce swelling (5% sodium chloride), bandage contact lenses to reduce discomfort, glaucoma medications to reduce the flow of fluid into the cornea, and surgical procedures. The most common types of surgical treatment are Descemet's stripping automated endothelial keratoplasty (DSAEK) and Descemet's membrane endothelial keratoplasty (DMEK). Prognosis Keratopathy is common in older people. Keratopathy can occur after cataract surgery, though its incidence has decreased since the advent of intraoperative viscoelastic agents that protect the endothelium. References == External links ==
Hostility
Hostility is seen as a form of emotionally charged aggressive behavior. In everyday speech it is more commonly used as a synonym for anger and aggression. It appears in several psychological theories. For instance, it is a facet of neuroticism in the NEO PI, and forms part of personal construct psychology, developed by George Kelly. Hostility/hospitality For hunter-gatherers, every stranger from outside the small tribal group was a potential source of hostility. Similarly, in archaic Greece, every community was in a state of hostility, latent or overt, with every other community - something only gradually tempered by the rights and duties of hospitality. Tensions between the two poles of hostility and hospitality remain a potent force in the 21st-century world. Us/them Robert Sapolsky argues that the tendency to form in-groups and out-groups of Us and Them, and to direct hostility at the latter, is inherent in humans. He also explores the possibility, raised by Samuel Bowles, that intra-group hostility is reduced when greater hostility is directed at Thems, something exploited by insecure leaders when they mobilise external conflicts so as to reduce in-group hostility towards themselves. Non-verbal indicators Research on automatic mental functioning suggests that universal human indicators of hostility include the grinding or gnashing of teeth, the clenching and shaking of fists, and grimacing. Desmond Morris would add stamping and thumping. The Haka represents a ritualised set of such non-verbal signs of hostility. Kelly's model In psychological terms, George Kelly considered hostility as the attempt to extort validating evidence from the environment to confirm types of social prediction, or constructs, that have failed. Instead of reconstructing their constructs to meet disconfirmations with better predictions, the hostile person attempts to force or coerce the world to fit their view, even if this is a forlorn hope, and even if it entails emotional expenditure and/or harm to self or others. In this sense hostility is a form of psychological extortion - an attempt to force reality to produce the desired feedback, even by acting out in bullying by individuals and groups in various social contexts, in order that preconceptions become ever more widely validated. Kelly's theory of cognitive hostility thus forms a parallel to Leon Festinger's view that there is an inherent impulse to reduce cognitive dissonance. While challenging reality can be a useful part of life, and persistence in the face of failure can be a valuable trait (for instance in invention or discovery), in the case of hostility it is argued that evidence is not being accurately assessed but rather forced into a Procrustean mould in order to maintain one's belief systems and avoid having one's identity challenged. Instead it is claimed that hostility shows evidence of suppression or denial, and is "deleted" from awareness - unfavorable evidence which might suggest that a prior belief is flawed is to various degrees ignored and willfully avoided. See also Antisocial personality disorder Death drive Narcissism of small differences Righteous indignation References External links Quotations related to Hostility at Wikiquote The dictionary definition of hostility at Wiktionary
Cold sensitivity
Cold sensitivity or cold intolerance is unusual discomfort felt by some people when in a cool environment. There is much variation in the sensitivity to cold experienced by different people, with some putting on many layers of clothing while others in the same environment feel comfortable in one layer. Cold sensitivity may be a symptom of hypothyroidism, anemia, fibromyalgia or vasoconstriction. Vitamin B12 deficiency usually accompanies cold intolerance as well. Other conditions that may cause cold intolerance include low body weight, high body temperature and low blood pressure. There may also be differences between people in the expression of uncoupling proteins, thus affecting the amount of thermogenesis. Psychology may also play a factor in perceived temperature. See also Apparent temperature Thermoception Raynaud syndrome == References ==
Spontaneous coronary artery dissection
Spontaneous coronary artery dissection (SCAD) is an uncommon but dangerous condition in which one of the arteries that supply the heart spontaneously develops a blood collection, or hematoma, within the artery wall. This leads to a separation and weakening of the walls of the artery. SCAD is a major cause of heart attacks in young, otherwise healthy women without known risk factors. While the exact cause is not yet known, SCAD is likely related to changes that occur during and after pregnancy, as well as to other underlying diseases. These changes lead to the dissection of the wall, which restricts blood flow to the heart and causes symptoms. SCAD is often diagnosed in the catheterization laboratory with angiography, though more advanced confirmatory tests exist. While the risk of death due to SCAD is low, it has a relatively high rate of recurrence, leading to further heart attack-like symptoms in the future. It was first described in 1931. Signs and symptoms SCAD often presents like a heart attack in young to middle-aged, healthy women. This pattern usually includes chest pain, rapid heartbeat, shortness of breath, sweating, extreme tiredness, nausea, and dizziness. A minority of people with SCAD may also present in cardiogenic shock (2–5%), with ventricular arrhythmias (3–11%), or after sudden cardiac death. Pregnancy- and postpartum-associated SCAD generally has worse outcomes compared to other cases. Causes Risk factors include pregnancy and the postpartum period. Evidence suggests that estrogen- and progesterone-related vascular changes affect the coronary arteries during this period, contributing to SCAD. Some case reports and case series suggest associations with autoimmune inflammatory diseases, but there have not been larger studies to explore this relationship. Underlying heritable conditions such as fibromuscular dysplasia and connective-tissue disorders (e.g., Marfan syndrome, Ehlers–Danlos syndrome, and Loeys–Dietz syndrome) are associated with SCAD, but SCAD otherwise lacks a significant genetic component. SCAD triggers may include severe physical or emotional stress, but many cases have no obvious cause. Pathophysiology SCAD symptoms are the result of a restriction in the size of the affected coronary artery. The dissection leads to a collection of blood, or hematoma, between the layers of the artery wall. The hematoma does not carry oxygen to the heart muscle but instead forms a "false lumen" that restricts the flow of blood through the "true lumen" to the heart muscle. As yet, there is no consensus on why the hematoma develops in the first place. The restriction limits the availability of oxygen and nutrients to the heart muscle, or myocardium. As a result, the myocardium continues to demand oxygen but is not adequately supplied by the coronary artery. This imbalance leads to ischemia, damage, and eventually death of myocardium, causing a heart attack (myocardial infarction). Diagnosis Given the demographics of SCAD, it is important to maintain a high index of suspicion for the condition in otherwise low-risk women presenting with symptoms of acute coronary syndrome. Initial evaluation may show ECG changes of ST elevation, as in heart attacks due to other causes. SCAD comprises 2–4% of all cases of acute coronary syndrome. With typically elevated cardiac biomarkers and ECG changes, people will often undergo coronary angiography evaluation. It is important to recognize SCAD through angiography, as other confirmatory measures carry increased risks.
Angiography There are three types of SCAD, based on angiographic and anatomical criteria, with the designations based on the location and extent of the hematoma within the walls of the coronary arteries. Type 1 SCAD results from an intimal tear of the coronary artery (a tear of the innermost layer of the arterial wall) creating a false lumen as blood flows into the new space. A type 1 SCAD lesion is seen on angiography or intravascular imaging as a radiolucent flap separating the two flow channels, dividing the false lumen from the true lumen of the coronary artery. Type 2 SCAD, the most common type, seen in 60–75% of patients, occurs due to an intramural hematoma, a collection of blood in the muscular layer of the coronary artery wall, in the absence of intimal tearing. This is seen on coronary angiography as an abrupt change in coronary caliber with a long segment of a diffusely narrowed artery (typically longer than 20 mm). Type 2 SCAD is subdivided into type 2A, where the narrowed segment of the coronary artery is flanked by normal-caliber segments, and type 2B, where the stenosis continues to the terminus of a coronary artery. Type 3 SCAD, the least common type, is also due to an intramural hematoma causing coronary stenosis, but the lesions are shorter than those seen in type 2 SCAD, being less than 20 mm in length. Due to the short segment of coronary stenosis in type 3 SCAD, it is often difficult to differentiate type 3 SCAD lesions from coronary stenosis due to atherosclerotic plaques, and intracoronary imaging is often needed to distinguish between the two. Some authors have proposed a fourth designation, type 4 SCAD, in which there is a complete intraluminal occlusion of the coronary artery due to any of the previously mentioned types (types 1–3). Intracoronary imaging Intracoronary imaging (ICI), consisting of intracoronary optical coherence tomography (OCT) and intravascular ultrasound (IVUS), can help distinguish SCAD from an atherosclerotic lesion when it is difficult to do so with angiography. ICI techniques provide a direct view of the walls of the coronary artery to confirm SCAD, but may actually worsen the dissection as the probes are inserted into the damaged area. ICI confers a 3.4% risk of iatrogenic dissection in people with SCAD, compared to a 0.2% risk in the general population. Between the two ICI methods, OCT, a newer technique, has superior spatial resolution compared to IVUS and is the preferred technique if ICI is required, but the need to inject extra contrast with OCT poses a risk of worsening the dissection. Other methods Some studies propose coronary CT angiography to evaluate SCAD in lower-risk people, with research into the approach ongoing. Management Management depends upon the presenting symptoms. In most people who are hemodynamically stable without high-risk coronary involvement, conservative medical management with blood pressure control is recommended. In these people, especially if angiography demonstrates adequate coronary flow, the most likely course usually leads to spontaneous healing, often within 30 days. Anticoagulation should be discontinued upon diagnosis of SCAD on coronary angiography, as continuation of anticoagulation may lead to hematoma and dissection propagation. In cases involving high-risk coronaries, hemodynamic instability, or a lack of improvement or worsening after initial attempts at treatment, urgent treatment with coronary stents or coronary artery bypass surgery may be necessary.
Stents carry a risk of worsening the dissection, as well as an increased risk of other complications, because the vessel walls in SCAD are already weak due to the disease before the stent is introduced. Large studies into coronary artery bypass surgery are lacking, but this approach is used to redirect blood to the heart around the affected area for cases involving the left main coronary artery or when other approaches fail. Angina, or chest pain due to coronary insufficiency, may persist for months after SCAD, sometimes even when repeat angiography shows vessel healing. Anti-anginal agents such as nitrates, calcium channel blockers and ranolazine are indicated as pharmacologic treatment after SCAD. Control of hypertension is also indicated after SCAD, with beta blockers especially showing a reduction in the recurrence of SCAD. Statins are not recommended in treatment post-SCAD (in the absence of other indications for statins), as the myocardial infarctions in SCAD are not the result of atherosclerotic plaques. Cardiac rehabilitation is recommended for all patients after myocardial infarction due to SCAD and is associated with a reduction in anginal symptoms and increased psychological well-being. Dual antiplatelet therapy should be started after percutaneous coronary intervention (stents) is used to treat SCAD and continued for at least 1 year afterwards. Dual antiplatelet therapy during the acute phase and for at least 1 year after medically treated SCAD may also be used, based on expert consensus. Physical stress is associated with SCAD recurrence, but no heart rate, blood pressure or weight-lifting exercise parameters have been established for those with SCAD. In general, it is recommended that those with SCAD avoid isometric exercise, high-intensity endurance training, exercising to the point of exhaustion and activities involving a prolonged Valsalva maneuver, to reduce the risk of SCAD recurrence. After the SCAD itself has been addressed, people are often treated with typical post-heart attack care, though people who are pregnant may need altered therapy due to the possibility of some teratogenic cardiac medications affecting fetal development. Depending on the clinical situation, providers may screen for associated connective tissue diseases. Prognosis People with SCAD have a low in-hospital mortality after treatment. However, the lesion may worsen within the first month after leaving the hospital. One study suggested a 1.2% mortality rate following SCAD but a 19.9% risk for either death, heart attack, or stroke. Even afterwards, SCAD has a high recurrence risk of 30% within 10 years, often at a different site than the initial lesion - meaning that stents placed in the location of the first lesion may not protect against a second. Given the lack of consensus on the cause of SCAD, prevention of future SCAD may include medical therapy, counseling about becoming pregnant again (for those who had pregnancy-associated SCAD), or avoidance of oral contraceptives, as they contain estrogen and progesterone. Epidemiology SCAD is the most common cause of heart attacks in pregnant and postpartum women. Over 90% of people who develop SCAD are women. It is especially common among women aged 43–52. With angiography and improved recognition of the condition, diagnosis of SCAD has improved since the early 2010s.
While prior studies had reported a SCAD prevalence of less than 1% in patients presenting with acute coronary syndrome, more recent data suggest the prevalence of SCAD in acute coronary syndrome patients may be between 2% and 4%. History SCAD was first described in 1931, at postmortem examination, in a 42-year-old woman. However, due to a lack of recognition and of diagnostic technology, the SCAD literature consisted only of case reports and series until the 21st century. With the recent advent of coronary angiography and intracoronary imaging, recognition and diagnosis of SCAD have greatly increased, especially in the 2010s. See also Dissection (medical) Aortic dissection Kounis syndrome References External links "Spontaneous Coronary Artery Dissection Postpartum" "Spontaneous-Coronary-Artery-Dissection-Case-Series-and-Review"
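The angiographic typing scheme in the Angiography section above amounts to a small decision procedure, and it can be sketched as such. The criteria below (intimal flap, lesion length relative to 20 mm, total occlusion) are taken from this article; the function, its parameter names, and the simplified branching are illustrative assumptions, not a validated diagnostic algorithm.

# Illustrative sketch of the SCAD angiographic typing described above.
# Not a diagnostic tool; real classification relies on expert image review.

def scad_type(intimal_flap: bool,
              total_occlusion: bool,
              lesion_length_mm: float | None) -> str:
    """Map the angiographic features quoted in the article to a SCAD type."""
    if total_occlusion:
        # Some authors propose a fourth type for complete occlusion.
        return "Type 4 (proposed)"
    if intimal_flap:
        # Radiolucent flap separating the true and false lumens.
        return "Type 1"
    if lesion_length_mm is None:
        return "Indeterminate - intracoronary imaging may be needed"
    # Intramural hematoma without an intimal tear: long diffuse
    # narrowing (>20 mm) is type 2; shorter lesions are type 3.
    return "Type 2" if lesion_length_mm > 20 else "Type 3"

print(scad_type(intimal_flap=False, total_occlusion=False, lesion_length_mm=35))
# Type 2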
Bleeding diathesis
In medicine (hematology), bleeding diathesis is an unusual susceptibility to bleed (hemorrhage), mostly due to hypocoagulability (a condition of irregular and slow blood clotting), in turn caused by a coagulopathy (a defect in the system of coagulation). It may also result from reduced platelet production, and it leads to excessive bleeding. Several types of coagulopathy are distinguished, ranging from mild to lethal. Coagulopathy can be caused by thinning of the skin (Cushing's syndrome), such that the skin is weakened and bruises easily and frequently without any trauma or injury to the body. Impaired wound healing or impaired clot formation can also contribute to coagulopathy. Signs and symptoms Complications Some complications of coagulopathies are caused by their treatments. Causes While there are several possible causes, they generally result in excessive bleeding and a lack of clotting. Acquired Acquired causes of coagulopathy include anticoagulation with warfarin, liver failure, vitamin K deficiency and disseminated intravascular coagulation. Additionally, the hemotoxic venom from certain species of snakes can cause this condition, for example Bothrops, rattlesnakes and other species of viper. Viral hemorrhagic fevers include dengue hemorrhagic fever and dengue shock syndrome. Leukemia may also cause coagulopathy. Furthermore, cystic fibrosis has been known to cause bleeding diathesis, especially in undiagnosed infants, due to malabsorption of fat-soluble vitamins like vitamin K. Autoimmune causes of acquired coagulation disorders There are autoimmune causes of coagulation disorders. They include acquired antibodies to coagulation factors, termed inhibitors of coagulation. The main inhibitor is directed against clotting factor VIII. Another example is antiphospholipid syndrome, an autoimmune, hypercoagulable state. Causes other than coagulation Bleeding diathesis may also be caused by impaired wound healing (as in scurvy), or by thinning of the skin, such as in Cushing's syndrome. Genetic Some people lack genes that typically produce the protein coagulation factors that allow normal clotting. Various types of hemophilia and von Willebrand disease are the major genetic disorders associated with coagulopathy. Rare examples are Bernard–Soulier syndrome, Wiskott–Aldrich syndrome and Glanzmann's thrombasthenia. Gene therapy treatments may be a solution, as they involve the insertion of normal genes to replace the defective genes causing the disorder; gene therapy is an area of active research that holds promise for the future. Diagnosis Comparing coagulation tests Prothrombin time (PT) and partial thromboplastin time (PTT) blood tests are useful to investigate the reason behind excessive bleeding. The PT evaluates coagulation factors I, II, V, VII and X, while the PTT evaluates coagulation factors I, II, V, VIII, IX, X, XI and XII. The analysis of both tests thus helps to diagnose certain disorders. Treatments People with a bleeding diathesis should consult a hematologist and have regular blood checks, and early diagnostic testing is advised for blood disorders and diseases including hemophilia, hemorrhage and sickle-cell anemia. Blood transfusion involves the transfer of plasma containing all the necessary coagulation factors (fibrinogen, prothrombin, thromboplastin) to help restore them and to improve the immune defense of the patient after excessive blood loss.
Blood transfusion also transfers platelets, which can work along with coagulation factors for blood clotting to commence. Different drugs can be prescribed depending on the type of disease. Vitamins K, P and C are essential in cases of weakened blood vessel walls. Also, vitamin K is required for the production of blood clotting factors; hence an injection of vitamin K (phytomenadione) is recommended to boost blood clotting. References == External links ==
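The factor lists quoted above lend themselves to simple set logic: a single-factor deficiency prolongs a test only if that test evaluates the factor, so the pattern of results narrows down the candidates. The Python sketch below encodes exactly the factor sets given in this article; the function and the comments are illustrative simplifications (real workups also consider mixing studies, inhibitors and platelet disorders), not clinical guidance.

# Factor coverage of the two screening tests, as quoted above.
PT_FACTORS = {"I", "II", "V", "VII", "X"}
PTT_FACTORS = {"I", "II", "V", "VIII", "IX", "X", "XI", "XII"}

def candidate_factors(pt_prolonged: bool, ptt_prolonged: bool) -> set[str]:
    """Return which single-factor deficiencies fit a PT/PTT result pattern."""
    if pt_prolonged and ptt_prolonged:
        return PT_FACTORS & PTT_FACTORS   # common factors: I, II, V, X
    if pt_prolonged:
        return PT_FACTORS - PTT_FACTORS   # factor VII only
    if ptt_prolonged:
        return PTT_FACTORS - PT_FACTORS   # VIII, IX, XI, XII
    return set()                          # both tests normal

print(sorted(candidate_factors(pt_prolonged=False, ptt_prolonged=True)))
# ['IX', 'VIII', 'XI', 'XII']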
Breast development
Breast development, also known as mammogenesis, is a complex biological process in primates that takes place throughout a female's life. It occurs across several phases, including prenatal development, puberty, and pregnancy. At menopause, breast development ceases and the breasts atrophy. Breast development results in prominent and developed structures on the chest known as breasts in primates, which serve primarily as mammary glands. The process is mediated by an assortment of hormones (and growth factors), the most important of which include estrogen, progesterone, prolactin, and growth hormone. Biochemistry Hormones The master regulators of breast development are the steroid hormones estrogen and progesterone, growth hormone (GH), mostly via its secretory product insulin-like growth factor 1 (IGF-1), and prolactin. These regulators induce the expression of growth factors, such as amphiregulin, epidermal growth factor (EGF), IGF-1, and fibroblast growth factor (FGF), which in turn have specific roles in breast growth and maturation. At puberty, gonadotropin-releasing hormone (GnRH) is secreted in a pulsatile manner from the hypothalamus. GnRH induces the secretion of the gonadotropins, follicle-stimulating hormone (FSH) and luteinizing hormone (LH), from the pituitary gland. The secreted gonadotropins travel through the bloodstream to the ovaries and trigger the secretion of estrogen and progesterone in fluctuating amounts during each menstrual cycle. Growth hormone (GH), which is secreted from the pituitary gland, and insulin-like growth factor 1 (IGF-1), which is produced in the body in response to GH, are growth-mediating hormones. During prenatal development, infancy, and childhood, GH and IGF-1 levels are low, but progressively increase and reach a peak at puberty, with a 1.5- to 3-fold increase in pulsatile GH secretion and a 3-fold or greater increase in serum IGF-1 levels being capable of occurring at this time. In late adolescence and early adulthood, GH and IGF-1 levels significantly decrease, and continue to decrease throughout the rest of life. It has been found that both estrogen and GH are essential for breast development at puberty – in the absence of either, no development will take place. Moreover, most of the role of GH in breast development has been found to be mediated by its induction of IGF-1 production and secretion, as IGF-1 administration rescues breast development in the absence of GH. GH induction of IGF-1 production and secretion occurs in almost all types of tissue in the body, but especially in the liver, which is the source of approximately 80% of circulating IGF-1, as well as locally in the breasts. Although IGF-1 is responsible for most of the role of GH in mediating breast development, GH itself has been found to play a direct, augmenting role as well, as it increases estrogen receptor (ER) expression in breast stromal (connective) tissue, while IGF-1, in contrast, has been found not to do this. In addition to estrogen and GH/IGF-1 both being essential for pubertal breast development, they are synergistic in bringing it about. Despite the apparent necessity of GH/IGF-1 signaling in pubertal breast development, however, in women with Laron syndrome, in whom the growth hormone receptor (GHR) is defective and insensitive to GH and serum IGF-1 levels are very low, puberty, including breast development, is delayed, although full sexual maturity is always eventually reached.
Moreover, breast development and size are normal (albeit delayed) in spite of GH/IGF-1 axis insufficiency, and in some the breasts may actually be large in relation to body size. The relatively large breasts in women with Laron syndrome have been suggested to be due to increased secretion of prolactin (which is known to produce breast enlargement) caused by a drift phenomenon from somatomammotrophic cells in the pituitary gland with a high GH secretion. An animal model of Laron syndrome, the GHR knockout mouse, shows severely impaired ductal outgrowth at 11 weeks of age. However, by 15 weeks, ductal development has caught up with that of normal mice and the ducts have fully distributed throughout the mammary fat pad, although the ducts remain narrower than those of wild-type mice. In any case, female GHR knockout mice can lactate normally. As such, it has been said that the phenotypes of women with Laron syndrome and GHR knockout mice are identical, with diminished body size and delayed sexual maturation accompanied by normal lactation. These data indicate that very low circulating levels of IGF-1 can nonetheless allow for full pubertal breast development. Development of the breasts during the prenatal stage of life is independent of biological sex and sex hormones. During embryonic development, the breast buds, in which networks of tubules are formed, are generated from the ectoderm. These rudimentary tubules will eventually become the matured lactiferous (milk) ducts, which connect the lobules (milk "containers") of the breast, grape-like clusters of alveoli, to the nipples. Until puberty, the tubule networks of the breast buds remain rudimentary and quiescent, and the male and female breast do not show any differences. During puberty in females, estrogen, in conjunction with GH/IGF-1, through activation of ERα specifically (and notably not ERβ or GPER), causes growth and transformation of the tubules into the matured ductal system of the breasts. Under the influence of estrogen, the ducts sprout and elongate, and terminal end buds (TEBs), bulbous structures at the tips of the ducts, penetrate into the fat pad and branch as the ducts elongate. This continues until a tree-like network of branched ducts that is embedded into and fills the entire fat pad of the breast is formed. In addition to its role in mediating ductal development, estrogen causes stromal tissue to grow and adipose (fat) tissue to accumulate, as well as the nipple-areolar complex to increase in size. Progesterone, in conjunction with GH/IGF-1 and similarly to estrogen, affects the development of the breasts during puberty and thereafter as well. To a lesser extent than estrogen, progesterone contributes to ductal development at this time, as evidenced by the findings that progesterone receptor (PR) knockout mice or mice treated with the PR antagonist mifepristone show delayed (albeit eventually normal, due to estrogen acting on its own) ductal growth during puberty, and by the fact that progesterone has been found to induce ductal growth on its own in the mouse mammary gland, mainly via the induction of the expression of amphiregulin, the same growth factor that estrogen primarily induces to mediate its actions on ductal development. In addition, progesterone produces modest lobuloalveolar development (alveolar bud formation or ductal sidebranching) starting at puberty, specifically through activation of PRB (and notably not PRA), with growth and regression of the alveoli occurring to some degree with each menstrual cycle.
However, only rudimentary alveoli develop in response to pre-pregnancy levels of progesterone and estrogen, and lobuloalveolar development will remain at this stage until pregnancy occurs, if it does. In addition to GH/IGF-1, estrogen is required for progesterone to affect the breasts, as estrogen primes the breasts by inducing the expression of the progesterone receptor (PR) in breast epithelial tissue. In contrast to the case of the PR, ER expression in the breast is stable and differs relatively little in the contexts of reproductive status, stage of the menstrual cycle, or exogenous hormonal therapy. During pregnancy, pronounced breast growth and maturation occur in preparation for lactation and breastfeeding. Estrogen and progesterone levels increase dramatically, reaching levels by late pregnancy that are several hundred-fold higher than usual menstrual cycle levels. Estrogen and progesterone cause the secretion of high levels of prolactin from the anterior pituitary, which reach levels as high as 20 times greater than normal menstrual cycle levels. IGF-1 and IGF-2 levels also increase dramatically during pregnancy, due to secretion of placental growth hormone (PGH). Further ductal development, by estrogen, again in conjunction with GH/IGF-1, occurs during pregnancy. In addition, the concert of estrogen, progesterone (again specifically through PRB), prolactin, and other lactogens such as human placental lactogen (hPL) and PGH, in conjunction with GH/IGF-1, as well as insulin-like growth factor 2 (IGF-2), acting together, mediates the completion of lobuloalveolar development of the breasts during pregnancy. Both PR and prolactin receptor (PRLR) knockout mice fail to show lobuloalveolar development, and progesterone and prolactin have been found to be synergistic in mediating growth of alveoli, demonstrating the essential role of both of these hormones in this aspect of breast development. Growth hormone receptor (GHR) knockout mice also show greatly impaired lobuloalveolar development. In addition to their role in lobuloalveolar growth, prolactin and hPL act to increase the size of the nipple-areolar complex during pregnancy. By the end of the fourth month of pregnancy, at which time lobuloalveolar maturation is complete, the breasts are fully prepared for lactation and breastfeeding. Insulin, glucocorticoids such as cortisol (and by extension adrenocorticotropic hormone (ACTH)), and thyroid hormones such as thyroxine (and by extension thyroid-stimulating hormone (TSH) and thyrotropin-releasing hormone (TRH)) also play permissive but less well-characterized roles in breast development during both puberty and pregnancy, and are required for full functional development. Leptin has also been found to be an important factor in mammary gland development, promoting mammary epithelial cell proliferation. In contrast to the female-associated sex hormones, estrogen and progesterone, the male-associated sex hormones, the androgens, such as testosterone and dihydrotestosterone (DHT), powerfully suppress the action of estrogen in the breasts. At least one way that they do this is by reducing the expression of the estrogen receptor in breast tissue. In the absence of androgenic activity, such as in women with complete androgen insensitivity syndrome (CAIS), modest levels of estrogen (50 pg/mL) are capable of mediating significant breast development, with CAIS women showing breast volumes that are even above average.
The combination of much higher levels of androgens (about 10-fold higher) and much lower levels of estrogen (about 10-fold less) – due to the ovaries in females producing high amounts of estrogens but low amounts of androgens, and the testes in males producing high amounts of androgens but low amounts of estrogens – is why males generally do not grow prominent or well-developed breasts relative to females. Calcitriol, the hormonally active form of vitamin D, acting through the vitamin D receptor (VDR), has, like the androgens, been reported to be a negative regulator of mammary gland development, for instance in mice during puberty. VDR knockout mice show more extensive ductal development relative to wild-type mice, as well as precocious mammary gland development. VDR knockout has also been shown to result in increased responsiveness of mouse mammary gland tissue to estrogen and progesterone, which was represented by increased cell growth in response to these hormones. Conversely, however, VDR knockout mice have been found to show reduced ductal differentiation, represented by an increased number of undifferentiated TEBs, and this finding has been interpreted as indicating that vitamin D may be essential for lobuloalveolar development. As such, calcitriol, via the VDR, may be a negative regulator of ductal development but a positive regulator of lobuloalveolar development in the mammary gland. A possible mechanism of the negative regulatory effects of the VDR on breast development may be indicated by a study of vitamin D3 supplementation in women, which found that vitamin D3 suppresses cyclooxygenase-2 (COX-2) expression in the breast and, by doing so, reduces levels of prostaglandin E2 (PGE2) and increases levels of transforming growth factor β2 (TGF-β2), a known inhibitory factor in breast development. Moreover, suppression of PGE2 in breast tissue is relevant because, via activation of prostaglandin EP receptors, PGE2 potently induces amphiregulin expression in breast tissue, and activation of the EGFR by amphiregulin increases COX-2 expression in breast tissue, in turn resulting in more PGE2. Thus, a self-perpetuating, synergistic cycle of growth amplification driven by COX-2 appears to be potentially present in normal breast tissue. Accordingly, overexpression of COX-2 in mammary gland tissue produces mammary gland hyperplasia as well as precocious mammary gland development in female mice, mirroring the phenotype of VDR knockout mice, and demonstrating a strong stimulatory effect of COX-2, which is downregulated by VDR activation, on the growth of the mammary glands. Also in accordance, COX-2 activity in the breasts has been found to be positively associated with breast volume in women. Growth factors Estrogen, progesterone, and prolactin, as well as GH/IGF-1, produce their effects on breast development by modulating the local expression in breast tissue of an assortment of autocrine and paracrine growth factors, including IGF-1, IGF-2, amphiregulin, EGF, FGF, hepatocyte growth factor (HGF), tumor necrosis factor α (TNF-α), tumor necrosis factor β (TNF-β), transforming growth factor α (TGF-α), transforming growth factor β (TGF-β), heregulin, Wnt, RANKL, and leukemia inhibitory factor (LIF).
These factors regulate cellular growth, proliferation, and differentiation via activation of intracellular signaling cascades that control cell function, such as Erk, Akt, JNK, and Jak/Stat. Based on research with epidermal growth factor receptor (EGFR) knockout mice, the EGFR, which is the molecular target of EGF, TGF-α, amphiregulin, and heregulin, has, similarly to the insulin-like growth factor-1 receptor (IGF-1R), been found to be essential for mammary gland development. Estrogen and progesterone mediate ductal development mainly through induction of amphiregulin expression, and thus downstream EGFR activation. Accordingly, ERα, amphiregulin, and EGFR knockout mice phenocopy one another with respect to their effects on ductal development. Also in accordance, treatment of mice with amphiregulin or other EGFR ligands like TGF-α or heregulin induces ductal and lobuloalveolar development in the mouse mammary gland, actions that occur even in the absence of estrogen and progesterone. As both the IGF-1R and the EGFR are independently essential for mammary gland development, and as combined application of IGF-1 and EGF, through their respective receptors, has been found to synergistically stimulate the growth of human breast epithelial cells, these growth factor systems appear to work together in mediating breast development. Elevated levels of HGF and, to a lesser extent, IGF-1 (by 5.4-fold and 1.8-fold, respectively) in breast stromal tissue have been found in macromastia, a very rare condition of extremely and excessively large breast size. Exposure of macromastic breast stromal tissue to non-macromastic breast epithelial tissue was found to cause increased alveolar morphogenesis and epithelial proliferation in the latter. A neutralizing antibody for HGF, but not for IGF-1 or EGF, was found to attenuate the proliferation of breast epithelial tissue caused by exposure to macromastic breast stromal cells, potentially directly implicating HGF in the breast growth and enlargement seen in macromastia. Also, a genome-wide association study has highly implicated HGF and its receptor, c-Met, in breast cancer aggressiveness. Lactation Upon parturition (childbirth), estrogen and progesterone rapidly drop to very low levels, with progesterone levels being undetectable. Conversely, prolactin levels remain elevated. As estrogen and progesterone block prolactin-induced lactogenesis by suppressing prolactin receptor (PRLR) expression in breast tissue, their sudden absence results in the commencement of milk production and lactation by prolactin. Expression of the PRLR in breast tissue may increase by as much as 20-fold when estrogen and progesterone levels drop upon childbirth. With suckling from the infant, prolactin and oxytocin are secreted and mediate milk production and letdown, respectively. Prolactin suppresses the secretion of LH and FSH, which in turn results in continued low levels of estrogen and progesterone, and temporary amenorrhea (absence of menstrual cycles) occurs. In the absence of regular, episodic suckling, which keeps prolactin concentrations high, levels of prolactin will quickly drop, the menstrual cycle will resume and hence normal estrogen and progesterone levels will return, and lactation will cease (that is, until next parturition, or until induced lactation (i.e., with a galactogogue), occurs). Breast size and cancer risk Some factors of breast morphology, including density, are clearly implicated in breast cancer.
While breast size is moderately heritable, the relationship between breast size and cancer is uncertain, and for a long time the genetic variants influencing breast size had not been identified. Through genome-wide association studies, a variety of genetic polymorphisms have since been linked to breast size. Some of these include rs7816345 near ZNF703 (zinc finger protein 703); rs4849887 and rs17625845 flanking INHBB (inhibin βB); rs12173570 near ESR1 (ERα); rs7089814 in ZNF365 (zinc finger protein 365); rs12371778 near PTHLH (parathyroid hormone-like hormone); rs62314947 near AREG (amphiregulin); as well as rs10086016 at 8p11.23 (which is in complete linkage disequilibrium with rs7816345) and rs5995871 at 22q13 (containing the MKL1 gene, which has been found to modulate the transcriptional activity of ERα). Many of these polymorphisms are also associated with the risk of developing breast cancer, revealing a potential positive association between breast size and breast cancer risk. Conversely, however, some polymorphisms show a negative association between breast size and breast cancer risk. In any case, a meta-analysis concluded that breast size and risk of breast cancer are indeed importantly related. Circulating IGF-1 levels are positively associated with breast volume in women. In addition, the absence of the common 19-repeat allele in the IGF1 gene is positively associated with breast volume in women, as well as with high IGF-1 levels during oral contraceptive use and with a lessening of the normal age-associated decline in circulating IGF-1 concentrations in women. There is great variation in the prevalence of the IGF1 19-repeat allele between ethnic groups, and its absence has been reported to be highest among African-American women. Genetic variations in the androgen receptor (AR) have been linked to breast volume (as well as body mass index) and to breast cancer aggressiveness. COX-2 expression has been positively associated with breast volume and inflammation in breast tissue, as well as with breast cancer risk and prognosis. Rare mutations Women with CAIS, who are completely insensitive to the AR-mediated actions of androgens, have, as a group, above-average-sized breasts. This is true despite the fact that they simultaneously have relatively low levels of estrogen, which demonstrates the powerful suppressant effect of androgens on estrogen-mediated breast development. Aromatase excess syndrome, an extremely rare condition characterized by marked hyperestrogenism, is associated with precocious breast development and macromastia in females and similarly precocious gynecomastia (women's breasts) in males. In complete androgen insensitivity syndrome, a condition in which the AR is defective and insensitive to androgens, there is full breast development with breast volumes that are in fact above average, in spite of relatively low levels of estrogen (50 pg/mL estradiol). In aromatase deficiency, a form of hypoestrogenism in which aromatase is defective and cannot synthesize estrogen, and in complete estrogen insensitivity syndrome, a condition in which ERα is defective and insensitive to estrogen, breast development is completely absent. See also Breast augmentation Breast enlargement Mammoplasia Premenstrual water retention Thelarche References Further reading Hovey, Russell C.; Aimo, Lucila (2010). "Diverse and Active Roles for Adipocytes During Mammary Gland Growth and Function". Journal of Mammary Gland Biology and Neoplasia. 15 (3): 279–290. doi:10.1007/s10911-010-9187-8. ISSN 1083-3021. PMC 2941079.
PMID 20717712.
Sun, Susie X.; Bostanci, Zeynep; Kass, Rena B.; Mancino, Anne T.; Rosenbloom, Arlan L.; Klimberg, V. Suzanne; Bland, Kirby I. (2018). "Breast Physiology". The Breast. pp. 37–56.e6. doi:10.1016/B978-0-323-35955-9.00003-9. ISBN 9780323359559.
Asterixis
Asterixis, more colloquially referred to as flapping tremor, is a tremor of the hand when the wrist is extended, sometimes said to resemble a bird flapping its wings. This motor disorder is characterized by an inability to maintain a position, which is demonstrated by jerking movements of the outstretched hands when bent upward at the wrist. The tremor is caused by abnormal function of the diencephalic motor centers in the brain, which regulate the muscles involved in maintaining position. Asterixis is associated with various encephalopathies, due especially to faulty metabolism. The term derives from the Greek a, "not", and stērixis, "fixed position".

Asterixis can be elicited on physical exam by having the patient extend their arms and bend their hands back. With a metabolic encephalopathy, the patient is unable to hold their hands back, resulting in the "flapping" motion characteristic of asterixis. It can be seen in any metabolic encephalopathy, e.g. chronic kidney failure, severe congestive heart failure, and acute respiratory failure, and commonly in decompensated liver failure.

Associated conditions and presentation
Usually there are brief, arrhythmic interruptions of sustained voluntary muscle contraction causing brief lapses of posture, with a frequency of 3–5 Hz. It is bilateral, but may be asymmetric. Unilateral asterixis may occur with structural brain disease. It can be a sign of hepatic encephalopathy, damage to brain cells presumably due to the inability of the liver to metabolize ammonia to urea. The cause is thought to be predominantly related to abnormal ammonia metabolism. Asterixis is seen most often in drowsy or stuporous patients with metabolic encephalopathies, especially in decompensated cirrhosis or acute liver failure. It is also seen in some patients with kidney failure and azotemia. It can also be a feature of Wilson's disease. Asterixis is also seen in respiratory failure due to carbon dioxide toxicity (hypercapnia). Some drugs are known to cause asterixis, particularly phenytoin (when it is known as phenytoin flap). Other drugs implicated include benzodiazepines, salicylates, barbiturates, valproate, gabapentin, lithium, ceftazidime, and metoclopramide.

History
R.D. Adams and J.M. Foley first described asterixis in 1949 in patients with severe liver failure and encephalopathy. Initially Foley and Adams referred to asterixis simply as "tremor", but realized that they needed a more appropriate term. On a literature search they found a poorly described phenomenon in similar patients, mentioned by German physicians and called "jactitations", but the reference was vague. Foley consulted Father Cadigan, a Jesuit classics scholar, who suggested "anisosterixis" (an- "negative", iso- "equal", sterixis "firmness"), but Foley shortened this to asterixis because the former was too difficult to pronounce. They introduced the term in 1953 by way of a medical abstract, and Adams later solidified its medical use as an author and editor of the widely influential Harrison's Principles of Internal Medicine.

References

External links
Diagram
Tetraplegia
Tetraplegia, also known as quadriplegia, is paralysis caused by illness or injury that results in the partial or total loss of use of all four limbs and torso; paraplegia is similar but does not affect the arms. The loss is usually sensory and motor, which means that both sensation and control are lost. The paralysis may be flaccid or spastic.

Signs and symptoms
Although the most obvious symptom is impairment of the limbs, functioning is also impaired in the torso. This can mean a loss or impairment in controlling bowel and bladder, sexual function, digestion, breathing and other autonomic functions. Furthermore, sensation is usually impaired in affected areas. This can manifest as numbness, reduced sensation or burning neuropathic pain. Secondarily, because of their depressed functioning and immobility, people with tetraplegia are often more vulnerable to pressure sores, osteoporosis and fractures, frozen joints, spasticity, respiratory complications and infections, autonomic dysreflexia, deep vein thrombosis, and cardiovascular disease.

The severity of the condition depends on both the level at which the spinal cord is injured and the extent of the injury. An individual with an injury at C1 (the highest cervical vertebra, at the base of the skull) will probably lose function from the neck down and be ventilator-dependent. An individual with a C7 injury may lose function from the chest down but still retain use of the arms and much of the hands. The extent of the injury is also important. A complete severing of the spinal cord will result in complete loss of function from that vertebra down. A partial severing or even bruising of the spinal cord results in varying degrees of mixed function and paralysis. A common misconception is that a person with tetraplegia cannot move the legs or arms or carry out any major functions; this is often not the case. Some individuals with tetraplegia can walk and use their hands as though they did not have a spinal cord injury, while others may use wheelchairs while still retaining some function of the arms and mild finger movement; again, this varies with the spinal cord damage.

It is common to have movement in limbs, such as the ability to move the arms but not the hands, or to be able to use the fingers but not to the same extent as before the injury. Furthermore, the deficit in the limbs may not be the same on both sides of the body; either the left or right side may be more affected, depending on the location of the lesion on the spinal cord.

Causes
Tetraplegia is caused by damage to the brain or the spinal cord at a high level. The injury, which is known as a lesion, causes the loss of partial or total function of all four limbs, meaning the arms and the legs. Typical causes of this damage are trauma (such as a traffic collision, diving into shallow water, a fall, or a sports injury), disease (such as transverse myelitis, Guillain–Barré syndrome, multiple sclerosis, or polio), or congenital disorders (such as muscular dystrophy).

Tetraplegia is defined in many ways; a C1–C4 injury usually affects arm movement more than a C5–C7 injury does; however, all tetraplegics have or have had some kind of finger dysfunction. So, it is not uncommon to have a tetraplegic with fully functional arms but no nervous control of the fingers and thumbs. It is possible to have a broken neck without becoming tetraplegic, if the vertebrae are fractured or dislocated but the spinal cord is not damaged.
Conversely, it is possible to injure the spinal cord without breaking the spine, for example when a ruptured disc or bone spur on the vertebra protrudes into the spinal column.

Diagnosis
Classification
Spinal cord injuries are classified as complete or incomplete by the American Spinal Injury Association (ASIA) classification. The ASIA scale grades patients based on their functional impairment as a result of the injury, grading a patient from A to D. This has considerable consequences for surgical planning and therapy.

Complete spinal-cord lesions
Pathophysiologically, the spinal cord of a person with tetraplegia can be divided into three segments, which can be useful for classifying the injury. First, there is an injured functional medullary segment. This segment has unparalysed, functional muscles; the action of these muscles is voluntary and permanent, and hand strength can be evaluated by the Medical Research Council (MRC) scale. This scale is used when upper limb surgery is planned, as referred to in the International Classification for hand surgery in tetraplegic patients.

A lesional segment (or an injured metamere) consists of denervated corresponding muscles. The lower motor neuron (LMN) of these muscles is damaged. These muscles are hypotonic, atrophic and have no spontaneous contraction. The existence of joint contractures should be monitored.

Below the level of the injured metamere lies an injured sublesional segment, with the lower motor neuron intact, which means that medullary reflexes are present but upper cortical control is lost. These muscles show some increase in tone when elongated, and sometimes spasticity; their trophicity is good.

Incomplete spinal-cord lesions
Incomplete spinal cord injuries result in varied post-injury presentations. There are three main syndromes described, depending on the exact site and extent of the lesion.
Central cord syndrome: most of the cord lesion is in the gray matter of the spinal cord; sometimes the lesion continues into the white matter.
Brown-Séquard syndrome: hemisection of the spinal cord.
Anterior cord syndrome: a lesion of the anterior horns and the anterolateral tracts, with a possible division of the anterior spinal artery.

For most patients with ASIA A (complete) tetraplegia, ASIA B (incomplete) tetraplegia and ASIA C (incomplete) tetraplegia, the International Classification level of the patient can be established without great difficulty, and the surgical procedures appropriate to that level can be performed. In contrast, for patients with ASIA D (incomplete) tetraplegia it is difficult to assign an International Classification other than level X (others), and it is therefore more difficult to decide which surgical procedures should be performed. A far more personalized approach is needed for these patients, and decisions must be based more on experience than on texts or journals.

The results of tendon transfers for patients with complete injuries are predictable. On the other hand, it is well known that muscles lacking normal excitation perform unreliably after surgical tendon transfers. Despite this unpredictability in incomplete lesions, tendon transfers may still be useful. The surgeon should be confident that the muscle to be transferred has enough power and is under good voluntary control.
Pre-operative muscle function is more difficult to assess in incomplete lesions. Patients with an incomplete lesion also often need therapy or surgery before the procedure, to restore function and correct the consequences of the injury. These consequences are hypertonicity/spasticity, contractures, painful hyperesthesias, and paralyzed proximal upper limb muscles with distal muscle sparing.

Spasticity is a frequent consequence of incomplete injuries. Spasticity often decreases function, but sometimes a patient can control the spasticity in a way that is useful to their function. The location and the effect of the spasticity should be analyzed carefully before treatment is planned. An injection of botulinum toxin (Botox) into spastic muscles is a treatment to reduce spasticity, and can be used to prevent muscle shortening and early contractures. Over the last ten years, an increase in traumatic incomplete lesions has been seen, due to better protection in traffic.

Treatment
Upper limb paralysis refers to the loss of function of the elbow and hand. When upper limb function is absent as a result of a spinal cord injury, it is a major barrier to regaining autonomy. People with tetraplegia should be examined and informed concerning the options for reconstructive surgery of the tetraplegic arms and hands.

Prognosis
Delayed diagnosis of cervical spine injury has grave consequences for the victim. About one in 20 cervical fractures is missed, and about two-thirds of these patients have further spinal-cord damage as a result. About 30% of cases of delayed diagnosis of cervical spine injury develop permanent neurological deficits. In high-level cervical injuries, total paralysis from the neck down can result. High-level tetraplegics (C4 and higher) will likely need constant care and assistance in activities of daily living, such as getting dressed, eating, and bowel and bladder care. Low-level tetraplegics (C5 to C7) can often live independently.

Even with "complete" injuries, in some rare cases, through intensive rehabilitation, slight movement can be regained through "rewiring" neural connections, as in the case of actor Christopher Reeve. In the case of cerebral palsy, which is caused by damage to the motor cortex either before, during (10%), or after birth, some people with tetraplegia are gradually able to learn to stand or walk through physical therapy. Quadriplegics can improve muscle strength by performing resistance training at least three times per week. Combining resistance training with proper nutrition intake can greatly reduce co-morbidities such as obesity and type 2 diabetes.

Epidemiology
There are an estimated 17,700 spinal cord injuries each year in the United States; the total number of people affected by spinal cord injuries is estimated to be approximately 290,000 people. In the US, spinal cord injuries alone cost approximately US$40.5 billion each year, a 317 percent increase from the costs estimated in 1998 ($9.7 billion). The estimated lifetime cost for a 25-year-old in 2018 is $3.6 million when affected by low tetraplegia and $4.9 million when affected by high tetraplegia. In 2009, it was estimated that the lifetime care of a 25-year-old rendered with low tetraplegia was about $1.7 million, and $3.1 million with high tetraplegia. There are about 1,000 people affected each year in the UK (~1 in 60,000, assuming a population of 60 million).
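The quoted figures follow from simple arithmetic; the short Python sketch below merely re-derives them from the numbers cited in this section (an illustrative check, not part of the cited sources).

# Re-deriving the cost and incidence figures cited above (US$, billions).
cost_1998 = 9.7
cost_recent = 40.5
increase_pct = (cost_recent - cost_1998) / cost_1998 * 100
print(f"{increase_pct:.0f}% increase")  # ~318%, consistent with the cited 317 percent

# UK incidence: ~1,000 new cases per year in a population of ~60 million.
print(f"about 1 in {60_000_000 // 1_000:,}")  # 1 in 60,000

Terminology
The condition of paralysis affecting four limbs is alternately termed tetraplegia or quadriplegia.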
Quadriplegia combines the Latin root quadra, for "four", with the Greek root πληγία plegia, for "paralysis". Tetraplegia uses the Greek root τετρα tetra for "four". Quadriplegia is the common term in North America; tetraplegia is more commonly used in Europe.

See also
Clearing the cervical spine
Hemiplegia
Locked-in syndrome
Sexuality after spinal cord injury
Spinal cord injury research

References
Citations

External links
Tetraplegia at Curlie
Menstrual disorder
A menstrual disorder is characterized as any abnormal condition with regard to a person's menstrual cycle. There are many different types of menstrual disorders, which vary in signs and symptoms, including pain during menstruation, heavy bleeding, or absence of menstruation. Normal variations can occur in menstrual patterns, but generally menstrual disorders can also include periods that come sooner than 21 days apart, more than 3 months apart, or last more than 10 days in duration. Variations of the menstrual cycle are mainly caused by the immaturity of the hypothalamic-pituitary-ovarian (HPO) axis, and early detection and management are required in order to minimize the possibility of complications regarding future reproductive ability.

Though menstrual disorders were once considered more of a nuisance problem, they are now widely recognized as having a serious impact on society in the form of days lost from work, brought about by the pain and suffering experienced by women. These disorders can arise from physiologic sources (e.g., pregnancy), pathologic sources (e.g., stress, excessive exercise, weight loss, endocrine or structural abnormalities), or iatrogenic sources (e.g., secondary to contraceptive use).

Types of menstrual disorders
Premenstrual disorders
Premenstrual syndrome (PMS) or premenstrual tension refers to the emotional and physical symptoms that routinely occur in the two weeks leading up to menstruation. Symptoms are usually mild, but 5–8% of women experience moderate to severe symptoms that significantly affect daily activities. Symptoms may include anxiety, irritability, mood swings, depression, headache, food cravings, increased appetite, and bloating.
Premenstrual dysphoric disorder (PMDD) is a severe mood disorder that affects cognitive and physical functions in the week leading up to menstruation. Premenstrual dysphoric disorder is diagnosed with at least one affective, or mood, symptom and at least five physical, mood, and/or behavioral symptoms.

Disorders of cycle length
Normal menstrual cycle length is 22–45 days.
Amenorrhea is the absence of a menstrual period in a woman of reproductive age. Physiologic states of amenorrhea are seen during pregnancy and lactation (breastfeeding). Outside of the reproductive years, there is absence of menses during childhood and after menopause.
Irregular menstruation is where there is variation in menstrual cycle length of more than approximately 8 days for a woman. The term metrorrhagia is often used for irregular menstruation that occurs between the expected menstrual periods.
Oligomenorrhea is the medical term for infrequent, often light menstrual periods (intervals exceeding 35 days).
Polymenorrhea is the medical term for cycles with intervals of 21 days or fewer.
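Taken together, the interval definitions above amount to a simple threshold rule. The Python sketch below (a hypothetical helper for illustration, not a clinical tool) restates it; note that the stated normal range of 22–45 days overlaps the oligomenorrhea cutoff, an inconsistency carried over from the definitions themselves.

# Classify a cycle interval (days between periods) by the cutoffs above.
def classify_cycle_interval(days):
    if days <= 21:
        return "polymenorrhea (intervals of 21 days or fewer)"
    if days > 35:
        return "oligomenorrhea (intervals exceeding 35 days)"
    return "within the typical range"

print(classify_cycle_interval(18))  # polymenorrhea
print(classify_cycle_interval(40))  # oligomenorrhea

Disorders of flow
Normal menstrual flow length is 3–7 days.
Abnormal uterine bleeding (AUB) is a broad term used to describe any disruption in bleeding that involves the volume, duration, and/or regularity of flow. Bleeding may occur frequently or infrequently, and can occur between periods, after sexual intercourse, and after menopause. Bleeding during pregnancy is excluded.
Hypomenorrhea is abnormally light menstrual bleeding.
Menorrhagia (meno = prolonged, rrhagia = excessive flow/discharge) is an abnormally heavy and prolonged menstrual period.
Metrorrhagia is bleeding at irregular times, especially outside the expected intervals of the menstrual cycle.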
If there is excessive menstrual and uterine bleeding other than that caused by menstruation, menometrorrhagia (meno = prolonged, metro = uterine, rrhagia = excessive flow/discharge) may be diagnosed. Causes may be due to abnormal blood clotting, disruption of normal hormonal regulation of periods, or disorders of the endometrial lining of the uterus. Depending upon the cause, it may be associated with abnormally painful periods.

Disorders of ovulation
Disorders of ovulation include oligoovulation and anovulation:
Anovulation is absence of ovulation when it would be normally expected (in a post-menarchal, premenopausal woman). Anovulation usually manifests itself as irregularity of menstrual periods, that is, unpredictable variability of intervals, duration, or bleeding. Anovulation can also cause cessation of periods (secondary amenorrhea) or excessive bleeding (dysfunctional uterine bleeding).
Oligoovulation is infrequent or irregular ovulation (usually defined as cycles of >35 days or <8 cycles a year).

Other menstrual disorders
Dysmenorrhea (or dysmenorrhoea), cramps or painful menstruation, involves menstrual periods that are accompanied by either sharp, intermittent pain or dull, aching pain, usually in the pelvis or lower abdomen.

Signs and symptoms of menstrual disorders
The signs and symptoms of menstrual disorders can cause significant stress. Abnormal uterine bleeding (AUB) has the potential to be one of the most urgent gynecological problems during menstruation. Dysmenorrhea is the most common menstrual disorder.

Premenstrual syndrome (PMS)
Symptoms may include irritability, bloating, depression, food cravings, aggressiveness, and mood swings. Fluid retention and fluctuating weight gain are also reported. Precipitating risk factors include stress, alcohol consumption, exercise, smoking, and some medications.

Amenorrhea
Lack of menses by the age of 16 where secondary sexual characteristics have developed, or by the age of 14 where no secondary sexual characteristics have developed (primary amenorrhea), or lack of menses for more than 3–6 months after the first menstrual cycle. Although a missed period is the main sign, other symptoms can include excess facial hair, hair loss, headache, changes to vision, milky discharge from the breasts, or absence of breast development.

Abnormal uterine bleeding
One-third of women will experience abnormal uterine bleeding in their life. A normal menstrual cycle has a frequency of 24 to 38 days and lasts 7 to 9 days, so bleeding that lasts longer could be considered abnormal. Very heavy bleeding (for example, needing to use one or more tampons or sanitary pads every hour) is another symptom.

Dysmenorrhea
Especially painful or persistent menstrual cramping that occurs in the absence of any underlying pelvic disease. Pain radiates to the low back or upper thighs with the onset of menstruation and lasts anywhere from 12 to 72 hours. Headache, nausea, vomiting, diarrhea, and fatigue may also accompany the pain. Pain may begin gradually, within the first several years of menses, and then intensify as menstruation becomes regular. Patients with secondary dysmenorrhea report symptoms beginning after age 20 and lasting 5–7 days, with progressive worsening of pain over time. Pelvic pain is also reported.

Causes of menstrual disorders
There are many causes of menstrual disorders, including uterine fibroids, hormonal imbalances, clotting disorders, cancer, sexually-transmitted infections, polycystic ovary syndrome, and genetics.
Uterine fibroids are benign, non-cancerous growths in the uterus that affect most women at some point in their lives; they usually do not require treatment unless they cause intolerable symptoms. Stress and lifestyle factors commonly impact menstruation; these include weight changes, dieting, changes in exercise, travel, and illness. Hyperprolactinaemia can also cause menstrual disorders.

Amenorrhea
There are different causes depending on the type of menstrual (period) disorder. Amenorrhea, or the absence of menstruation, is subdivided into primary and secondary amenorrhea. Primary amenorrhea, in which there is a failure to menstruate by the age of 16 with normal sexual development or by 14 without normal sexual development, can be caused by developmental abnormalities of the uterus, ovaries, or genital tract, or by endocrine disorders. Secondary amenorrhea, the absence of menstruation for greater than 6 months, can be caused by the same factors as primary amenorrhea, as well as by polycystic ovary syndrome, pregnancy, chronic illness, and certain drugs like cocaine and opioids.

Hypomenorrhea
Causes of hypomenorrhea, or irregular light periods, include periods around menopause, eating disorders, excessive exercise, thyroid dysfunction, uncontrolled diabetes, Cushing's syndrome, hormonal birth control, and certain medications to treat epilepsy or mental health conditions.

Menorrhagia
Causes of menorrhagia, or heavy menstrual bleeding, include polycystic ovary syndrome, uterine fibroids, endometrial polyps, bleeding disorders, and miscarriage.

Dysmenorrhea
Causes of dysmenorrhea, or menstrual pain, include endometriosis, pelvic scarring due to chlamydia or gonorrhea, and intrauterine devices (IUDs). Primary dysmenorrhea is when no underlying cause is identified; secondary dysmenorrhea is when the menstrual pain is caused by other conditions such as endometriosis, fibroids, or infection.

Diagnosis of menstrual disorders
Diagnosis begins with an in-depth medical history and physical exam, including a pelvic exam and sometimes a Pap smear. Additional testing may include, but is not limited to, blood tests, hormonal tests, ultrasound, gynecologic ultrasound, magnetic resonance imaging (MRI), hysteroscopy, laparoscopy, endometrial biopsy, and dilation and curettage (D&C).

Treatment of menstrual disorders
Premenstrual syndrome and premenstrual dysphoric disorder
Due to the unclear etiology of premenstrual syndrome and premenstrual dysphoric disorder, symptom relief is the primary goal of treatment. Selective serotonin reuptake inhibitors and spironolactone decrease physical and psychological symptoms associated with premenstrual syndrome. Oral contraceptives may ameliorate the physical symptoms of breast tenderness and bloating. Ovarian suppression treatment with a gonadotropin-releasing hormone agonist, as an off-label use, may reduce symptoms but has adverse side effects, including decreased bone density. Other, less commonly used medications such as alprazolam may reduce anxiety symptoms but have potential for dependence, tolerance, and abuse. Pyridoxine, a form of vitamin B6, may be used as a dietary supplement to relieve overall symptoms.

Amenorrhea
Successful treatment varies depending on the diagnosis of amenorrhea. In patients with functional hypothalamic amenorrhea due to physical or psychological stress, non-pharmacological options include weight gain, resolution of emotional issues, or decreased intensity of exercise.
Patients experiencing amenorrhea due to hypothyroidism may be started on thyroid replacement therapy. Dopamine agonists such as bromocriptine are used in patients with pituitary adenomas. Amenorrhea associated with gonadal dysgenesis or a hypoestrogenic state may be treated with oral contraceptives, patches, or vaginal rings. Amenorrhea associated with structural anomalies can be addressed with surgical treatment such as gonadectomy.

Menorrhagia
Acute management of menstrual bleeding includes hormonal therapy with estrogen or oral contraceptives until bleeding has stopped, followed by an oral contraceptive tapering regimen. Adjunctive therapy may include iron supplements and nonsteroidal anti-inflammatory drugs. Patients who do not respond to hormonal therapy may use antifibrinolytics. Procedural therapies such as suction curettage and intrauterine balloon tamponade are reserved for patients who do not respond to medication therapy; these procedures do not put fertility at risk. In life-threatening situations, more invasive procedures such as endometrial ablation, uterine artery embolization, and hysterectomy may be considered. Long-term management includes estrogen-containing therapy and progestin therapy.

Dysmenorrhea
Primary dysmenorrhea is commonly treated with nonsteroidal anti-inflammatory drugs such as ibuprofen to reduce moderate to severe pain. Other simple analgesics such as aspirin or acetaminophen are less commonly used but may also reduce short-term pain. Supplements including thiamine and vitamin E may reduce pain in younger women. Non-pharmacological interventions such as the use of external heat are also effective at reducing pain, and regular exercise can also reduce pain.

See also

References

External links
NIH
Putting tampon in painlessly
Hyperkinesia
Hyperkinesia refers to an increase in muscular activity that can result in excessive abnormal movements, excessive normal movements, or a combination of both. Hyperkinesia is a state of excessive restlessness that is featured in a large variety of disorders affecting the ability to control motor movement, such as Huntington's disease. It is the opposite of hypokinesia, which refers to decreased bodily movement, as commonly manifested in Parkinson's disease. Many hyperkinetic movements are the result of improper regulation of the basal ganglia–thalamocortical circuitry. Overactivity of the direct pathway combined with decreased activity of the indirect pathway results in activation of thalamic neurons and excitation of cortical neurons, resulting in increased motor output. Often, hyperkinesia is paired with hypotonia, a decrease in muscle tone. Many hyperkinetic disorders are psychological in nature and are typically prominent in childhood. Depending on the specific type of hyperkinetic movement, there are different treatment options available to minimize the symptoms, including different medical and surgical therapies. The word hyperkinesis comes from the Greek hyper, meaning "increased", and kinein, meaning "to move".

Classification
Basic hyperkinetic movements can be defined as any unwanted, excess movement. Such abnormal movements can be distinguished from each other on the basis of whether or not, or to what degree, they are rhythmic, discrete, repeated, and random. In evaluating the individual with a suspected form of hyperkinesia, the physician will record a thorough medical history, including a clear description of the movements in question, medications prescribed in the past and present, family history of similar diseases, medical history including past infections, and any past exposure to toxic chemicals. Hyperkinesia is a defining feature of many childhood movement disorders, yet it distinctly differs from both hypertonia and negative signs, which are also typically involved in such disorders. Several prominent forms of hyperkinetic movements include:

Ataxia
The term ataxia refers to a group of progressive neurological diseases that alter coordination and balance. Ataxias are often characterized by poor coordination of hand and eye movements, speech problems, and a wide-set, unsteady gait. Possible causes of ataxias include stroke, tumor, infection, trauma, or degenerative changes in the cerebellum. These types of hyperkinetic movements can be further classified into two groups. The first group, hereditary ataxias, affect the cerebellum and spinal cord and are passed from one generation to the next through a defective gene. A common hereditary ataxia is Friedreich's ataxia. In contrast, sporadic ataxias occur spontaneously in individuals with no known family history of such movement disorders.

Athetosis
Athetosis is defined as a slow, continuous, involuntary writhing movement that prevents the individual from maintaining a stable posture. These are smooth, nonrhythmic movements that appear random and are not composed of any recognizable sub-movements. They mainly involve the distal extremities, but can also involve the face, neck, and trunk. Athetosis can occur in the resting state, as well as in conjunction with chorea and dystonia. When combined with chorea, as in cerebral palsy, the term "choreoathetosis" is frequently used.

Chorea
Chorea is a continuous, random-appearing sequence of one or more discrete involuntary movements or movement fragments.
Although chorea consists of discrete movements, many are often strung together in time, making it difficult to identify each movement's start and end point. These movements can involve the face, trunk, neck, tongue, and extremities. Unlike dystonic movements, chorea-associated movements are often more rapid, random and unpredictable. Movements are repeated, but not rhythmic in nature. Children with chorea appear fidgety and will often try to disguise the random movements by voluntarily turning the involuntary, abnormal movement into a seemingly more normal, purposeful motion. Chorea may result specifically from disorders of the basal ganglia, cerebral cortex, thalamus, and cerebellum. It has also been associated with encephalitis, hyperthyroidism, anticholinergic toxicity, and other genetic and metabolic disorders. Chorea is also the prominent movement featured in Huntington's disease.

Dystonia
Dystonia is a movement disorder in which involuntarily sustained or intermittent muscle contractions cause twisting or repetitive movements, abnormal postures, or both. Such abnormal postures include foot inversion, wrist ulnar deviation, or lordotic trunk twisting. They can be localized to specific parts of the body or be generalized to many different muscle groups. These postures are often sustained for long periods of time and can be combined in time. Dystonic movements can augment hyperkinetic movements, especially when linked to voluntary movements.

Blepharospasm is a type of dystonia characterized by the involuntary contraction of the muscles controlling the eyelids. Symptoms can range from a simple increased frequency of blinking to constant, painful eye closure leading to functional blindness.

Oromandibular dystonia is a type of dystonia marked by forceful contractions of the lower face, which causes the mouth to open or close. Chewing motions and unusual tongue movements may also occur with this type of dystonia.

Laryngeal dystonia or spasmodic dysphonia results from abnormal contraction of muscles in the voice box, resulting in altered voice production. Patients may have a strained-strangled quality to their voice or, in some cases, a whispering or breathy quality.

Cervical dystonia (CD) or spasmodic torticollis is characterized by muscle spasms of the head and neck, which may be painful and cause the neck to twist into unusual positions or postures.

Writer's cramp and musician's cramp are task-specific dystonias, meaning that they occur only when performing certain tasks. Writer's cramp is a contraction of hand and/or arm muscles that happens only when a patient is writing. It does not occur in other situations, such as when a patient is typing or eating. Musician's cramp occurs only when a musician plays an instrument, and the type of cramp experienced is specific to the instrument. For example, pianists may experience cramping of their hands when playing, while brass players may have cramping or contractions of their mouth muscles.

Hemiballismus
Typically caused by damage to the subthalamic nucleus or nuclei, hemiballismus movements are nonrhythmic, rapid, nonsuppressible, and violent. They usually occur in an isolated body part, such as the proximal arm.

Hemifacial spasm
Hemifacial spasm (HFS) is characterized by involuntary contraction of facial muscles, typically occurring only on one side of the face. Like blepharospasm, the frequency of contractions in hemifacial spasm may range from intermittent to frequent and constant.
The unilateral blepharospasm of HFS may interfere with routine tasks such as driving. In addition to medication, patients may respond well to treatment with Botox. HFS may be due to vascular compression of the nerves going to the muscles of the face; for these patients, surgical decompression may be a viable option for the improvement of symptoms.

Myoclonus
Myoclonus is defined as a sequence of repeated, often nonrhythmic, brief, shock-like jerks due to sudden involuntary contraction or relaxation of one or more muscles. These movements may be asynchronous, in which several muscles contract variably in time, synchronous, in which muscles contract simultaneously, or spreading, in which several muscles contract sequentially. It is characterized by a sudden, unidirectional movement due to muscle contraction, followed by a relaxation period in which the muscle is no longer contracted. However, when this relaxation phase is decreased, as when muscle contractions become faster, a myoclonic tremor results. Myoclonus can often be associated with seizures, delirium, dementia, and other signs of neurological disease and gray matter damage.

Stereotypies
Stereotypies are repetitive, rhythmic, simple movements that can be voluntarily suppressed. Like tremors, they are typically back-and-forth movements, and most commonly occur bilaterally. They often involve fingers, wrists, or proximal portions of the upper extremities. Although, like tics, they can stem from stress or excitement, there is no underlying urge to move associated with stereotypies, and these movements can be stopped with distraction. When aware of the movements, the child can also suppress them voluntarily. Stereotypies are often associated with developmental syndromes, including the autism spectrum disorders. Stereotypies are quite common in preschool-aged children and for this reason are not necessarily indicative of neurological pathology on their own.

Tardive dyskinesia / tardive dystonia
Tardive dyskinesia and tardive dystonia, both referred to as "TD", refer to a wide variety of involuntary stereotypical movements caused by the prolonged use of dopamine receptor-blocking agents. The most common types of these agents are antipsychotics and anti-nausea agents. The classic form of TD refers to stereotypic movements of the mouth, which resemble chewing. However, TD can also appear as other involuntary movements such as chorea, dystonia, or tics.

Tics
A tic can be defined as a repeated, individually recognizable, intermittent movement or movement fragment that is almost always briefly suppressible and is usually associated with awareness of an urge to perform the movement. These abnormal movements occur with intervening periods of normal movement. The movements are predictable, often triggered by stress, excitement, suggestion, or brief voluntary suppressibility. Many children say that the onset of tics can stem from the strong urge to move. Tics can be either muscular (altering normal motor function) or vocal (altering normal speech) in nature, and most commonly involve the face, mouth, eyes, head, neck or shoulder muscles. Tics can also be classified as simple motor tics (a single brief stereotyped movement or movement fragment), complex motor tics (a more complex or sequential movement involving multiple muscle groups), or phonic tics (including simple, brief phonations or vocalizations). When both motor and vocal tics are present and persist for more than one year, a diagnosis of Tourette syndrome (TS) is likely.
TS is an inherited neurobehavioral disorder characterized by both motor and vocal tics. Many individuals with TS may also develop obsessions, compulsions, inattention and hyperactivity. TS usually begins in childhood. Up to 5% of the population has tics, but at least 20% of boys will have developed tics at some point in their lifetimes.

Tremor
A tremor can be defined as a rhythmic, back-and-forth or oscillating involuntary movement about a joint axis. Tremors are symmetric about a midpoint within the movement, and both portions of the movement occur at the same speed. Unlike the other hyperkinetic movements, tremors lack both the associated jerking movements and posturing.

Essential tremor (ET), also known as benign essential tremor or familial tremor, is the most common movement disorder. It is estimated that 5 percent of people worldwide have this condition, which affects those of all ages but typically runs in families. ET typically affects the hands and arms but can also affect the head, voice, chin, trunk and legs. Both sides of the body tend to be equally affected. The tremor is called an action tremor, becoming noticeable in the arms when they are being used. Patients often report that alcohol helps lessen the symptoms. Primary medical treatments for ET are usually beta-blockers. For patients who fail to respond sufficiently to medication, deep brain stimulation and thalamotomy can be highly effective.

A "flapping tremor", or asterixis, is characterized by irregular flapping-hand movement, which appears most often with outstretched arms and wrist extension. Individuals with this condition resemble birds flapping their wings.

Volitional hyperkinesia
Volitional hyperkinesia refers to any type of involuntary movement described above that interrupts an intended voluntary muscular movement. These movements tend to be jolts that present suddenly during an otherwise smoothly coordinated action of skeletal muscle.

Pathophysiology
The causes of the majority of the above hyperkinetic movements can be traced to improper modulation of the basal ganglia by the subthalamic nucleus. In many cases, the excitatory output of the subthalamic nucleus is reduced, leading to a reduced inhibitory outflow of the basal ganglia. Without the normal restraining influence of the basal ganglia, upper motor neurons of the circuit tend to become more readily activated by inappropriate signals, resulting in the characteristic abnormal movements.

There are two pathways involving basal ganglia–thalamocortical circuitry, both of which originate in the neostriatum. The direct pathway projects to the internal globus pallidus (GPi) and to the substantia nigra pars reticulata (SNr). These projections are inhibitory and have been found to utilize both GABA and substance P. The indirect pathway, which projects to the external globus pallidus (GPe), is also inhibitory and uses GABA and enkephalin. The GPe projects to the subthalamic nucleus (STN), which then projects back to the GPi and GPe via excitatory, glutamatergic pathways. Excitation of the direct pathway leads to disinhibition of the GABAergic neurons of the GPi/SNr, ultimately resulting in activation of thalamic neurons and excitation of cortical neurons. In contrast, activation of the indirect pathway stimulates the inhibitory striatal GABA/enkephalin projection, resulting in suppression of GABAergic neuronal activity.
This, in turn, causes disinhibition of the STN excitatory outputs, thus triggering the GPi/SNr inhibitory projections to the thalamus and decreased activation of cortical neurons. While deregulation of either of these pathways can disturb motor output, hyperkinesia is thought to result from overactivity of the direct pathway and decreased activity of the indirect pathway.

Hyperkinesia occurs when dopamine receptors, and to a lesser extent norepinephrine receptors, within the cortex and the brainstem are more sensitive to dopamine, or when the dopaminergic receptors/neurons are hyperactive. Hyperkinesia can be caused by a large number of different diseases, including metabolic disorders, endocrine disorders, heritable disorders, vascular disorders, or traumatic disorders. Other causes include toxins within the brain, autoimmune disease, and infections, including meningitis.

Since the basal ganglia often have many connections with the frontal lobe of the brain, hyperkinesia can be associated with neurobehavioral or neuropsychiatric disorders such as mood changes, psychosis, anxiety, disinhibition, cognitive impairments, and inappropriate behavior.

In children, primary dystonia is usually inherited genetically. Secondary dystonia, however, is most commonly caused by dyskinetic cerebral palsy, due to hypoxic or ischemic injury to the basal ganglia, brainstem, cerebellum, and thalamus during the prenatal or infantile stages of development. Chorea and ballism can be caused by damage to the subthalamic nucleus. Chorea can be secondary to hyperthyroidism. Athetosis can be secondary to sensory loss in the distal limbs; this is called pseudoathetosis in adults but is not yet proven in children.

Diagnosis
Definition
There are various terms that refer to specific movement mechanisms and that contribute to the differential diagnosis of hyperkinetic disorders. As defined by Hogan and Sternad, "posture" is a nonzero time period during which bodily movement is minimal. When a movement is called "discrete", it means that a new posture is assumed without any other postures interrupting the process. "Rhythmic" movements are those that occur in cycles of similar movements. "Repetitive", "recurrent", and "reciprocal" movements feature a certain bodily or joint position that occurs more than once in a period, though not necessarily in a cyclic manner.

Overflow refers to unwanted movements that occur during a desired movement. It may occur in situations where the individual's motor intention spreads to either nearby or distant muscles, taking away from the original goal of the movement. Overflow is often associated with dystonic movements and may be due to a poor focusing of muscle activity and an inability to suppress unwanted muscle movement. Co-contraction refers to a voluntary movement performed to suppress the involuntary movement, such as forcing one's wrist toward the body to stop it from involuntarily moving away from the body.

In evaluating these signs and symptoms, one must consider the frequency of repetition, whether or not the movements can be suppressed voluntarily (either by cognitive decisions, restraint, or sensory tricks), the awareness of the affected individual during the movement events, any urges to make the movements, and whether the affected individual feels rewarded after having completed the movement. The context of the movement should also be noted; this means that a movement could be triggered in a certain posture, while at rest, during action, or during a specific task.
The movement's quality can also be described by observing whether or not the movement can be categorized as a normal movement by an unaffected individual, or as one that is not normally made on a daily basis by unaffected individuals.

Differential diagnosis
Diseases that feature one or more hyperkinetic movements as prominent symptoms include:

Huntington's disease
Hyperkinesia, more specifically chorea, is the hallmark symptom of Huntington's disease, formerly referred to as Huntington's chorea. Appropriately, chorea is derived from the Greek word khoros, meaning "dance". The extent of the hyperkinesia exhibited in the disease can vary from solely the little finger to the entire body, resembling purposeful movements but occurring involuntarily. In children, rigidity and seizures are also symptoms. Other hyperkinetic symptoms include:
Head turning to shift eye position
Facial movements, including grimaces
Slow, uncontrolled movements
Quick, sudden, sometimes wild jerking movements of the arms, legs, face, and other body parts
Unsteady gait
Abnormal reflexes
"Prancing", or a wide walk

The disease is characterized further by the gradual onset of defects in behavior and cognition, including dementia and speech impediments, beginning in the fourth or fifth decades of life. Death usually occurs within 10–20 years after a progressive worsening of symptoms. Caused by the Huntington gene, the disease eventually contributes to selective atrophy of the caudate nucleus and putamen, especially of GABAergic and acetylcholinergic neurons, with some additional degeneration of the frontal and temporal cortices of the brain. The disrupted signaling in the basal ganglia network is thought to cause the hyperkinesia. There is no known cure for Huntington's disease, yet there is treatment available to minimize the hyperkinetic movements. Dopamine blockers, such as haloperidol, tetrabenazine, and amantadine, are often effective in this regard.

Wilson's disease
Wilson's disease (WD) is a rare inherited disorder in which patients have a problem metabolizing copper. In patients with WD, copper accumulates in the liver and other parts of the body, particularly the brain, eyes and kidneys. Upon accumulation in the brain, patients may experience speech problems, incoordination, swallowing problems, and prominent hyperkinetic symptoms including tremor, dystonia, and gait difficulties. Psychiatric disturbances such as irritability, impulsiveness, aggressiveness, and mood disturbances are also common.

Restless legs syndrome
Restless legs syndrome is a disorder in which patients feel uncomfortable or unpleasant sensations in the legs. These sensations usually occur in the evening, while the patient is sitting or lying down and relaxing. Patients feel like they have to move their legs to relieve the sensations, and walking generally makes the symptoms disappear. In many patients, this can lead to insomnia and excessive daytime sleepiness. This is a very common problem and can occur at any age.

Similarly, the syndrome akathisia ranges from mildly compulsive movement, usually in the legs, to intense frenzied motion. These movements are partly voluntary, and the individual typically has the ability to suppress them for short amounts of time. Like restless legs syndrome, relief results from movement.

Post-stroke repercussions
A multitude of movement disorders have been observed after either ischemic or hemorrhagic stroke.
Some examples include athetosis, chorea with or without hemiballismus, tremor, dystonia, and segmental or focal myoclonus, although the prevalence of these manifestations after stroke is quite low. The amount of time that passes between the stroke event and the presentation of hyperkinesia depends on the type of hyperkinetic movement, since their pathologies differ slightly. Chorea tends to affect older stroke survivors, while dystonia tends to affect younger ones. Men and women have an equal chance of developing hyperkinetic movements after stroke. Strokes causing small, deep lesions in the basal ganglia, brain stem and thalamus are those most likely to be associated with post-stroke hyperkinesia.

Dentatorubral-pallidoluysian atrophy
DRPLA is a rare trinucleotide repeat disorder (polyglutamine disease) that can be juvenile-onset (< 20 years), early adult-onset (20–40 years), or late adult-onset (> 40 years). Late adult-onset DRPLA is characterized by ataxia, choreoathetosis and dementia. Early adult-onset DRPLA also includes seizures and myoclonus. Juvenile-onset DRPLA presents with ataxia and symptoms consistent with progressive myoclonus epilepsy (myoclonus, multiple seizure types and dementia). Other symptoms that have been described include cervical dystonia, corneal endothelial degeneration, autism, and surgery-resistant obstructive sleep apnea.

Management
Athetosis, chorea and hemiballismus
Before prescribing medication for these conditions, which often resolve spontaneously, recommendations have pointed to improved skin hygiene, good hydration via fluids, good nutrition, and installation of padded bed rails with use of proper mattresses. Pharmacological treatments include the typical neuroleptic agents such as fluphenazine, pimozide, haloperidol and perphenazine, which block dopamine receptors; these are the first line of treatment for hemiballismus. Quetiapine, sulpiride and olanzapine, the atypical neuroleptic agents, are less likely to yield drug-induced parkinsonism and tardive dyskinesia. Tetrabenazine works by depleting presynaptic dopamine and blocking postsynaptic dopamine receptors, while reserpine depletes the presynaptic catecholamine and serotonin stores; both of these drugs treat hemiballismus successfully but may cause depression, hypotension and parkinsonism. Sodium valproate and clonazepam have been successful in a limited number of cases. Stereotactic ventral intermediate thalamotomy and use of a thalamic stimulator have been shown to be effective in treating these conditions.

Essential tremor
The medical treatment of essential tremor at the Movement Disorders Clinic at Baylor College of Medicine begins with minimizing stress and tremorgenic drugs, along with recommending a restricted intake of beverages containing caffeine as a precaution, although caffeine has not been shown to significantly intensify the presentation of essential tremor. Alcohol amounting to a blood concentration of only 0.3% has been shown to reduce the amplitude of essential tremor in two-thirds of patients; for this reason it may be used as a prophylactic treatment before events during which one would be embarrassed by the tremor presenting itself. Using alcohol regularly and/or in excess to treat tremors is highly inadvisable, as there is a purported correlation between tremor and alcoholism. Alcohol is thought to stabilize neuronal membranes via potentiation of GABA receptor-mediated chloride influx.
It has been demonstrated in essential tremor animal models that the food additive 1-octanol suppresses tremors induced by harmaline and decreases the amplitude of essential tremor for about 90 minutes.

Two of the most valuable drug treatments for essential tremor are propranolol, a beta blocker, and primidone, an anticonvulsant. Propranolol is much more effective for hand tremor than for head and voice tremor. Some beta-adrenergic blockers (beta blockers) are not lipid-soluble and therefore cannot cross the blood–brain barrier (propranolol being an exception), but can still act against tremors; this indicates that the drug's therapeutic mechanism may be influenced by peripheral beta-adrenergic receptors. Primidone's prevention of tremor has been shown significantly in controlled clinical studies. The benzodiazepine drugs such as diazepam, as well as the barbiturates, have been shown to reduce the presentation of several types of tremor, including the essential variety. Controlled clinical trials of gabapentin yielded mixed results in efficacy against essential tremor, while topiramate was shown to be effective in a larger double-blind controlled study, resulting in both lower Fahn-Tolosa-Marin tremor scale ratings and better function and disability as compared to placebo.

It has been shown in two double-blind controlled studies that injection of botulinum toxin into muscles used to produce oscillatory movements of essential tremors, such as forearm, wrist and finger flexors, may decrease the amplitude of hand tremor for approximately three months, and that injections of the toxin may reduce essential tremor presenting in the head and voice. The toxin also may help tremor causing difficulty in writing, although properly adapted writing devices may be more efficient. Due to the high incidence of side effects, use of botulinum toxin has only received a C level of support from the scientific community.

Deep brain stimulation of the ventral intermediate nucleus of the thalamus, and potentially of the subthalamic nucleus and caudal zona incerta nucleus, has been shown to reduce tremor in numerous studies. Stimulation of the ventral intermediate nucleus of the thalamus has been shown to reduce contralateral and some ipsilateral tremor, along with tremors of the cerebellar outflow, head and resting state, and those related to hand tasks; however, the treatment has been shown to induce difficulty articulating thoughts (dysarthria) and loss of coordination and balance in long-term studies. Motor cortex stimulation is another option shown to be viable in numerous clinical trials.

Dystonia
Treatment of primary dystonia is aimed at reducing symptoms such as involuntary movements, pain, contracture and embarrassment, and at restoring normal posture and improving the patient's function. This treatment is therefore not neuroprotective. According to the European Federation of Neurological Societies and the Movement Disorder Society, there is no evidence-based recommendation for treating primary dystonia with antidopaminergic or anticholinergic drugs, although recommendations have been based on empirical evidence. Anticholinergic drugs prove to be most effective in treating generalized and segmental dystonia, especially if the dose starts out low and increases gradually. Generalized dystonia has also been treated with such muscle relaxants as the benzodiazepines. Another muscle relaxant, baclofen, can help reduce spasticity seen in cerebral palsy, such as dystonia in the leg and trunk.
Treatment of secondary dystonia by administering levodopa in dopamine-responsive dystonia, copper chelation in Wilson's disease, or stopping the administration of drugs that may induce dystonia has been proven effective in a small number of cases. Physical therapy has been used to improve posture and prevent contractures via braces and casting, although in some cases immobilization of limbs can itself induce dystonia, which is by definition known as peripherally induced dystonia. There are not many clinical trials that show significant efficacy for particular drugs, so medical treatment of dystonia must be planned on a case-by-case basis. Botulinum toxin B, or Myobloc, has been approved by the US Food and Drug Administration to treat cervical dystonia, due to level A evidential support from the scientific community. Surgery known as GPi DBS (globus pallidus pars interna deep brain stimulation) has become popular in treating phasic forms of dystonia, although cases involving posturing and tonic contractions have improved to a lesser extent with this surgery. A follow-up study found that the movement score improvements observed one year after surgery were maintained after three years in 58% of cases. It has also been proven effective in treating cervical and cranial-cervical dystonia.

Tics
Treatment of tics present in conditions such as Tourette syndrome begins with patient, relative, teacher and peer education about the presentation of the tics. Sometimes pharmacological treatment is unnecessary and tics can be reduced by behavioral therapy such as habit-reversal therapy and/or counseling. Often this route of treatment is difficult, because
it depends most heavily on patient compliance. Once pharmacological treatment is deemed most appropriate, the lowest effective doses should be given first, with gradual increases. The most effective drugs belong to the neuroleptic variety, such as monoamine-depleting drugs and dopamine receptor-blocking drugs. Of the monoamine-depleting drugs, tetrabenazine is the most powerful against tics and results in the fewest side effects. A non-neuroleptic drug found to be safe and effective in treating tics is topiramate. Botulinum toxin injection in affected muscles can successfully treat tics; involuntary movements and vocalizations can be reduced, as well as life-threatening tics that have the potential of causing compressive myelopathy or radiculopathy. Surgical treatment for disabling Tourette syndrome has been proven effective in cases presenting with self-injury. Deep brain stimulation surgery targeting the globus pallidus, thalamus and other areas of the brain may be effective in treating involuntary and possibly life-threatening tics.

History
In the 16th century, Andreas Vesalius and Francesco Piccolomini were the first to distinguish between white matter, the cortex, and the subcortical nuclei in the brain. About a century later, Thomas Willis noticed that the corpus striatum was typically discolored, shrunken, and abnormally softened in the cadavers of people who had died from paralysis. The view that the corpus striatum played such a large role in motor functions was the most prominent one until the 19th century, when electrophysiologic stimulation studies began to be performed. For example, Gustav Fritsch and Eduard Hitzig performed them on dog cerebral cortices in 1870, while David Ferrier performed them, along with ablation studies, on the cerebral cortices of dogs, rabbits, cats, and primates in 1876. During the same year, John Hughlings Jackson posited that the motor cortex was more relevant to motor function than the corpus striatum, after carrying out clinical-pathologic experiments in humans. It would soon be discovered that the theory about the corpus striatum was not completely incorrect.

By the late 19th century, a few hyperkinesias such as Huntington's chorea, post-hemiplegic choreoathetosis, Tourette's syndrome, and some forms of both tremor and dystonia had been described in a clinical orientation. However, the common pathology was still a mystery. The British neurologist William Richard Gowers called these disorders "general and functional diseases of the nervous system" in his 1888 publication entitled A Manual of Diseases of the Nervous System. It was not until the late 1980s and 1990s that sufficient animal models and human clinical trials were utilized to discover the specific involvement of the basal ganglia in the hyperkinesia pathology. In 1998, Wichmann and DeLong concluded that hyperkinesia is associated with decreased output from the basal ganglia, whereas hypokinesia is associated with increased output from the basal ganglia. This generalization, however, still leaves a need for more complex models to distinguish the more nuanced pathologies of the numerous diverse hyperkinesias, which are still being studied today.

In the 2nd century, Galen was the first to define tremor as "involuntary alternating up-and-down motion of the limbs". Further classification of hyperkinetic movements came in the 17th and 18th centuries from Franciscus Sylvius and Gerard van Swieten.
Parkinson's disease was one of the first disorders to be named as a result of the recent classification of its featured hyperkinetic tremor. The subsequent naming of other disorders involving abnormal motions soon followed. Research directions Studies have been done with electromyography to trace skeletal muscle activity in some hyperkinetic disorders. The electromyogram (EMG) of dystonia sometimes shows rapid rhythmic bursts, but these patterns can almost always be produced intentionally. In the myoclonus EMG, there are typically brief, and sometimes rhythmic, bursts or pauses in the recording pattern. When the bursts last for 50 milliseconds or less they are indicative of cortical myoclonus, but when they last up to 200 milliseconds, they are indicative of spinal or brainstem myoclonus. Such bursts can occur in multiple muscles simultaneously quite quickly, but high time resolution must be used in the EMG trace to clearly record them. The bursts recorded for tremor tend to be longer in duration than those of myoclonus, although some types can last for durations within the range for those of myoclonus. Future studies would have to examine the EMGs for tics, athetosis, stereotypies and chorea, as there are minimal recordings for those movements. However, it may be predicted that the EMG for chorea would include bursts varying in duration, timing, and amplitude, while that for tics and stereotypies would take on patterns of voluntary movements. In general, research for treatment of hyperkinesia has most recently been focusing on ameliorating symptoms rather than attempting to correct the pathogenesis of the disease. Therefore, now and in the future it may be beneficial to inform the learning of the disease's pathology through carefully controlled, long-term, observation-based studies. Therapies whose effectiveness can be repeated in multiple studies are useful, but the clinician may also consider that the best treatments for patients can only be evaluated on a case-by-case basis. It is the interplay of these two facets of neurology and medicine that may bring about significant progress in this field. See also Basal ganglia disease
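The burst-duration criteria quoted in the research-directions paragraph above lend themselves to a simple illustration. The sketch below is illustrative only: the 50 ms and 200 ms cut-offs are taken from the text, while the function name and print usage are hypothetical; real EMG interpretation weighs rhythmicity, muscle distribution, and timing across channels, not duration alone.

```python
def classify_myoclonus_burst(duration_ms: float) -> str:
    """Suggest a likely origin for a myoclonic EMG burst from its duration.

    Thresholds follow the text above: bursts of 50 ms or less suggest
    cortical myoclonus; bursts up to about 200 ms suggest a spinal or
    brainstem origin. Longer bursts overlap with tremor and voluntary
    movement, per the same paragraph.
    """
    if duration_ms <= 50:
        return "consistent with cortical myoclonus"
    elif duration_ms <= 200:
        return "consistent with spinal or brainstem myoclonus"
    else:
        return "more typical of tremor or voluntary movement"

# Example: a 40 ms burst points toward a cortical origin.
print(classify_myoclonus_burst(40))
```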
Vertebral compression fracture
A compression fracture is a collapse of a vertebra. It may be due to trauma or to a weakening of the vertebra (compare with burst fracture). This weakening is seen in patients with osteoporosis or osteogenesis imperfecta, lytic lesions from metastatic or primary tumors, or infection. In healthy patients, it is most often seen in individuals suffering extreme vertical shocks, such as ejecting from an ejection seat. Seen in lateral views in plain x-ray films, compression fractures of the spine characteristically appear as wedge deformities, with greater loss of height anteriorly than posteriorly and intact pedicles in the anteroposterior view. Signs and symptoms Acute fractures will cause severe back pain. Compression fractures which develop gradually, such as in osteoporosis, may initially not cause any symptoms, but will later often lead to back pain and loss of height. Diagnosis Compression fractures are usually diagnosed on spinal radiographs, where a wedge-shaped vertebra may be visible or there may be loss of height of the vertebra. In addition, bone density measurement may be performed to evaluate for osteoporosis. When a tumor is suspected as the underlying cause, or the fracture was caused by severe trauma, CT or MRI scans may be performed. Treatment Conservative treatment Back brace for support while the bone heals, either a Jewett brace for relatively stable and mild injuries, or a thoracic lumbar sacral orthosis (TLSO) for more severe ones. Opioids or non-steroidal anti-inflammatory drugs (NSAIDs) for pain. For osteoporotic patients, calcitonin may be helpful. Surgical Kyphoplasty and vertebroplasty are minimally invasive procedures that inject bone cement into the fractured vertebra. However, the data examining the effectiveness of these procedures are mixed.
Lung abscess
Lung abscess is a type of liquefactive necrosis of the lung tissue and formation of cavities (more than 2 cm) containing necrotic debris or fluid caused by microbial infection. This pus-filled cavity is often caused by aspiration, which may occur during anesthesia, sedation, or unconsciousness from injury. Alcoholism is the most common condition predisposing to lung abscesses. Lung abscess is considered primary (60%) when it results from an existing lung parenchymal process and is termed secondary when it complicates another process, e.g. vascular emboli, or follows rupture of an extrapulmonary abscess into the lung. Signs and symptoms Onset of symptoms is often gradual, but in necrotizing staphylococcal or gram-negative bacillary pneumonias patients can be acutely ill. Cough, fever with shivering, and night sweats are often present. Cough can be productive of foul-smelling purulent mucus (≈70%) or, less frequently, of blood (in one third of cases). Affected individuals may also complain of chest pain, shortness of breath, lethargy and other features of chronic illness. Those with a lung abscess are generally cachectic at presentation. Finger clubbing is present in one third of patients. Dental decay is common, especially in alcoholics and children. On examination of the chest there will be features of consolidation such as localized dullness on percussion and bronchial breath sounds. Complications Although rare in modern times, complications can include spread of infection to other lung segments, bronchiectasis, empyema, and bacteremia with metastatic infection such as brain abscess. Causes Conditions contributing to lung abscess include aspiration of oropharyngeal or gastric secretions, septic emboli, necrotizing pneumonia, vasculitis (granulomatosis with polyangiitis), and necrotizing tumors (8% to 18% are due to neoplasms across all age groups, higher in older people; primary squamous carcinoma of the lung is the most common). Organisms In the post-antibiotic era the pattern of frequency is changing. In older studies anaerobes were found in up to 90% of cases, but they are much less frequent now. Anaerobic bacteria: Actinomyces, Peptostreptococcus, Bacteroides, Fusobacterium species. Microaerophilic streptococci: Streptococcus milleri. Aerobic bacteria: Staphylococcus, Klebsiella, Haemophilus, Pseudomonas, Nocardia, Escherichia coli, Streptococcus, Mycobacterium. Fungi: Candida, Aspergillus. Parasites: Entamoeba histolytica. Diagnosis Imaging studies Lung abscesses are often on one side and single, involving the posterior segments of the upper lobes and the apical segments of the lower lobes, as these areas are gravity-dependent when lying down. Presence of air-fluid levels implies rupture into the bronchial tree or, rarely, growth of gas-forming organisms. Laboratory studies Raised inflammatory markers (high ESR, CRP) are common but nonspecific. Examination of the coughed-up mucus is important in any lung infection and often reveals mixed bacterial flora. Transtracheal or transbronchial (via bronchoscopy) aspirates can also be cultured. Fiberoptic bronchoscopy is often performed to exclude an obstructive lesion; it also helps in bronchial drainage of pus. Management Broad-spectrum antibiotics to cover mixed flora are the mainstay of treatment. Pulmonary physiotherapy and postural drainage are also important. Surgical procedures are required in selected patients for drainage or pulmonary resection. The treatment is divided according to the type of abscess, acute or chronic. 
For acute cases the treatment is antibiotics: if anaerobic, metronidazole or clindamycin; if aerobic, beta-lactams or cephalosporins; if MRSA or Staphylococcus infection, vancomycin or linezolid. Postural drainage and chest physiotherapy are also used. Bronchoscopy is used in the following cases: aspiration or instillation of antibiotics; patients with an atypical presentation; suspicion of an underlying foreign body or malignancy. Prognosis Most cases respond to antibiotics and prognosis is usually excellent unless there is a debilitating underlying condition. Mortality from lung abscess alone is around 5% and is improving. See also Empyema Bronchiectasis Abscess Pleural effusion
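The acute-treatment choices listed above amount to a lookup from suspected organism class to a typical empiric agent. A minimal sketch of that lookup follows; it is illustrative only and not clinical guidance, the dictionary keys and function name are hypothetical, and the drug lists are copied directly from the text above.

```python
# Illustrative lookup based solely on the treatment list above; not clinical
# guidance. Actual therapy depends on cultures, local resistance patterns,
# and patient factors.
EMPIRIC_CHOICES = {
    "anaerobic": ["metronidazole", "clindamycin"],
    "aerobic": ["beta-lactams", "cephalosporins"],
    "mrsa_or_staph": ["vancomycin", "linezolid"],
}

def empiric_options(organism_class: str) -> list[str]:
    """Return the antibiotic options the article associates with a class."""
    return EMPIRIC_CHOICES.get(organism_class, [])

print(empiric_options("anaerobic"))  # ['metronidazole', 'clindamycin']
```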
Irritation
Irritation, in biology and physiology, is a state of inflammation or painful reaction to allergy or cell-lining damage. A stimulus or agent which induces the state of irritation is an irritant. Irritants are typically thought of as chemical agents (for example phenol and capsaicin), but mechanical, thermal (heat), and radiative stimuli (for example ultraviolet light or ionising radiation) can also be irritants. Irritation also has non-clinical usages referring to bothersome physical or psychological pain or discomfort. Irritation can also be induced by an allergic response due to exposure to some allergens, for example in contact dermatitis, irritation of mucosal membranes and pruritus. Mucosal membranes are the most common site of irritation because they contain secretory glands that release mucus, which attracts allergens due to its sticky nature. Chronic irritation is a medical term signifying that afflictive health conditions have been present for a while. There are many disorders that can cause chronic irritation; the majority involve the skin, vagina, eyes and lungs. Irritation in organisms In higher organisms, an allergic response may be the cause of irritation. An allergen is defined distinctly from an irritant, however, as allergy requires a specific interaction with the immune system and is thus dependent on the (possibly unique) sensitivity of the organism involved, while an irritant, classically, acts in a non-specific manner. It is a form of stress, but conversely, if one is stressed by unrelated matters, mild imperfections can cause more irritation than usual: one is irritable; see also sensitivity (human). In more basic organisms, pain is the perception of being stimulated, which is not observable although it may be shared (see gate control theory of pain). It is not proven that oysters can feel pain, but it is known that they react to irritants. When an irritating object becomes trapped within an oyster's shell, the oyster deposits layers of calcium carbonate (CaCO3), slowly increasing in size and producing a pearl. This is purely a defense mechanism, to trap a potentially threatening irritant such as a parasite inside its shell, or to respond to an attack from outside injuring the mantle tissue. The oyster creates a pearl sac to seal off the irritation. It has also been observed that an amoeba avoids being prodded with a pin, but there is not enough evidence to suggest how much it feels this. Irritation is apparently the only universal sense shared by even single-celled creatures. It is postulated that most such beings also feel pain, but this is a projection – empathy. Some philosophers, notably René Descartes, denied it entirely, even for such higher mammals as dogs or primates like monkeys; Descartes considered intelligence a prerequisite for the feeling of pain. Types Eye irritation Modern office work with use of office equipment has raised concerns about possible adverse health effects. Since the 1970s, reports have linked mucosal, skin, and general symptoms to work with self-copying paper. Emission of various particulate and volatile substances has been suggested as specific causes. These symptoms have been related to sick building syndrome, which involves symptoms such as irritation to the eyes, skin, and upper airways, headache and fatigue. The eye is also a source of chronic irritation. Disorders like Sjögren's syndrome, where one does not make tears, can cause a dry eye sensation which feels very unpleasant. The condition is difficult to treat and is lifelong. 
Besides artificial tears, there is a drug called Restasis which may help. Blepharitis is dryness and itching on the upper eyelids. This condition is often seen in young people and can lead to reddish dry eyes and scaly eyebrows. To relieve the itching sensation, one may need to apply warm compresses and use topical corticosteroid creams. Skin Eczema is another cause of chronic irritation and affects millions of individuals. Eczema simply means a dry skin which is itchy. The condition usually starts at an early age and continues throughout life. The major complaint of people with eczema is an itchy dry skin. Sometimes, the itching will be associated with a skin rash. The affected areas are always dry, scaly, reddish and may ooze sometimes. Eczema cannot be cured, but its symptoms can be controlled. One should use moisturizers, use cold compresses and avoid frequent hot showers. There are over-the-counter corticosteroid creams which can be applied. Sometimes, an antihistamine has to be used to prevent the chronic itching sensations. There are also many individuals who have allergies to a whole host of substances like nuts, hair, dander, plants and fabrics. For these individuals, even minimal exposure can lead to a full-blown skin rash, itching, wheezing and coughing. Unfortunately, other than avoidance, there is no other cure. There are allergy shots which can help desensitize against an allergen, but often the results are poor and the treatments are expensive. Most of these individuals with chronic irritation from allergens usually need to take antihistamines or use a bronchodilator to relieve symptoms. Another common irritation disorder in females is intertrigo. This disorder is associated with chronic irritation under folds of skin. It is typically seen under large breasts, in the groin, and in folds of the abdomen in obese individuals. Candida quickly grows in warm moist areas of these folds and presents as a chronic itch. Over time, the skin becomes red and often oozes. Perspiration is also a chronic type of irritation which can be very annoying. Besides being socially unacceptable, sweat stains clothes and can present with a foul odor. In some individuals, the warm moist areas often become easily infected. The best way to treat excess sweating is good hygiene, frequent change of clothes and use of deodorants/antiperspirants. Vaginal irritation One of the most common areas of the body associated with irritation is the vagina. Many women complain of an itch, dryness, or discharge in the perineum at some point in their lives. There are several causes of vaginal irritation, including fungal vaginitis (like candida) or trichomoniasis. Often, herpes simplex infection of the mouth or genitalia can be recurrent and prove to be extremely irritating. Sometimes, the irritation can be of the chronic type and it can be so intense that it also causes painful intercourse. Aside from infections, chronic irritation of the vagina may be related to the use of contraceptives and condoms made from latex. The majority of contraceptives are made of synthetic chemicals which can induce allergies, rash and itching. Sometimes the lubricant used for intercourse may cause irritation. Another cause of irritation in women is postmenopausal vaginitis. The decline in the female sex hormones leads to development of dryness and itching in the vagina. This is often accompanied by painful sexual intercourse. Cracks and tears often develop on outer aspects of the labia, which become red from chronic scratching. 
Postmenopausal vaginitis can be treated with short-term use of a vaginal estrogen pessary and use of a moisturizer. Lungs Individuals who smoke or are exposed to smog or other airborne pollutants can develop a condition known as COPD. In this disorder, there is constant irritation of the breathing tubes (trachea) and the small airways. The constant irritation results in excess production of mucus, which makes breathing difficult. Frequently, these individuals wake up in the morning with copious amounts of foul-smelling mucus and a cough which lasts all day. Wheeze and heavy phlegm are common findings. COPD is a lifelong disorder and there is no cure. Eventually most people develop recurrent pneumonia, lack any type of endurance, and are unable to work productively. One of the ways to avoid chronic bronchitis is to stop, or never start, smoking. Stomach Gastritis or stomach upset is a common irritating disorder affecting millions of people. Gastritis is inflammation of the stomach wall lining and has many causes. Smoking, excess alcohol consumption and the use of non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen, account for the majority of causes of gastritis. In some cases, gastritis may develop after surgery, a major burn, infection or emotional stress. The most common symptom of gastritis is sharp abdominal pain, which may radiate to the back. This may be associated with nausea, vomiting, abdominal bloating and a lack of appetite. When the condition is severe it may even result in blood in the stools. The condition often comes and goes for years because most people continue to drink alcohol or use NSAIDs. Treatment includes the use of antacids or acid-neutralizing drugs, antibiotics, and avoiding spicy food and alcohol. See also Allergy Irritability (psychology) Itch Stimulus (physiology)
Voice change
A voice change or voice mutation, sometimes referred to as a voice break or voice crack, commonly refers to the deepening of the voice of people as they reach puberty. Before puberty, both sexes have roughly similar vocal pitch, but during puberty the male voice typically deepens an octave, while the female voice usually deepens only by a few tones. A similar effect is a "voice crack", during which a person's voice suddenly and unintentionally enters a higher register (usually falsetto) for a brief period of time. This may be caused by singing or talking at a pitch outside the person's natural vocal range, stress, fatigue, emotional tension, or the physical changes associated with puberty. An instance of a voice crack (when associated with puberty) lasts for only a moment and generally occurs less frequently as a person grows into maturity. Anatomical changes Most of the voice change begins around puberty. Adult pitch is reached 2–3 years later, but the voice does not stabilize until the early years of adulthood. It usually happens months or years before the development of significant facial hair. Under the influence of sex hormones, the voice box, or larynx, grows in both sexes. This growth is far more prominent in males than in females and is more easily perceived. It causes the voice to drop and deepen. Along with the larynx, the vocal folds (vocal cords) grow significantly longer and thicker. The facial bones begin to grow as well. Cavities in the sinuses, the nose, and the back of the throat grow bigger, thus creating more space within the head to allow the voice to resonate. Occasionally, voice change is accompanied by unsteadiness of vocalization in the early stages of untrained voices. Due to the significant drop in pitch of the vocal range, people may unintentionally speak in head voice or even strain their voices using pitches which were previously chest voice, the lowest part of the modal voice register. History Historical changes in the average age of puberty have had profound effects on the composing of music for children's voices. The composer Joseph Haydn was still singing high-pitched parts into his 17th year. Unchanged voices were in high demand for church choirs, which historically excluded women. The British cathedral choir ideal remains based on boy sopranos (or trebles), with the alto part executed by adult countertenors. In German-speaking countries, however, the alto parts are also sung by boys. Historically, a strategy for avoiding the shift altogether was castration. Castrati are first documented in Italian church records from the 1550s. Mozart's Exsultate, jubilate, Allegri's Miserere and parts of Handel's Messiah were written for this voice, whose distinctive timbre was widely exploited in Baroque opera. In 1861, the practice of castration became illegal in Italy, and in 1878 Pope Leo XIII prohibited the hiring of new castrati by the church. The last castrato was Alessandro Moreschi, who served in the Sistine Chapel Choir. Singing Children are able to sing in the same octave as women. When the voices of male teenagers break, they are no longer able to sing in that octave. For music sung in the same key as women, they can sing in falsetto or drop an octave. See also Puberty Voice masculinization and feminization
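The pitch changes described above have a simple acoustic reading: dropping an octave halves the fundamental frequency, while each equal-tempered semitone corresponds to a factor of 2^(1/12), so a whole tone is 2^(2/12). The sketch below works through both cases; the 250 Hz starting pitch is an illustrative assumption for a child's speaking voice, not a figure from the article, and the function name is hypothetical.

```python
# Equal-temperament arithmetic for the voice drops described above.
def drop_by_semitones(freq_hz: float, semitones: float) -> float:
    """Lower a frequency by a number of equal-tempered semitones."""
    return freq_hz / (2 ** (semitones / 12))

child_pitch = 250.0  # illustrative starting fundamental, in Hz
print(drop_by_semitones(child_pitch, 12))  # octave drop (typical male change): 125.0 Hz
print(drop_by_semitones(child_pitch, 4))   # "a few tones" (2 tones = 4 semitones): ~198 Hz
```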
Abdominal pain
Abdominal pain, also known as a stomach ache, is a symptom associated with both non-serious and serious medical issues. Common causes of pain in the abdomen include gastroenteritis and irritable bowel syndrome. About 15% of people have a more serious underlying condition such as appendicitis, leaking or ruptured abdominal aortic aneurysm, diverticulitis, or ectopic pregnancy. In a third of cases the exact cause is unclear. Given that a variety of diseases can cause some form of abdominal pain, a systematic approach to the examination of a person and the formulation of a differential diagnosis remains important. Differential diagnosis The most frequent reasons for abdominal pain are gastroenteritis (13%), irritable bowel syndrome (8%), urinary tract problems (5%), inflammation of the stomach (5%) and constipation (5%). In about 30% of cases, the cause is not determined. About 10% of cases have a more serious cause including gallbladder (gallstones or biliary dyskinesia) or pancreas problems (4%), diverticulitis (3%), appendicitis (2%) and cancer (1%). More common in those who are older, ischemic colitis, mesenteric ischemia, and abdominal aortic aneurysms are other serious causes. Acute abdominal pain Acute abdomen can be defined as severe, persistent abdominal pain of sudden onset that is likely to require surgical intervention to treat its cause. The pain may frequently be associated with nausea and vomiting, abdominal distention, fever and signs of shock. One of the most common conditions associated with acute abdominal pain is acute appendicitis. Selected causes Traumatic: blunt or perforating trauma to the stomach, bowel, spleen, liver, or kidney Inflammatory: Infections such as appendicitis, cholecystitis, pancreatitis, pyelonephritis, peritonitis, pelvic inflammatory disease, hepatitis, mesenteric adenitis, or a subdiaphragmatic abscess Perforation of a peptic ulcer, a diverticulum, or the caecum Complications of inflammatory bowel disease such as Crohn's disease or ulcerative colitis Mechanical: Small bowel obstruction secondary to adhesions caused by previous surgeries, intussusception, hernias, benign or malignant neoplasms Large bowel obstruction caused by colorectal cancer, inflammatory bowel disease, volvulus, fecal impaction or hernia Vascular: occlusive intestinal ischemia, usually caused by thromboembolism of the superior mesenteric artery By system A more extensive list includes the following: Gastrointestinal GI tract Inflammatory: gastroenteritis, appendicitis, gastritis, esophagitis, diverticulitis, Crohn's disease, ulcerative colitis, microscopic colitis Obstruction: hernia, intussusception, volvulus, post-surgical adhesions, tumors, severe constipation, hemorrhoids Vascular: embolism, thrombosis, hemorrhage, sickle cell disease, abdominal angina, blood vessel compression (such as celiac artery compression syndrome), superior mesenteric artery syndrome, postural orthostatic tachycardia syndrome Digestive: peptic ulcer, lactose intolerance, celiac disease, food allergies, indigestion Glands Bile system Inflammatory: cholecystitis, cholangitis Obstruction: cholelithiasis, tumours Liver Inflammatory: hepatitis, liver abscess Pancreatic Inflammatory: pancreatitis Renal and urological Inflammation: pyelonephritis, bladder infection Obstruction: kidney stones, urolithiasis, urinary retention, tumours Vascular: left renal vein entrapment Gynaecological or obstetric Inflammatory: pelvic inflammatory disease Mechanical: ovarian torsion Endocrinological: menstruation, Mittelschmerz 
Tumors: endometriosis, fibroids, ovarian cyst, ovarian cancer Pregnancy: ruptured ectopic pregnancy, threatened abortion Abdominal wall muscle strain or trauma muscular infection neurogenic pain: herpes zoster, radiculitis in Lyme disease, abdominal cutaneous nerve entrapment syndrome (ACNES), tabes dorsalis Referred pain from the thorax: pneumonia, pulmonary embolism, ischemic heart disease, pericarditis from the spine: radiculitis from the genitals: testicular torsion Metabolic disturbance uremia, diabetic ketoacidosis, porphyria, C1-esterase inhibitor deficiency, adrenal insufficiency, lead poisoning, black widow spider bite, narcotic withdrawal Blood vessels aortic dissection, abdominal aortic aneurysm Immune system sarcoidosis vasculitis familial Mediterranean fever Idiopathic irritable bowel syndrome (IBS) (affecting up to 20% of the population, IBS is the most common cause of recurrent and intermittent abdominal pain) By location The location of abdominal pain can provide information about what may be causing the pain. The abdomen can be divided into four regions called quadrants. Locations and associated conditions include: Diffuse Peritonitis Vascular: mesenteric ischemia, ischemic colitis, Henoch–Schönlein purpura, sickle cell disease, systemic lupus erythematosus, polyarteritis nodosa Small bowel obstruction Irritable bowel syndrome Metabolic disorders: ketoacidosis, porphyria, familial Mediterranean fever, adrenal crisis Epigastric Heart: myocardial infarction, pericarditis Stomach: gastritis, stomach ulcer, stomach cancer Pancreas: pancreatitis, pancreatic cancer Intestinal: duodenal ulcer, diverticulitis, appendicitis Right upper quadrant Liver: hepatomegaly, fatty liver, hepatitis, liver cancer, abscess Gallbladder and biliary tract: inflammation, gallstones, worm infection, cholangitis Colon: bowel obstruction, functional disorders, gas accumulation, spasm, inflammation, colon cancer Other: pneumonia, Fitz-Hugh–Curtis syndrome Left upper quadrant Splenomegaly Colon: bowel obstruction, functional disorders, gas accumulation, spasm, inflammation, colon cancer Peri-umbilical (the area around the umbilicus, also known as the belly button) Appendicitis Pancreatitis Inferior myocardial infarction Peptic ulcer Diabetic ketoacidosis Vascular: aortic dissection, aortic rupture Bowel: mesenteric ischemia, celiac disease, inflammation, intestinal spasm, functional disorders, small bowel obstruction Lower abdominal pain Diarrhea Colitis Crohn's disease Dysentery Hernia Right lower quadrant Colon: intussusception, bowel obstruction, appendicitis (McBurney's point) Renal: kidney stone (nephrolithiasis), pyelonephritis Pelvic: cystitis, bladder stone, bladder cancer, pelvic inflammatory disease, pelvic pain syndrome Gynecologic: endometriosis, intrauterine pregnancy, ectopic pregnancy, ovarian cyst, ovarian torsion, fibroid (leiomyoma), abscess, ovarian cancer, endometrial cancer Left lower quadrant Bowel: diverticulitis, sigmoid colon volvulus, bowel obstruction, gas accumulation, toxic megacolon Right low back pain Liver: hepatomegaly Kidney: kidney stone (nephrolithiasis), complicated urinary tract infection Left low back pain Spleen Kidney: kidney stone (nephrolithiasis), complicated urinary tract infection Low back pain kidney pain (kidney stone, kidney cancer, hydronephrosis) Ureteral stone pain Pathophysiology Abdominal pain can be referred to as visceral pain or peritoneal pain. The contents of the abdomen can be divided into the foregut, midgut, and hindgut. 
The foregut contains the pharynx, lower respiratory tract, portions of the esophagus, stomach, portions of the duodenum (proximal), liver, biliary tract (including the gallbladder and bile ducts), and the pancreas. The midgut contains portions of the duodenum (distal), cecum, appendix, ascending colon, and first half of the transverse colon. The hindgut contains the distal half of the transverse colon, descending colon, sigmoid colon, rectum, and superior anal canal. Each subsection of the gut has an associated visceral afferent nerve that transmits sensory information from the viscera to the spinal cord, traveling with the autonomic sympathetic nerves. The visceral sensory information from the gut traveling to the spinal cord, termed the visceral afferent, is non-specific and overlaps with the somatic afferent nerves, which are very specific. Therefore, visceral afferent information traveling to the spinal cord can present in the distribution of the somatic afferent nerve; this is why appendicitis initially presents with T10 periumbilical pain when it first begins and becomes T12 pain as the abdominal wall peritoneum (which is rich with somatic afferent nerves) is involved. Diagnosis A thorough patient history and physical examination is used to better understand the underlying cause of abdominal pain. The process of gathering a history may include: Identifying more information about the chief complaint by eliciting a history of present illness; i.e. a narrative of the current symptoms such as the onset, location, duration, character, aggravating or relieving factors, and temporal nature of the pain. Identifying other possible factors that may aid in the diagnosis of the underlying cause of abdominal pain, such as recent travel, recent contact with other ill individuals, and for females, a thorough gynecologic history. Learning about the patient's past medical history, focusing on any prior issues or surgical procedures. Clarifying the patient's current medication regimen, including prescriptions, over-the-counter medications, and supplements. Confirming the patient's drug and food allergies. Discussing with the patient any family history of disease processes, focusing on conditions that might resemble the patient's current presentation. Discussing with the patient any health-related behaviors (e.g. tobacco use, alcohol consumption, drug use, and sexual activity) that might make certain diagnoses more likely. Reviewing the presence of non-abdominal symptoms (e.g., fever, chills, chest pain, shortness of breath, vaginal bleeding) that can further clarify the diagnostic picture. Using Carnett's sign to differentiate between visceral pain and pain originating in the muscles of the abdominal wall. After gathering a thorough history, one should perform a physical exam in order to identify important physical signs that might clarify the diagnosis, including a cardiovascular exam, lung exam, thorough abdominal exam, and for females, a genitourinary exam. Additional investigations that can aid diagnosis include: Blood tests including complete blood count, basic metabolic panel, electrolytes, liver function tests, amylase, lipase, troponin I, and for females, a serum pregnancy test. Urinalysis Imaging including chest and abdominal X-rays Electrocardiogram If diagnosis remains unclear after history, examination, and basic investigations as above, then more advanced investigations may reveal a diagnosis. 
Such tests include: Computed tomography of the abdomen/pelvis Abdominal or pelvic ultrasound Endoscopy and/or colonoscopy Management The management of abdominal pain depends on many factors, including the etiology of the pain. In the emergency department, a person presenting with abdominal pain may initially require IV fluids due to decreased intake secondary to abdominal pain and possible emesis. Treatment for abdominal pain includes analgesia, such as non-opioid (ketorolac) and opioid medications (morphine, fentanyl). Choice of analgesia is dependent on the cause of the pain, as ketorolac can worsen some intra-abdominal processes. Patients presenting to the emergency department with abdominal pain may receive a "GI cocktail" that includes an antacid or acid-reducing agent (examples include omeprazole, ranitidine, magnesium hydroxide, and calcium carbonate) and lidocaine. After addressing pain, there may be a role for antimicrobial treatment in some cases of abdominal pain. Butylscopolamine (Buscopan) is used to treat cramping abdominal pain with some success. Surgical management for causes of abdominal pain includes but is not limited to cholecystectomy, appendectomy, and exploratory laparotomy. Epidemiology Abdominal pain is the reason about 3% of adults see their family physician. Rates of emergency department (ED) visits in the United States for abdominal pain increased 18% from 2006 through 2011. This was the largest increase out of 20 common conditions seen in the ED. The rate of ED use for nausea and vomiting also increased 18%.
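The location-based differentials in the "By location" section above are essentially a table keyed by abdominal region. A minimal sketch of that lookup follows; it is illustrative only and not a diagnostic tool, the dictionary and function names are hypothetical, and only a few representative causes per region are copied from the list above.

```python
# Abbreviated, illustrative mapping from abdominal region to a few of the
# causes named in the "By location" section above; not a diagnostic tool.
DIFFERENTIALS_BY_REGION = {
    "epigastric": ["gastritis", "peptic ulcer", "pancreatitis", "myocardial infarction"],
    "right upper quadrant": ["hepatitis", "gallstones", "cholangitis", "pneumonia"],
    "right lower quadrant": ["appendicitis", "kidney stone", "ectopic pregnancy"],
    "left lower quadrant": ["diverticulitis", "sigmoid colon volvulus", "bowel obstruction"],
}

def differentials(region: str) -> list[str]:
    """Return representative causes for a region, per the list above."""
    return DIFFERENTIALS_BY_REGION.get(region.lower(), [])

print(differentials("Right lower quadrant"))
```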
Ischemia
Ischemia or ischaemia is a restriction in blood supply to any tissue, muscle group, or organ of the body, causing a shortage of the oxygen that is needed for cellular metabolism (to keep tissue alive). Ischemia is generally caused by problems with blood vessels, with resultant damage to or dysfunction of tissue, i.e. hypoxia and microvascular dysfunction. It also means local hypoxia in a given part of the body, sometimes resulting from constriction (such as vasoconstriction, thrombosis or embolism). Ischemia comprises not only insufficiency of oxygen, but also reduced availability of nutrients and inadequate removal of metabolic wastes. Ischemia can be partial (poor perfusion) or total (complete blockage). The inadequate delivery of oxygenated blood to the organs must be resolved either by treating the cause of the inadequate delivery or by reducing the oxygen demand of the system that needs it. For example, patients with myocardial ischemia have a decreased blood flow to the heart and are prescribed medications that reduce chronotropy and inotropy so that cardiac demand matches the lower level of blood delivery through the stenosed vessel. Signs and symptoms The signs and symptoms of ischemia vary, as it can occur anywhere in the body and depend on the degree to which blood flow is interrupted. For example, clinical manifestations of acute limb ischemia (which can be summarized as the "six Ps") include pain, pallor, pulselessness, paresthesia, paralysis, and poikilothermia. Without immediate intervention, ischemia may progress quickly to tissue necrosis and gangrene within a few hours. Paralysis is a very late sign of acute arterial ischemia and signals the death of nerves supplying the extremity. Foot drop may occur as a result of nerve damage. Because nerves are extremely sensitive to hypoxia, limb paralysis or ischemic neuropathy may persist after revascularization and may be permanent. Cardiac ischemia Cardiac ischemia may be asymptomatic or may cause chest pain, known as angina pectoris. It occurs when the heart muscle, or myocardium, receives insufficient blood flow. This most frequently results from atherosclerosis, which is the long-term accumulation of cholesterol-rich plaques in the coronary arteries. In most Western countries, ischemic heart disease is the most common cause of death in both men and women, and a major cause of hospital admissions. Bowel Both large and small intestines can be affected by ischemia. The blockage of blood flow to the large intestine (colon) is called ischemic colitis. Ischemia of the small bowel is called mesenteric ischemia. Brain Brain ischemia is insufficient blood flow to the brain, and can be acute or chronic. Acute ischemic stroke is a neurological emergency typically caused by a blood clot blocking blood flow in a vessel in the brain. Chronic ischemia of the brain may result in a form of dementia called vascular dementia. A sudden, brief episode (symptoms lasting only minutes) of ischemia affecting the brain is called a transient ischemic attack (TIA), often called a mini-stroke. TIAs can be a warning of future strokes, with approximately 1/3 of TIA patients having a serious stroke within one year. Limb Inadequate blood supply to a limb may result in acute limb ischemia or chronic limb-threatening ischemia. Cutaneous Reduced blood flow to the skin layers may result in mottling or uneven, patchy discoloration of the skin. Kidney ischemia Kidney ischemia is a loss of blood flow to the kidney cells. 
Several physical symptoms include shrinkage of one or both kidneys, renovascular hypertension, acute renal failure, progressive azotemia, and acute pulmonary edema. It is a disease with a high mortality rate and high morbidity. Failure to treat could cause chronic kidney disease and a need for renal surgery. Causes Ischemia is a vascular disease involving an interruption in the arterial blood supply to a tissue, organ, or extremity that, if untreated, can lead to tissue death. It can be caused by embolism, thrombosis of an atherosclerotic artery, or trauma. Venous problems like venous outflow obstruction and low-flow states can cause acute arterial ischemia. An aneurysm is one of the most frequent causes of acute arterial ischemia. Other causes are heart conditions including myocardial infarction, mitral valve disease, chronic atrial fibrillation, cardiomyopathies, and prostheses, in all of which thrombi are prone to develop. Occlusion The thrombi may dislodge and may travel anywhere in the circulatory system, where they may lead to pulmonary embolus, an acute arterial occlusion causing the oxygen and blood supply distal to the embolus to decrease suddenly. The degree and extent of symptoms depend on the size and location of the obstruction, the occurrence of clot fragmentation with embolism to smaller vessels, and the degree of peripheral arterial disease (PAD). Thromboembolism (blood clots) Embolism (foreign bodies in the circulation, e.g. amniotic fluid embolism) Trauma Traumatic injury to an extremity may produce partial or total occlusion of a vessel from compression, shearing, or laceration. Acute arterial occlusion may develop as a result of arterial dissection in the carotid artery or aorta or as a result of iatrogenic arterial injury (e.g., after angiography). Other An inadequate flow of blood to a part of the body may be caused by any of the following: Thoracic outlet syndrome (compression of the brachial plexus) Atherosclerosis (lipid-laden plaques obstructing the lumen of arteries) Hypoglycemia (lower than normal level of glucose) Tachycardia (abnormally rapid beating of the heart) Radiotherapy Hypotension (low blood pressure, e.g. in septic shock, heart failure) Outside compression of a blood vessel, e.g. by a tumor or in the case of superior mesenteric artery syndrome Sickle cell disease (abnormally shaped red blood cells) Induced g-forces which restrict the blood flow and force the blood to the extremities of the body, as in acrobatics and military flying Localized extreme cold, such as by frostbite or improper cold compression therapy Tourniquet application An increased level of glutamate receptor stimulation Arteriovenous malformations and peripheral artery occlusive disease Rupture of significant blood vessels supplying a tissue or organ Anemia, which vasoconstricts the periphery so that red blood cells can work internally on vital organs such as the heart and brain, thus causing lack of oxygen to the periphery Premature discontinuation of any oral anticoagulant Unconsciousness, such as due to the ingestion of excessive doses of central depressants like alcohol or opioids, which can result in ischemia of the extremities due to unusual body positions that prevent normal circulation Pathophysiology Ischemia results in tissue damage in a process known as ischemic cascade. 
The damage is the result of the build-up of metabolic waste products, inability to maintain cell membranes, mitochondrial damage, and eventual leakage of autolyzing proteolytic enzymes into the cell and surrounding tissues. Restoration of blood supply to ischemic tissues can cause additional damage known as reperfusion injury, which can be more damaging than the initial ischemia. Reintroduction of blood flow brings oxygen back to the tissues, causing a greater production of free radicals and reactive oxygen species that damage cells. It also brings more calcium ions to the tissues, causing further calcium overloading, which can result in potentially fatal cardiac arrhythmias and also accelerates cellular self-destruction. The restored blood flow also exaggerates the inflammatory response of damaged tissues, causing white blood cells to destroy damaged cells that may otherwise still be viable. Treatment Early treatment is essential to keep the affected organ viable. The treatment options include injection of an anticoagulant, thrombolysis, embolectomy, surgical revascularization, or partial amputation. Anticoagulant therapy is initiated to prevent further enlargement of the thrombus. Continuous IV unfractionated heparin has been the traditional agent of choice. If the condition of the ischemic limb is stabilized with anticoagulation, recently formed emboli may be treated with catheter-directed thrombolysis using intra-arterial infusion of a thrombolytic agent (e.g., recombinant tissue plasminogen activator (tPA), streptokinase, or urokinase). A percutaneous catheter inserted into the femoral artery and threaded to the site of the clot is used to infuse the drug. Unlike anticoagulants, thrombolytic agents work directly to resolve the clot over a period of 24 to 48 hours. Direct arteriotomy may be necessary to remove the clot. Surgical revascularization may be used in the setting of trauma (e.g., laceration of the artery). Amputation is reserved for cases where limb salvage is not possible. If the patient continues to have a risk of further embolization from some persistent source, such as chronic atrial fibrillation, treatment includes long-term oral anticoagulation to prevent further acute arterial ischemic episodes. Decrease in body temperature reduces the aerobic metabolic rate of the affected cells, reducing the immediate effects of hypoxia. Reduction of body temperature also reduces the inflammatory response and reperfusion injury. For frostbite injuries, limiting thawing and warming of tissues until warmer temperatures can be sustained may reduce reperfusion injury. Ischemic stroke is at times treated with various levels of statin therapy at hospital discharge, followed by home time, in an attempt to lower the risk of adverse events. Society and culture The Infarct Combat Project (ICP) is an international nonprofit organization founded in 1998 to fight ischemic heart diseases through education and research. Etymology and pronunciation The word ischemia is from Greek ἴσχαιμος iskhaimos, 'staunching blood', from ἴσχω iskhō, 'keep back, restrain', and αἷμα haima, 'blood'. See also Infarction Inhibitor protein Trauma triad of death Ischemia-reperfusion injury of the appendicular musculoskeletal system
Taste
The gustatory system or sense of taste is the sensory system that is partially responsible for the perception of taste (flavor). Taste is the perception produced or stimulated when a substance in the mouth reacts chemically with taste receptor cells located on taste buds in the oral cavity, mostly on the tongue. Taste, along with olfaction and trigeminal nerve stimulation (registering texture, pain, and temperature), determines flavors of food and other substances. Humans have taste receptors on taste buds and other areas, including the upper surface of the tongue and the epiglottis. The gustatory cortex is responsible for the perception of taste. The tongue is covered with thousands of small bumps called papillae, which are visible to the naked eye. Within each papilla are hundreds of taste buds. The exception to this is the filiform papillae, which do not contain taste buds. There are between 2000 and 5000 taste buds located on the back and front of the tongue. Others are located on the roof, sides and back of the mouth, and in the throat. Each taste bud contains 50 to 100 taste receptor cells. Taste receptors in the mouth sense the five taste modalities: sweetness, sourness, saltiness, bitterness, and savoriness (also known as savory or umami). Scientific experiments have demonstrated that these five tastes exist and are distinct from one another. Taste buds are able to distinguish between different tastes through detecting interaction with different molecules or ions. Sweet, savory, and bitter tastes are triggered by the binding of molecules to G protein-coupled receptors on the cell membranes of taste buds. Saltiness and sourness are perceived when alkali metal or hydrogen ions enter taste buds, respectively. The basic taste modalities contribute only partially to the sensation and flavor of food in the mouth: other factors include smell, detected by the olfactory epithelium of the nose; texture, detected through a variety of mechanoreceptors, muscle nerves, etc.; temperature, detected by thermoreceptors; and "coolness" (such as of menthol) and "hotness" (pungency), through chemesthesis. As the gustatory system senses both harmful and beneficial things, all basic taste modalities are classified as either aversive or appetitive, depending upon the effect the things they sense have on our bodies. Sweetness helps to identify energy-rich foods, while bitterness serves as a warning sign of poisons. Among humans, taste perception begins to fade in older age because of loss of tongue papillae and a general decrease in saliva production. Humans can also have distortion of tastes (dysgeusia). Not all mammals share the same taste modalities: some rodents can taste starch (which humans cannot), cats cannot taste sweetness, and several other carnivores, including hyenas, dolphins, and sea lions, have lost the ability to sense up to four of their ancestral five taste modalities. Basic tastes The gustatory system allows animals to distinguish between safe and harmful food, and to gauge foods' nutritional value. Digestive enzymes in saliva begin to dissolve food into base chemicals that are washed over the papillae and detected as tastes by the taste buds. The tongue is covered with thousands of small bumps called papillae, which are visible to the naked eye. Within each papilla are hundreds of taste buds. The exception to this are the filiform papillae, which do not contain taste buds. There are between 2000 and 5000 taste buds located on the back and front of the tongue. 
Others are located on the roof, sides and back of the mouth, and in the throat. Each taste bud contains 50 to 100 taste receptor cells. The five specific tastes received by taste receptors are saltiness, sweetness, bitterness, sourness, and savoriness, often known by its Japanese name "umami", which translates to "deliciousness". As of the early 20th century, Western physiologists and psychologists believed there were four basic tastes: sweetness, sourness, saltiness, and bitterness. The concept of a "savory" taste was not present in Western science at that time, but was postulated in Japanese research. By the end of the 20th century, the concept of umami was becoming familiar to Western society. One study found that both salt and sour taste mechanisms detect, in different ways, the presence of sodium chloride (salt) in the mouth. However, acids are also detected and perceived as sour. The detection of salt is important to many organisms, but specifically mammals, as it serves a critical role in ion and water homeostasis in the body. It is specifically needed in the mammalian kidney as an osmotically active compound which facilitates passive re-uptake of water into the blood. Because of this, salt elicits a pleasant taste in most humans. Sour and salt tastes can be pleasant in small quantities, but in larger quantities become more and more unpleasant to taste. For sour taste this is presumably because the sour taste can signal under-ripe fruit, rotten meat, and other spoiled foods, which can be dangerous to the body because of bacteria which grow in such media. Additionally, sour taste signals acids, which can cause serious tissue damage. Sweet taste signals the presence of carbohydrates in solution. Since carbohydrates have a very high calorie count (saccharides have many bonds, therefore much energy), they are desirable to the human body, which evolved to seek out the highest-calorie foods. They are used as direct energy (sugars) and storage of energy (glycogen). However, there are many non-carbohydrate molecules that trigger a sweet response, leading to the development of many artificial sweeteners, including saccharin, sucralose, and aspartame. It is still unclear how these substances activate the sweet receptors and what adaptational significance this has had. The savory taste (known in Japanese as "umami") was identified by Japanese chemist Kikunae Ikeda; it signals the presence of the amino acid L-glutamate, triggers a pleasurable response and thus encourages the intake of peptides and proteins. The amino acids in proteins are used in the body to build muscles and organs, transport molecules (hemoglobin), antibodies, and the organic catalysts known as enzymes. These are all critical molecules, and as such it is important to have a steady supply of amino acids, hence the pleasurable response to their presence in the mouth. Pungency (piquancy or hotness) had traditionally been considered a sixth basic taste. In 2015, researchers suggested a new basic taste of fatty acids called fat taste, although oleogustus and pinguis have both been proposed as alternate terms. Sweetness Sweetness, usually regarded as a pleasurable sensation, is produced by the presence of sugars and substances that mimic sugar. Sweetness may be connected to aldehydes and ketones, which contain a carbonyl group. Sweetness is detected by a variety of G protein-coupled receptors (GPCR) coupled to the G protein gustducin found on the taste buds. 
At least two different variants of the "sweetness receptors" must be activated for the brain to register sweetness. Compounds the brain senses as sweet are compounds that can bind with varying bond strength to two different sweetness receptors. These receptors are T1R2+3 (heterodimer) and T1R3 (homodimer), which account for all sweet sensing in humans and animals. Taste detection thresholds for sweet substances are rated relative to sucrose, which has an index of 1. The average human detection threshold for sucrose is 10 millimoles per liter. For lactose it is 30 millimoles per liter, giving a sweetness index of 0.3, and for 5-nitro-2-propoxyaniline it is 0.002 millimoles per liter. "Natural" sweeteners such as saccharides activate the GPCR, which releases gustducin. The gustducin then activates the molecule adenylate cyclase, which catalyzes the production of the molecule cAMP, or adenosine 3',5'-cyclic monophosphate. This molecule closes potassium ion channels, leading to depolarization and neurotransmitter release. Synthetic sweeteners such as saccharin activate different GPCRs and induce taste receptor cell depolarization by an alternate pathway. Sourness Sourness is the taste that detects acidity. The sourness of substances is rated relative to dilute hydrochloric acid, which has a sourness index of 1. By comparison, tartaric acid has a sourness index of 0.7, citric acid an index of 0.46, and carbonic acid an index of 0.06. Sour taste is detected by a small subset of cells, distributed across all taste buds, called Type III taste receptor cells. H+ ions (protons), which are abundant in sour substances, can directly enter the Type III taste cells through a proton channel. This channel was identified in 2018 as otopetrin 1 (OTOP1). The transfer of positive charge into the cell can itself trigger an electrical response. Some weak acids, such as acetic acid, can also penetrate taste cells; intracellular hydrogen ions inhibit potassium channels, which normally function to hyperpolarize the cell. By a combination of direct intake of hydrogen ions through OTOP1 ion channels (which itself depolarizes the cell) and the inhibition of the hyperpolarizing channel, sourness causes the taste cell to fire action potentials and release neurotransmitter. The most common foods with natural sourness are fruits, such as lemon, lime, grape, orange, tamarind, and bitter melon. Fermented foods, such as wine, vinegar or yogurt, may have a sour taste. Children show a greater enjoyment of sour flavors than adults, and sour candy containing citric acid or malic acid is common. Saltiness The simplest receptor found in the mouth is the sodium chloride (salt) receptor. Saltiness is a taste produced primarily by the presence of sodium ions. Other ions of the alkali metals group also taste salty, but the further from sodium, the less salty the sensation is. A sodium channel in the taste cell wall allows sodium cations to enter the cell. This on its own depolarizes the cell, and opens voltage-dependent calcium channels, flooding the cell with positive calcium ions and leading to neurotransmitter release. This sodium channel is known as an epithelial sodium channel (ENaC) and is composed of three subunits. An ENaC can be blocked by the drug amiloride in many mammals, especially rats. The sensitivity of the salt taste to amiloride in humans, however, is much less pronounced, leading to conjecture that there may be additional receptor proteins besides ENaC to be discovered. 
The sizes of lithium and potassium ions most closely resemble that of sodium, and thus their saltiness is most similar. In contrast, rubidium and caesium ions are far larger, so their salty taste differs accordingly. The saltiness of substances is rated relative to sodium chloride (NaCl), which has an index of 1. Potassium, as potassium chloride (KCl), is the principal ingredient in salt substitutes and has a saltiness index of 0.6. Other monovalent cations, e.g. ammonium (NH4+), and divalent cations of the alkaline earth metal group of the periodic table, e.g. calcium (Ca2+), generally elicit a bitter rather than a salty taste even though they, too, can pass directly through ion channels in the tongue, generating an action potential. But the chloride of calcium is saltier and less bitter than potassium chloride, and is commonly used in pickle brine instead of KCl. Bitterness Bitterness is one of the most sensitive of the tastes, and many perceive it as unpleasant, sharp, or disagreeable, but it is sometimes desirable and intentionally added via various bittering agents. Common bitter foods and beverages include coffee, unsweetened cocoa, South American mate, coca tea, bitter gourd, uncured olives, citrus peel, some varieties of cheese, many plants in the family Brassicaceae, dandelion greens, horehound, wild chicory, and escarole. The ethanol in alcoholic beverages tastes bitter, as do the additional bitter ingredients found in some alcoholic beverages, including hops in beer and gentian in bitters. Quinine is also known for its bitter taste and is found in tonic water. Bitterness is of interest to those who study evolution, as well as various health researchers, since a large number of natural bitter compounds are known to be toxic. The ability to detect bitter-tasting, toxic compounds at low thresholds is considered to provide an important protective function. Plant leaves often contain toxic compounds, and among leaf-eating primates there is a tendency to prefer immature leaves, which tend to be higher in protein and lower in fiber and poisons than mature leaves. Amongst humans, various food processing techniques are used worldwide to detoxify otherwise inedible foods and make them palatable. Furthermore, the use of fire, changes in diet, and avoidance of toxins have led to neutral evolution in human bitter sensitivity. This has allowed several loss-of-function mutations that have led to a reduced sensory capacity towards bitterness in humans when compared to other species. The threshold for stimulation of bitter taste by quinine averages a concentration of 8 μM (8 micromolar). The taste thresholds of other bitter substances are rated relative to quinine, which is thus given a reference index of 1. For example, brucine has an index of 11, is thus perceived as far more bitter than quinine, and is detected at a much lower solution threshold. The most bitter natural substance is amarogentin, a compound present in the roots of the plant Gentiana lutea, and the most bitter substance known is the synthetic chemical denatonium, which has an index of 1,000. It is used as an aversive agent (a bitterant) that is added to toxic substances to prevent accidental ingestion. It was discovered accidentally in 1958 during research on a local anesthetic, by MacFarlan Smith of Gorgie, Edinburgh, Scotland. Research has shown that TAS2Rs (taste receptors, type 2, also known as T2Rs) such as TAS2R38, coupled to the G protein gustducin, are responsible for the human ability to taste bitter substances. 
They are identified not only by their ability to taste certain "bitter" ligands, but also by the morphology of the receptor itself (surface bound, monomeric). The TAS2R family in humans is thought to comprise about 25 different taste receptors, some of which can recognize a wide variety of bitter-tasting compounds. Over 670 bitter-tasting compounds have been identified in a dedicated bitter-compound database, of which over 200 have been assigned to one or more specific receptors. It has recently been speculated that the selective constraints on the TAS2R family have been weakened due to the relatively high rate of mutation and pseudogenization. Researchers use two synthetic substances, phenylthiocarbamide (PTC) and 6-n-propylthiouracil (PROP), to study the genetics of bitter perception. These two substances taste bitter to some people, but are virtually tasteless to others. Among the tasters, some are so-called "supertasters" to whom PTC and PROP are extremely bitter. The variation in sensitivity is determined by two common alleles at the TAS2R38 locus. This genetic variation in the ability to taste a substance has been a source of great interest to those who study genetics. Gustducin is made of three subunits. When it is activated by the GPCR, its subunits break apart and activate phosphodiesterase, a nearby enzyme, which in turn converts a precursor within the cell into a second messenger, which closes potassium ion channels. This second messenger can also stimulate the endoplasmic reticulum to release Ca2+, which contributes to depolarization. This leads to a build-up of potassium ions in the cell, depolarization, and neurotransmitter release. It is also possible for some bitter tastants to interact directly with the G protein, because of a structural similarity to the relevant GPCR. Umami Umami, or savory, is an appetitive taste. It can be tasted in soy sauce, meat, dashi and consommé. A loanword from Japanese meaning "good flavor" or "good taste", umami (旨味) is considered fundamental to many East Asian cuisines, such as Japanese cuisine. It dates back to the use of fermented fish sauce: garum in ancient Rome and ge-thcup or koe-cheup in ancient China. Umami was first studied in 1907 by Kikunae Ikeda, who isolated the taste of dashi and identified it as the chemical monosodium glutamate (MSG). MSG is a sodium salt that produces a strong savory taste, especially combined with foods rich in nucleotides such as meats, fish, nuts, and mushrooms. Some savory taste buds respond specifically to glutamate in the same way that "sweet" ones respond to sugar. Glutamate binds to a variant of G protein-coupled glutamate receptors. L-glutamate may bind to a type of GPCR known as a metabotropic glutamate receptor (mGluR4), causing the G-protein complex to activate the sensation of umami. Measuring relative tastes Measuring the degree to which a substance presents one basic taste can be achieved in a subjective way by comparing its taste to a reference substance. Sweetness is subjectively measured by comparing the threshold values, or level at which the presence of a dilute substance can be detected by a human taster, of different sweet substances. Substances are usually measured relative to sucrose, which is usually given an arbitrary index of 1 or 100.
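Because each of these scales is defined the same way, the indices quoted in this article follow from a simple ratio of detection thresholds. The sketch below (Python) is illustrative only: the threshold figures are the ones quoted earlier in this article (sucrose 10 mmol/L, lactose 30 mmol/L, quinine 8 μM), and the assumption that an index is the inverse ratio of thresholds is inferred from how the sweetness index is described above, not a formal definition from the source.

def relative_taste_index(reference_threshold, substance_threshold):
    """Index of a substance relative to a reference substance (index 1.0).

    Both thresholds must be in the same units; a lower detection
    threshold than the reference gives an index above 1 (stronger taste).
    """
    return reference_threshold / substance_threshold

# Sweetness, relative to sucrose (thresholds in mmol/L, from the text):
print(relative_taste_index(10, 30))     # lactose -> ~0.33, quoted as 0.3

# Inverting the ratio estimates a detection threshold from an index.
# Bitterness, relative to quinine (threshold 8 uM, from the text):
quinine_threshold_uM = 8
for name, index in [("brucine", 11), ("denatonium", 1000)]:
    estimate = quinine_threshold_uM / index
    print(f"{name}: estimated threshold ~{estimate:.3f} uM")
# denatonium -> ~0.008 uM, i.e. roughly 8 nM, consistent with it being
# the most bitter substance known.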
Rebaudioside A is 100 times sweeter than sucrose; fructose is about 1.4 times sweeter; glucose, a sugar found in honey and vegetables, is about three-quarters as sweet; and lactose, a milk sugar, is one-half as sweet. The sourness of a substance can be rated by comparing it to very dilute hydrochloric acid (HCl). Relative saltiness can be rated by comparison to a dilute salt solution. Quinine, a bitter medicinal found in tonic water, can be used to subjectively rate the bitterness of a substance. Units of dilute quinine hydrochloride (1 g in 2000 mL of water) can be used to measure the threshold bitterness concentration, the level at which the presence of a dilute bitter substance can be detected by a human taster, of other compounds. More formal chemical analysis, while possible, is difficult. There may not be an absolute measure for pungency, though there are tests for measuring the subjective presence of a given pungent substance in food, such as the Scoville scale for capsaicin in peppers or the pyruvate scale for pyruvate in garlic and onions. Functional structure Taste is a form of chemoreception which occurs in specialised taste receptors in the mouth. To date, these receptors are recognized to detect five different types of taste: salt, sweet, sour, bitter, and umami. Each type of receptor has a different manner of sensory transduction: that is, of detecting the presence of a certain compound and starting an action potential which alerts the brain. It is a matter of debate whether each taste cell is tuned to one specific tastant or to several; Smith and Margolskee claim that "gustatory neurons typically respond to more than one kind of stimulus, [a]lthough each neuron responds most strongly to one tastant". Researchers believe that the brain interprets complex tastes by examining patterns from a large set of neuron responses. This enables the body to make "keep or spit out" decisions when there is more than one tastant present. "No single neuron type alone is capable of discriminating among stimuli or different qualities, because a given cell can respond the same way to disparate stimuli." Serotonin is also thought to act as an intermediary hormone which communicates with taste cells within a taste bud, mediating the signals being sent to the brain. Receptor molecules are found on the top of microvilli of the taste cells. Sweetness Sweetness is produced by the presence of sugars, some proteins, and other substances such as alcohols like anethol, glycerol and propylene glycol, saponins such as glycyrrhizin, artificial sweeteners (organic compounds with a variety of structures), and lead compounds such as lead acetate. It is often connected to aldehydes and ketones, which contain a carbonyl group. Many foods can be perceived as sweet regardless of their actual sugar content. For example, some plants such as liquorice, anise or stevia can be used as sweeteners. Rebaudioside A is a steviol glycoside coming from stevia that is 200 times sweeter than sugar. Lead acetate and other lead compounds were used as sweeteners, mostly for wine, until lead poisoning became known. Romans used to deliberately boil the must inside lead vessels to make a sweeter wine. Sweetness is detected by a variety of G protein-coupled receptors coupled to gustducin, a G protein that acts as an intermediary in the communication between taste bud and brain. These receptors are T1R2+3 (heterodimer) and T1R3 (homodimer), which account for sweet sensing in humans and other animals.
Saltiness Saltiness is a taste produced best by the presence of cations (such as Na+, K+ or Li+) and is directly detected by cation influx into glial-like cells via leak channels, causing depolarization of the cell. Other monovalent cations, e.g., ammonium (NH4+), and divalent cations of the alkaline earth metal group of the periodic table, e.g., calcium (Ca2+), in general elicit a bitter rather than a salty taste even though they, too, can pass directly through ion channels in the tongue. Sourness Sourness is acidity, and, like salt, it is a taste sensed using ion channels. Undissociated acid diffuses across the plasma membrane of a presynaptic cell, where it dissociates in accordance with Le Chatelier's principle. The protons that are released then block potassium channels, depolarizing the cell and causing calcium influx. In addition, the taste receptor PKD2L1 has been found to be involved in tasting sour. Bitterness Research has shown that TAS2Rs (taste receptors, type 2, also known as T2Rs) such as TAS2R38 are responsible for the ability to taste bitter substances in vertebrates. They are identified not only by their ability to taste certain bitter ligands, but also by the morphology of the receptor itself (surface bound, monomeric). Savoriness The amino acid glutamic acid is responsible for savoriness, but some nucleotides (inosinic acid and guanylic acid) can act as complements, enhancing the taste. Glutamic acid binds to a variant of the G protein-coupled receptor, producing a savory taste. Further sensations and transmission The tongue can also feel other sensations not generally included in the basic tastes. These are largely detected by the somatosensory system. In humans, the sense of taste is conveyed via three of the twelve cranial nerves. The facial nerve (VII) carries taste sensations from the anterior two-thirds of the tongue, the glossopharyngeal nerve (IX) carries taste sensations from the posterior one-third of the tongue, while a branch of the vagus nerve (X) carries some taste sensations from the back of the oral cavity. The trigeminal nerve (cranial nerve V) provides information concerning the general texture of food as well as the taste-related sensations of peppery or hot (from spices). Pungency (also spiciness or hotness) Substances such as ethanol and capsaicin cause a burning sensation by inducing a trigeminal nerve reaction together with normal taste reception. The sensation of heat is caused by the food activating nerves that express TRPV1 and TRPA1 receptors. Some such plant-derived compounds that provide this sensation are capsaicin from chili peppers, piperine from black pepper, gingerol from ginger root and allyl isothiocyanate from horseradish. The piquant ("hot" or "spicy") sensation provided by such foods and spices plays an important role in a diverse range of cuisines across the world, especially in equatorial and sub-tropical climates, such as Ethiopian, Peruvian, Hungarian, Indian, Korean, Indonesian, Lao, Malaysian, Mexican, New Mexican, Singaporean, Southwest Chinese (including Sichuan cuisine), Vietnamese, and Thai cuisines. This particular sensation, called chemesthesis, is not a taste in the technical sense, because the sensation does not arise from taste buds, and a different set of nerve fibers carries it to the brain. Foods like chili peppers activate nerve fibers directly; the sensation interpreted as "hot" results from the stimulation of somatosensory (pain/temperature) fibers on the tongue.
Many parts of the body with exposed membranes but no taste sensors (such as the nasal cavity, under the fingernails, the surface of the eye or a wound) produce a similar sensation of heat when exposed to such pungent agents. Coolness Some substances activate cold trigeminal receptors even when not at low temperatures. This "fresh" or "minty" sensation can be tasted in peppermint and spearmint, and is triggered by substances such as menthol, anethol, ethanol, and camphor. It is caused by activation of the same mechanism that signals cold, TRPM8 ion channels on nerve cells; unlike the actual temperature change described for sugar substitutes, this coolness is only a perceived phenomenon. Numbness Both Chinese and Batak Toba cooking include the idea of 麻 (má or mati rasa), a tingling numbness caused by spices such as Sichuan pepper. The cuisines of Sichuan province in China and of the Indonesian province of North Sumatra often combine this with chili pepper to produce a 麻辣 málà, "numbing-and-hot", or "mati rasa" flavor. Typical in northern Brazilian cuisine, jambu is an herb used in dishes like tacacá. These sensations, although not taste, fall into the category of chemesthesis. Astringency Some foods, such as unripe fruits, contain tannins or calcium oxalate that cause an astringent or puckering sensation of the mucous membrane of the mouth. Examples include tea, red wine, or rhubarb. Other terms for the astringent sensation are "dry", "rough", "harsh" (especially for wine), "tart" (normally referring to sourness), "rubbery", "hard" or "styptic". Metallicness A metallic taste may be caused by food and drink, certain medicines or amalgam dental fillings. It is generally considered an off flavor when present in food and drink. A metallic taste may be caused by galvanic reactions in the mouth. In the case where it is caused by dental work, the dissimilar metals used may produce a measurable current. Some artificial sweeteners are perceived to have a metallic taste, which is detected by the TRPV1 receptors. Many people consider blood to have a metallic taste. A metallic taste in the mouth is also a symptom of various medical conditions, in which case it may be classified under the symptoms dysgeusia or parageusia, referring to distortions of the sense of taste, and can be caused by medication, including saquinavir, zonisamide, and various kinds of chemotherapy, as well as occupational hazards, such as working with pesticides. Fat taste Recent research reveals a potential taste receptor called the CD36 receptor. CD36 was targeted as a possible lipid taste receptor because it binds to fat molecules (more specifically, long-chain fatty acids), and it has been localized to taste bud cells (specifically, the circumvallate and foliate papillae). There is a debate over whether we can truly taste fats, and supporters of our ability to taste free fatty acids (FFAs) have based the argument on a few main points: there is an evolutionary advantage to oral fat detection; a potential fat receptor has been located on taste bud cells; fatty acids evoke specific responses that activate gustatory neurons, similar to other currently accepted tastes; and there is a physiological response to the presence of oral fat. Although CD36 has been studied primarily in
mice, research examining human subjects' ability to taste fats found that those with high levels of CD36 expression were more sensitive to tasting fat than were those with low levels of CD36 expression; this study points to a clear association between CD36 receptor quantity and the ability to taste fat. Other possible fat taste receptors have been identified. G protein-coupled receptors GPR120 and GPR40 have been linked to fat taste, because their absence resulted in reduced preference for two types of fatty acid (linoleic acid and oleic acid), as well as decreased neuronal response to oral fatty acids. Monovalent cation channel TRPM5 has been implicated in fat taste as well, but it is thought to be involved primarily in downstream processing of the taste rather than primary reception, as it is with other tastes such as bitter, sweet, and savory. Proposed alternate names for fat taste include oleogustus and pinguis, although these terms are not widely accepted. The main form of fat that is commonly ingested is triglycerides, which are composed of three fatty acids bound together. In this state, triglycerides are able to give fatty foods unique textures that are often described as creaminess. But this texture is not an actual taste. It is only during ingestion that triglycerides are hydrolysed into their component fatty acids by lipases. The taste is commonly related to other, more negative, tastes such as bitter and sour due to how unpleasant the taste is for humans. Richard Mattes, a co-author of one such study, explained that low concentrations of these fatty acids can create an overall better flavor in a food, much like how small uses of bitterness can make certain foods more rounded. However, a high concentration of fatty acids in certain foods is generally considered inedible. To demonstrate that individuals can distinguish fat taste from other tastes, the researchers separated volunteers into groups and had them try samples that also contained the other basic tastes. Volunteers were able to separate the taste of fatty acids into their own category, with some overlap with savory samples, which the researchers hypothesized was due to poor familiarity with both. The researchers note that the usual "creaminess and viscosity we associate with fatty foods is largely due to triglycerides", unrelated to the taste; while the actual taste of fatty acids is not pleasant. Mattes described the taste as "more of a warning system" that a certain food should not be eaten. There are few regularly consumed foods rich in fat taste, due to the negative flavor that is evoked in large quantities. Foods to whose flavor fat taste makes a small contribution include olive oil and fresh butter, along with various kinds of vegetable and nut oils. Heartiness Kokumi (Japanese: コク味, from koku こく) is translated as "heartiness", "full flavor" or "rich" and describes compounds in food that do not have their own taste, but enhance the characteristics of other tastes when combined. Alongside the five basic tastes of sweet, sour, salt, bitter and savory, kokumi has been described as something that may enhance the other five tastes by magnifying and lengthening them, or as "mouthfulness". Garlic is a common flavoring ingredient used to help define the characteristic kokumi flavor. Calcium-sensing receptors (CaSR) are receptors for "kokumi" substances. Kokumi substances, applied around taste pores, induce an increase in the intracellular calcium concentration in a subset of cells.
This subset of CaSR-expressing taste cells is independent of the influenced basic taste receptor cells. CaSR agonists directly activate the CaSR on the surface of taste cells, and the signal is integrated in the brain via the central nervous system. However, a basal level of calcium, corresponding to the physiological concentration, is necessary for activation of the CaSR to develop the kokumi sensation. Calcium The distinctive taste of chalk has been identified as the calcium component of that substance. In 2008, geneticists discovered a calcium receptor on the tongues of mice. The CaSR receptor is commonly found in the gastrointestinal tract, kidneys, and brain. Along with the "sweet" T1R3 receptor, the CaSR receptor can detect calcium as a taste. Whether the perception exists or not in humans is unknown. Temperature Temperature can be an essential element of the taste experience. Heat can accentuate some flavors and decrease others by varying the density and phase equilibrium of a substance. Food and drink that, in a given culture, is traditionally served hot is often considered distasteful if cold, and vice versa. For example, alcoholic beverages, with a few exceptions, are usually thought best when served at room temperature or chilled to varying degrees, but soups, again with exceptions, are usually only eaten hot. A cultural example is soft drinks: in North America they are almost always preferred cold, regardless of season. Starchiness A 2016 study suggested that humans can taste starch (specifically, a glucose oligomer) independently of other tastes such as sweetness. However, no specific chemical receptor has yet been found for this taste. Nerve supply and neural connections The glossopharyngeal nerve innervates the posterior third of the tongue, including the circumvallate papillae. The facial nerve innervates the anterior two-thirds of the tongue and the cheek via the chorda tympani. The pterygopalatine ganglia are ganglia (one on each side) of the soft palate. The greater petrosal, lesser palatine and zygomatic nerves all synapse here. The greater petrosal carries soft palate taste signals to the facial nerve. The lesser palatine sends signals to the nasal cavity, which is why spicy foods cause nasal drip. The zygomatic sends signals to the lacrimal nerve that activate the lacrimal gland, which is the reason that spicy foods can cause tears. Both the lesser palatine and the zygomatic are maxillary nerves (from the trigeminal nerve). The special visceral afferents of the vagus nerve carry taste from the epiglottal region of the tongue. The lingual nerve (a branch of the trigeminal nerve) is deeply interconnected with the chorda tympani in that it provides all other sensory information from the anterior ⅔ of the tongue. This information is processed separately (nearby) in the rostral lateral subdivision of the nucleus of the solitary tract (NST). The NST receives input from the amygdala (which regulates oculomotor nuclei output), the bed nuclei of the stria terminalis, the hypothalamus, and the prefrontal cortex. The NST is the topographical map that processes gustatory and sensory (temperature, texture, etc.) information. The reticular formation (which includes the raphe nuclei, responsible for serotonin production) is signaled to release serotonin during and after a meal to suppress appetite. Similarly, the salivary nuclei are signaled to decrease saliva secretion. Hypoglossal and thalamic connections aid in oral-related movements. Hypothalamus connections hormonally regulate hunger and the digestive system. The substantia innominata connects the thalamus, temporal lobe, and insula.
The Edinger-Westphal nucleus reacts to taste stimuli by dilating and constricting the pupils. Spinal ganglia are involved in movement. The frontal operculum is speculated to be the memory and association hub for taste. The insular cortex aids in swallowing and gastric motility. Other concepts Supertasters A supertaster is a person whose sense of taste is significantly more sensitive than most. The cause of this heightened response is likely, at least in part, due to an increased number of fungiform papillae. Studies have shown that supertasters require less fat and sugar in their food to get the same satisfying effects. However, contrary to what one might think, these people actually tend to consume more salt than most people, because their heightened sense of bitterness is drowned out by the presence of salt. (This also explains why supertasters prefer salted cheddar cheese over non-salted.) Aftertaste Aftertastes arise after food has been swallowed. An aftertaste can differ from the food it follows. Medicines and tablets may also have a lingering aftertaste, as they can contain certain artificial flavor compounds, such as aspartame (artificial sweetener). Acquired taste An acquired taste often refers to an appreciation for a food or beverage that is unlikely to be enjoyed by a person who has not had substantial exposure to it, usually because of some unfamiliar aspect of the food or beverage, including bitterness, a strong or strange odor, taste, or appearance. Clinical significance Patients with Addison's disease, pituitary insufficiency, or cystic fibrosis sometimes have a hypersensitivity to the five primary tastes. Disorders of taste ageusia (complete loss of taste) hypogeusia (reduced sense of taste) dysgeusia (distortion in sense of taste) hypergeusia (abnormally heightened sense of taste) Viruses can also cause loss of taste. About 50% of patients infected with SARS-CoV-2 (the virus causing COVID-19) experience some type of disorder associated with their sense of smell or taste, including ageusia and dysgeusia. SARS-CoV-1, MERS-CoV and even the flu (influenza virus) can also disrupt olfaction. History Ayurveda, an ancient Indian healing science, has its own tradition of basic tastes, comprising sweet, salty, sour, pungent, bitter and astringent. In the West, Aristotle postulated in c. 350 BC that the two most basic tastes were sweet and bitter. He was one of the first to propose a list of basic tastes. The Ancient Chinese regarded spiciness as a basic taste. Research The receptors for the basic tastes of bitter, sweet and savory have been identified. They are G protein-coupled receptors. The cells that detect sourness have been identified as a subpopulation that express the protein PKD2L1. The responses are mediated by an influx of protons into the cells, with the proton channel OTOP1 (noted above) since proposed as the sour receptor. The receptor for amiloride-sensitive attractive salty taste in mice has been shown to be a sodium channel. There is some evidence for a sixth taste that senses fatty substances. In 2010, researchers found bitter taste receptors in lung tissue, which cause airways to relax when a bitter substance is encountered. They believe this mechanism is evolutionarily adaptive because it helps clear lung infections, but could also be exploited to treat asthma and chronic obstructive pulmonary disease.
Polyarteritis nodosa
Polyarteritis nodosa (PAN) is a systemic necrotizing inflammation of blood vessels (vasculitis) affecting medium-sized muscular arteries, typically involving the arteries of the kidneys and other internal organs but generally sparing the pulmonary circulation. Small aneurysms are strung like the beads of a rosary, making this "rosary sign" an important diagnostic feature of the vasculitis. PAN is sometimes associated with infection by the hepatitis B or hepatitis C virus. The condition may be present in infants. PAN is a rare disease. With treatment, five-year survival is 80%; without treatment, five-year survival is 13%. Death is often a consequence of kidney failure, myocardial infarction, or stroke. Signs and symptoms PAN may affect nearly every organ system and thus can present with a broad array of signs and symptoms. These manifestations result from ischemic damage to affected organs, often the skin, heart, kidneys, and nervous system. Constitutional symptoms are seen in up to 90% of affected individuals and include fever, fatigue, weakness, loss of appetite, and unintentional weight loss. Skin: The skin may show rashes, swelling, necrotic ulcers, and subcutaneous nodules (lumps). Skin manifestations of PAN include palpable purpura and livedo reticularis in some individuals. Neurologic system: Nerve involvement may cause sensory changes with numbness, pain, burning, and weakness (peripheral neuropathy). Peripheral nerves are often affected, most commonly presenting as mononeuritis multiplex, which is the most common neurologic sign of PAN. Mononeuritis multiplex develops in more than 70% of patients with polyarteritis nodosa because of damage to arteries supplying large peripheral nerves. Most cases are marked by asymmetric polyneuropathy, but progressive disease can lead to symmetric nerve involvement. Central nervous system involvement may cause strokes or seizures. Renal system: Kidney involvement is common and often leads to tissue death in parts of the kidney. Involvement of the renal artery, which supplies the kidneys with highly oxygenated blood, leads to high blood pressure in about one-third of cases. Deposition of protein or blood in the urine may also be seen. Almost all patients with PAN have renal insufficiency caused by renal artery narrowing, thrombosis, and infarctions. Cardiovascular system: Involvement of the arteries of the heart may cause a heart attack, heart failure, and inflammation of the sac around the heart (pericarditis). Gastrointestinal system: Damage to mesenteric arteries can cause abdominal pain, mesenteric ischemia, and bowel perforation. Musculoskeletal system: Muscle and joint aches are common. Complications Stroke Heart failure resulting from cardiomyopathy and pericarditis Intestinal necrosis and perforation Causes PAN has no association with anti-neutrophil cytoplasmic antibodies, but about 30% of people with PAN have chronic hepatitis B and deposits containing HBsAg-HBsAb complexes in affected blood vessels, indicating an immune complex-mediated cause in that subset. Infection with the hepatitis C virus and HIV are occasionally discovered in people affected by PAN. PAN has also been associated with underlying hairy cell leukemia. The cause remains unknown in the remaining cases; there may be causal and clinical distinctions between classic idiopathic PAN, the cutaneous forms of PAN, and PAN associated with chronic hepatitis.
In children, cutaneous PAN is frequently associated with streptococcal infections, and positive streptococcal serology is included in the diagnostic criteria. Diagnosis No specific lab tests exist for diagnosing polyarteritis nodosa. Diagnosis is generally based on the physical examination and a few laboratory studies that help confirm the diagnosis: CBC (may demonstrate an elevated white blood count) ESR (elevated) Perinuclear pattern of antineutrophil cytoplasmic antibodies (p-ANCA) - not associated with "classic" polyarteritis nodosa, but is present in a form of the disease affecting smaller blood vessels, known as microscopic polyangiitis or leukocytoclastic angiitis Tissue biopsy (reveals inflammation in small arteries, called arteritis) Elevated C-reactive protein A patient is said to have polyarteritis nodosa if he or she has at least three of the 10 signs known as the 1990 American College of Rheumatology (ACR) criteria, when a radiographic or pathological diagnosis of vasculitis is made: Weight loss greater than/equal to 4.5 kg Livedo reticularis (a mottled purplish skin discoloration over the extremities or torso) Testicular pain or tenderness (occasionally, a site biopsied for diagnosis) Muscle pain, weakness, or leg tenderness Nerve disease (either single or multiple) Diastolic blood pressure greater than 90 mmHg (high blood pressure) Elevated kidney blood tests (BUN greater than 40 mg/dL or creatinine greater than 1.5 mg/dL) Hepatitis B (not C) virus tests positive (for surface antigen or antibody) Arteriogram (angiogram) showing the arteries that are dilated (aneurysms) or constricted by the blood vessel inflammation Biopsy of tissue showing the arteritis (typically inflamed arteries): The sural nerve is a frequent location for the biopsy. In polyarteritis nodosa, small aneurysms are strung like the beads of a rosary, making this "rosary sign" an important diagnostic feature of the vasculitis. The 1990 ACR criteria were designed for classification purposes only, but their good discriminatory performance, indicated by the initial ACR analysis, suggested their potential usefulness for diagnostic purposes as well. Subsequent studies did not confirm their diagnostic utility, demonstrating a significant dependence of their discriminative abilities on the prevalence of the various vasculitides in the analyzed populations. Recently, an original study, combining the analysis of more than 100 items used to describe patients' characteristics in a large sample of vasculitides with a computer simulation technique designed to test the potential diagnostic utility of the various criteria, proposed a set of eight positively or negatively discriminating items to be used as a screening tool for diagnosis in patients suspected of systemic vasculitis. Differential diagnosis Polyarteritis nodosa rarely affects the blood vessels of the lungs, and this feature can help to differentiate it from other vasculitides that may have similar signs and symptoms (e.g., granulomatosis with polyangiitis or microscopic polyangiitis). Treatment Treatment involves medications to suppress the immune system, including prednisone and cyclophosphamide. When present, underlying hepatitis B virus infection should be immediately treated. In some cases, methotrexate or leflunomide may be helpful. Some patients have entered a remission phase when a four-dose infusion of rituximab is used before the leflunomide treatment is begun. Therapy results in remissions or cures in 90% of cases.
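Mechanically, the 1990 ACR classification rule described above is a simple count: a patient meets the criteria when at least 3 of the 10 items are positive, in the setting of a radiographic or pathological diagnosis of vasculitis. The following is a minimal illustrative sketch of that counting logic only; the criterion names paraphrase the list above and are invented for the example.

# A minimal sketch of the 1990 ACR classification rule for PAN.
# The 10 boolean criteria mirror the list above; names are illustrative.
ACR_1990_PAN_CRITERIA = [
    "weight_loss_over_4_5_kg",
    "livedo_reticularis",
    "testicular_pain_or_tenderness",
    "myalgia_weakness_or_leg_tenderness",
    "mono_or_polyneuropathy",
    "diastolic_bp_over_90_mmHg",
    "elevated_bun_or_creatinine",
    "hepatitis_b_positive",
    "abnormal_arteriogram",
    "biopsy_showing_arteritis",
]

def meets_acr_1990_pan(findings, vasculitis_diagnosed=True):
    """Return True if at least 3 of the 10 criteria are positive.

    `findings` maps criterion name -> bool; missing items count as negative.
    The rule applies only when vasculitis has already been diagnosed
    radiographically or pathologically, per the text above.
    """
    if not vasculitis_diagnosed:
        return False
    positives = sum(bool(findings.get(c, False)) for c in ACR_1990_PAN_CRITERIA)
    return positives >= 3

# Example: three positive findings -> classified as PAN.
print(meets_acr_1990_pan({
    "livedo_reticularis": True,
    "mono_or_polyneuropathy": True,
    "hepatitis_b_positive": True,
}))  # True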
Untreated, the disease is fatal in most cases. The most serious associated conditions generally involve the kidneys and gastrointestinal tract. A fatal course usually involves gastrointestinal bleeding, infection, myocardial infarction, and/or kidney failure. In cases of remission, about 60% experience relapse within five years. In cases caused by hepatitis B virus, however, the recurrence rate is only around 6%. Epidemiology The condition affects adults more frequently than children and males more frequently than females. Most cases occur between the ages of 40 and 60. Polyarteritis nodosa is more common in people with hepatitis B infection. History The medical eponyms Kussmaul disease or Kussmaul-Maier disease reflect the seminal description of the disease in the medical literature by Adolph Kussmaul and Rudolf Robert Maier. Culture In the 1956 American film Bigger Than Life, the protagonist character played by James Mason is diagnosed with polyarteritis nodosa after experiencing excruciating chest pain and is treated with cortisone.
Aplastic anemia
Aplastic anemia is a disease in which the body fails to produce blood cells in sufficient numbers. Blood cells are produced in the bone marrow by stem cells that reside there. Aplastic anemia causes a deficiency of all blood cell types: red blood cells, white blood cells, and platelets. It occurs most frequently in people in their teens and twenties but is also common among the elderly. It can be caused by heredity, immune disease, or exposure to chemicals, drugs, or radiation. However, in about half of cases, the cause is unknown. Aplastic anemia can be definitively diagnosed by bone marrow biopsy. Normal bone marrow has 30–70% blood stem cells, but in aplastic anemia, these cells are mostly gone and are replaced by fat. First-line treatment for aplastic anemia consists of immunosuppressive drugs, typically either anti-lymphocyte globulin or anti-thymocyte globulin, combined with corticosteroids, chemotherapy, and ciclosporin. Hematopoietic stem cell transplantation is also used, especially for patients under 30 years of age with a related, matched marrow donor. Aplastic anemia is known to have caused the deaths of Eleanor Roosevelt and Marie Curie. Signs and symptoms Anemia may lead to fatigue, pale skin, severe bruising, and a fast heart rate. Low platelets are associated with an increased risk of bleeding, bruising, and petechiae, because lower platelet counts impair the ability of the blood to clot. Low white blood cells increase the risk of infections. Causes Aplastic anemia can be caused by immune disease or exposure to certain chemicals, drugs, radiation, or infection; in about half the cases, a definitive cause is unknown. Acquired aplastic anemia is not a hereditary condition, nor is it contagious. Aplastic anemia is also sometimes associated with exposure to toxins such as benzene or with the use of certain drugs, including chloramphenicol, carbamazepine, felbamate, phenytoin, quinine, and phenylbutazone. However, the probability that these drugs will lead to aplastic anemia in a given patient is very low. Chloramphenicol treatment is associated with aplasia in less than one in 40,000 treatment courses, and carbamazepine aplasia is even rarer. Exposure to ionizing radiation from radioactive materials or radiation-producing devices is also associated with the development of aplastic anemia. Marie Curie, famous for her pioneering work in the field of radioactivity, died of aplastic anemia after working unprotected with radioactive materials for a long period of time; the damaging effects of ionizing radiation were not then known. Aplastic anemia is present in up to 2% of patients with acute viral hepatitis. One known cause is an autoimmune disorder in which white blood cells attack the bone marrow. Acquired aplastic anemia is a T-cell mediated autoimmune disease, in which regulatory T cells are decreased and T-bet, a transcription factor and key regulator of Th1 development and function, is upregulated in affected T-cells. As a result of active transcription of the interferon gamma (IFN-gamma) gene by T-bet, IFN-gamma levels are increased, which reduces colony formation of hematopoietic progenitor cells in vitro by inducing apoptosis of CD34+ cells in the bone marrow. Short-lived aplastic anemia can also be a result of parvovirus infection. In humans, the P antigen (also known as globoside), one of many cellular receptors that contribute to a person's blood type, is the cellular receptor for parvovirus B19, which causes erythema infectiosum (fifth disease) in children.
Because it infects red blood cells as a result of its affinity for the P antigen, parvovirus causes complete cessation of red blood cell production. In most cases, this goes unnoticed, as red blood cells live on average 120 days, and the drop in production does not significantly affect the total number of circulating cells. However, in people with conditions where the cells die early (such as sickle cell disease), parvovirus infection can lead to severe anemia. More frequently, parvovirus B19 is associated with aplastic crisis, which involves only red blood cells (despite the name). Aplastic anemia involves all cell lines. Other viruses that have been linked to the development of aplastic anemia include hepatitis, Epstein-Barr, cytomegalovirus, and HIV. In some animals, aplastic anemia may have other causes. For example, in the ferret (Mustela putorius furo), it is caused by estrogen toxicity: female ferrets are induced ovulators, so mating is required to bring the female out of heat. Intact females, if not mated, will remain in heat, and after some time the high levels of estrogen will cause the bone marrow to stop producing red blood cells. Diagnosis Aplastic anemia must be differentiated from pure red cell aplasia. In aplastic anemia, the patient has pancytopenia (i.e., also leukopenia and thrombocytopenia) resulting in a decrease of all formed elements. In contrast, pure red cell aplasia is characterized by a reduction in red cells only. The diagnosis can only be confirmed with a bone marrow examination. Before this procedure is undertaken, a patient will generally have had other blood tests to find diagnostic clues, including a complete blood count, renal function and electrolytes, liver enzymes, thyroid function tests, and vitamin B12 and folic acid levels. Tests that may aid in determining an etiology for aplastic anemia include: History of iatrogenic exposure to cytotoxic chemotherapy: transient bone marrow suppression Vitamin B12 and folate levels: vitamin deficiency Liver tests: liver diseases Viral studies: viral infections Chest X-ray: infections X-rays, computed tomography (CT) scans, or ultrasound imaging tests: enlarged lymph nodes (sign of lymphoma), kidneys, and bones in arms and hands (abnormal in Fanconi anemia) Antibody test: immune competency Blood tests for paroxysmal nocturnal hemoglobinuria Bone marrow aspirate and biopsy: to rule out other causes of pancytopenia (i.e., neoplastic infiltration or significant myelofibrosis). Pathogenesis For many years, the cause of acquired aplastic anemia was not clear. Now, autoimmune processes are considered to be responsible. The majority of cases are hypothesized to be the result of T-cell-mediated autoimmunity and destruction of the bone marrow, which leads to defective or nearly absent hematopoiesis. It is suggested that unidentified antigens cause a polyclonal expansion of dysregulated CD4+ T cells and overproduction of pro-inflammatory cytokines, such as interferon-γ and tumor necrosis factor-α. Ex vivo bone marrow models show an expansion of dysregulated CD8+ T cell populations. Activated T cells also induce apoptosis in hematopoietic stem cells. Aplastic anemia is associated with increased levels of Th17 cells, which produce the pro-inflammatory cytokine IL-17, and of interferon-γ-producing cells in the peripheral blood and bone marrow. Th17 cell populations also negatively correlate with regulatory T-cell populations, which normally suppress auto-reactivity to normal tissues, including the bone marrow.
Deep phenotyping of regulatory T-cells showed two subpopulations with specific phenotypes, gene expression signatures, and functions. Studies in patients who responded to immunosuppressive therapy found dominant subpopulations characterized by higher expression of HLA-DR2 and HLA-DR15 (mean age of the two groups: 34 and 21 years), FOXP3, CD95, and CCR4; lower expression of CD45RA (mean age: 45 years); and expression of the IL-2/STAT5 pathway. Higher frequency of HLA-DR2 and HLA-DR15 may cause augmented presentation of antigens to CD4+ T-cells, resulting in immune-mediated destruction of the stem cells. In addition, HLA-DR2-expressing cells augment the release of tumor necrosis factor-α, which plays a role in disease pathology. The hypothesis of aberrant, disordered T-cell populations as the initiators of aplastic anemia is supported by findings that immunosuppressive therapy for T-cells (for example, anti-thymocyte globulin and ciclosporin) results in a response in up to 80% of severe aplastic anemia patients. CD34+ progenitor cells and lymphocytes in the bone marrow over-express the Fas receptor, the main element in apoptotic signaling. A significant increase in the proportion of apoptotic cells in the bone marrow of aplastic anemia patients has been demonstrated. This suggests that cytokine-induced and Fas-mediated apoptosis play roles in bone marrow failure, because annihilation of CD34+ progenitor cells leads to hematopoietic stem cell deficiency. Frequently detected autoantibodies A study of blood and bone marrow samples obtained from 18 aplastic anemia patients revealed more than 30 potential specific candidate autoantigens after the serologic screening of a fetal liver library with sera from 8 patients. The human fetal liver cDNA library (chosen because of its high enrichment of CD34+ cells), compared with peripheral blood or the bone marrow, significantly increased the likelihood of detection of possible stem cell autoantigens. ELISA and Western blot analysis revealed that an IgG antibody response to one of the candidate autoantigens, kinectin, was present in a significant number of patients (39%). In contrast, no antibody was detected in 35 healthy volunteers. Antibody was detected in both transfused and transfusion-naive patients, suggesting that antikinectin autoantibody development was not due to transfusion-related alloreactivity. Sera from patients with other autoimmune diseases (systemic lupus erythematosus, rheumatoid arthritis, and multiple sclerosis) were negative, demonstrating a specific association of antikinectin antibodies with aplastic anemia. These results support the hypothesis that immune response to kinectin may be involved in the pathophysiology of the disease. Kinectin is a large molecule (1,300 amino acid residues) expressed by CD34+ cells. Several kinectin-derived peptides can be processed and presented by HLA class I and can induce antigen-specific CD8+ T-cell responses. Bone marrow microenvironment A critical factor for healthy stem cell production is the bone marrow microenvironment. Important components are stromal cells, the extracellular matrix, and local cytokine gradients. The hematopoietic and non-hematopoietic elements of the bone marrow closely interact with each other and sustain and maintain the balance of hematopoiesis.
In addition to low numbers of hematopoietic stem cells, aplastic anemia patients have an altered hematopoietic niche: cytotoxic T-cells (a polyclonal expansion of dysregulated CD4+ T-cells) trigger apoptosis in bone marrow cells; activated T-cells induce apoptosis in hematopoietic stem cells; there is abnormal production of interferon-γ, tumor necrosis factor-α, and transforming growth factor; overexpression of the Fas receptor leads to apoptosis of hematopoietic stem cells; poor quality and quantity of regulatory T-cells means failure in suppressing auto-reactivity, which leads to abnormal T-cell expansion; due to higher amounts of interferon-γ, macrophages are more frequent in the bone marrow of aplastic anemia patients, and interferon-mediated loss of hematopoietic stem cells occurs only in the presence of macrophages; interferon-γ can cause direct exhaustion and depletion of hematopoietic stem cells and indirect reduction of their functions through cells that are part of the bone marrow microenvironment (e.g., macrophages and mesenchymal stem cells); increased numbers of B cells produce autoantibodies against hematopoietic stem cells; and increased numbers of adipocytes and decreased numbers of pericytes also play a role in suppressing hematopoiesis. Treatment Treating immune-mediated aplastic anemia involves suppression of the immune system, an effect achieved by daily medicine or, in more severe cases, a bone marrow transplant, a potential cure. The transplanted bone marrow replaces the failing bone marrow cells with new ones from a matching donor. The multipotent stem cells in the bone marrow reconstitute all three blood cell lines, giving the patient a new immune system, red blood cells, and platelets. However, besides the risk of graft failure, there is also a risk that the newly created white blood cells may attack the rest of the body ("graft-versus-host disease"). In young patients with an HLA-matched sibling donor, bone marrow transplant can be considered as a first-line treatment. Patients lacking a matched sibling donor typically pursue immunosuppression as a first-line treatment, and matched, unrelated donor transplants are considered second-line therapy. Treatment often includes a course of antithymocyte globulin (ATG) and several months of treatment with ciclosporin to modulate the immune system. Chemotherapy with agents such as cyclophosphamide may also be effective but is more toxic than ATG. Antibody therapy such as ATG targets T cells, which are believed to attack the bone marrow. Corticosteroids are generally ineffective, though they are used to ameliorate serum sickness caused by ATG. Normally, success is judged by bone marrow biopsy six months after initial treatment with ATG. One prospective study involving cyclophosphamide was terminated early due to a high incidence of mortality from severe infections as a result of prolonged neutropenia. Before the above treatments became available, patients with low leukocyte counts were often confined to a sterile room or bubble (to reduce risk of infection), as in the case of Ted DeVita. Follow-up Full blood counts are required on a regular basis to determine whether the patient is still in remission. Many patients with aplastic anemia also have clones of cells characteristic of paroxysmal nocturnal hemoglobinuria (PNH), a rare disease that causes anemia with thrombocytopenia and/or thrombosis and is sometimes referred to as AA/PNH. Occasionally PNH dominates over time, with the major manifestation of intravascular hemolysis.
The overlap of AA and PNH has been speculated to be an escape mechanism by the bone marrow against destruction by the immune system. Flow cytometry testing is performed regularly in people with previous aplastic anemia to monitor for the development of PNH. Prognosis Untreated, severe aplastic anemia has a high risk of death. Modern treatment produces a five-year survival rate that exceeds 85%, with younger age associated with higher survival. Survival rates for stem cell transplants vary depending on the age and availability of a well-matched donor. They are better for patients who have donors that are matched siblings and worse for patients who receive their marrow from unrelated donors. Overall, the five-year survival rate is higher than 75% among recipients of bone marrow transplantation. Older people (who are generally too frail to undergo bone marrow transplants) and people who are unable to find a good bone marrow match have five-year survival rates of up to 35% when undergoing immune suppression. Relapses are common. Relapse following ATG/ciclosporin use can sometimes be treated with a repeated course of therapy. In addition, 10–15% of severe aplastic anemia cases evolve into myelodysplastic syndrome and leukemia. According to one study, 15.9% of children who responded to immunosuppressive therapy eventually relapsed. Milder disease may resolve on its own. Etymology Aplastic is a combination of two ancient Greek elements: a- (meaning "not") and -plasis ("forming into a shape"). Anemia is a combination of the ancient Greek element an- ("not") and -emia (new Latin from the Greek -(h)aimia, meaning "blood"). Epidemiology Aplastic anemia is a rare, noncancerous disorder in which the bone marrow is unable to adequately produce the blood cells required for survival. It is estimated that the incidence of aplastic anemia is 0.7–4.1 cases per million people worldwide, with the prevalence between men and women being approximately equal. The incidence rate of aplastic anemia in Asia is 2–3 times higher than it is in the West; the incidence in the United States is 300–900 cases per year. The disease most commonly affects adults aged 15–25 and over the age of 60, but it can be observed in all age groups. The disease is usually acquired during life and not inherited. Acquired cases are often linked to environmental exposures such as chemicals, drugs, and infectious agents that damage the bone marrow and compromise its ability to generate new blood cells. However, in many instances the underlying cause for the disease is not found. This is referred to as idiopathic aplastic anemia and accounts for 75% of cases. This compromises the effectiveness of treatment, since treatment of the disease is often aimed at the underlying cause. Those with a higher risk for aplastic anemia include individuals who are exposed to high-dose radiation or toxic chemicals, take certain prescription drugs, have pre-existing autoimmune disorders or blood diseases, or are pregnant. No screening test currently exists for early detection of aplastic anemia. Notable cases Marie Curie Eleanor Roosevelt Donny Schmit Ted DeVita Demetrio Stratos John Dill (British Field Marshal) Robert McFall (asbestos worker from Pittsburgh who unsuccessfully sued his cousin for a transfusion of bone marrow) See also Fanconi anemia Acquired pure red cell aplasia
Hepatocellular carcinoma
Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer in adults and is currently the most common cause of death in people with cirrhosis. HCC is the third leading cause of cancer-related deaths worldwide. It occurs in the setting of chronic liver inflammation, and is most closely linked to chronic viral hepatitis infection (hepatitis B or C) or exposure to toxins such as alcohol, aflatoxin, or pyrrolizidine alkaloids. Certain diseases, such as hemochromatosis and alpha 1-antitrypsin deficiency, markedly increase the risk of developing HCC. Metabolic syndrome and NASH are also increasingly recognized as risk factors for HCC. As with any cancer, the treatment and prognosis of HCC vary depending on the specifics of tumor histology, size, how far the cancer has spread, and overall health. The vast majority of HCC cases and the lowest survival rates after treatment occur in Asia and sub-Saharan Africa, in countries where hepatitis B infection is endemic and many are infected from birth. The incidence of HCC in the United States and other developed countries is increasing due to an increase in hepatitis C virus infections. It is more than four times as common in males as in females, for unknown reasons. Signs and symptoms Most cases of HCC occur in people who already have signs and symptoms of chronic liver disease. They may present with worsening symptoms or without symptoms at the time of cancer detection. HCC may present with non-specific symptoms such as abdominal pain, nausea, vomiting, or feeling tired. Symptoms that are more closely associated with liver disease include yellow skin (jaundice), abdominal swelling due to fluid in the abdominal cavity, easy bruising from blood clotting abnormalities, loss of appetite, and unintentional weight loss. Risk factors Since HCC mostly occurs in people with cirrhosis of the liver, risk factors generally include factors which cause chronic liver disease that may lead to cirrhosis. Still, certain risk factors are more highly associated with HCC than others. For example, while heavy alcohol consumption is estimated to cause 60–70% of cirrhosis, the vast majority of HCC occurs in cirrhosis attributed to viral hepatitis (although there may be overlap). Recognized risk factors include: Chronic viral hepatitis (estimated cause of 80% of cases globally) Chronic hepatitis B (about 50% of cases) Chronic hepatitis C (about 25% of cases) Toxins: Alcohol use disorder: the most common cause of cirrhosis Aflatoxin Iron overload state (hemochromatosis) Pyrrolizidine alkaloids Metabolic: Nonalcoholic steatohepatitis: up to 20% progress to cirrhosis Nonalcoholic fatty liver disease Type 2 diabetes (probably aided by obesity) Congenital disorders: Alpha 1-antitrypsin deficiency Wilson's disease (controversial; while some theorise that the risk increases, case studies are rare and some suggest the opposite, that Wilson's disease may actually confer protection) Hemophilia: although statistically associated with a higher risk of HCC, this is due to coincident chronic viral hepatitis infection related to repeated blood transfusions over a lifetime. The significance of these risk factors varies globally. In regions where hepatitis B infection is endemic, such as southeast China, hepatitis B is the predominant cause.
In populations largely protected by hepatitis B vaccination, such as the United States, HCC is most often linked to causes of cirrhosis such as chronic hepatitis C, obesity, and excessive alcohol use. Certain benign liver tumors, such as hepatocellular adenoma, may sometimes be associated with coexisting malignant HCC. Evidence is limited for the true incidence of malignancy associated with benign adenomas; however, the size of a hepatic adenoma is considered to correspond to its risk of malignancy, and so larger tumors may be surgically removed. Certain subtypes of adenoma, particularly those with a β-catenin activation mutation, are particularly associated with increased risk of HCC. Chronic liver disease is rare in children and adolescents; however, congenital liver disorders are associated with an increased chance of developing HCC. Specifically, children with biliary atresia, infantile cholestasis, glycogen-storage diseases, and other cirrhotic diseases of the liver are predisposed to developing HCC in childhood. Young adults afflicted by the rare fibrolamellar variant of hepatocellular carcinoma may have none of the typical risk factors, such as cirrhosis and hepatitis. Diabetes mellitus The risk of hepatocellular carcinoma in type 2 diabetics is greater (from 2.5 to 7.1 times the nondiabetic risk) depending on the duration of diabetes and treatment protocol. A suspected contributor to this increased risk is circulating insulin concentration, such that diabetics with poor insulin control or on treatments that elevate their insulin output (both states that contribute to a higher circulating insulin concentration) show far greater risk of hepatocellular carcinoma than diabetics on treatments that reduce circulating insulin concentration. On this note, some diabetics who engage in tight insulin control (by keeping it from being elevated) show risk levels low enough to be indistinguishable from the general population. This phenomenon is thus not isolated to diabetes mellitus type 2, since poor insulin regulation is also found in other conditions, such as metabolic syndrome (specifically, when evidence of nonalcoholic fatty liver disease or NAFLD is present), and again evidence of greater risk exists here, too. While there are claims that anabolic steroid abusers are at greater risk (theorized to be due to insulin and IGF exacerbation), the only evidence that has been confirmed is that anabolic steroid users are more likely to have benign hepatocellular adenomas transform into the more dangerous hepatocellular carcinoma. Pathogenesis Hepatocellular carcinoma, like any other cancer, develops when epigenetic alterations and mutations affecting the cellular machinery cause the cell to replicate at a higher rate and/or result in the cell avoiding apoptosis. In particular, chronic infections of hepatitis B and/or C can aid the development of hepatocellular carcinoma by repeatedly causing the body's own immune system to attack the liver cells, some of which are infected by the virus, others merely bystanders. Activated immune-system inflammatory cells release free radicals, such as reactive oxygen species and reactive nitrogen species, which in turn can cause DNA damage and lead to carcinogenic gene mutations. Reactive oxygen species also cause epigenetic alterations at the sites of DNA repair. While this constant cycle of damage followed by repair can lead to mistakes during repair, which in turn lead to carcinogenesis, this hypothesis is more applicable, at present, to hepatitis C.
Chronic hepatitis C causes HCC through the stage of cirrhosis. In chronic hepatitis B, however, the integration of the viral genome into infected cells can directly induce a noncirrhotic liver to develop HCC. Alternatively, repeated consumption of large amounts of ethanol can have a similar effect. The toxin aflatoxin, from certain Aspergillus species of fungi, is a carcinogen and aids carcinogenesis of hepatocellular cancer by building up in the liver. The combined high prevalence of aflatoxin and hepatitis B in settings such as China and West Africa has led to relatively high rates of hepatocellular carcinoma in these regions. Other viral hepatitides, such as hepatitis A, have no potential to become a chronic infection and thus are not related to HCC. Diagnosis Methods of diagnosis in HCC have evolved with the improvement in medical imaging. The evaluation of both asymptomatic patients and those with symptoms of liver disease involves blood testing and imaging evaluation. Historically, a biopsy of a tumor was required to prove an HCC diagnosis. However, imaging (especially MRI) findings may be conclusive enough to obviate histopathologic confirmation. Screening HCC remains associated with a high mortality rate, in part because initial diagnosis commonly occurs at an advanced stage of disease. As with other cancers, outcomes are significantly improved if treatment is initiated earlier in the disease process. Since the vast majority of HCC cases occur in people with certain chronic liver diseases, especially those with cirrhosis, liver screening is commonly advocated in this population. Specific screening guidelines continue to evolve over time as evidence of their clinical impact becomes available. In the United States, the most commonly observed guidelines are those published by the American Association for the Study of Liver Diseases (AASLD), which recommends ultrasound screenings every six months for people with cirrhosis, with or without measurement of blood levels of the tumor marker alpha-fetoprotein (AFP). Elevated levels of AFP are associated with active HCC disease, though their reliability can be inconsistent. At levels >20 ng/mL, sensitivity is 41–65% and specificity is 80–94%. However, at levels >200 ng/mL, sensitivity is 31% and specificity is 99%. On ultrasound, HCC often appears as a small hypoechoic lesion with poorly defined margins and coarse, irregular internal echoes. When the tumor grows, it can sometimes appear heterogeneous with fibrosis, fatty change, and calcifications. This heterogeneity can look similar to cirrhosis and the surrounding liver parenchyma. A systematic review found that the sensitivity was 60% (95% CI 44–76%) and specificity was 97% (95% CI 95–98%) compared with pathologic examination of an explanted or resected liver as the reference standard. The sensitivity increases to 79% with AFP correlation. Controversy remains as to the most effective screening protocols. For example, while some data support decreased mortality related to screening people with hepatitis B infection, the AASLD notes, "There are no randomized trials [for screening] in Western populations with cirrhosis secondary to chronic hepatitis C or fatty liver disease, and thus there is some controversy surrounding whether surveillance truly leads to a reduction in mortality in this population of patients with cirrhosis." Higher risk people In a person in whom a higher suspicion of HCC exists, such as a person with symptoms or abnormal blood tests (i.e.
alpha-fetoprotein and des-gamma-carboxyprothrombin levels), evaluation requires imaging of the liver by CT or MRI scans. Optimally, these scans are performed with intravenous contrast in multiple phases of hepatic perfusion to improve detection and accurate classification of any liver lesions by the interpreting radiologist. Due to the characteristic blood flow pattern of HCC tumors, a specific perfusion pattern of a detected liver lesion may conclusively diagnose an HCC tumor. Alternatively, the scan may detect an indeterminate lesion, and further evaluation may be performed by obtaining a physical sample of the lesion.

Imaging
Ultrasound, CT scan, and MRI may be used to evaluate the liver for HCC. On CT and MRI, HCC can have three distinct patterns of growth:
A single large tumor
Multiple tumors
A poorly defined tumor with an infiltrative growth pattern
A systematic review of CT diagnosis found that the sensitivity was 68% (95% CI 55–80%) and specificity was 93% (95% CI 89–96%) compared with pathologic examination of an explanted or resected liver as the reference standard. With triple-phase helical CT, the sensitivity was 90% or higher, but these data have not been confirmed with autopsy studies. However, MRI has the advantage of delivering high-resolution images of the liver without ionizing radiation. HCC appears as a high-intensity pattern on T2-weighted images and a low-intensity pattern on T1-weighted images. The advantage of MRI is its improved sensitivity and specificity compared to ultrasound and CT in cirrhotic patients, in whom it can be difficult to differentiate HCC from regenerative nodules. A systematic review found that the sensitivity was 81% (95% CI 70–91%) and specificity was 85% (95% CI 77–93%) compared with pathologic examination of an explanted or resected liver as the reference standard. The sensitivity is further increased if gadolinium contrast-enhanced and diffusion-weighted imaging are combined. MRI is more sensitive and specific than CT. The Liver Imaging Reporting and Data System (LI-RADS) is a classification system for the reporting of liver lesions detected on CT and MRI. Radiologists use this standardized system to report on suspicious lesions and to provide an estimated likelihood of malignancy. Categories range from LI-RADS (LR) 1 to 5, in order of concern for cancer. A biopsy is not needed to confirm the diagnosis of HCC if certain imaging criteria are met.

Pathology
Macroscopically, liver cancer appears as a nodular or infiltrative tumor. The nodular type may be solitary (large mass) or multiple (when developed as a complication of cirrhosis). Tumor nodules are round to oval, gray or green (if the tumor produces bile), and well circumscribed but not encapsulated. The diffuse type is poorly circumscribed and infiltrates the portal veins, or, rarely, the hepatic veins. Microscopically, the four architectural and cytological types (patterns) of hepatocellular carcinoma are fibrolamellar, pseudoglandular (adenoid), pleomorphic (giant cell), and clear cell. In well-differentiated forms, tumor cells resemble hepatocytes, form trabeculae, cords, and nests, and may contain bile pigment in the cytoplasm. In poorly differentiated forms, malignant epithelial cells are discohesive, pleomorphic, anaplastic, and giant. The tumor has scant stroma and central necrosis because of its poor vascularization. A fifth form – lymphoepithelioma-like hepatocellular carcinoma – has also been described.
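The LI-RADS categories described in the imaging section above can be summarized schematically. The following Python sketch maps each category to a short working description; the descriptive wording is an assumption based on common LI-RADS usage, not a definition taken from this article.

# Illustrative only: descriptions are assumptions based on common
# LI-RADS usage, not definitions taken from this article.
LI_RADS = {
    "LR-1": "definitely benign",
    "LR-2": "probably benign",
    "LR-3": "intermediate probability of malignancy",
    "LR-4": "probably HCC",
    "LR-5": "definitely HCC",
}

def describe(category: str) -> str:
    """Return the working description for a LI-RADS category."""
    return LI_RADS.get(category, "unknown category")

# Example: a lesion meeting LR-5 criteria may be diagnosed without biopsy.
print(describe("LR-5"))  # definitely HCC

In this scheme, higher categories carry greater concern for cancer, which is why a lesion meeting the highest category's imaging criteria can be treated as diagnostic of HCC without biopsy.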
Staging
BCLC Staging System
The prognosis of HCC is affected by the staging of the tumor and the liver's function due to the effects of liver cirrhosis. A number of staging classifications for HCC are available; however, because of the unique nature of this carcinoma, a classification system must encompass all of the features that affect its categorization: tumor size and number, presence of vascular invasion and extrahepatic spread, liver function (levels of serum bilirubin and albumin, presence of ascites, and portal hypertension), and the general health status of the patient (defined by the ECOG classification and the presence of symptoms). Of all the staging classification systems available, the Barcelona Clinic Liver Cancer (BCLC) staging classification encompasses all of the above characteristics. This staging classification can be used to select people for treatment. Important features that guide treatment include:
size
spread (stage)
involvement of liver vessels
presence of a tumor capsule
presence of extrahepatic metastases
presence of daughter nodules
vascularity of the tumor
MRI is the best imaging method to detect the presence of a tumor capsule. The most common sites of metastasis are the lung, abdominal lymph nodes, and bone.

Prevention
Since hepatitis B and C are among the main causes of hepatocellular carcinoma, preventing infection is key to preventing HCC. Thus, childhood vaccination against hepatitis B may reduce the risk of liver cancer in the future. In patients with cirrhosis, alcohol consumption is to be avoided. Also, screening for hemochromatosis may be beneficial for some patients. Whether screening those with chronic liver disease for HCC improves outcomes is unclear.

Treatment
Treatment of hepatocellular carcinoma varies by the stage of disease, a person's likelihood to tolerate surgery, and availability of liver transplant:
Curative intention: for limited disease, when the cancer is limited to one or more areas within the liver, surgically removing the malignant cells may be curative. This may be accomplished by resecting the affected portion of the liver (partial hepatectomy) or, in some cases, by orthotopic liver transplantation of the entire organ.
"Bridging" intention: for limited disease which qualifies for potential liver transplantation, the person may undergo targeted treatment of some or all of the known tumor while waiting for a donor organ to become available.
"Downstaging" intention: for moderately advanced disease which has not spread beyond the liver but is too advanced to qualify for curative treatment, the person may be treated by targeted therapies in order to reduce the size or number of active tumors, with the goal of once again qualifying for liver transplant after this treatment.
Palliative intention: for more advanced disease, including spread of cancer beyond the liver, or in persons who may not tolerate surgery, treatment is intended to decrease symptoms of disease and maximize duration of survival.
Loco-regional therapy (also referred to as liver-directed therapy) refers to any one of several minimally invasive treatment techniques to focally target HCC within the liver. These procedures are alternatives to surgery and may be considered in combination with other strategies, such as a later liver transplantation. Generally, these treatment procedures are performed by interventional radiologists or surgeons, in coordination with a medical oncologist.
Loco-regional therapy may refer to either percutaneous therapies (e.g. cryoablation) or arterial catheter-based therapies (chemoembolization or radioembolization).

Surgical resection
Surgical removal of the tumor is associated with better cancer prognosis, but only 5–15% of patients are suitable for surgical resection, due to the extent of disease or poor liver function. Surgery is only considered if the entire tumor can be safely removed while preserving sufficient functional liver to maintain normal physiology. Thus, preoperative imaging assessment is critical both to determine the extent of HCC and to estimate the amount of residual liver remaining after surgery. To maintain liver function, residual liver volume should exceed 25% of total liver volume in a noncirrhotic liver and 40% in a cirrhotic liver. Surgery on diseased or cirrhotic livers is generally associated with higher morbidity and mortality. The overall recurrence rate after resection is 50–60%. The Singapore Liver Cancer Recurrence score can be used to estimate the risk of recurrence after surgery.

Liver transplantation
Liver transplantation, replacing the diseased liver with a cadaveric or a living donor liver, plays an increasing role in the treatment of HCC. Although outcomes following liver transplant were initially poor (20–36% survival rate), outcomes have significantly improved with improvement in surgical techniques and adoption of the Milan criteria at US transplantation centers. Expanded Shanghai criteria in China have resulted in overall survival and disease-free survival rates similar to those achieved using the Milan criteria. Studies from the late 2000s obtained higher survival rates, ranging from 67% to 91%. The risks of liver transplantation extend beyond the risk of the procedure itself. The immunosuppressive medication required after surgery to prevent rejection of the donor liver also impairs the body's natural ability to combat dysfunctional cells. If the tumor has spread undetected outside the liver before the transplant, the medication effectively increases the rate of disease progression and decreases survival. With this in mind, liver transplant "can be a curative approach for patients with advanced HCC without extrahepatic metastasis". In fact, among patients with compensated cirrhosis, transplantation is not associated with improved survival compared to hepatectomy, but is significantly more expensive. Patient selection is considered a major key to success.

Ablation
Radiofrequency ablation (RFA) uses high-frequency radio waves to destroy the tumor by local heating. Electrodes are inserted into the liver tumor under ultrasound image guidance using a percutaneous, laparoscopic, or open surgical approach. It is suitable for small tumors (<5 cm), and has the best outcomes in patients with a solitary tumor less than 4 cm. Since it is a local treatment and has minimal effect on normal healthy tissue, it can be repeated multiple times. Survival is better for those with smaller tumors: in one series of 302 patients, the three-year survival rates for lesions >5 cm, 2.1–5 cm, and ≤2 cm were 59%, 74%, and 91%, respectively. A large randomized trial comparing surgical resection and RFA for small HCC showed similar four-year survival and less morbidity for patients treated with RFA. Cryoablation is a technique used to destroy tissue using cold temperature. The tumor is not removed, and the destroyed cancer is left to be reabsorbed by the body.
Initial results in properly selected patients with unresectable liver tumors are equivalent to those of resection. Cryosurgery involves the placement of a stainless steel probe into the center of the tumor. Liquid nitrogen is circulated through the end of this device. The tumor and a half-inch margin of normal liver are frozen to −190 °C for 15 minutes, which is lethal to all tissues. The area is thawed for 10 minutes and then refrozen to −190 °C for another 15 minutes. After the tumor has thawed, the probe is removed, bleeding is controlled, and the procedure is complete. The patient spends the first postoperative night in the intensive care unit and typically is discharged in 3–5 days. Proper selection of patients and attention to detail in performing the cryosurgical procedure are mandatory to achieve good results and outcomes. Frequently, cryosurgery is used in conjunction with liver resection, as some of the tumors are removed while others are treated with cryosurgery. Percutaneous ethanol injection is well tolerated, with high response rates in small (<3 cm) solitary tumors; as of 2005, no randomized trial had compared resection to percutaneous treatments, and recurrence rates are similar to those after resection. However, a comparative study found that local therapy can achieve a five-year survival rate of around 60% for patients with small HCC.

Arterial catheter-based treatment
Transcatheter arterial chemoembolization (TACE) is performed for unresectable tumors or as a temporary treatment while waiting for liver transplant ("bridge to transplant"). TACE is done by injecting an antineoplastic drug (e.g. cisplatin) mixed with a radio-opaque contrast agent (e.g. Lipiodol) and an embolic agent (e.g. Gelfoam) into the right or left hepatic artery via the groin artery. The goal of the procedure is to restrict the tumor's vascular supply while delivering a targeted chemotherapeutic agent. TACE has been shown to increase survival and to downstage HCC in patients who exceed the Milan criteria for liver transplant. Patients who undergo the procedure are followed with CT scans and may need additional TACE procedures if the tumor persists. As of 2005, multiple trials showed objective tumor responses and slowed tumor progression, but questionable survival benefit compared to supportive care; the greatest benefit is seen in people with preserved liver function, absence of vascular invasion, and the smallest tumors. TACE is not suitable for large tumors (>8 cm), the presence of portal vein thrombus, tumors with a portal-systemic shunt, or patients with poor liver function. Selective internal radiation therapy (SIRT) can be used to destroy the tumor from within (thus minimizing exposure to healthy tissue). Similar to TACE, this is a procedure in which an interventional radiologist selectively injects the artery or arteries supplying the tumor with a radiotherapeutic agent. The agent is typically yttrium-90 (Y-90) incorporated into embolic microspheres that lodge in the tumor vasculature, causing ischemia and delivering their radiation dose directly to the lesion. This technique allows a higher local dose of radiation to be delivered directly to the tumor while sparing normal healthy tissue. While not curative, patients have increased survival. No studies have been done to compare whether SIRT is superior to TACE in terms of survival outcomes, although retrospective studies suggest similar efficacy. Two products are available: SIR-Spheres and TheraSphere.
The latter is an FDA-approved treatment for primary liver cancer (HCC), which has been shown in clinical trials to increase the survival rate of low-risk patients. SIR-Spheres are FDA-approved for the treatment of metastatic colorectal cancer; outside the US, SIR-Spheres are approved for the treatment of any nonresectable liver cancer, including primary liver cancer.

External beam therapy
The role of radiotherapy in the treatment of hepatocellular carcinoma has evolved as technological advancements in treatment delivery and imaging have provided a means for safe and effective radiotherapy delivery in a wide spectrum of HCC patients. In metastatic cases, radiotherapy can be used for palliative care. Proton therapy for unresectable hepatocellular carcinoma has been associated with improved survival relative to photon-based radiation therapy, which may be driven by a decreased incidence of post-treatment liver decompensation; a number of randomized controlled trials are currently ongoing.

Systemic
In disease which has spread beyond the liver, systemic therapy may be a consideration. In 2007, sorafenib, an oral multikinase inhibitor, was the first systemic agent approved for first-line treatment of advanced HCC. Trials have found modest improvement in overall survival: 10.7 months vs 7.9 months and 6.5 months vs 4.2 months. The most common side effects of sorafenib include a hand-foot skin reaction and diarrhea. Sorafenib is thought to work by blocking growth of both tumor cells and new blood vessels. Numerous other molecular targeted drugs are being tested as alternative first- and second-line treatments for advanced HCC. More recently, a host of additional targeted therapies and immune checkpoint inhibitors have been found to be effective against this disease. For instance, in the phase III trial IMbrave150, the combination of atezolizumab and bevacizumab was found to improve both overall and progression-free survival compared to sorafenib alone.

Other
Portal vein embolization (PVE): this technique is sometimes used to increase the volume of healthy liver in order to improve the chances of survival following surgical removal of diseased liver. For example, embolization of the right main portal vein results in compensatory hypertrophy of the left lobe, which may qualify the patient for a partial hepatectomy. Embolization is performed by an interventional radiologist using a percutaneous transhepatic approach. This procedure can also serve as a bridge to transplant. High-intensity focused ultrasound (HIFU) (as opposed to diagnostic ultrasound) is an experimental technique which uses high-powered ultrasound waves to destroy tumor tissue. A systematic review assessed 12 articles involving a total of 318 patients with hepatocellular carcinoma treated with yttrium-90 radioembolization. Excluding a study of only one patient, post-treatment CT evaluation of the tumor showed a response ranging from 29 to 100% of patients evaluated, with all but two studies showing a response of 71% or greater.

Prognosis
The usual outcome is poor, because only 10–20% of hepatocellular carcinomas can be removed completely using surgery. If the cancer cannot be completely removed, the disease is usually deadly within 3 to 6 months. This is partially due to late presentation with tumors, but also to the lack of medical expertise and facilities in the regions with high HCC prevalence. However, survival can vary, and occasionally people survive much longer than 6 months.
The prognosis for metastatic or unresectable HCC has improved due to the approval of sorafenib (Nexavar®) for advanced HCC.

Epidemiology
HCC is one of the most common tumors worldwide. The epidemiology of HCC exhibits two main patterns, one in North America and Western Europe and another in non-Western countries, such as those in sub-Saharan Africa, Central and Southeast Asia, and the Amazon basin. Males are usually affected more than females, and it is most common between the ages of 30 and 50. Hepatocellular carcinoma causes 662,000 deaths worldwide per year, about half of them in China.

Africa and Asia
In some parts of the world, such as sub-Saharan Africa and Southeast Asia, HCC is the most common cancer, generally affecting men more than women, with an age of onset between the late teens and 30s. This variability is in part due to the different patterns of hepatitis B and hepatitis C transmission in different populations: infection at or around birth predisposes to earlier cancers than infection later in life. The time between hepatitis B infection and development into HCC can be years, even decades, but from diagnosis of HCC to death the average survival period is only 5.9 months, according to one Chinese study during the 1970s–80s, or 3 months (median survival time) in sub-Saharan Africa, according to Manson's textbook of tropical diseases. HCC is one of the deadliest cancers in China, where chronic hepatitis B is found in 90% of cases. In Japan, chronic hepatitis C is associated with 90% of HCC cases.
Foods contaminated with Aspergillus flavus (especially peanuts and corn stored during prolonged wet seasons), which produces aflatoxins, pose another risk factor for HCC.

North America and Western Europe
The most common malignant tumors in the liver are metastases (spread) from tumors which originate elsewhere in the body. Among cancers that originate from liver tissue, HCC is the most common primary liver cancer. In the United States, the Surveillance, Epidemiology, and End Results (SEER) database program shows that HCC accounts for 65% of all cases of liver cancer. Because screening programs are in place for high-risk persons with chronic liver disease, HCC is often discovered much earlier in Western countries than in developing regions such as sub-Saharan Africa. Acute and chronic hepatic porphyrias (acute intermittent porphyria, porphyria cutanea tarda, hereditary coproporphyria, variegate porphyria) and tyrosinemia type I are risk factors for hepatocellular carcinoma. The diagnosis of an acute hepatic porphyria (AIP, HCP, VP) should be sought in patients with HCC without typical risk factors of hepatitis B or C, alcoholic liver cirrhosis, or hemochromatosis. Both active and latent genetic carriers of acute hepatic porphyrias are at risk for this cancer, although latent genetic carriers tend to develop the cancer at a later age than those with classic symptoms. Patients with acute hepatic porphyrias should be monitored for HCC. The incidence of HCC is relatively lower in the Western Hemisphere than in Eastern Asia. However, despite the statistics being low, the diagnosis of HCC has been increasing since the 1980s, making it one of the rising causes of death due to cancer. The common risk factor for HCC is hepatitis C, along with other health issues.

Research
Preclinical
Mipsagargin (G-202) has orphan drug designation as a treatment during chemotherapy for HCC. It is a thapsigargin-based prodrug with cytotoxic activity used to reduce blood flow to the tumor during treatment. Results from a phase II trial supported G-202 as a first-in-class PSMA-targeted prodrug and recommended that it move forward in clinical trials. Current research includes the search for genes that are dysregulated in HCC, antiheparanase antibodies, protein markers, non-coding RNAs (such as TUC338), and other predictive biomarkers. As similar research is yielding results in various other malignant diseases, it is hoped that identifying the aberrant genes and the resultant proteins could lead to the identification of pharmacological interventions for HCC. The development of three-dimensional culture methods provides a new approach for preclinical studies of cancer therapy using patient-derived organoids. These miniaturized organoid avatars of a patient's tumor recapitulate several features of the original tumor, rendering them an attractive model for drug-sensitivity testing and precision medicine for HCC and other types of primary liver cancer. Furthermore, HCC occurs in patients with liver disease. A biomarker termed the six-miRNA signature may help guide treatment of patients with HCC and is able to predict its recurrence in the liver.

Clinical
JX-594, an oncolytic virus, has orphan drug designation for this condition and is undergoing clinical trials. Hepcortespenlisimut-L (Hepko-V5), an oral cancer vaccine, also has US FDA orphan drug designation for HCC. Immunitor Inc. completed a phase II trial, published in 2017.
A randomized trial of people with advanced HCC showed no benefit for the combination of everolimus and pasireotide.

See also
Hemihypertrophy
Oncovirus
Portal hypertension

References

Further reading
"Long-term results of liver transplantation for hepatocellular carcinoma: an update of the University of Padova experience". September 23, 2013. Retrieved 6 February 2014.
Bruix, Jordi; Sherman, Morris; Practice Guidelines Committee (November 2005). "Management of hepatocellular carcinoma". Hepatology. 42 (5): 1208–1236. doi:10.1002/hep.20933. PMID 16250051. S2CID 5106445.
Liu, Chi-leung (December 2005). "Hepatic Resection for Hepatocellular Carcinoma". The Hong Kong Medical Diary, Medical Bulletin. 10 (12).

External links
Blue Faery: The Adrienne Wilson Liver Cancer Association (hepatocellular carcinoma patient support site)
NCI Liver Cancer Homepage
Chorioamnionitis
Chorioamnionitis, also known as intra-amniotic infection (IAI), is inflammation of the fetal membranes (amnion and chorion), usually due to bacterial infection. In 2015, a National Institute of Child Health and Human Development Workshop expert panel recommended use of the term "triple I" to address the heterogeneity of this disorder. The term triple I refers to intrauterine infection or inflammation or both and is defined by strict diagnostic criteria; this terminology has not been commonly adopted, although the criteria are used. Chorioamnionitis results from an infection caused by bacteria ascending from the vagina into the uterus and is associated with premature or prolonged labor. It triggers an inflammatory response that releases various inflammatory signaling molecules, leading to increased prostaglandin and metalloproteinase release. These substances promote uterine contractions and cervical ripening, which can cause premature birth. The risk of developing chorioamnionitis increases with the number of vaginal examinations performed in the final month of pregnancy, including during labor. Tobacco and alcohol use also put mothers at risk of developing chorioamnionitis. Chorioamnionitis can be caught early by looking for signs and symptoms such as fever, abdominal pain, or abnormal vaginal discharge. Administration of antibiotics if the amniotic sac bursts prematurely can prevent chorioamnionitis.

Signs and symptoms
The signs and symptoms of clinical chorioamnionitis include fever, leukocytosis (>15,000 cells/mm³), maternal (>100 bpm) or fetal (>160 bpm) tachycardia, uterine tenderness, and preterm rupture of membranes.

Causes
Causes of chorioamnionitis stem from bacterial infection as well as obstetric and other related factors.

Microorganisms
Bacterial, viral, and even fungal infections can cause chorioamnionitis. It most commonly arises from Ureaplasma, Fusobacterium, and Streptococcus bacterial species, and less commonly from Gardnerella, Mycoplasma, and Bacteroides species. Sexually transmitted infections, such as chlamydia and gonorrhea, can cause development of the condition as well. Studies are continuing to identify other microorganism classes and species as infection sources.

Obstetric and other
Birthing-related events, lifestyle, and ethnic background have been linked to an increase in the risk of developing chorioamnionitis apart from bacterial causation. Premature deliveries, ruptures of the amniotic sac membranes, prolonged labor, and primigravida childbirth are associated with this condition. At-term mothers who experience a combination of pre-labor membrane rupture and multiple invasive vaginal examinations, prolonged labor, or meconium in the amniotic fluid are at higher risk than at-term mothers experiencing just one of those events. In other studies, smoking, alcohol use, and drug use are noted as risk factors. Those of African American ethnicity are noted to be at higher risk.

Anatomy
The amniotic sac consists of two parts:
The outer membrane is the chorion. It is closest to the mother and physically supports the much thinner amnion. The chorion is the last and outermost of the membranes that make up the amniotic sac.
The inner membrane is the amnion. It is in direct contact with the amniotic fluid, which surrounds the fetus. The amniotic fluid exists within the amnion, and is where the fetus is able to grow and develop.
The swelling of the amnion and chorion is characteristic of chorioamnionitis, occurring when bacteria make their way into the amniotic fluid and establish an infection there.

Diagnosis
Pathologic
Chorioamnionitis is diagnosed from a histologic (tissue) examination of the fetal membranes. Confirmed histologic chorioamnionitis without any clinical symptoms is termed subclinical chorioamnionitis and is more common than symptomatic clinical chorioamnionitis. Infiltration of the chorionic plate by neutrophils is diagnostic of (mild) chorioamnionitis. More severe chorioamnionitis involves subamniotic tissue and may have fetal membrane necrosis and/or abscess formation. Severe chorioamnionitis may be accompanied by vasculitis of the umbilical blood vessels due to the fetus's inflammatory cells. If very severe, funisitis, inflammation of the umbilical cord connective tissue, occurs.

Suspected clinical diagnosis
The presence of fever between 38.0 °C and 39.0 °C alone is insufficient to indicate chorioamnionitis and is termed isolated maternal fever. Isolated maternal fever may not have an infectious cause and does not require antibiotic treatment. When intrapartum (during delivery) fever is higher than 39.0 °C, a suspected diagnosis of chorioamnionitis can be made. Alternatively, if intrapartum fever is between 38.0 °C and 39.0 °C, an additional risk factor must be present to make a presumptive diagnosis of chorioamnionitis. Additional risk factors include:
Fetal tachycardia
Maternal leukocytosis (>15,000 cells/mm³)
Purulent cervical drainage

Confirmed diagnosis
Diagnosis is typically not confirmed until after delivery; however, people with confirmed and suspected diagnoses receive the same post-delivery treatment regardless of diagnostic status. Diagnosis can be confirmed histologically or through amniotic fluid tests such as Gram staining, glucose levels, or other culture results consistent with infection.

Prevention
If the amniotic sac breaks early in pregnancy, the potential for introducing bacteria into the amniotic fluid increases. Administering antibiotics to the mother can potentially prevent chorioamnionitis and allow for a longer pregnancy. In addition, it has been shown that it is not necessary to deliver the fetus quickly after chorioamnionitis is diagnosed, so a C-section is not necessary unless a maternal health concern is present. However, research has found that beginning labor early, at approximately 34 weeks, can lessen the likelihood of fetal death and reduce the potential for excessive infection within the mother. In addition, providers should ask people suspected to have chorioamnionitis about signs and symptoms at scheduled obstetrics visits during pregnancy, including whether the individual has experienced vaginal discharge, fever, or abdominal pain.

Treatment
The American College of Obstetricians and Gynecologists Committee Opinion proposes the use of antibiotic treatment in intrapartum mothers with suspected or confirmed chorioamnionitis and in those with maternal fever without an identifiable cause. Intrapartum antibiotic treatment consists of:
Standard: ampicillin + gentamicin
Alternatives: ampicillin/sulbactam, ticarcillin/clavulanate, cefoxitin, cefotetan, piperacillin/tazobactam, or ertapenem
Cesarean delivery: ampicillin and gentamicin plus either clindamycin or metronidazole
Penicillin allergy: vancomycin + gentamicin, or gentamicin + clindamycin
However, there is not enough evidence to establish the most efficient antimicrobial regimen.
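As a minimal sketch of the regimen categories listed above, the following Python fragment selects an antibiotic combination from this article's list based on delivery mode and penicillin allergy. The function name and inputs are hypothetical, the allergy branch ignores delivery mode for simplicity, and the snippet is illustrative only, not clinical guidance.

# Illustrative sketch of the intrapartum regimen categories listed above.
# Not clinical guidance; the function name and inputs are hypothetical.
def choose_regimen(cesarean_delivery: bool, penicillin_allergy: bool) -> list[str]:
    if penicillin_allergy:
        # Either combination listed under "Penicillin allergy" above;
        # simplified here to ignore delivery mode.
        return ["vancomycin + gentamicin"]  # or ["gentamicin + clindamycin"]
    if cesarean_delivery:
        return ["ampicillin + gentamicin", "clindamycin or metronidazole"]
    return ["ampicillin + gentamicin"]  # standard regimen

print(choose_regimen(cesarean_delivery=True, penicillin_allergy=False))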
Starting treatment during the intrapartum period is more effective than starting it postpartum; it shortens the hospital stay for both the mother and the neonate. There is currently not enough evidence to dictate how long antibiotic therapy should last. Completion of treatment/cure is only considered after delivery.

Supportive measures
Acetaminophen is often used for treating fevers and may be beneficial for fetal tachycardia. There can be an increased likelihood of neonatal encephalopathy when mothers have intrapartum fever.

Outcomes
Chorioamnionitis has possible associations with numerous neonatal conditions. Intrapartum (during labor) chorioamnionitis may be associated with neonatal pneumonia, meningitis, sepsis, and death. Long-term infant complications such as bronchopulmonary dysplasia, cerebral palsy, and Wilson-Mikity syndrome have been associated with the bacterial infection. Furthermore, histological chorioamnionitis may increase the likelihood of newborn necrotizing enterocolitis, in which one or more sections of the bowel die. This occurs when the fetal gut barrier becomes compromised and is more susceptible to conditions like infection and sepsis. In addition, chorioamnionitis can act as a risk factor for premature birth and periventricular leukomalacia.

Complications
For mother and fetus, chorioamnionitis may lead to short-term and long-term issues when microbes move to different areas or trigger inflammatory responses due to infection.

Maternal complications
Higher risk for C-section
Postpartum hemorrhage
Endometritis
Bacteremia (often due to Group B streptococcus and Escherichia coli)
Pelvic abscess
Mothers with chorioamnionitis who undergo a C-section may be more likely to develop pelvic abscesses, septic pelvic thrombophlebitis, and infections at the surgical site.

Fetal complications
Fetal death
Neonatal sepsis

Neonatal complications
Perinatal death
Asphyxia
Early-onset neonatal sepsis
Septic shock
Neonatal pneumonia
Infant respiratory distress
In the long term, infants may be more likely to experience cerebral palsy or neurodevelopmental disabilities. Disability development is related to the activation of the fetal inflammatory response syndrome (FIRS) when the fetus is exposed to infected amniotic fluid or other foreign entities. This systemic response results in neutrophil and cytokine release that can impair the fetal brain and other vital organs. Compared to infants with clinical chorioamnionitis, cerebral palsy appears to occur at a higher rate in those with histologic chorioamnionitis; however, more research is needed to examine this association. There is also concern about the impact of FIRS on infant immunity, as this is a critical time for growth and development. For instance, it may be linked to chronic inflammatory disorders, such as asthma.

Epidemiology
Chorioamnionitis occurs in approximately 4% of births in the United States, although many other factors can increase the risk. For example, among births with premature rupture of membranes (PROM), between 40 and 70% involve chorioamnionitis. Furthermore, clinical chorioamnionitis is implicated in 12% of all cesarean deliveries. Some studies have shown that the risk of chorioamnionitis is higher in those of African American ethnicity, those with immunosuppression, and those who smoke, use alcohol, or abuse drugs.

See also
Chronic deciduitis
Funisitis
Placentitis
Wharton's jelly

Notes

References

External links
Overview at Cleveland Clinic.
Cerebral palsy inflammation link (29 November 2003) at BBC.
Gray baby syndrome
Gray baby syndrome (also termed gray or grey syndrome) is a rare but serious, even fatal, side effect that occurs in newborn infants (especially premature babies) following accumulation of the antibiotic chloramphenicol. Chloramphenicol is a broad-spectrum antibiotic that has been used to treat a variety of bacterial infections, such as those caused by Streptococcus pneumoniae, as well as typhoid fever, meningococcal sepsis, cholera, and eye infections. Chloramphenicol works by binding to ribosomal subunits, which blocks transfer ribonucleic acid (tRNA) and prevents the synthesis of bacterial proteins. Chloramphenicol has also been given prophylactically to neonates born before 37 weeks of gestation. In 1958, newborns born prematurely due to rupture of the amniotic sac were given chloramphenicol to prevent possible infections, and it was noticed that these newborns had a higher mortality rate compared with those who were not treated with the antibiotic. Over the years, chloramphenicol has been used less in clinical practice due to the risks of toxicity not only to neonates, but also to adults, in whom it carries a risk of aplastic anemia. Chloramphenicol is now reserved for certain severe bacterial infections that were not successfully treated with other antibiotic medications. Signs and symptoms are summarized in the WHO Model Formulary for Children 2010 under the rare adverse effects section for chloramphenicol.

Signs and symptoms
Since the syndrome is due to the accumulation of chloramphenicol, the signs and symptoms are dose related. According to Kasten's review published in the Mayo Clinic Proceedings, a serum concentration of more than 50 μg/mL is a warning sign, while Hammett-Stabler and John, in their review of antimicrobial drugs, state that the usual therapeutic peak level is 10–20 μg/mL and is expected to be reached 0.5–1.5 hours after intravenous administration. Signs and symptoms commonly appear 2 to 9 days after initiation of the medication, which allows the serum concentration to build up to the toxic concentration above. Common signs and symptoms include loss of appetite, fussiness, vomiting, ashen gray color of the skin, hypotension (low blood pressure), cyanosis (blue discoloration of lips and skin), hypothermia, cardiovascular collapse, hypotonia (decreased muscle tone), abdominal distension, irregular respiration, and increased blood lactate.

Pathophysiology
Two pathophysiologic mechanisms are thought to play a role in the development of gray baby syndrome after exposure to chloramphenicol. The condition is due to a lack of glucuronidation reactions occurring in the baby (phase II hepatic metabolism), leading to an accumulation of toxic chloramphenicol metabolites:
Metabolism: the UDP-glucuronyl transferase enzyme system in infants, especially premature infants, is not fully developed and is incapable of metabolizing the excessive drug load needed to excrete chloramphenicol.
Elimination: insufficient renal excretion of the unconjugated drug.
Insufficient metabolism and excretion of chloramphenicol leads to increased blood concentrations of the drug, causing blockade of electron transport in the liver, myocardium, and skeletal muscles. Since electron transport is an essential part of cellular respiration, its blockade can result in cell damage.
In addition, the presence of chloramphenicol weakens the binding of bilirubin and albumin, so increased levels of the drug can lead to high levels of free bilirubin in the blood, resulting in brain damage or kernicterus. If left untreated, possible bleeding, renal (kidney) and/or hepatic (liver) failure, anemia, infection, confusion, weakness, blurred vision, or eventually death can be expected. Additionally, chloramphenicol is poorly soluble, due to an absence of acidic and basic groups in its molecular structure. As a result, larger amounts of the medication are required to achieve the desired therapeutic effect. The need for high volumes of a medication that can cause various toxicities is another way chloramphenicol can potentially lead to gray baby syndrome.

Diagnosis
Gray baby syndrome should be suspected in a newborn with abdominal distension, progressive pallid cyanosis, irregular respirations, and refusal to breastfeed. The cause of gray baby syndrome can be the direct use of intravenous or oral chloramphenicol in neonates. A direct chronological relation between the use of the medication and the signs and symptoms of the syndrome should be found in the previous medical history. Regarding the possible routes of exposure, gray baby syndrome may not come from the mother's use of chloramphenicol during pregnancy or breastfeeding. The Drugs and Lactation Database (LactMed) states that "milk concentrations are not sufficient to induce gray baby syndrome". It is also reported that the syndrome may not develop in infants when their mothers use the medication late in pregnancy. According to the Oxford Review, chloramphenicol given to mothers during pregnancy did not result in gray baby syndrome; rather, the syndrome was caused by infants receiving supratherapeutic doses of chloramphenicol after birth. The presentation of symptoms can depend on the level of exposure to the drug, given its dose-related nature. A broad diagnostic workup is usually needed for babies who present with cyanosis. To support the diagnosis, blood work should be done to determine the serum chloramphenicol level, and to further evaluate chloramphenicol toxicity, a complete blood panel, including serum ketones and glucose (due to the risk of hypoglycemia), and a metabolic panel should be completed to help determine whether an infant has the syndrome. Other tools used to help with diagnosis include CT scans, ultrasound, and electrocardiogram.

Prevention
Since the syndrome is a side effect of chloramphenicol, prevention is primarily related to the proper use of the medication. The WHO Model Formulary for Children 2010 recommends reserving chloramphenicol for life-threatening infections. As well as using chloramphenicol only when necessary, it should be used for short periods of time to prevent the potential for toxicity. In particular, this medication should not be prescribed in neonates less than one week old, due to the significant risk of toxicity; preterm infants especially should not be administered chloramphenicol. Gray baby syndrome has been noted to be dose dependent, as it typically occurs in neonates who have received a daily dose greater than 200 milligrams. When chloramphenicol is necessary, the condition can be prevented by using chloramphenicol at the recommended doses and monitoring blood levels; alternatively, third-generation cephalosporins can be effectively substituted for the drug, without the associated toxicity.
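As a minimal sketch of the blood-level monitoring described above, the following Python fragment classifies a measured peak concentration against the therapeutic range (10–20 μg/mL) and the warning threshold (>50 μg/mL) quoted earlier in this article. The function name is hypothetical, and the snippet is illustrative only, not clinical guidance.

# Thresholds are the figures quoted in this article; the function name
# is hypothetical, and this is not clinical guidance.
def classify_peak_level(peak_ug_per_ml: float) -> str:
    if peak_ug_per_ml > 50:
        return "warning: concentration associated with toxicity"
    if peak_ug_per_ml > 20:
        return "above the usual therapeutic peak range"
    if peak_ug_per_ml >= 10:
        return "within the usual therapeutic peak range (10-20 ug/mL)"
    return "below the usual therapeutic peak range"

print(classify_peak_level(55))  # warning: concentration associated with toxicity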
Also, repeated courses of administration and prolonged treatment should be avoided as much as possible. In terms of hepatic development in neonates, it takes only weeks from birth for UDP-glucuronyl transferase (UDPGT) expression and function to reach an adult-like level, whereas function is only about 1% of that level in late pregnancy, even right before birth. According to the MSD Manuals, chloramphenicol should not be given to neonates younger than 1 month of age at a starting dose of more than 25 mg/kg/day. The serum concentration of the medication should be monitored to titrate to a therapeutic level and to prevent toxicity. Because chloramphenicol can suppress bone marrow activity, other medications that neonates may be taking that can decrease blood cell counts should be reviewed; rifampicin and trimethoprim are examples of medications with bone marrow-suppressing effects and are contraindicated for concomitant use with chloramphenicol. With regard to bone marrow suppression, chloramphenicol has two major manifestations. The first affects the formation of blood cells such as erythrocytes and can be reversible, since it is an early sign of toxicity. The second is bone marrow aplasia, which occurs late in toxicity and cannot be reversed in some cases. Chloramphenicol is contraindicated in persons who are breastfeeding, due to the risk of toxic effects in the baby, but if maternal use of chloramphenicol cannot be avoided, close monitoring of the baby's symptoms, such as feeding difficulties, and blood work is recommended.

Treatment
Chloramphenicol therapy should be stopped immediately if objective or subjective signs of gray baby syndrome are suspected, since the syndrome can be fatal for the infant if it is not diagnosed early, leading to anemia, shock, and end-organ damage. After discontinuing the antibiotic, the side effects caused by the toxicity should be treated. This includes treating hypoglycemia to help prevent hemodynamic instability, as well as rewarming the infant if hypothermia has developed. Since symptoms of gray baby syndrome are correlated with elevated serum chloramphenicol concentrations, exchange transfusion may be required to remove the drug. Charcoal column hemoperfusion is an extracorporeal blood-purification technique that has shown significant effects but is associated with numerous side effects. The associated side effects are not the only reason this method is not a first-line therapy; according to the American Journal of Kidney Diseases, elevated cartridge prices and the limited viable lifespan of the product are additional deterring factors. Phenobarbital and theophylline are two drugs in particular for which charcoal hemoperfusion has shown significant efficacy, aside from its traditional indication for chronic aluminum toxicity in people with end-stage renal disease (ESRD). Sometimes, phenobarbital is used to induce UDP-glucuronyl transferase enzyme function. For hemodynamically unstable neonates, supportive care measures such as resuscitation, oxygenation, and treatment for hypothermia are common practice when cessation of chloramphenicol alone is insufficient. With sepsis being a complication of severe gray baby syndrome, use of broad-spectrum antibiotics, such as vancomycin, is a recommended treatment option.
Third-generation cephalosporins have also shown efficacy in treating sepsis associated with gray baby syndrome.

References
Spasticity
Spasticity (from Greek spasmos, "drawing, pulling") is a feature of altered skeletal muscle performance with a combination of paralysis, increased tendon reflex activity, and hypertonia. It is also colloquially referred to as an unusual "tightness", stiffness, or "pull" of muscles. Clinically, spasticity results from the loss of inhibition of motor neurons, causing excessive velocity-dependent muscle contraction. This ultimately leads to hyperreflexia, an exaggerated deep tendon reflex. Spasticity is often treated with the drug baclofen, which acts as an agonist at GABA receptors, which are inhibitory; GABA's inhibitory actions contribute to baclofen's efficacy as an anti-spasticity agent. Spastic cerebral palsy is the most common form of cerebral palsy, a group of permanent movement problems that do not get worse over time.

Cause
Spasticity mostly occurs in disorders of the central nervous system (CNS) affecting the upper motor neurons in the form of a lesion, such as spastic diplegia, or upper motor neuron syndrome. It can also be present in various types of multiple sclerosis, where it occurs as a symptom of progressively worsening attacks on myelin sheaths and is thus unrelated to the types of spasticity present in spasticity disorders rooted in neuromuscular cerebral palsy. The cause of spasticity is thought to be an imbalance in the excitatory and inhibitory input to α motor neurons caused by damage to the spinal cord and/or central nervous system. The damage causes a change in the balance of signals between the nervous system and the muscles, leading to increased excitability in muscles. This is common in people who have cerebral palsy, brain injuries, or a spinal cord injury, but it can happen to anybody, e.g. after a stroke. One factor thought to be related to spasticity is the stretch reflex. This reflex is important in coordinating normal movements, in which muscles are contracted and relaxed, and in keeping the muscle from stretching too far. Although the result of spasticity is problems with the muscles, spasticity is actually caused by an injury to a part of the central nervous system (the brain or spinal cord) that controls voluntary movements. Receptors in the muscles sense the amount of stretch in the muscle and send that signal to the brain; the brain responds by sending a message back to reverse the stretch by contracting or shortening the muscle. Overall, a defining feature of spasticity is that the increased resistance to passive stretch is velocity-dependent. Lance (1980) describes it this way: "...a motor disorder, characterised by a velocity-dependent increase in tonic stretch reflexes (muscle tone) with exaggerated tendon jerks, resulting from hyper-excitability of the stretch reflex as one component of the upper motor neurone (UMN) syndrome". Spasticity is found in conditions where the brain and/or spinal cord are damaged or fail to develop normally; these include cerebral palsy, multiple sclerosis, spinal cord injury, and acquired brain injury including stroke. Damage to the CNS as a result of stroke or spinal cord injury alters the net inhibition of peripheral nerves in the affected region. This change in input to bodily structures tends to favor excitation and therefore increases nerve excitability.
CNS damage also causes nerve cell membranes to rest in a more depolarized state. The combination of decreased inhibition and an increased depolarized state of cell membranes decreases the action potential threshold for nerve signal conduction, and thus increases the activity of structures innervated by the affected nerves (spasticity). Muscles affected in this way have many other potential features of altered performance in addition to spasticity, including muscle weakness; decreased movement control; clonus (a series of involuntary rapid muscle contractions often symptomatic of muscle overexertion and/or muscle fatigue); exaggerated deep tendon reflexes; and decreased endurance.

Spasticity and clonus
Clonus (i.e. involuntary, rhythmic muscular contractions and relaxations) tends to co-exist with spasticity in many cases of stroke and spinal cord injury, likely due to their common physiological origins. Some consider clonus simply an extended outcome of spasticity. Although closely linked, clonus is not seen in all patients with spasticity. Clonus tends not to be present with spasticity in patients with significantly increased muscle tone, as the muscles are constantly active and therefore not engaging in the characteristic on/off cycle of clonus. Clonus results from increased motor neuron excitation (decreased action potential threshold) and is common in muscles with long conduction delays, such as the long reflex tracts found in distal muscle groups. Clonus is commonly seen in the ankle but may exist in other distal structures as well, such as the knee or spine.

Diagnosis
The clinical underpinnings of two of the most common spasticity conditions, spastic cerebral palsy and multiple sclerosis, can be described as follows: in spastic diplegia, the upper motor neuron lesion often arises as a result of neonatal asphyxia, while in conditions like multiple sclerosis, spasticity is thought by some to result from autoimmune destruction of the myelin sheaths around nerve endings, which can mimic the gamma-aminobutyric acid deficiencies present in the damaged nerves of children with spastic cerebral palsy, leading to roughly the same presentation of spasticity, although clinically it is fundamentally different from the latter. Spasticity is assessed by feeling the resistance of the muscle to passive lengthening in its most relaxed state. A spastic muscle will have immediately noticeable, often quite forceful, increased resistance to passive stretch when moved with speed and/or while attempting to be stretched out, as compared to the non-spastic muscles in the same person's body (if any exist). As there are many features of the upper motor neuron syndrome, there are likely to be multiple other changes in affected musculature and surrounding bones, such as progressive malalignments of bone structure around the spastic muscles (leading, for example, to scissor gait and tip-toeing gait due to ankle equinus or ankle plantar flexion deformity in children with spastic cerebral palsy). Scissor gait is caused by spasticity of the hip adductor muscles, while tip-toeing gait is caused by spasticity of the gastrocnemius-soleus muscle complex, or calf musculature. Also, following an upper motor neuron lesion, there may be multiple muscles affected, to varying degrees, depending on the location and severity of the upper motor neuron damage. The result for the affected individual is that they may have any degree of impairment, ranging from a mild to a severe movement disorder.
A relatively mild movement disorder may contribute to a loss of dexterity in an arm, or difficulty with high-level mobility such as running or walking on stairs. A severe movement disorder may result in marked loss of function with minimal or no volitional muscle activation. There are several scales used to measure spasticity, such as the King's hypertonicity scale, the Tardieu, and the modified Ashworth. Of these three, only the King's hypertonicity scale measures a range of muscle changes from the UMN lesion, including active muscle performance as well as passive response to stretch. Assessment of a movement disorder featuring spasticity may involve several health professionals, depending on the affected individual's situation and the severity of their condition. This may include physical therapists, physicians (including neurologists and rehabilitation physicians), orthotists, and occupational therapists. Assessment is needed of the affected individual's goals, their function, and any symptoms that may be related to the movement disorder, such as pain. A thorough assessment will include analysis of posture, active movement, muscle strength, movement control and coordination, and endurance, as well as spasticity (response of the muscle to stretch). Spastic muscles typically demonstrate a loss of selective movement, including a loss of eccentric control (decreased ability to actively lengthen). While multiple muscles in a limb are usually affected in the upper motor neuron syndrome, there is usually an imbalance of activity, such that there is a stronger pull in one direction, such as into elbow flexion. Decreasing the degree of this imbalance is a common focus of muscle strengthening programs. Spastic movement disorders also typically feature a loss of stabilisation of an affected limb or the head from the trunk, so a thorough assessment requires this to be analysed as well. Secondary effects are likely to impact the assessment of spastic muscles. If a muscle has impaired function following an upper motor neuron lesion, other changes such as increased muscle stiffness are likely to affect the feeling of resistance to passive stretch. Other secondary changes, such as loss of muscle fibres following acquired muscle weakness, are likely to compound the weakness arising from the upper motor neuron lesion. In severely affected spastic muscles, there may be marked secondary changes, such as muscle contracture, particularly if management has been delayed or absent.

Treatment
Treatment should be based on assessment by relevant health professionals. For spastic muscles with mild-to-moderate impairment, exercise should be the mainstay of management, and it will likely need to be prescribed by a physiatrist (a doctor specializing in rehabilitation medicine), occupational therapist, physical therapist, accredited exercise physiologist (AEP), or other health professional skilled in neurological rehabilitation. Muscles with severe spasticity are likely to be more limited in their ability to exercise and may require help to do this. In children with spastic cerebral palsy, the main treatment of spasticity is conservative, in the form of botulinum toxin A injection and various physical therapy modalities such as serial casting, sustained stretching, and pharmacologic treatment. Spasticity in children with cerebral palsy is usually generalized, although with varying degrees of severity across the affected extremities and trunk musculature.
Neglected or inappropriately treated spasticity can eventually lead to joint contractures. Both spasticity and contractures can cause joint subluxations or dislocations and severe gait difficulties. In the event of contracture, there is no role for conservative treatment. Hip dislocation and ankle equinus deformity are known to arise primarily from muscle spasticity. Orthopedic surgical reconstruction of the hip is commonly practiced to improve sitting balance, ease nursing care, and relieve hip pain. Treatment should be done with firm and constant manual contact positioned over nonspastic areas to avoid stimulating the spastic muscle(s). Alternatively, rehabilitation robotics can be used to provide high volumes of passive or assisted movement, depending on the individual's requirements; this form of therapy can be useful if therapists are at a premium, and has been found effective at reducing spasticity in patients with strokes. For muscles that lack any volitional control, such as after complete spinal cord injury, exercise may be assisted and may require equipment, such as a standing frame to sustain a standing position. A general treatment guideline can be followed that involves:
An initial focus on activating contraction of antagonist muscles to provide reciprocal inhibition and lengthen spastic muscles
Reciprocal actions: agonist contractions are performed first in small ranges, progressing to larger arcs of movement
Minimizing highly stressful activities early in training
Targeting functional skills for training
Educating patients and family/caregivers about the importance of maintaining range of motion and doing daily exercises
Medical interventions may include such medications as baclofen, diazepam, dantrolene, or clonazepam. Phenol injections, or botulinum toxin injections into the muscle belly, can be used to attempt to dampen the signals between nerve and muscle. The effectiveness of medications varies between individuals and varies based on the location of the upper motor neuron lesion (in the brain or the spinal cord). Medications are commonly used for spastic movement disorders, but research has not shown functional benefit for some drugs. Some studies have shown that medications have been effective in decreasing spasticity, but that this has not been accompanied by functional benefits. Surgery could be required for a tendon release in the case of a severe muscle imbalance leading to contracture. In spastic CP, selective dorsal rhizotomy has also been used to decrease muscle overactivity. Incorporating hydrotherapy in the treatment program may help decrease spasm severity, promote functional independence, improve motor recovery, and decrease the medication required for spasticity, which may help reduce the side effects possible with oral drug treatments. A 2004 study compared the effects of hydrotherapy on spasticity, oral baclofen dosage, and Functional Independence Measure (FIM) scores of patients with a spinal cord injury (SCI). It found that subjects who received hydrotherapy treatment obtained increased FIM scores and a decreased intake of oral baclofen medication. A 2009 study looked at the effect of hydrotherapy on decreasing spasticity in post-stroke, hemiparetic patients with limited mobility and concluded that there was a significantly larger increase in FIM scores compared to the control group that did not receive hydrotherapy.
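For reference, the modified Ashworth scale named in the assessment discussion above is commonly published as six grades. The wording below is an assumption paraphrased from general clinical usage, not taken from this article; the snippet simply enumerates the grades.

# Assumption: grade wording paraphrased from general clinical usage of the
# modified Ashworth scale; these descriptions are not taken from this article.
MODIFIED_ASHWORTH = {
    "0": "no increase in muscle tone",
    "1": "slight increase; catch and release at end of range",
    "1+": "slight increase; catch followed by minimal resistance",
    "2": "more marked increase through most of the range; limb easily moved",
    "3": "considerable increase; passive movement difficult",
    "4": "affected part rigid in flexion or extension",
}

for grade, meaning in MODIFIED_ASHWORTH.items():
    print(f"Grade {grade}: {meaning}")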
Prognosis The prognosis for those with spastic muscles depends on multiple factors, including the severity of the spasticity and the associated movement disorder, access to specialised and intensive management, and the ability of the affected individual to maintain the management plan (particularly an exercise program). Most people with a significant UMN lesion will have ongoing impairment, but most of these will be able to make progress. The most important indicator of the ability to progress is seeing improvement, but in many spastic movement disorders improvement may not be seen until the affected individual receives help from a specialised team or health professional. Research Doublecortin-positive cells, similar to stem cells, are extremely adaptable and, when extracted from a brain, cultured and then re-injected into a lesioned area of the same brain, they can help repair and rebuild it. Treatment using them would take some time to become available for general public use, as it has to clear regulatory approval and clinical trials. History The understanding of spasticity and of the upper motor neuron lesion on which it is based has progressed considerably in recent decades. However, the term "spasticity" is still often used interchangeably with "upper motor neuron syndrome" in clinical settings, and it is not unusual to see patients labeled as "spastic" who actually demonstrate not just spasticity alone, but also an array of upper motor neuron findings. Research has clearly shown that exercise is beneficial for spastic muscles, even though in the very early days of research it was assumed that strength exercise would increase spasticity. Also, from at least the 1950s through at least the 1980s, there was a strong focus on other interventions for spastic muscles, particularly stretching and splinting, but the evidence does not support these as effective. While splinting is not considered effective for decreasing spasticity, a range of different orthotics are effectively used for preventing muscle contractures in patients with spasticity. In the case of spastic diplegia there is also a permanent neurosurgical treatment for spasticity, selective dorsal rhizotomy, which directly targets the nerves in the spine that cause the spasticity and destroys them, so that the spasticity cannot be activated at all. See also References Further reading Lance JW: Symposium synopsis, in Feldman RG, Young RR, Koella WP (eds): Spasticity: Disordered Motor Control. Chicago, Yearbook Medical Publishers, 1980 "Other Complications of Spinal Cord Injury: Spasticity." (Louis Calder Memorial Library of the University of Miami/Jackson Memorial Medical Center, October 3, 2002), http://calder.med.miami.edu/pointis/spasticity.html Maureen E. Neistadt; Elizabeth Blesedell Crepeau, eds. (1998). Willard and Spackman's occupational therapy. Philadelphia: Lippincott-Raven Publishers. pp. 233. ISBN 978-0-397-55192-7. This article contains text from the public domain document at http://www.ninds.nih.gov/health_and_medical/disorders/spasticity_doc.htm Douglas, Wallace M.; Bruce H Ross; Christine K. Thomas (Aug 25, 2005). "Motor unit behaviour during clonus". Journal of Applied Physiology. 99 (6): 2166–2172. CiteSeerX 10.1.1.501.9581. doi:10.1152/japplphysiol.00649.2005. PMID 16099891. S2CID 8598394. Hidler, Joseph M.; W. Zev Rymer (September 1999). "A Simulation Study of Reflex Instability in Spasticity: Origins of Clonus". IEEE Transactions on Rehabilitation Engineering. 7 (3): 327–340. doi:10.1109/86.788469. PMID 10498378. S2CID 18315004.
== External links ==
Standing
Standing, also referred to as orthostasis, is a position in which the body is held in an erect ("orthostatic") position and supported only by the feet. Although seemingly static, the body rocks slightly back and forth from the ankle in the sagittal plane. The sagittal plane bisects the body into right and left sides. The sway of quiet standing is often likened to the motion of an inverted pendulum. Standing at attention is a military standing posture, as is standing at ease, but these terms are also used in military-style organisations and in some professions which involve standing, such as modeling. At ease refers to the classic military position of standing with legs slightly apart, not in as formal or regimented a pose as standing at attention. In modeling, model at ease refers to the model standing with one leg straight, with the majority of the weight on it, and the other leg tucked over and slightly around. Control Standing posture relies on dynamic rather than static balance. The human center of mass is in front of the ankle, and unlike in tetrapods, the base of support is narrow, consisting of only two feet. A static pose would cause humans to fall forward onto the face. In addition, there are constant external perturbations, such as breezes, and internal perturbations that come from respiration. Erect posture requires adjustment and correction. There are many mechanisms in the body that are suggested to control this, e.g. a spring action in muscles, higher control from the nervous system or core muscles. Humans begin to stand between 8 and 12 months of age. Spring action Traditionally, such correction was explained by the spring action of the muscles, a local mechanism taking place without the intervention of the central nervous system (CNS). Recent studies, however, show that this spring action by itself is insufficient to prevent a forward fall. Also, human sway is too complicated to be adequately explained by spring action. Nervous system According to current theory, the nervous system continually and unconsciously monitors our direction and velocity. The vertical body axis alternates between tilting forward and backward. Before each tilt reaches the tip-over point, the nervous system counters with a signal to reverse direction. Sway also occurs in the hip, and there is a slight winding and unwinding of the lower back. An analogy would be a ball that volleys back and forth between two players without touching the ground. The muscle exertion required to maintain an aligned standing posture is crucial but minimal. Electromyography has detected slight activity in the muscles of the calves, hips and lower back. Core muscles The core muscles play a role in maintaining stability. The core muscles are deep muscle layers that lie close to the spine and provide structural support. The transverse abdominals wrap around the spine and function as a compression corset. The multifidi are intersegmental muscles. Dysfunction in the core muscles has been implicated in back pain. Expansion of pendulum model Some investigators have replaced the ankle inverted pendulum analogy with a model of double-linked pendulums involving both hip and ankle sway. Neither model is accepted as more than an approximation. Analysis of postural sway shows much more variation than is seen in a physical pendulum or even a pair of coupled pendulums. Furthermore, quiet standing involves activity in all joints, not just the ankles or hips.
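A minimal quantitative sketch of the single-link model (an illustration under the stated assumptions, not a result from the sources discussed here): treating the body as an inverted pendulum of mass m whose centre of mass sits a height h above the ankle, the small-angle sway dynamics are approximately

\[ I\,\ddot{\theta}(t) \approx mgh\,\theta(t) - T(t) \]

where \theta is the sway angle from vertical, I \approx mh^2 is the moment of inertia about the ankle, and T(t) is the net corrective ankle torque from muscle and passive tissue. With T = 0 the upright position is unstable and \theta grows on a time scale of \sqrt{I/(mgh)}; a purely spring-like torque T = k\theta stabilises the pendulum only if its stiffness k exceeds the toppling value mgh. Framed this way, the finding noted above that spring action by itself is insufficient corresponds to intrinsic ankle stiffness falling short of this threshold.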
In the past the variation was attributed to random effects. A more recent interpretation is that sway has a fractal structure. A fractal pattern consists of a motif repeated at varying levels of magnification. The levels are related by a ratio called the fractal dimension. It is believed that the fractal pattern offers a range of fine and gross control tuning. Fractal dimension is altered in some motor dysfunctions; in other words, the body cannot compensate well enough for imbalances. Pathology Although standing per se is not dangerous, there are pathologies associated with it. One short-term condition is orthostatic hypotension, and long-term conditions are sore feet, stiff legs and low back pain. Orthostatic hypotension Orthostatic hypotension is characterized by an unusually low blood pressure when the patient is standing up. It can cause dizziness, lightheadedness, headache, blurred or dimmed vision and fainting, because the brain does not get sufficient blood supply. This, in turn, is caused by gravity, pulling the blood into the lower part of the body. Normally, the body compensates, but in the presence of other factors, e.g. hypovolemia, diseases and medications, this response may not be sufficient. There are medications to treat hypotension. In addition, many lifestyle measures are advised; many of them, however, are specific to a certain cause of orthostatic hypotension, e.g. maintaining a proper fluid intake in dehydration. Orthostatic hypercoagulability Prolonged motionless standing significantly activates the coagulation cascade, a phenomenon called orthostatic hypercoagulability. Overall, it causes an increase in transcapillary hydrostatic pressure. As a result, approximately 12% of blood plasma volume crosses into the extravascular compartment. This plasma shift causes an increase in the concentration of coagulation factors and other proteins of coagulation, in turn causing hypercoagulability; since the remaining proteins are dissolved in roughly 88% of the original plasma volume, their concentrations rise by a factor of about 1/0.88, i.e. roughly 14%. Orthostatic tremor Orthostatic tremor is characterized by fast (12–18 Hz) rhythmic muscle contractions that occur in the legs and trunk immediately after standing. No other clinical signs or symptoms are present, and the shaking ceases when the patient sits or is lifted off the ground. The high frequency of the tremor often makes the tremor look like rippling of the leg muscles while standing. Long-term complications Standing per se does not pose any harm. In the long term, however, complications may arise. See also
At attention
Agonoclita, a former Christian group that said prayers standing
Human position
Prostration
Standing desk
Bowing (social)
References External links Dizziness-and-balance.com – Description apta.org – The Secret of Good Posture
Pituitary apoplexy
Pituitary apoplexy is bleeding into or impaired blood supply of the pituitary gland. This usually occurs in the presence of a tumor of the pituitary, although in 80% of cases this has not been diagnosed previously. The most common initial symptom is a sudden headache, often associated with a rapidly worsening visual field defect or double vision caused by compression of nerves surrounding the gland. This is often followed by acute symptoms caused by lack of secretion of essential hormones, predominantly adrenal insufficiency. The diagnosis is achieved with magnetic resonance imaging and blood tests. Treatment is by the timely correction of hormone deficiencies. In many cases, surgical decompression is required. Many people who have had a pituitary apoplexy develop pituitary hormone deficiencies and require long-term hormone supplementation. The first case of the disease was recorded in 1898. Signs and symptoms Acute symptoms The initial symptoms of pituitary apoplexy are related to the increased pressure in and around the pituitary gland. The most common symptom, in over 95% of cases, is a sudden-onset headache located behind the eyes or around the temples. It is often associated with nausea and vomiting. Occasionally, the presence of blood leads to irritation of the lining of the brain, which may cause neck rigidity and intolerance to bright light, as well as a decreased level of consciousness. This occurs in 24% of cases. Pressure on the part of the optic nerve known as the chiasm, which is located above the gland, leads to loss of vision on the outer side of the visual field on both sides, as this corresponds to areas on the retinas supplied by these parts of the optic nerve; it is encountered in 75% of cases. Visual acuity is reduced in half of cases, and over 60% have a visual field defect. The visual loss depends on which part of the nerve is affected. If the part of the nerve between the eye and the chiasm is compressed, the result is vision loss in one eye. If the part after the chiasm is affected, visual loss on one side of the visual field occurs. Adjacent to the pituitary lies a part of the skull base known as the cavernous sinus. This contains a number of nerves that control the eye muscles. 70% of people with pituitary apoplexy experience double vision due to compression of one of the nerves. In half of these cases, the oculomotor nerve (the third cranial nerve), which controls a number of eye muscles, is affected. This leads to diagonal double vision and a dilated pupil. The fourth (trochlear) and sixth (abducens) cranial nerves are located in the same compartment and can cause diagonal or horizontal double vision, respectively. The oculomotor nerve is predominantly affected as it lies closest to the pituitary. The cavernous sinus also contains the carotid artery, which supplies blood to the brain; occasionally, compression of the artery can lead to one-sided weakness and other symptoms of stroke. Endocrine dysfunction The pituitary gland consists of two parts, the anterior (front) and posterior (back) pituitary. Both parts release hormones that control numerous other organs. In pituitary apoplexy, the main initial problem is a lack of secretion of adrenocorticotropic hormone (ACTH, corticotropin), which stimulates the secretion of cortisol by the adrenal gland. This occurs in 70% of those with pituitary apoplexy.
A sudden lack of cortisol in the body leads to a constellation of symptoms called "adrenal crisis" or "Addisonian crisis" (after a complication of Addison's disease, the main cause of adrenal dysfunction and low cortisol levels). The main problems are low blood pressure (particularly on standing), low blood sugars (which can lead to coma) and abdominal pain; the low blood pressure can be life-threatening and requires immediate medical attention. Hyponatremia, an unusually low level of sodium in the blood that may cause confusion and seizures, is found in 40% of cases. This may be caused by low cortisol levels or by inappropriate release of antidiuretic hormone (ADH) from the posterior pituitary. Several other hormonal deficiencies may develop in the subacute phase. 50% have a deficiency in thyroid-stimulating hormone (TSH), leading to hyposecretion of thyroid hormone by the thyroid gland and characteristic symptoms such as fatigue, weight gain, and cold intolerance. 75% develop a deficiency of gonadotropins (LH and FSH), which control the reproductive hormone glands. This leads to a disrupted menstrual cycle, infertility, and decreased libido. Causes Almost all cases of pituitary apoplexy arise from a pituitary adenoma, a benign tumor of the pituitary gland. In 80%, the patient has been previously unaware of this (although some will retrospectively report associated symptoms). It was previously thought that particular types of pituitary tumors were more prone to apoplexy than others, but this has not been confirmed. In absolute terms, only a very small proportion of pituitary tumors eventually undergoes apoplexy. In an analysis of incidentally found pituitary tumors, apoplexy occurred in 0.2% annually, but the risk was higher in tumors larger than 10 mm ("macroadenomas") and tumors that were growing more rapidly; in a meta-analysis, not all these associations achieved statistical significance. The majority of cases (60–80%) are not precipitated by a particular cause. A quarter have a history of high blood pressure, but this is a common problem in the general population, and it is not clear whether it significantly increases the risk of apoplexy. A number of cases have been reported in association with particular conditions and situations; it is uncertain whether these were in fact causative. Amongst reported associations are surgery (especially coronary artery bypass graft, where there are significant fluctuations in the blood pressure), disturbances in blood coagulation or medication that inhibits coagulation, radiation therapy to the pituitary, traumatic brain injury, pregnancy (during which the pituitary enlarges) and treatment with estrogens. Hormonal stimulation tests of the pituitary have been reported to provoke episodes. Treatment of prolactinomas (pituitary adenomas that secrete prolactin) with dopamine agonist drugs, as well as withdrawal of such treatment, has been reported to precipitate apoplexy. Hemorrhage from a Rathke's cleft cyst, a remnant of Rathke's pouch that normally regresses after embryological development, may cause symptoms that are indistinguishable from pituitary apoplexy. Pituitary apoplexy is regarded by some as distinct from Sheehan's syndrome, where the pituitary undergoes infarction as a result of prolonged very low blood pressure, particularly when caused by bleeding after childbirth. This condition usually occurs in the absence of a tumor. Others regard Sheehan's syndrome as a form of pituitary apoplexy.
Mechanism The pituitary gland is located in a recess in the skull base known as the sella turcica ("Turkish saddle", after its shape). It is attached to the hypothalamus, a part of the brain, by a stalk that also contains the blood vessels that supply the gland. It is unclear why pituitary tumors are five times more likely to bleed than other tumors in the brain. There are various proposed mechanisms by which a tumor can increase the risk of either infarction (insufficient blood supply leading to tissue dysfunction) or hemorrhage. The pituitary gland normally derives its blood supply from vessels that pass through the hypothalamus, but tumors develop a blood supply from the nearby inferior hypophyseal artery that generates a higher blood pressure, possibly accounting for the risk of bleeding. Tumors may also be more sensitive to fluctuations in blood pressure, and the blood vessels may show structural abnormalities that make them vulnerable to damage. It has been suggested that infarction alone causes milder symptoms than either hemorrhage or hemorrhagic infarction (infarction followed by hemorrhage into the damaged tissue). Larger tumors are more prone to bleeding, and more rapidly growing lesions (as evidenced by detection of increased levels of the protein PCNA) may also be at a higher risk of apoplexy. After an apoplexy, the pressure inside the sella turcica rises, and surrounding structures such as the optic nerve and the contents of the cavernous sinus are compressed. The raised pressure further impairs the blood supply to the pituitary hormone-producing tissue, leading to tissue death due to insufficient blood supply. Diagnosis It is recommended that a magnetic resonance imaging (MRI) scan of the pituitary gland be performed if the diagnosis is suspected; this has a sensitivity of over 90% for detecting pituitary apoplexy; it may demonstrate infarction (tissue damage due to a decreased blood supply) or hemorrhage. Different MRI sequences can be used to establish when the apoplexy occurred, and the predominant form of damage (hemorrhage or infarction). If MRI is not suitable (e.g. due to claustrophobia or the presence of metal-containing implants), a computed tomography (CT) scan may demonstrate abnormalities in the pituitary gland, although it is less reliable. In some instances, lumbar puncture may be required if there is a suspicion that the symptoms might be caused by other problems (meningitis or subarachnoid hemorrhage). Many pituitary tumors (25%) are found to have areas of hemorrhagic infarction on MRI scans, but apoplexy is not said to exist unless it is accompanied by symptoms. Lumbar puncture is the examination of the cerebrospinal fluid that envelops the brain and the spinal cord; the sample is obtained with a needle that is passed under local anesthetic into the spine. In pituitary apoplexy the results are typically normal, although abnormalities may be detected if blood from the pituitary has entered the subarachnoid space.
If there is remaining doubt about the possibility of subarachnoid hemorrhage (SAH), a magnetic resonance angiogram (MRI with a contrast agent) may be required to identify aneurysms of the brain blood vessels, the most common cause of SAH. Professional guidelines recommend that if pituitary apoplexy is suspected or confirmed, the minimal blood tests performed should include a complete blood count, urea (a measure of renal function, usually performed together with creatinine), electrolytes (sodium and potassium), liver function tests, routine coagulation testing, and a hormonal panel including IGF-1, growth hormone, prolactin, luteinizing hormone, follicle-stimulating hormone, thyroid-stimulating hormone, thyroid hormone, and either testosterone in men or estradiol in women. Visual field testing is recommended as soon as possible after diagnosis, as it quantifies the severity of any optic nerve involvement and may be required to decide on surgical treatment. Treatment The first priority in suspected or confirmed pituitary apoplexy is stabilization of the circulatory system. Cortisol deficiency can cause severe low blood pressure. Depending on the severity of the illness, admission to a high dependency unit (HDU) may be required. Treatment for acute adrenal insufficiency requires the administration of intravenous saline or dextrose solution; volumes of over two liters may be required in an adult. This is followed by the administration of hydrocortisone, which is pharmaceutical-grade cortisol, intravenously or into a muscle. The drug dexamethasone has similar properties, but its use is not recommended unless it is required to reduce swelling in the brain around the area of hemorrhage. Some are well enough not to require immediate cortisol replacement; in this case, blood levels of cortisol are determined at 9:00 AM (as cortisol levels vary over the day). A level below 550 nmol/l indicates a need for replacement. The decision on whether to surgically decompress the pituitary gland is complex and mainly dependent on the severity of visual loss and visual field defects. If visual acuity is severely reduced, there are large or worsening visual field defects, or the level of consciousness falls consistently, professional guidelines recommend that surgery is performed. Most commonly, operations on the pituitary gland are performed through transsphenoidal surgery. In this procedure, surgical instruments are passed through the nose towards the sphenoid bone, which is opened to give access to the cavity that contains the pituitary gland. Surgery is most likely to improve vision if there was some remaining vision before surgery, and if surgery is undertaken within a week of the onset of symptoms. Those with relatively mild visual field loss or double vision only may be managed conservatively, with close observation of the level of consciousness, visual fields, and results of routine blood tests. If there is any deterioration, or expected spontaneous improvement does not occur, surgical intervention may still be indicated. If the apoplexy occurred in a prolactin-secreting tumor, this may respond to dopamine agonist treatment. After recovery, people who have had pituitary apoplexy require follow-up by an endocrinologist to monitor for long-term consequences. MRI scans are performed 3–6 months after the initial episode and subsequently on an annual basis. If after surgery some tumor tissue remains, this may respond to medication, further surgery, or radiation therapy with a "gamma knife".
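As an illustrative rendering of the 9:00 AM cortisol rule above, the sketch below encodes the decision in Python. The 550 nmol/l threshold is the figure quoted in this article; the unit-conversion factor of roughly 27.59 nmol/l per µg/dl follows from cortisol's molar mass; the helper names are hypothetical.

# Minimal sketch of the 9:00 AM cortisol decision rule described above.
# The 550 nmol/l threshold is quoted in the text; the conversion factor
# (~27.59 nmol/l per ug/dl, from cortisol's molar mass) is standard.
NMOL_PER_UG_DL = 27.59
REPLACEMENT_THRESHOLD_NMOL_L = 550.0

def ug_dl_to_nmol_l(value_ug_dl: float) -> float:
    """Convert a cortisol level from ug/dl to nmol/l."""
    return value_ug_dl * NMOL_PER_UG_DL

def needs_cortisol_replacement(morning_cortisol_nmol_l: float) -> bool:
    """Return True if a 9:00 AM cortisol level indicates replacement."""
    return morning_cortisol_nmol_l < REPLACEMENT_THRESHOLD_NMOL_L

# Example: 12 ug/dl is about 331 nmol/l, below 550, so replacement
# would be indicated under this rule.
assert needs_cortisol_replacement(ug_dl_to_nmol_l(12.0))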
Prognosis In larger case series, the mortality was 1.6% overall. In the group of patients who were unwell enough to require surgery, the mortality was 1.9%, with no deaths in those who could be treated conservatively. After an episode of pituitary apoplexy, 80% of people develop hypopituitarism and require some form of hormone replacement therapy. The most common problem is growth hormone deficiency, which is often left untreated but may cause decreased muscle mass and strength, obesity and fatigue. 60–80% require hydrocortisone replacement (either permanently or when unwell), 50–60% need thyroid hormone replacement, and 60–80% of men require testosterone supplements. Finally, 10–25% develop diabetes insipidus, the inability to retain fluid in the kidneys due to a lack of the pituitary antidiuretic hormone. This may be treated with the drug desmopressin, which can be applied as a nose spray or taken by mouth. Epidemiology Pituitary apoplexy is rare. Even in people with a known pituitary tumor, only 0.6–10% experience apoplexy; the risk is higher in larger tumors. Based on extrapolations from existing data, one would expect 18 cases of pituitary apoplexy per one million people every year; the actual figure is probably lower. The average age at onset is 50; cases have been reported in people between 15 and 90 years old. Men are affected more commonly than women, with a male-to-female ratio of 1.6. The majority of the underlying tumors are "null cell" or nonsecretory tumors, which do not produce excessive amounts of hormones; this might explain why the tumor has often gone undetected prior to an episode of apoplexy. History The first case description of pituitary apoplexy has been attributed to the American neurologist Pearce Bailey in 1898. This was followed in 1905 by a further report from the German physician Bleibtreu. Surgery for pituitary apoplexy was described in 1925. Before the introduction of steroid replacement, the mortality from pituitary apoplexy approximated 50%. The name of the condition was coined in 1950 in a case series by physicians from Boston City Hospital and Harvard Medical School. The term "apoplexy" was applied as it referred to both necrosis and bleeding into pituitary tumors. References == External links ==
Cholestasis
Cholestasis is a condition where bile cannot flow from the liver to the duodenum. The two basic distinctions are an obstructive type of cholestasis, where there is a mechanical blockage in the duct system that can occur from a gallstone or malignancy, and metabolic types of cholestasis, which are disturbances in bile formation that can occur because of genetic defects or be acquired as a side effect of many medications. Classification is further divided into acute or chronic and extrahepatic or intrahepatic. Signs and symptoms The signs and symptoms of cholestasis vary according to the cause. In cases of sudden onset, the disease is likely to be acute, while the gradual appearance of symptoms suggests chronic pathology. In many cases, patients may experience pain in the abdominal area. Localization of pain to the upper right quadrant can be indicative of cholecystitis or choledocholithiasis, which can progress to cholestasis. Pruritus, or itching, is often present in many patients with cholestasis, and such patients may present with visible scratch marks. Pruritus is often misdiagnosed as a dermatological condition, especially in patients that do not have jaundice as an accompanying symptom. In a typical day, pruritus worsens as the day progresses, particularly during the evening; overnight, it dramatically improves. This cycle can be attributed to an increase in the concentration of biliary elements during the day due to food consumption, and a decline at night. Pruritus is mostly localized to the limbs, but can also be more generalized. The efficacy of naltrexone for cholestatic pruritus suggests involvement of the endogenous opioid system. Many patients may experience jaundice as a result of cholestasis. This is usually evident after physical examination as yellow pigment deposits on the skin, in the oral mucosa, or on the conjunctiva. Jaundice is an uncommon occurrence in intrahepatic (metabolic) cholestasis, but is common in obstructive cholestasis. The majority of patients with chronic cholestasis also experience fatigue. This is likely a result of defects in the corticotrophin hormone axis or other abnormalities with neurotransmission. Some patients may also have xanthomas, which are fat deposits that accumulate below the skin. These usually appear waxy and yellow, predominantly around the eyes and joints. This condition results from an accumulation of lipids within the blood. If a gallstone obstructs the flow of pancreatic secretions from the pancreas to the small intestine, it can lead to gallstone pancreatitis, with physical symptoms including nausea, vomiting, and abdominal pain. Bile is required for the absorption of fat-soluble vitamins. As such, patients with cholestasis may present with a deficiency in vitamins A, D, E, or K due to a decline in bile flow. Patients with cholestasis may also experience pale stool and dark urine. Causes Possible causes include:
pregnancy
androgens
birth control pills
antibiotics (such as TMP/SMX)
abdominal mass (e.g. cancer)
pediatric liver diseases
biliary trauma
congenital anomalies of the biliary tract
gallstones
biliary dyskinesia
acute hepatitis
cystic fibrosis
primary biliary cholangitis, an autoimmune disorder
primary sclerosing cholangitis, associated with inflammatory bowel disease
some drugs (e.g.
flucloxacillin and erythromycin)
secondary syphilis, albeit rarely
Drugs such as gold salts, nitrofurantoin, anabolic steroids, sulindac, chlorpromazine, erythromycin, prochlorperazine, cimetidine, estrogen, and statins can cause cholestasis and may result in damage to the liver. Drug-induced cholestasis Acute and chronic cholestasis can be caused by certain drugs or their metabolites. Drug-induced cholestasis (DIC) falls under drug-induced liver injury (DILI), specifically the cholestatic or mixed type. While some drugs (e.g., acetaminophen) are known to cause DILI in a predictable dose-dependent manner (intrinsic DILI), most cases of DILI are idiosyncratic, i.e., affecting only a minority of individuals taking the medication. Seventy-three percent of DIC cases can be attributed to a single prescription medication, most commonly antibiotics and antifungals, antidiabetics, anti-inflammatory and cardiovascular drugs, and psychotropic drugs. The exact pathomechanism may vary for different drugs and requires further elucidation. Typical symptoms of DIC include pruritus and jaundice, nausea, fatigue, and dark urine, which usually resolve after discontinuation of the offending medication. Clinically, DIC can manifest as acute bland (pure) cholestasis, acute cholestatic hepatitis, secondary sclerosing cholangitis (involving bile duct injury), or vanishing bile duct syndrome (loss of intrahepatic bile ducts). Bland cholestasis occurs when there is obstruction to bile flow in the absence of inflammation or biliary and hepatic injury, whereas these features are present in cholestatic hepatitis. Bland cholestasis is almost always caused by anabolic steroids or estrogen contraceptive use, while many drugs may cause cholestatic hepatitis, including penicillins, sulfonamides, rifampin, cephalosporins, fluoroquinolones, tetracyclines, and methimazole, among others. Antibiotics and antifungals that commonly cause DIC are penicillins, macrolides, trimethoprim/sulfamethoxazole, and tetracyclines. Due to its clavulanic acid component, the penicillin combination amoxicillin-clavulanate is the most common culprit of cholestatic liver injury. Flucloxacillin, which is commonly prescribed in the UK, Sweden, and Australia, is another penicillin frequently implicated in DIC. Cholestasis induced by penicillins usually resolves after withdrawal. Macrolides with cholestatic potential include erythromycin, clarithromycin, and azithromycin, and prognosis is likewise favorable with these drugs. Trimethoprim/sulfamethoxazole (via its sulfonamide component) is the fourth most common antibiotic responsible for DILI in North America. However, DIC is comparatively less common with low-dose tetracyclines like doxycycline. Other cholestatic antimicrobials include the antifungal terbinafine, notable for its potential to cause life-threatening cholestatic injury, and the quinolones (ciprofloxacin, levofloxacin), which have been linked to cholestatic hepatitis and vanishing bile duct syndrome. Among psychotropic drugs, chlorpromazine is known to cause cholestatic hepatitis. Cholestasis caused by tricyclic antidepressants (imipramine, amitriptyline) and by the antidepressant duloxetine has also been reported. Anti-inflammatory drugs with cholestatic potential include the immunosuppressant azathioprine, which has been reported to cause fatal cholestatic hepatitis, and the NSAID diclofenac. Rare causes of cholestasis The causes of cholestasis are diverse, and some feature more prominently than others.
Some rare causes include primary sclerosing cholangitis, primary biliary cholangitis, familial intrahepatic cholestasis, Alagille syndrome, sepsis, total parenteral nutrition-based cholestasis, benign recurrent intrahepatic cholestasis, biliary atresia, and intrahepatic cholestasis of pregnancy. Primary biliary cholangitis Chronic cholestasis occurs in primary biliary cholangitis (PBC). PBC is a progressive autoimmune liver disease in which small intrahepatic bile ducts are selectively destroyed, leading to cholestasis, biliary fibrosis, cirrhosis, and eventually liver failure that requires transplantation. Prevalence of PBC ranges from 19 to 402 cases/million depending on geographic location, with a 9:1 female preponderance and median ages at diagnosis of 68.5 years for females and 54.5 years for males. At diagnosis, 50% of PBC patients are asymptomatic, indicative of an early stage of disease, while another 50% report fatigue and daytime sleepiness. Other symptoms include pruritus and skin lesions, and in prolonged cholestasis, malabsorption and steatorrhea leading to fat-soluble vitamin deficiency. Disease progression is accompanied by intensifying portal hypertension and hepatosplenomegaly. Clinically, diagnosis generally requires a 1:40 or greater titer of anti-mitochondrial antibody (AMA) against PDC-E2 and elevated alkaline phosphatase persisting for six or more months. Ursodeoxycholic acid (UDCA) is an FDA-approved first-line treatment for PBC. At moderate doses, UDCA has been demonstrated to slow disease progression and improve transplant-free survival. A complete response is achieved in 25–30% of patients, and survival similar to that of the general population is expected in two-thirds of patients on UDCA. For the one-third who do not respond, obeticholic acid (OCA) is approved by the FDA as a second-line treatment. The precise etiology of PBC remains poorly understood, though a clearer picture is starting to emerge. A loss of immune tolerance is indicated by the presence of AMAs and autoreactive CD4+ and CD8+ T cells targeting the cholangiocytes that line the bile ducts. Cholangiocytes are normally responsible for 40% of bile flow, mostly through secretion of bicarbonate into bile via anion exchanger 2 (AE2) on their apical membrane. The resulting bicarbonate "umbrella" that forms over cholangiocytes provides protection from toxic bile salts. However, in PBC there is repression of AE2 activity due to upregulation of miR-506. This results in decreased biliary bicarbonate secretion and, consequently, cholestasis and injury to cholangiocytes by bile salts. Injury may induce cholangiocytes to undergo apoptosis, and during this process, the unique way in which cholangiocytes handle the degradation of PDC-E2 (the E2 subunit of the mitochondrial pyruvate dehydrogenase complex) may be a trigger for PBC. Specifically, PDC-E2 in apoptotic cholangiocytes undergoes a covalent modification that may render it recognizable to antibodies and thereby trigger a break in self-tolerance. The problem is compounded by cholangiocytes' peculiarly abundant expression of HLA-II and HLA-I, as well as adhesion and chemoattractant molecules, which aid in the recruitment of mononuclear immune cells. Both genetic and environmental factors probably contribute to PBC pathogenesis. Genetic predisposition is suggested by high concordance between identical twins, higher incidence among relatives, and a strong association of disease with certain HLA variants.
Disease is likely triggered in the genetically predisposed by some environmental factor, such as pollutants, xenobiotics (e.g., chemicals in makeup), diet, drugs, stress, and infectious agents. Urinary tract infection with E. coli is a particularly strong risk factor for PBC. A possible explanation is that E. coli possesses a PDC-E2 similar to that of humans, which could trigger autoimmunity via molecular mimicry. Primary sclerosing cholangitis Chronic cholestasis is a feature of primary sclerosing cholangitis (PSC). PSC is a rare and progressive cholestatic liver disease characterized by narrowing, fibrosis, and inflammation of intrahepatic or extrahepatic bile ducts, leading to reduced bile flow or formation (i.e., cholestasis). The pathogenesis of PSC remains unclear but probably involves a combination of environmental factors and genetic predisposition. Notably, 70–80% of patients with PSC have comorbid inflammatory bowel disease (e.g., ulcerative colitis or Crohn's colitis), suggesting there exists a link between the two. PSC predominantly affects males (60–70%) of 30–40 years of age. The disease has an incidence of 0.4–2.0 cases/100,000 and a prevalence of 16.2 cases/100,000, making it a rare disease. Nonetheless, PSC accounts for 6% of liver transplants in the US due to its eventual progression to end-stage liver disease, with a mean transplant-free survival of 21.3 years. Though 40–50% of patients are asymptomatic, commonly reported symptoms include abdominal pain in the right upper quadrant, pruritus, jaundice, fatigue, and fever. The most common signs are hepatomegaly and splenomegaly. Prolonged cholestasis in PSC may cause fat-soluble vitamin deficiency leading to osteoporosis. Diagnosis requires elevated serum alkaline phosphatase persisting for at least 6 months and the presence of bile duct strictures on cholangiogram. Unlike primary biliary cholangitis, PSC lacks a diagnostic autoantibody or reliable biomarker of disease progression. Although a liver biopsy is not required for diagnosis, the characteristic histological finding is concentric periductal fibrosis resembling onion skin. PSC is associated with an increased risk of several cancers, most notably a 400 times greater risk of cholangiocarcinoma compared to the general population. Patients with PSC also face an elevated risk of pancreatic and colorectal cancer. Therefore, regular screening is recommended. No drugs are currently approved for treating PSC specifically. Although commonly given, ursodeoxycholic acid at moderate doses failed to improve transplant-free survival in randomized controlled trials. Due to disease progression, 40% of patients eventually require liver transplantation, which has good survival rates (91% at 1 year, 82% at 5 years, and 74% at 10 years). However, the disease recurs in at least 25% of transplant recipients, particularly in those with IBD and an intact colon. Clinical trials are underway for several novel therapies, including obeticholic acid (a bile acid analogue), simtuzumab (a monoclonal antibody), and 24-norursodeoxycholic acid (a synthetic bile acid). Although the pathogenesis of PSC is poorly understood, three dominant theories have been proposed: 1) aberrant immune response, 2) increased intestinal permeability, and 3) dysbiosis of gut microbiota. The first theory involves immune-mediated damage to bile ducts by T cells. In PSC, cholangiocytes and hepatocytes display aberrant expression of adhesion molecules, which facilitate homing of intestinal T cells to the liver.
Additionally, intestinal microbiota may produce pathogen-associated molecular patterns that stimulate cholangiocytes and hepatic macrophages to produce proinflammatory cytokines, which promote recruitment of immune cells to the bile ducts, fibrosis, cholangiocyte apoptosis and senescence, and ultimately destruction of the bile ducts. In support of T cell involvement, certain human leukocyte antigen (HLA) variants are strongly associated with PSC risk. Further evidence for genetic predisposition includes the identification of 23 non-HLA susceptibility loci and a higher disease risk among siblings, though environmental factors appear to play a much greater role in pathogenesis. Another theory postulates that increased intestinal permeability contributes to PSC. Tight junctions, which normally maintain the integrity of the intestinal epithelium, may become disrupted in inflammation. Leaky tight junctions could allow commensal bacteria and toxins to enter the portal circulation and reach the liver, where they can trigger inflammation and fibrosis. The intestinal dysbiosis theory hypothesizes that as-yet-unidentified environmental triggers (e.g., diet, medication, inflammation) reduce microbiota diversity and/or alter the population of specific species. The resulting imbalance between primary and secondary bile acids may lead to PSC via the gut-liver axis. The primary bile acids cholic acid (CA) and chenodeoxycholic acid (CDCA) are synthesized in the liver and undergo conjugation before being released into the small intestine to aid digestion. In the distal ileum, 95% of these conjugated bile acids are actively reabsorbed via the apical sodium-dependent bile acid transporter (ASBT), but 5% enter the colon and are converted by gut microbes into deconjugated secondary bile acids, predominantly deoxycholic acid (DCA) and lithocholic acid (LCA). DCA and LCA are then reabsorbed into the portal circulation and reach the liver, where they serve as signaling molecules that maintain bile acid homeostasis. Specifically, DCA and LCA are potent agonists of the farnesoid X receptor (FXR) and Takeda G protein-coupled receptor 5 (TGR5), both of which mediate anti-inflammatory and cholangioprotective effects upon activation. On cholangiocytes, TGR5 activation induces CFTR to secrete chloride into the bile ducts, which then drives anion exchanger 2 to secrete bicarbonate into the bile canaliculi. Bicarbonate serves to protect the apical surface of cholangiocytes from damage by bile acids. On macrophages, activation of FXR and TGR5 inhibits NF-κB, thereby reducing production of proinflammatory cytokines. Therefore, it is hypothesized that a reduction in secondary bile acid production, as a result of dysbiosis, could lead to bile duct damage via decreased activation of FXR and TGR5. Indeed, lower levels of secondary bile acids have been found in PSC patients, but a causal relationship is yet to be confirmed. Familial intrahepatic cholestasis Familial intrahepatic cholestasis (FIH) is a group of disorders that lead to intrahepatic cholestasis in children. Most often, FIH occurs during the first year of life, with an incidence of 1/50,000 to 1/100,000. There are three different versions of FIH, each causing a different severity of jaundice. Typically, children exhibit recurrent jaundice episodes, which eventually become permanent. Diagnosis usually occurs by analyzing laboratory features, liver biopsy results, DNA/RNA sequences, and biliary lipid analysis. The definitive treatment for FIH is liver transplantation, which usually results in a high recovery rate.
Each type of FIH is the result of a different mutation. The three genes thought to be involved include ATP8B1, which encodes the FIC1 protein; ABCB11, which encodes the bile salt export pump (BSEP) protein; and ABCB4, which encodes the multidrug resistance 3 (MDR3) protein. BSEP and MDR3 are respectively responsible for transporting bile salt and phospholipid, two major constituents of bile, across the apical membrane of hepatocytes. Alagille syndrome Alagille syndrome is an autosomal dominant disorder that affects five systems: the liver, heart, skeleton, face, and eyes. In the early part of life (within the first three months), patients with Alagille syndrome exhibit conjugated hyperbilirubinemia, severe pruritus, and jaundice. Bile duct obliteration usually worsens over time, causing cirrhosis of the liver and eventual failure. Diagnosis is usually made using the classic criteria, by looking for changes associated with the five systems discussed earlier. As with FIH, the definitive treatment is liver transplantation. Almost all patients with Alagille syndrome have mutations of genes involved in the Notch signaling pathway. Most have a mutation of the JAG1 gene, while a small minority have a mutation of the NOTCH2 gene. Sepsis A variety of factors associated with sepsis may cause cholestasis. Typically, patients have conjugated hyperbilirubinemia and alkaline phosphatase (ALP) elevation, but not to extreme levels. Sepsis-induced cholestasis may occur due to increased serum lipopolysaccharide levels. Lipopolysaccharides can inhibit and down-regulate bile salt transporters in hepatocytes, thereby leading to cholestasis. As such, in the case of sepsis, cholestasis occurs not as a result of mechanical obstruction but rather of disruption of bile flow. Ischemic liver injury resulting from sepsis can also cause cholestasis. Importantly, jaundice is not indicative of cholestasis in all cases. Widespread hemolysis resulting from sepsis may release bilirubin, thereby overwhelming bilirubin reabsorption and excretion mechanisms. TPN-based cholestasis Total parenteral nutrition (TPN) is given to patients with intestinal failure or a variety of other gastrointestinal problems. Under normal settings, TPN causes a slight elevation of ALP levels; however, this alone does not indicate cholestasis. In the case of TPN-induced cholestasis, there is an excessive elevation of ALP, gamma-glutamyltransferase (GGT), and conjugated bilirubin. Without appropriate intervention, symptoms can worsen quickly, leading to liver cirrhosis and failure. Cholestasis arising from TPN has a diverse range of causes, including toxicity of TPN components, underlying disorders, or a lack of enteral nutrition. Without enteral food consumption, gallbladder function is greatly inhibited, leading to gallstone formation, subsequent blockage, and eventually cholestasis. Cholestasis resulting from TPN may also be a result of reduced bile flow from portal endotoxins. With TPN, there is a reduction in gastrointestinal motility and immunity, with an increase in permeability. These changes facilitate bacterial growth and increase the amount of circulating endotoxin. Moreover, given that patients using TPN often have underlying health problems, drugs with known liver toxicity being used concurrently may also cause cholestasis. Lipids in TPN may cause cholestasis and liver damage by overwhelming clearance mechanisms.
Intravenous glucose can also cause cholestasis as a result of increased fatty acid synthesis and decreased breakdown, which facilitates the accumulation of fats. Intrahepatic cholestasis of pregnancy (obstetric cholestasis) Intrahepatic cholestasis of pregnancy (ICP) is an acute cause of cholestasis that manifests most commonly in the third trimester of pregnancy. It affects 0.5–1.5% of pregnancies in Europe and the US and up to 28% in women of Mapuche ethnicity in Chile. ICP is characterized by severe pruritus and elevated serum levels of bile acids as well as transaminases and alkaline phosphatase. These signs and symptoms resolve on their own shortly after delivery, though they may reappear in subsequent pregnancies for 45–70% of women. In the treatment of ICP, current evidence suggests ursodeoxycholic acid (UDCA), a minor secondary bile acid in humans, is the most effective drug for reducing pruritus and improving liver function. The etiology of ICP is multifactorial and likely involves hormonal, genetic, and environmental factors. Several observations suggest estrogen plays a major role: ICP begins in the third trimester, when estrogen levels are highest; resolves after estrogen levels return to normal post-delivery; and occurs with higher incidence in multiple pregnancies, where estrogen levels are more elevated than usual. Although estrogen's exact pathomechanism in ICP remains unclear, several explanations have been offered. Estrogen may induce a decrease in the fluidity of the hepatic sinusoidal membrane, leading to a decrease in the activity of the basolateral Na+/K+-ATPase. A weaker Na+ gradient results in diminished sodium-dependent uptake of bile acids from venous blood into hepatocytes by the sodium/bile acid cotransporter. More recent evidence suggests that estrogen promotes cholestasis via its metabolite estradiol-17-β-D-glucuronide (E2). E2 secreted into the canaliculi by MRP2 was found to repress the transcription of the bile salt export pump (BSEP), the apical ABC transporter responsible for exporting monoanionic conjugated bile acids from hepatocytes into the bile canaliculi. E2 was also found to upregulate miR-148a, which represses expression of the pregnane X receptor (PXR). PXR is a nuclear receptor in hepatocytes that senses intracellular bile acid concentrations and regulates gene expression accordingly to increase bile efflux. Genetic predisposition for ICP is suggested by familial and regional clustering of cases. Several studies have implicated heterozygous mutations of the genes ABCB11 and ABCB4 in ICP, which respectively encode the canalicular transport proteins BSEP and multidrug resistance protein 3 (MDR3). MDR3 is responsible for exporting phosphatidylcholine, the major lipid component of bile, into the bile canaliculi, where it forms micelles with bile salts to prevent the latter from damaging the luminal epithelium. Bile flow requires canalicular secretion of both bile salts and phosphatidylcholine. MDR3 mutations are an established predisposing factor, found in 16% of ICP cases. More recently, studies have demonstrated involvement of BSEP mutations in at least 5% of cases. The V444A polymorphism of ABCB11 in particular may lead to ICP by causing a reduction in hepatic BSEP expression and, consequently, decreased bile salt export.
Other notable mutations identified in ICP patients include ones in the farnesoid X receptor (FXR), a nuclear receptor in hepatocytes which activates transcription of MDR3 and BSEP upon binding intracellular bile acids, thereby increasing canalicular bile efflux. Mechanism Bile is secreted by the liver to aid in the digestion of fats. Bile formation begins in bile canaliculi that form between two adjacent surfaces of liver cells (hepatocytes), similar to the terminal branches of a tree. The canaliculi join each other to form larger and larger structures, sometimes referred to as the canals of Hering, which themselves join to form small bile ductules that have an epithelial surface. The ductules join to form bile ducts that eventually form either the right main hepatic duct that drains the right lobe of the liver, or the left main hepatic duct draining the left lobe of the liver. The two ducts join to form the common hepatic duct, which in turn joins the cystic duct from the gall bladder, to give the common bile duct. This duct then enters the duodenum at the ampulla of Vater. In cholestasis, bile accumulates in the hepatic parenchyma. One of the most common causes of extrahepatic, or obstructive, cholestasis is biliary obstruction, better known as choledocholithiasis, where gallstones become stuck in the common bile duct. Mechanisms of drug-induced cholestasis Drugs may induce cholestasis by interfering with 1) hepatic transporters, 2) bile canaliculi dynamics, and/or 3) cell structure and protein localization. Hepatic transporters are essential for maintaining enterohepatic bile flow and bile acid homeostasis. Therefore, their direct inhibition by certain drugs may lead to cholestasis. Relevant transporters implicated include BSEP, MDR3, MRP2-4, and NTCP. Cholestasis can result from competitive inhibition of BSEP by several drugs, including cyclosporine A, rifampicin, nefazodone, glibenclamide, troglitazone, and bosentan. BSEP is the main transporter in hepatocytes responsible for exporting bile salts across the apical membrane into the bile canaliculi. Therefore, inhibiting BSEP should cause cytotoxic bile salts to accumulate in hepatocytes, leading to liver injury and impaired bile flow. Indeed, there is a strong association between BSEP inhibition and cholestasis in humans, and BSEP inhibitors have been shown to induce cholestasis in vitro. However, hepatocytes have safety mechanisms that can compensate for impaired canalicular bile efflux. In response to cholestasis, MRP3 and MRP4 on the basolateral membrane are upregulated to allow efflux of accumulated bile salts into portal blood. Similarly, MRP2 can accommodate additional bile flow across the apical membrane in cholestatic conditions. These compensatory mechanisms explain why some BSEP inhibitors do not cause cholestasis. In contrast, drugs that inhibit both MRP3/4 and BSEP (e.g.,
rifampicin, troglitazone, bosentan) pose a greater risk of cholestasis. MDR3 is another key canalicular efflux transporter that is the target of inhibition by certain drugs. MDR3 secretes phosphatidylcholine into the bile canaliculi, where it forms micelles with bile salts to dissolve cholesterol as well as to protect hepatocytes and cholangiocytes from damage by bile salts. MDR3 inhibition leads to low phospholipid concentrations in bile, which damages cholangiocytes and leads to cholestasis. Antifungal azoles such as itraconazole have been shown to inhibit both MDR3 and BSEP, thus giving them higher cholestatic potential. Other MDR3-inhibiting drugs are chlorpromazine, imipramine, haloperidol, ketoconazole, saquinavir, clotrimazole, ritonavir, and troglitazone. Another target for inhibition, MRP2, is an apical efflux transporter that mainly exports bilirubin glucuronide and glutathione into bile. However, MRP2 is also the preferential route of export for certain sulfated conjugated bile acids (taurolithocholic acid and glycolithocholic acid), so its inhibition could contribute to cholestasis. On the hepatocyte basolateral membrane, the Na+-taurocholate cotransporting peptide (NTCP) is the major transporter of conjugated bile acids. Enterohepatic bile flow requires the concerted activity of both NTCP and BSEP, which form the major route by which bile acids enter and exit hepatocytes respectively. Therefore, NTCP inhibitors, such as cyclosporine A, ketoconazole, propranolol, furosemide, rifamycin, saquinavir, and ritonavir, should theoretically cause cholestasis by decreasing hepatocyte bile acid uptake. However, no relationship has been found between NTCP inhibition and DIC risk, possibly because the basolateral sodium-independent OATPs can partially compensate for bile salt uptake. Therefore, NTCP inhibition alone seems to be insufficient to cause cholestasis. Indeed, the cholestatic effect of cyclosporine A relies on its inhibition of both NTCP and the compensatory OATP1B1. In addition to direct inhibition, drugs can also induce cholestasis by promoting downregulation and internalization of transporters. For example, cyclosporine A in rats was shown to induce BSEP internalization in addition to inhibition. Furthermore, human hepatocytes showed decreased expression of BSEP mRNA and protein following long-term exposure to metformin and tamoxifen, neither of which is a direct BSEP inhibitor. Bile canaliculi dynamics refers to the contractile motion of the bile canaliculi required for bile flow. Cholestasis can result when drugs constrict or dilate the bile canaliculi. Constrictors include chlorpromazine, nefazodone, troglitazone, perhexiline, metformin, and cyclosporine A. These drugs activate the RhoA/Rho-kinase pathway, which inhibits myosin light chain phosphatase (MLCP) and, in turn, increases myosin light chain phosphorylation by myosin light chain kinase (MLCK), leading to constriction of the bile canaliculi. Drugs that dilate the canaliculi work by inhibiting MLCK or RhoA/Rho-kinase and include diclofenac, bosentan, entacapone, tacrolimus, cimetidine, and flucloxacillin. Constriction is more serious than dilation, as the former causes irreversible cell damage and death. Minor mechanisms that may contribute to DIC include aberrant paracellular permeability, membrane fluidity, and transporter localization. Tight junctions normally seal the gap between hepatocytes to prevent bile from diffusing out of the canaliculi. If a drug causes internalization of hepatocyte tight junctions, as rifampicin does in mice, bile flow may become impaired due to paracellular leakage.
Membrane fluidity can affect bile flow by regulating the activity of the hepatocyte Na+/K+-ATPase, which maintains the inwardly-directed Na+ gradient that drives bile acid uptake by NTCP. In rats, cyclosporine A was found to increase canalicular membrane fluidity and consequently reduce bile secretion. Bile flow was similarly reduced in rats as a result of alterations to basolateral membrane fluidity by ethinylestradiol and chlorpromazine. Lastly, some agents (rifampicin and 17β-estradiol) were shown to hinder proper localization of hepatocyte transporters by interfering with the microtubules required for their insertion into plasma membranes. Diagnosis Cholestasis can be suspected when there is an elevation of both 5'-nucleotidase and ALP. With a few exceptions, the optimal test for cholestasis would be elevation of serum bile acid levels. However, this is not normally available in most clinical settings, necessitating the use of other biomarkers. If 5'-nucleotidase and ALP are elevated, imaging studies such as computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used to differentiate intrahepatic cholestasis from extrahepatic cholestasis. Additional imaging, laboratory testing, and biopsies might be conducted to identify the cause and extent of cholestasis. Biomarkers ALP enzymes are found abundantly within the bile canaliculi and bile. If a duct is obstructed, tight junctions permit migration of the ALP enzymes until the polarity is reversed and the enzymes are found on the whole of the cell membrane. Serum ALP levels exceeding 2–3 times the upper baseline value may be due to a variety of liver diseases. However, an elevation that exceeds 10 times the upper baseline limit is strongly indicative of either intrahepatic or extrahepatic cholestasis and requires further investigation. Cholestasis can be differentiated from other liver disorders by measuring the proportion of ALP to serum aminotransferases, where a greater proportion indicates a higher likelihood of cholestasis. Typically, aminotransferase enzymes are localized within hepatocytes and leak across the membrane upon damage. However, measurement of serum aminotransferase levels alone is not a good marker for determining cholestasis. In up to a third of patients, ALP levels may be elevated without the presence of cholestasis. As such, other biomarkers should be measured to corroborate findings. Measurement of 5'-nucleotidase levels may be used to identify cholestasis in conjunction with ALP. Levels of ALP may rise within a few hours of cholestasis onset, while 5'-nucleotidase levels may take a few days. Many laboratories cannot measure 5'-nucleotidase, so GGT may be measured in some cases. Abnormal GGT elevation may be attributable to a variety of factors; as such, GGT elevations lack the necessary specificity to be a useful confirmatory test for cholestasis. Importantly, conjugated hyperbilirubinemia is present in 80% of patients with extrahepatic cholestasis and 50% of patients with intrahepatic cholestasis. Given that many patients with hyperbilirubinemia may not have cholestasis, the measurement of bilirubin levels is not a good diagnostic tool for identifying cholestasis. In later stages of cholestasis, aspartate transaminase (AST), alanine transaminase (ALT) and unconjugated bilirubin may be elevated due to hepatocyte damage as a secondary effect of cholestasis.
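The proportion of ALP to serum aminotransferases mentioned above is often formalized, in the drug-induced liver injury literature, as the R value; the following sketch is illustrative, with cutoffs (R < 2 cholestatic, R > 5 hepatocellular, otherwise mixed) taken from commonly cited consensus criteria rather than from this article.

# Minimal sketch of the R value used to classify liver injury patterns.
# R = (ALT / ALT upper limit of normal) / (ALP / ALP upper limit of normal).
# Cutoffs follow commonly cited consensus criteria; illustrative only.
def r_value(alt: float, alt_uln: float, alp: float, alp_uln: float) -> float:
    """Ratio of ALT elevation to ALP elevation, each relative to its ULN."""
    return (alt / alt_uln) / (alp / alp_uln)

def injury_pattern(r: float) -> str:
    """Classify the biochemical pattern from the R value."""
    if r < 2:
        return "cholestatic"
    if r > 5:
        return "hepatocellular"
    return "mixed"

# Example: ALT 80 U/l (ULN 40) and ALP 600 U/l (ULN 120) give
# R = (80/40) / (600/120) = 2/5 = 0.4, a cholestatic pattern.
assert injury_pattern(r_value(80, 40, 600, 120)) == "cholestatic"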
Imaging After determination using biomarkers, a variety of imaging studies may be used to differentiate between intrahepatic and extrahepatic cholestasis. Ultrasound is often used to identify the location of the obstruction, but it is frequently insufficient for determining the level of biliary obstruction or its cause because bowel gas can interfere with readings. CT scans are not impacted by bowel gas and may also be more suitable for overweight patients. Typically, the cause of cholestasis and the magnitude of obstruction are better diagnosed with CT than with ultrasound. MRI scans provide similar information to CT scans but are more prone to artifacts from breathing and other body motion. Although CT, ultrasound, and MRI may help differentiate intrahepatic and extrahepatic cholestasis, the cause and extent of obstruction are best determined by cholangiography. Potential causes of extrahepatic cholestasis include obstructions within the duct wall, compression from outside the duct, and obstructions within the duct lumen. Endoscopic retrograde cholangiography may be useful to visualize the extrahepatic biliary ducts. In case of anatomical anomalies, or if endoscopic retrograde cholangiography is unsuccessful, percutaneous transhepatic cholangiography may be used. CT- or MRI-based cholangiography may also be useful, particularly in cases where additional interventions are not anticipated. Histopathology There is significant overlap between cholestasis of hepatocellular origin and cholestasis caused by bile duct obstruction. Because of this, obstructive cholestasis can only be diagnosed after finding additional diagnostic signs that are specific to obstructive changes in the bile ducts or portal tracts. In both non-obstructive and obstructive cholestasis, there is an accumulation of substances that are normally secreted in the bile, as well as degeneration of hepatocytes. The most significant feature from a histopathological perspective is pigmentation resulting from the retention of bilirubin. Under a microscope, individual hepatocytes have a brownish-green stippled appearance within the cytoplasm, representing bile that cannot exit the cell. Pigmentation can involve regurgitation of bile into the sinusoidal spaces, where it is phagocytosed by Kupffer cells, accumulation of bilirubin within hepatocytes, and inspissated bile in the canaliculi. Most pigmentation and canalicular dilation occurs in the perivenular region of the hepatic lobule. In chronic cases, this may extend into the periportal area. Hepatocyte necrosis is not a significant feature of cholestasis; however, apoptosis may often occur. Under the microscope, hepatocytes in the perivenular zone appear enlarged and flocculent. In cases of obstructive cholestasis, bile infarcts may be produced during the degeneration and necrosis of hepatocytes. Bile infarcts are marked by a large amount of pigmented tissue surrounded by a ring of necrotic hepatocytes. Cholestasis is often marked by cholate stasis, a set of changes that occur in the periportal hepatocytes. Cholate stasis is more common in obstructive cholestasis than in non-obstructive cholestasis. During the cholate stasis process, hepatocytes first undergo swelling and then degeneration. 
Under the microscope, this is evident as a lucent cell periphery and enlarged cytoplasm around the nucleus. Oftentimes, Mallory bodies may also be found in the periportal areas. Because retained bile contains copper, stains for copper-associated protein can be used to visualize bile accumulation in the hepatocytes. Cholestatic liver cell rosettes may occur in children with chronic cholestasis. Histologically, these appear as two or more hepatocytes arranged in a pseudotubular fashion encircling a segment of dilated bile canaliculus. Children may also have giant hepatocytes present, which are characterized by a pigmented spongy appearance. Giant cell formation is likely caused by the detergent properties of bile salts causing a loss of the lateral membrane and joining of hepatocytes. In the case of Alagille syndrome, hepatocyte degeneration is uncommon; however, there may be a small amount of apoptosis and enlarged hepatocytes. In non-obstructive cholestasis, changes to the portal tracts are unlikely, although they may occur in some unique situations. In neutrophilic pericholangitis, neutrophils surround the portal ducts and obstruct them. Neutrophilic pericholangitis has a variety of causes, including endotoxemia and Hodgkins disease, among others. Cholangitis lenta can also cause changes to the portal tracts. This occurs during chronic cases of sepsis and results in dilation of the bile ductules. Cholangitis lenta is likely a result of cessation of bile secretion and bile flow through the ductules. Back pressure created by obstructive cholestasis can cause dilation of the bile duct and biliary epithelial cell proliferation, mainly in the portal tracts. Portal tract edema may also occur as a result of bile retention, as well as periductular infiltration of neutrophils. If the obstruction is left untreated, it can lead to a bacterial infection of the biliary tree. Infection is mostly caused by coliforms and enterococci and is evident from a large migration of neutrophils to the duct lumina. This can result in the formation of a cholangitic abscess. With treatment, many of the histological features of cholestasis can be corrected once the obstruction is removed. If the obstruction is not promptly resolved, portal tract fibrosis can result. Even with treatment, some fibrosis may remain. Management Surgical management In cases involving obstructive cholestasis, the primary treatment is biliary decompression. If stones are present in the common bile duct, an endoscopic sphincterotomy can be conducted either with or without placing a stent. To do this, the endoscopist places a duodenoscope in the second portion of the duodenum. A catheter and guidewire are advanced into the common bile duct. A sphincterotome can then enlarge the ampulla of Vater and release the stones. Afterwards, the endoscopist can place a stent in the common bile duct to soften any remaining stones and allow for bile drainage. If needed, a balloon catheter is available to remove any leftover stones. If the stones are too large to remove with these methods, surgical removal may be needed. Patients can also elect to undergo cholecystectomy to prevent future cases of choledocholithiasis. In case of narrowing of the common bile duct, a stent can be placed after dilating the constriction to resolve the obstruction. The treatment approach for patients with obstructive cholestasis resulting from cancer varies based on whether they are a suitable candidate for surgery. 
In most cases, surgical intervention is the best option. For patients in whom complete removal of the biliary obstruction is not possible, a combination of a gastric bypass and hepaticojejunostomy can be used. This can reestablish bile flow into the small intestine, thereby bypassing the blockage. In cases where a patient is not a suitable candidate for surgery, an endoscopic stent can be placed. If this is not possible or successful, a percutaneous transhepatic cholangiogram and percutaneous biliary drainage can be used to visualize the blockage and re-establish bile flow. Medical management A significant portion of patients with cholestasis (80%) will experience pruritus at some point during their disease. Pruritus can severely decrease a patients quality of life, as it can impact sleep, concentration, work ability, and mood. Many treatments exist, but how effective each option is depends on the patient and their condition. Assessment using a scale, such as a visual analogue scale or the 5-D itch scale, can be useful in identifying an appropriate treatment. Possible treatment options include antihistamines, ursodeoxycholic acid, and phenobarbital. Nalfurafine hydrochloride can also be used to treat pruritus caused by chronic liver disease and was recently approved in Japan for this purpose. Bile acid binding resins like cholestyramine are the most common treatment. Side effects of this treatment are limited and include constipation and bloating. Other commonly used treatments include rifampin, naloxone, and sertraline. In cholestatic liver disease, as bilirubin concentration builds up, a deficiency of fat-soluble vitamins may also occur. To manage this, supplementation with vitamins A, D, E, and K is recommended to maintain appropriate vitamin levels. Cholestatic liver disease can affect lipids and possibly lead to dyslipidemia, which may present a risk for coronary artery disease. Statins and fibrates are generally used as lipid-lowering therapy to treat patients with cholestatic liver disease. For intrahepatic cholestasis in pregnant women, S-adenosylmethionine has proven to be an effective treatment. Dexamethasone is a viable treatment for the symptom of intense itching. Research directions Primary sclerosing cholangitis (PSC) is one of the most common cholestatic liver diseases, yet treatment options remain limited. Treatment for primary biliary cholangitis (PBC) is usually ursodeoxycholic acid (UDCA); with no other suitable alternative, this poses a problem for patients who do not respond to UDCA. However, with advancing technology in molecular biochemistry and a deeper understanding of bile acid regulation, novel pharmacological treatments have been considered. For patients with primary biliary cholangitis, current guidelines recommend about 13–15 mg/kg of ursodeoxycholic acid as first-line treatment. This drug stimulates biliary bicarbonate secretion, improves survival without liver transplantation, and is very well tolerated, making it an ideal treatment. However, around 40% of patients with primary biliary cholangitis do not respond to UDCA. Obeticholic acid was approved by the US Food and Drug Administration for PBC in 2016 after trials found beneficial improvements in liver function in half of patients with an inadequate response to UDCA. Primary sclerosing cholangitis is a challenging liver disease, as treatment options are limited. 
There is still uncertainty about the efficacy of ursodeoxycholic acid for PSC, and researchers offer conflicting recommendations. One study found that UDCA improved biochemical markers but did not improve death rates or transplant-free survival. Peroxisome receptor agonists Important regulators of bile acid homeostasis are the alpha and delta isoforms of the peroxisome proliferator-activated receptor (PPARα, PPARδ). PPARα promotes bile acid excretion and lowers inflammation by acting on nuclear transcription factors. Well-known agonists are the fibrates, and in clinical trials there was a significant biochemical response in most patients. A combination therapy with bezafibrate showed remarkable biochemical improvement, with 67% of patients normalizing their alkaline phosphatase levels. Another study of 48 patients with PBC found that a combination of bezafibrate and UDCA decreased alkaline phosphatase in all patients. Further, the study found that those treated had marked relief of pruritus. However, fibrates are associated with a number of adverse effects, including arthritis, leg edema, polydipsia, and myalgias. Elevations of creatinine and creatine phosphokinase were also found with long-term use. Farnesoid X-receptor agonist A novel treatment target is the farnesoid X receptor (FXR), which is responsible for regulating bile acid homeostasis. An agonist of this nuclear hormone receptor is seen as a possible treatment, as it can downregulate bile acid synthesis and reabsorption. Further, the farnesoid X receptor is partly responsible for lipid and glucose homeostasis, as well as pathogen recognition. An agonist for the farnesoid X receptor can therefore create an anti-cholestatic environment that minimizes the effect of toxic bile acids on the liver. A candidate agonist for the farnesoid X receptor is obeticholic acid (OCA), which experiments have shown to have very strong affinity. A concern, though, is that despite benefits in the biochemical pathways, pruritus was more intense and prevalent than with placebo. A titration strategy may help to mitigate pruritus, but FDA approval for obeticholic acid is currently unlikely. In fact, in February 2018, the FDA gave a black box warning for OCA. A recent study did find that if the drug is given with UDCA, the incidence of cirrhosis and liver transplantation decreases. Another target being investigated is all-trans retinoic acid (ATRA), an activator of the retinoid X receptor. In vitro and animal studies found that ATRA lowered bile acid levels and decreased hepatic inflammation. 24-norursodeoxycholic acid A recent scientific insight that has opened a new treatment option for cholestasis is that a hydrophilic environment and bicarbonate production protect hepatocytes from bile acids. The novel agent norUDCA (24-norursodeoxycholic acid) can be passively absorbed by cholangiocytes. This leads to bicarbonate production and an environment that is less toxic. Mouse models have produced promising results with norUDCA, with the drug showing antiproliferative and anti-inflammatory properties. A recent clinical trial found that norUDCA produced significant dose-dependent reductions in ALP levels. This makes norUDCA a promising candidate for further study in the treatment of cholestasis. Immunomodulatory treatments In PBC, the liver is infiltrated by T cells and B cells that contribute to disease progression. Therefore, some treatments aim to target the antigens of these immune cells. 
The monoclonal antibody rituximab targets the CD20 antigen on B cells and is already used in a wide array of other rheumatologic diseases. In an open-label study, six patients who were unresponsive to UDCA had improvement in ALP levels after rituximab infusions. However, the efficacy of rituximab is still uncertain and awaits further studies and trials. PBC can also lead to higher levels of interleukin 12 and interleukin 23. This motivated researchers to look at the viability of ustekinumab, a monoclonal antibody targeted against interleukin 12 and 23. A trial found, though, that it did not significantly improve serum ALP levels. The researchers were further criticized for placing patients at risk by allowing them to progress to advanced disease stages, where immunomodulatory therapies may no longer be an option. Gut microbiome The gut microbiome, which regulates both the innate and adaptive immune systems, is implicated in several chronic liver diseases. This can result in abnormal immunological development and an accumulation of primary bile acids. Based on this, a bile acid–intestinal microbiota–cholestasis triangle is thought to be involved in the pathogenesis of PBC and PSC. After all, bile acids do modulate the gut microbiota, and a disturbance here can result in the development and progression of cholestasis. This has prompted researchers to manipulate the microbiota with antibiotics and probiotics in search of new treatment options. Antibiotics examined for PSC include vancomycin, which has been extensively studied and reviewed. Use of the drug is associated with a significant decrease in ALP levels, although the long-term clinical benefit is unknown. As biochemistry technology becomes more advanced, promising targets have appeared, prompting numerous studies and trials to evaluate their feasibility. Fibrates, FXR agonists, and norUDCA are all innovative therapies for cholestasis. See also Jaundice Liver function tests Lipoprotein-X - an abnormal low density lipoprotein found in cholestasis Intrahepatic cholestasis of pregnancy Progressive familial intrahepatic cholestasis Feathery degeneration - a histopathologic finding associated with cholestasis References == External links ==
Cerebral infarction
A cerebral infarction is the pathologic process that results in an area of necrotic tissue in the brain (cerebral infarct). It is caused by disrupted blood supply (ischemia) and restricted oxygen supply (hypoxia), most commonly due to thromboembolism, and manifests clinically as ischemic stroke. In response to ischemia, the brain degenerates by the process of liquefactive necrosis. Classification There are various classification systems for cerebral infarction, some of which are described below. The Oxford Community Stroke Project classification (OCSP, also known as the Bamford or Oxford classification) relies primarily on the initial symptoms. Based on the extent of the symptoms, the stroke episode is classified as total anterior circulation infarct (TACI), partial anterior circulation infarct (PACI), lacunar infarct (LACI) or posterior circulation infarct (POCI). These four entities predict the extent of the stroke, the area of the brain affected, the underlying cause, and the prognosis. The TOAST (Trial of Org 10172 in Acute Stroke Treatment) classification is based on clinical symptoms as well as results of further investigations; on this basis, a stroke is classified as being due to (1) thrombosis or embolism due to atherosclerosis of a large artery, (2) embolism of cardiac origin, (3) occlusion of a small blood vessel, (4) other determined cause, or (5) undetermined cause (two possible causes, no cause identified, or incomplete investigation). Symptoms Symptoms of cerebral infarction are determined by the parts of the brain affected. If the infarct is located in the primary motor cortex, contralateral hemiparesis occurs. With brainstem localization, brainstem syndromes are typical: Wallenbergs syndrome, Webers syndrome, Millard–Gubler syndrome, Benedikt syndrome or others. Infarctions result in weakness and loss of sensation on the opposite side of the body. Physical examination of the head area may reveal abnormal pupil dilation, abnormal light reaction and lack of eye movement on the opposite side. If the infarction occurs in the left side of the brain, speech may be slurred. Reflexes may be exaggerated as well. Risk factors Major risk factors for cerebral infarction are generally the same as for atherosclerosis. These include high blood pressure, diabetes mellitus, tobacco smoking, obesity, and dyslipidemia. The American Heart Association/American Stroke Association (AHA/ASA) recommends controlling these risk factors in order to prevent stroke. The AHA/ASA guidelines also provide information on how to prevent stroke if someone has more specific concerns, such as sickle-cell disease or pregnancy. It is also possible to calculate the risk of stroke in the next decade based on information gathered through the Framingham Heart Study. Pathophysiology Cerebral infarction is caused by a disruption to blood supply that is severe enough and long enough in duration to result in tissue death. The disruption to blood supply can come from many causes, including: Thrombosis (obstruction of a blood vessel by a blood clot forming locally) Embolism (obstruction due to an embolus from elsewhere in the body) Systemic hypoperfusion (general decrease in blood supply, e.g., in shock) Cerebral venous sinus thrombosis. 
Unusual causes such as gas embolism from rapid ascents in scuba diving. Even in cases where there is a complete blockage to blood flow of a major blood vessel supplying the brain, there is typically some blood flow to the downstream tissue through collateral blood vessels, and the tissue can typically survive for some length of time that is dependent upon the level of remaining blood flow. If blood flow is reduced enough, oxygen delivery can decrease enough to cause the tissue to undergo the ischemic cascade. The ischemic cascade leads to energy failure that prevents neurons from sufficiently moving ions through active transport, which leads the neurons to first cease firing, then depolarize, leading to ion imbalances that cause fluid inflows and cellular edema, and finally undergo a complex chain of events that can lead to cell death through one or more pathways. Diagnosis Computed tomography (CT) and MRI scanning will show the damaged area in the brain, showing that the symptoms were not caused by a tumor, subdural hematoma or other brain disorder. The blockage will also appear on an angiogram. In people who die of cerebral infarction, an autopsy may give a clue about the duration from infarction onset until the time of death. Treatment In the last decade, similar to myocardial infarction treatment, thrombolytic drugs were introduced in the therapy of cerebral infarction. The use of intravenous rtPA therapy can be advocated in patients who arrive at a stroke unit and can be fully evaluated within 3 hours of onset. If cerebral infarction is caused by a thrombus occluding blood flow to an artery supplying the brain, definitive therapy is aimed at removing the blockage by breaking the clot down (thrombolysis), or by removing it mechanically (thrombectomy). The more rapidly blood flow is restored to the brain, the fewer brain cells die. In increasing numbers of primary stroke centers, pharmacologic thrombolysis with the drug tissue plasminogen activator (tPA) is used to dissolve the clot and unblock the artery. Another intervention for acute cerebral ischaemia is removal of the offending thrombus directly. This is accomplished by inserting a catheter into the femoral artery, directing it into the cerebral circulation, and deploying a corkscrew-like device to ensnare the clot, which is then withdrawn from the body. Mechanical embolectomy devices have been demonstrated effective at restoring blood flow in patients who were unable to receive thrombolytic drugs or for whom the drugs were ineffective, though no differences have been found between newer and older versions of the devices. The devices have only been tested on patients treated with mechanical clot embolectomy within eight hours of the onset of symptoms. Angioplasty and stenting have begun to be examined as possible viable options in the treatment of acute cerebral ischaemia. In a systematic review of six uncontrolled, single-center trials, involving a total of 300 patients, of intra-cranial stenting in symptomatic intracranial arterial stenosis, the rate of technical success (reduction of stenosis to <50%) ranged from 90 to 98%, and the rate of major peri-procedural complications ranged from 4 to 10%. The rates of restenosis and/or stroke following the treatment were also favorable. These data suggest that a large, randomized controlled trial is needed to more completely evaluate the possible therapeutic advantage of this treatment. 
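As a rough illustration of the treatment windows cited above (intravenous rtPA advocated within 3 hours of onset; the embolectomy devices tested within eight hours), the sketch below encodes only the time arithmetic. It is a hedged toy, not clinical decision support: real eligibility depends on imaging and many clinical criteria not modeled here.

```python
# Minimal sketch of the time windows mentioned in the text. Window lengths
# are taken from the passage above; everything else is an assumption made
# purely for illustration.

from datetime import datetime, timedelta

RTPA_WINDOW = timedelta(hours=3)         # window cited for IV rtPA evaluation
EMBOLECTOMY_WINDOW = timedelta(hours=8)  # window used in the device studies

def candidate_therapies(onset: datetime, now: datetime) -> list[str]:
    elapsed = now - onset
    options = []
    if elapsed <= RTPA_WINDOW:
        options.append("evaluate for IV rtPA")
    if elapsed <= EMBOLECTOMY_WINDOW:
        options.append("evaluate for mechanical embolectomy")
    return options or ["outside both windows; supportive or alternative care"]

onset = datetime(2024, 1, 1, 9, 0)
print(candidate_therapies(onset, datetime(2024, 1, 1, 11, 30)))
```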
If studies show carotid stenosis, and the patient has residual function on the affected side, carotid endarterectomy (surgical removal of the stenosis) may decrease the risk of recurrence if performed rapidly after cerebral infarction. Carotid endarterectomy is also indicated to decrease the risk of cerebral infarction for symptomatic carotid stenosis (>70 to 80% reduction in diameter). In tissue losses that are not immediately fatal, the best course of action is to make every effort to restore function through physical therapy, cognitive therapy, occupational therapy, speech therapy and exercise. Permissive hypertension - allowing for higher than normal blood pressures in the acute phase of cerebral infarction - can be used to encourage perfusion to the penumbra. References == External links ==
Barretts esophagus
Barretts esophagus is a condition in which there is an abnormal (metaplastic) change in the mucosal cells lining the lower portion of the esophagus, from stratified squamous epithelium to simple columnar epithelium with interspersed goblet cells that are normally present only in the small intestine and large intestine. This change is considered to be a premalignant condition because it is associated with a high incidence of further transition to esophageal adenocarcinoma, an often-deadly cancer. The main cause of Barretts esophagus is thought to be an adaptation to chronic acid exposure from reflux esophagitis. Barretts esophagus is diagnosed by endoscopy: observing the characteristic appearance of this condition by direct inspection of the lower esophagus, followed by microscopic examination of tissue from the affected area obtained from biopsy. The cells of Barretts esophagus are classified into four categories: nondysplastic, low-grade dysplasia, high-grade dysplasia, and frank carcinoma. High-grade dysplasia and early stages of adenocarcinoma may be treated by endoscopic resection or radiofrequency ablation. Later stages of adenocarcinoma may be treated with surgical resection or palliation. Those with nondysplastic or low-grade dysplasia are managed by annual observation with endoscopy, or treatment with radiofrequency ablation. In high-grade dysplasia, the risk of developing cancer might be 10% per patient-year or greater. The incidence of esophageal adenocarcinoma has increased substantially in the Western world in recent years. The condition is found in 5–15% of patients who seek medical care for heartburn (gastroesophageal reflux disease, or GERD), although a large subgroup of patients with Barretts esophagus are asymptomatic. The condition is named after surgeon Norman Barrett (1903–1979) even though the condition was originally described by Philip Rowland Allison in 1946. Signs and symptoms The change from normal to premalignant cells that indicates Barretts esophagus does not itself cause any particular symptoms. Barretts esophagus, however, is associated with these symptoms: frequent and longstanding heartburn trouble swallowing (dysphagia) vomiting blood (hematemesis) pain under the sternum where the esophagus meets the stomach pain when swallowing (odynophagia), which can lead to unintentional weight loss The risk of developing Barretts esophagus is increased by central obesity (vs. peripheral obesity). The exact mechanism is unclear. The difference in distribution of fat among men (more central) and women (more peripheral) may explain the increased risk in males. Pathophysiology Barretts esophagus occurs due to chronic inflammation. The principal cause of chronic inflammation is gastroesophageal reflux disease, GERD (UK: GORD). In this disease, acidic stomach contents, bile, and small intestinal and pancreatic secretions cause damage to the cells of the lower esophagus. In turn, this provides an advantage for cells more resistant to these noxious stimuli, in particular HOXA13-expressing stem cells that are characterised by distal (intestinal) characteristics and outcompete the normal squamous cells. This mechanism also explains the selection of HER2/neu (also called ERBB2)-overexpressing (lineage-addicted) cancer cells during the process of carcinogenesis, and the efficacy of targeted therapy against the Her-2 receptor with trastuzumab (Herceptin) in the treatment of adenocarcinomas at the gastroesophageal junction. 
Researchers are unable to predict who with heartburn will develop Barretts esophagus. While no relationship exists between the severity of heartburn and the development of Barretts esophagus, a relationship does exist between chronic heartburn and the development of Barretts esophagus. Sometimes, people with Barretts esophagus have no heartburn symptoms at all. Some anecdotal evidence indicates those with the eating disorder bulimia are more likely to develop Barretts esophagus because bulimia can cause severe acid reflux, and because purging also floods the esophagus with acid. However, a link between bulimia and Barretts esophagus remains unproven. During episodes of reflux, bile acids enter the esophagus, and this may be an important factor in carcinogenesis. Individuals with GERD and BE are exposed to high concentrations of deoxycholic acid, which has cytotoxic effects and can cause DNA damage. Diagnosis Both macroscopic (from endoscopy) and microscopic positive findings are required to make a diagnosis. Barretts esophagus is marked by the presence of columnar epithelia in the lower esophagus, replacing the normal squamous cell epithelium—an example of metaplasia. The secretory columnar epithelium may be more able to withstand the erosive action of the gastric secretions; however, this metaplasia confers an increased risk of adenocarcinoma. Screening Screening endoscopy is recommended among males over the age of 60 who have reflux symptoms that are of long duration and not controllable with treatment. Among those not expected to live more than 5 years, screening is not recommended. The Seattle protocol is commonly used in endoscopy to obtain endoscopic biopsies for screening, taken every 1 to 2 cm from the gastroesophageal junction. Since the COVID-19 pandemic, the NHS in Scotland has used a swallowable sponge (Cytosponge) in hospitals to collect cell samples for diagnosis. Preliminary studies have shown this diagnostic test to be a useful tool for screening people with heartburn symptoms and improving diagnosis. Intestinal metaplasia The presence of goblet cells, called intestinal metaplasia, is necessary to make a diagnosis of Barretts esophagus. This frequently occurs in the presence of other metaplastic columnar cells, but only the presence of goblet cells is diagnostic. The metaplasia is grossly visible through a gastroscope, but biopsy specimens must be examined under a microscope to determine whether cells are gastric or colonic in nature. Colonic metaplasia is usually identified by finding goblet cells in the epithelium and is necessary for the true diagnosis. Many histologic mimics of Barretts esophagus are known (e.g. goblet cells occurring in the transitional epithelium of normal esophageal submucosal gland ducts, and "pseudogoblet cells" in which abundant foveolar [gastric] type mucin simulates the acid mucin of true goblet cells). Assessment of the relationship to submucosal glands and transitional-type epithelium, with examination of multiple levels through the tissue, may allow the pathologist to reliably distinguish between goblet cells of submucosal gland ducts and true Barretts esophagus (specialized columnar metaplasia). The histochemical stain Alcian blue pH 2.5 is also frequently used to distinguish true intestinal-type mucins from their histologic mimics. Recently, immunohistochemical analysis with antibodies to CDX-2 (specific for mid and hindgut intestinal derivation) has also been used to identify true intestinal-type metaplastic cells. 
The protein AGR2 is elevated in Barretts esophagus and can be used as a biomarker for distinguishing Barrett epithelium from normal esophageal epithelium. The presence of intestinal metaplasia in Barretts esophagus represents a marker for the progression of metaplasia towards dysplasia and eventually adenocarcinoma. This factor, combined with differing immunohistochemical expression of p53, Her2 and p16, points to two different genetic pathways that likely progress to dysplasia in Barretts esophagus. Intestinal metaplastic cells can also be positive for CK7+/CK20-. Epithelial dysplasia After the initial diagnosis of Barretts esophagus is rendered, affected persons undergo annual surveillance to detect changes that indicate higher risk of progression to cancer: development of epithelial dysplasia (or "intraepithelial neoplasia"). Among all metaplastic lesions, around 8% were associated with dysplasia. In particular, a recent study demonstrated that dysplastic lesions were located mainly in the posterior wall of the esophagus. Considerable variability is seen in assessment for dysplasia among pathologists. Recently, gastroenterology and GI pathology societies have recommended that any diagnosis of high-grade dysplasia in Barretts esophagus be confirmed by at least two fellowship-trained GI pathologists prior to definitive treatment for patients. For more accuracy and reproducibility, it is also recommended to follow international classification systems, such as the "Vienna classification" of gastrointestinal epithelial neoplasia (2000). Management Many people with Barretts esophagus do not have dysplasia. Medical societies recommend that if a patient has Barretts esophagus, and if the past two endoscopy and biopsy examinations have confirmed the absence of dysplasia, then the patient should not have another endoscopy within three years. Endoscopic surveillance of people with Barretts esophagus is often recommended, although little direct evidence supports this practice. Treatment options for high-grade dysplasia include surgical removal of the esophagus (esophagectomy) or endoscopic treatments such as endoscopic mucosal resection or ablation (destruction). The risk of malignancy is highest in the United States in Caucasian men over fifty years of age with more than five years of symptoms. Current recommendations include routine endoscopy and biopsy (looking for dysplastic changes). Although in the past physicians have taken a watchful waiting approach, newly published research supports consideration of intervention for Barretts esophagus. Balloon-based radiofrequency ablation, invented by Ganz, Stern, and Zelickson in 1999, is a new treatment modality for the treatment of Barretts esophagus and dysplasia and has been the subject of numerous published clinical trials. The findings demonstrate radiofrequency ablation is at least 90% effective in completely clearing Barretts esophagus and dysplasia, with durability of up to five years and a favorable safety profile. Anti-reflux surgery has not been proven to prevent esophageal cancer. However, there are indications that proton pump inhibitors are effective in limiting the progression of esophageal cancer. Laser treatment is used in severe dysplasia, while overt malignancy may require surgery, radiation therapy, or systemic chemotherapy. 
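The surveillance guidance above can be expressed as a small rule, shown in the sketch below. The function and interval values are simplifications assumed for illustration (no repeat endoscopy within three years after two negative examinations, annual observation otherwise), not a clinical protocol.

```python
# Illustrative encoding of the surveillance guidance summarized in the text.
# Interval values and the handling of dysplasia grades are assumptions for
# illustration only.

def next_endoscopy_years(consecutive_negative_exams: int, dysplasia: str | None) -> float:
    if dysplasia == "high-grade":
        return 0.0   # confirm and treat now (resection or ablation), not surveillance
    if dysplasia == "low-grade":
        return 1.0   # annual observation (or consider ablation)
    if consecutive_negative_exams >= 2:
        return 3.0   # two negative exams: no endoscopy within three years
    return 1.0       # otherwise continue annual surveillance

print(next_endoscopy_years(consecutive_negative_exams=2, dysplasia=None))  # 3.0
```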
A recent five-year randomized controlled trial has shown that photodynamic therapy using photofrin is statistically more effective in eliminating dysplastic growth areas than sole use of a proton pump inhibitor. There is presently no reliable way to determine which patients with Barretts esophagus will go on to develop esophageal cancer, although a recent study found that the detection of three different genetic abnormalities was associated with as much as a 79% chance of developing cancer in six years. Endoscopic mucosal resection has also been evaluated as a management technique. Additionally, an operation known as a Nissen fundoplication can reduce the reflux of acid from the stomach into the esophagus. In a variety of studies, nonsteroidal anti-inflammatory drugs (NSAIDs) such as low-dose aspirin (75-300 mg/day) have shown evidence of preventing esophageal cancer in people with Barretts esophagus. Prognosis Barretts esophagus is a premalignant condition, not a malignant one. Its malignant sequela, esophagogastric junctional adenocarcinoma, has a mortality rate of over 85%. The risk of developing esophageal adenocarcinoma in people who have Barretts esophagus has been estimated to be 6–7 per 1000 person-years, but a cohort study of 11,028 patients from Denmark published in 2011 showed an incidence of only 1.2 per 1000 person-years (5.1 per 1000 person-years in patients with dysplasia, 1.0 per 1000 person-years in patients without dysplasia). The relative risk of esophageal adenocarcinoma is about ten times higher in those with Barretts esophagus than in the general population. Most patients with esophageal carcinoma survive less than one year. Epidemiology The incidence in the United States among Caucasian men is eight times the rate among Caucasian women and five times that among African American men. Overall, the male to female ratio of Barretts esophagus is 10:1. Several studies have estimated the prevalence of Barretts esophagus in the general population to be 1.3% to 1.6% in two European populations (Italian and Swedish), and 3.6% in a Korean population. History The condition is named after Australian thoracic surgeon Norman Barrett (1903–1979), who in 1950 argued that "ulcers are found below the squamocolumnar junction ... represent gastric ulcers within a pouch of stomach … drawn up by scar tissue into the mediastinum ... representing an example of a congenital short esophagus". In contrast, Philip Rowland Allison and Alan Johnstone argued that the condition related to the "esophagus lined with gastric mucous membrane and not intra-thoracic stomach as Barrett mistakenly believed." Philip Allison, cardiothoracic surgeon and Chair of Surgery at the University of Oxford, suggested "calling the chronic peptic ulcer crater of the esophagus a Barretts ulcer", but added this name did not imply agreement with "Barretts description of an esophagus lined with gastric mucous membrane as stomach." Bani-Hani KE and Bani-Hani KR argue that the terminology and definition of Barretts esophagus are surrounded by extraordinary confusion, unlike most other medical conditions, and that "[t]he use of the eponym 'Barretts' to describe [the condition] is not justified from a historical point of view." 
Bani-Hani KE and Bani-Hani KR investigated the historical aspects of the condition and found that Norman Barrett had contributed little to the core concept of the condition in comparison with other investigators, particularly Philip Allison. A further association with adenocarcinoma was made in 1975. References External links Barretts esophagus at National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) Barretts esophagus Video Overview Archived 2012-05-10 at the Wayback Machine and Barretts esophagus Health Information at Mayo Clinic
Neutropenic enterocolitis
Neutropenic enterocolitis is inflammation of the cecum (part of the large intestine) that may be associated with infection. It is particularly associated with neutropenia, a low level of neutrophil granulocytes (the most common form of white blood cells) in the blood. Signs and symptoms Signs and symptoms of typhlitis may include diarrhea, a distended abdomen, fever, chills, nausea, vomiting, and abdominal pain or tenderness. Cause The condition is usually caused by Gram-positive enteric commensal bacteria of the gut (gut flora). Clostridium difficile is a species of Gram-positive bacteria that commonly causes severe diarrhea and other intestinal diseases when competing bacteria are wiped out by antibiotics, causing pseudomembranous colitis, whereas Clostridium septicum is responsible for most cases of neutropenic enterocolitis. Typhlitis most commonly occurs in immunocompromised patients, such as those undergoing chemotherapy, patients with AIDS, kidney transplant patients, or the elderly. Diagnosis Typhlitis is diagnosed with a CT scan showing thickening of the cecum and "fat stranding". Treatment Typhlitis is a medical emergency and requires prompt management. Untreated typhlitis has a poor prognosis, particularly if associated with pneumatosis intestinalis (air in the bowel wall) and/or bowel perforation, and has significant morbidity unless promptly recognized and aggressively treated. Successful treatment hinges on: Early diagnosis provided by a high index of suspicion and the use of CT scanning Nonoperative treatment for uncomplicated cases Empiric antibiotics, particularly if the patient is neutropenic or at other risk of infection. In rare cases of prolonged neutropenia and complications such as bowel perforation, neutrophil transfusions can be considered but have not been studied in a randomized controlled trial. Elective right hemicolectomy may be used to prevent recurrence but is generally not recommended: "The authors have found nonoperative treatment highly effective in patients who do not manifest signs of peritonitis, perforation, gastrointestinal hemorrhage, or clinical deterioration. Recurrent typhlitis was frequent after conservative therapy (recurrence rate, 67 percent), however," as based on studies from the 1980s. Prognosis Inflammation can spread to other parts of the gut in patients with typhlitis. The condition can also cause the cecum to become distended and can cut off its blood supply. This and other factors can result in necrosis and perforation of the bowel, which can cause peritonitis and sepsis. Historically, the mortality rate for typhlitis was as high as 50%, mostly because it is frequently associated with bowel perforation. More recent studies have demonstrated better outcomes with prompt medical management, generally with resolution of symptoms upon neutrophil recovery, without death. See also Colitis References == External links ==
Hemorrhagic gastroenteritis
Hemorrhagic gastroenteritis (HGE) is a disease of dogs characterized by sudden vomiting and bloody diarrhea. The symptoms are usually severe, and HGE can be fatal if not treated. HGE is most common in young adult dogs of any breed, but especially small dogs such as the Toy Poodle and Miniature Schnauzer. It is not contagious. Cause The cause is uncertain. Suspected causes include abnormal responses to bacteria or bacterial endotoxin, or a hypersensitivity to food. Pathologically, there is an increase in the permeability of the intestinal lining and a leakage of blood and proteins into the bowel. Clostridium perfringens has been found in large numbers in the intestines of many affected dogs. Clinical signs Profuse vomiting is usually the first symptom, followed by depression and bloody diarrhea with a foul odor. Severe hypovolemia (low blood volume) is one of the hallmarks of the disease, and severe hemoconcentration (concentrated blood) is considered necessary for diagnosis. The progression of HGE is so rapid that hypovolemic shock and death can occur within 24 hours. Disseminated intravascular coagulation (DIC) is a possible sequela of HGE and can cause severe damage. Diagnosis Clinical signs of HGE and canine parvovirus (CPV) are similar enough that they need to be differentiated. CPV may or may not be accompanied by a high or low white blood cell count, and there may be a low hematocrit. A negative fecal parvovirus test is sometimes necessary to completely rule out CPV. Other potential causes of vomiting and diarrhea include gastrointestinal parasites, bacterial infections including E. coli, Campylobacter, or Salmonella, protozoal infections such as coccidiosis or giardiasis, and gastrointestinal cancer. Treatment The most important aspect of treatment of HGE is intravenous fluid therapy to replace lost fluid volume. The vomiting and diarrhea are treated symptomatically and will usually resolve after one to two days. Antibiotics targeting C. perfringens are also used, but recent studies have shown no difference in outcome or survival rate between patients given antibiotics and those not given them when no signs of sepsis were present. In other words, if there are no signs of sepsis, antibiotics will not hasten recovery or improve outcome. With prompt, aggressive treatment, the prognosis is good. There is less than 10 percent mortality with treatment, but 10 to 15 percent of cases will recur. See also Gastroenteritis == References ==
Wheal
Wheal may refer to: Wheals, a type of skin lesion Brad Wheal (born 1996), British cricketer Donald James Wheal (1931–2008), British television writer, novelist and non-fiction writer David John Wheal, Australian businessman "The Wheal", a 1987 song by Coil See also All pages with titles containing Wheal Mining in Cornwall and Devon, includes mines whose names include Wheal Wheel (disambiguation)
Fasciculation
A fasciculation, or muscle twitch, is a spontaneous, involuntary muscle contraction and relaxation involving fine muscle fibers. Fasciculations are common, with as many as 70% of people experiencing them. They can be benign or associated with more serious conditions. When no cause or pathology is identified, they are diagnosed as benign fasciculation syndrome. Diagnosis The most effective way to detect fasciculations may be surface electromyography (EMG). Surface EMG is more sensitive than needle electromyography and clinical observation in the detection of fasciculation in people with amyotrophic lateral sclerosis. Deeper areas of contraction can be detected by electromyography testing, though fasciculations can happen in any skeletal muscle in the body. Fasciculations arise as a result of spontaneous depolarization of a lower motor neuron leading to the synchronous contraction of all the skeletal muscle fibers within a single motor unit. An example of normal spontaneous depolarization is the constant contraction of cardiac muscle, causing the heart to beat. Usually, intentional movement of the involved muscle causes fasciculations to cease immediately, but they may return once the muscle is at rest again. Fasciculations must also be distinguished from tics. Small twitches of the upper or lower eyelid, for example, are not tics, because they do not involve a whole muscle; rather, they are twitches of a few muscle fibre bundles and are not suppressible. Causes Fasciculations have a variety of causes, the majority of which are benign, but they can also be due to disease of the motor neurons. They are encountered by up to 70% of all healthy people, though for most, they are quite infrequent. In some cases, the presence of fasciculations can be annoying and interfere with quality of life. If a neurological examination is otherwise normal and EMG testing does not indicate any additional pathology, a diagnosis of benign fasciculation syndrome is usually made. Risk factors Risk factors for benign fasciculations are age, stress, fatigue, and strenuous exercise. Fasciculations can be caused by anxiety, caffeine or alcohol, and thyroid disease. Magnesium deficiency is a common cause of fasciculation. Other factors may include the use of anticholinergic drugs over long periods. In particular, these include ethanolamines such as diphenhydramine (brand names Benadryl, Dimedrol, Daedalon and Nytol), used as an antihistamine and sedative, and dimenhydrinate (brand names Dramamine, Driminate, Gravol, Gravamin, Vomex, and Vertirosan) for nausea and motion sickness. Persons with benign fasciculation syndrome (BFS) may experience paraesthesia (especially numbness) shortly after taking such medication; fasciculation episodes begin as the medication wears off. Stimulants can cause fasciculations directly. These include caffeine, pseudoephedrine (Sudafed), amphetamines, and the asthma bronchodilator salbutamol (brand names Proventil, Combivent, Ventolin). Medications used to treat attention deficit disorder (ADHD) often contain stimulants as well, and are common causes of benign fasciculations. Since asthma and ADHD are much more serious than the fasciculations themselves, this side effect may have to be tolerated by the patient after consulting a physician or pharmacist. The depolarizing neuromuscular blocker succinylcholine causes fasciculations. 
It is a normal side effect of the drugs administration and can be prevented with a small dose of a nondepolarizing neuromuscular blocker prior to the administration of succinylcholine, often 10% of a nondepolarizing NMBs induction dose. Even if a drug such as caffeine causes fasciculations, that does not necessarily mean it is the only cause. For example, a very slight magnesium deficiency by itself (see above) might not be enough for fasciculations to occur, but when combined with caffeine, the two factors together could be enough. Treatment Reducing stress and anxiety is a useful treatment for benign fasciculations. There is no proven treatment for fasciculations in people with ALS. Among patients with ALS, fasciculation frequency is not associated with the duration of ALS and is independent of the degree of limb weakness and limb atrophy. No prediction of ALS disease duration can be made based on fasciculation frequency alone. Epidemiology Fasciculations are observed more often in males, and clinicians are overrepresented in study samples. See also Blepharospasm Carnitine palmitoyltransferase II deficiency Myokymia References == External links ==
Colorectal polyp
A colorectal polyp is a polyp (fleshy growth) occurring on the lining of the colon or rectum. Untreated colorectal polyps can develop into colorectal cancer. Colorectal polyps are often classified by their behaviour (i.e. benign vs. malignant) or cause (e.g. as a consequence of inflammatory bowel disease). They may be benign (e.g. hyperplastic polyp), pre-malignant (e.g. tubular adenoma) or malignant (e.g. colorectal adenocarcinoma). Signs and symptoms Colorectal polyps are not usually associated with symptoms. When they occur, symptoms include bloody stools; changes in frequency or consistency of stools (such as a week or more of constipation or diarrhoea); and fatigue arising from blood loss. Anemia arising from iron deficiency can also present due to chronic blood loss, even in the absence of bloody stools. Another symptom may be increased mucus production, especially in polyps involving villous adenomas. Copious production of mucus causes loss of potassium that can occasionally result in symptomatic hypokalemia. Occasionally, if a polyp is big enough to cause a bowel obstruction, there may be nausea, vomiting and severe constipation. Structure Polyps are either pedunculated (attached to the intestinal wall by a stalk) or sessile (grow directly from the wall). In addition to the gross appearance categorization, they are further divided by their histologic appearance as tubular adenoma, composed of tubular glands; villous adenoma, with long finger-like projections on the surface; and tubulovillous adenoma, which has features of both. Genetics Hereditary syndromes causing increased colorectal polyp formation include: Familial adenomatous polyposis (FAP) Hereditary nonpolyposis colorectal cancer Peutz–Jeghers syndrome Juvenile polyposis syndrome Several genes have been associated with polyposis, such as GREM1, MSH3, MLH3, NTHL1, RNF43 and RPS20. Familial adenomatous polyposis Familial adenomatous polyposis (FAP) is a form of hereditary cancer syndrome involving the APC gene located on chromosome 5q21. The syndrome was first described in 1863 by Virchow in a 15-year-old boy with multiple polyps in his colon. The syndrome involves development of multiple polyps at an early age, and those left untreated will all eventually develop cancer. The syndrome is autosomal dominant and fully penetrant in those with the mutation. 10% to 20% of patients have a negative family history and acquire the syndrome from a spontaneous germline mutation. The average age of a newly diagnosed patient is 29, and the average age at which colorectal cancer is discovered is 39. It is recommended that those affected undergo colorectal cancer screening at a younger age; treatment and prevention are surgical, with removal of affected tissue. Hereditary nonpolyposis colorectal cancer (Lynch syndrome) Hereditary nonpolyposis colorectal cancer (HNPCC, also known as Lynch syndrome) is a hereditary colorectal cancer syndrome. It is the most common hereditary form of colorectal cancer in the United States and accounts for about 3% of all colorectal cancer cases. It was first recognized by Aldred S. Warthin in 1885 at the University of Michigan. It was later further studied by Henry Lynch, who recognized an autosomal dominant transmission pattern in which those affected have relatively early onset of cancer (mean age 44 years), greater occurrence of proximal lesions, mostly mucinous or poorly differentiated adenocarcinoma, a greater number of synchronous and metachronous cancers, and good outcome after surgical intervention. 
The Amsterdam criteria were initially used to define Lynch syndrome before the underlying genetic mechanism had been worked out. The criteria required three affected family members, all first-degree relatives of one another, with colorectal cancer spanning at least two generations, with at least one affected person younger than 50 years of age at diagnosis. The Amsterdam criteria proved too restrictive and were later expanded to include cancers of endometrial, ovarian, gastric, pancreatic, small intestinal, ureteral, and renal pelvic origin. The increased risk of cancer seen in patients with the syndrome is associated with dysfunction of the DNA mismatch repair mechanism. Molecular biologists have linked the syndrome to specific genes such as hMSH2, hMLH1, hMSH6, and hPMS2. Peutz–Jeghers syndrome Peutz–Jeghers syndrome is an autosomal dominant syndrome that presents with hamartomatous polyps, which are disorganized growths of the tissues of the intestinal tract, and hyperpigmentation of the lining of the mouth, lips and fingers. The syndrome was first noted in 1896 by Hutchinson, later separately described by Peutz, and then again in 1940 by Jeghers. The syndrome is associated with malfunction of the serine-threonine kinase 11 (STK11) gene and carries a 2% to 10% increased risk of developing cancer of the intestinal tract. The syndrome also causes increased risk of extraintestinal cancer, such as that involving the breast, ovary, cervix, fallopian tubes, thyroid, lung, gallbladder, bile ducts, pancreas, and testicles. The polyps often bleed and may cause obstruction that would require surgery. Any polyp larger than 1.5 cm needs removal, and patients should be monitored closely and screened every 2 years for malignancy. Juvenile polyposis syndrome Juvenile polyposis syndrome is an autosomal dominant syndrome characterized by increased risk of cancer of the intestinal tract and extraintestinal cancer. It often presents with bleeding and obstruction of the intestinal tract along with low serum albumin due to protein loss in the intestine. The syndrome is linked to malfunction of SMAD4, a tumor suppressor gene, which is seen in 50% of cases. Individuals with multiple juvenile polyps have at least a 10% chance of developing malignancy and should undergo abdominal colectomy with ileorectal anastomosis and close monitoring of the rectum via endoscopy. Individuals with few juvenile polyps should undergo endoscopic polypectomy. Types Colorectal polyps can broadly be classified as follows: hyperplastic, neoplastic (adenomatous and malignant), hamartomatous, and inflammatory. Hyperplastic polyp Most hyperplastic polyps are found in the distal colon and rectum. They have no malignant potential, which means that they are no more likely than normal tissue to eventually become a cancer. Neoplastic polyp A neoplasm is a tissue whose cells have lost normal differentiation. They can be either benign or malignant growths. The malignant growths can have either primary or secondary causes. Adenomatous polyps are considered precursors to cancer, and cancer becomes invasive once malignant cells cross the muscularis mucosa and invade the tissue below. Any cellular changes seen above the lamina propria are considered non-invasive and are labeled atypia or dysplasia. Any invasive carcinoma that has penetrated the muscularis mucosa has the potential for lymph node metastasis and local recurrence, which will require more aggressive and extensive resection. 
The Haggitt criteria are used for classification of polyps containing cancer and are based on the depth of penetration. The Haggitt criteria define levels 0 through 4, with all invasive carcinomas arising in sessile polyps by definition classified as level 4. Level 0: Cancer does not penetrate through the muscularis mucosa. Level 1: Cancer penetrates through the muscularis mucosa and invades the submucosa below but is limited to the head of the polyp. Level 2: Cancer invades through with involvement of the neck of the polyp. Level 3: Cancer invades through with involvement of any part of the stalk. Level 4: Cancer invades through the submucosa below the stalk of the polyp but above the muscularis propria of the bowel wall. A sketch encoding this classification follows the hamartomatous polyp section below. Adenomas Neoplastic polyps of the bowel are often benign; these benign neoplasms are called adenomas. An adenoma is a tumor of glandular tissue that has not (yet) gained the properties of cancer. The common adenomas of the colon (colorectal adenoma) are the tubular, tubulovillous, villous, and sessile serrated (SSA). A large majority (65% to 80%) are of the benign tubular type, with 10% to 25% being tubulovillous, and villous being the most rare at 5% to 10%. As is evident from their name, sessile serrated and traditional serrated adenomas (TSAs) have a serrated appearance and can be difficult to distinguish microscopically from hyperplastic polyps. Making this distinction is important, however, since SSAs and TSAs have the potential to become cancers, while hyperplastic polyps do not. The villous subdivision is associated with the highest malignant potential because villous adenomas generally have the largest surface area. (This is because the villi are projections into the lumen and hence have a bigger surface area.) However, villous adenomas are no more likely than tubular or tubulovillous adenomas to become cancerous if their sizes are all the same. Hamartomatous polyp Hamartomatous polyps are tumour-like growths found in organs as a result of faulty development. They are normally made up of a mixture of tissues. They contain mucus-filled glands, with retention cysts, abundant connective tissue, and chronic cellular infiltration of eosinophils. They grow at the normal rate of the host tissue and rarely cause problems such as compression. A common example of a hamartomatous lesion is a strawberry naevus. Hamartomatous polyps are often found by chance, occurring in syndromes such as Peutz–Jeghers syndrome or juvenile polyposis syndrome. Peutz–Jeghers syndrome is associated with polyps of the GI tract and also increased pigmentation around the lips, genitalia, buccal mucosa, feet and hands. People are often diagnosed with Peutz–Jeghers after presenting at around the age of 9 with an intussusception. The polyps themselves carry little malignant potential, but because of potential coexisting adenomas there is a 15% chance of colonic malignancy. Juvenile polyps are hamartomatous polyps that often become evident before twenty years of age, but can also be seen in adults. They are usually solitary polyps found in the rectum, most commonly presenting with rectal bleeding. Juvenile polyposis syndrome is characterised by the presence of more than five polyps in the colon or rectum, or numerous juvenile polyps throughout the gastrointestinal tract, or any number of juvenile polyps in any person with a family history of juvenile polyposis. People with juvenile polyposis have an increased risk of colon cancer. 
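Returning to the Haggitt classification described above, the sketch below maps depth of invasion to a Haggitt level, including the rule that invasive carcinoma in a sessile polyp is by definition level 4. The depth labels are names assumed for illustration; real pathologic staging is more nuanced than a lookup table.

```python
# Illustrative mapping of the Haggitt levels described in the text.
# Depth-label strings are assumptions for illustration only.

HAGGITT_LEVELS = {
    "above_muscularis_mucosa": 0,  # carcinoma not through the muscularis mucosa
    "head_of_polyp": 1,            # invades submucosa, limited to the head
    "neck_of_polyp": 2,            # involvement of the neck
    "stalk_of_polyp": 3,           # involvement of any part of the stalk
    "submucosa_below_stalk": 4,    # submucosa below the stalk, above muscularis propria
}

def haggitt_level(depth: str, sessile: bool = False) -> int:
    # By definition, any invasive carcinoma in a sessile polyp is level 4.
    if sessile and depth != "above_muscularis_mucosa":
        return 4
    return HAGGITT_LEVELS[depth]

print(haggitt_level("neck_of_polyp"))        # 2 (pedunculated polyp)
print(haggitt_level("head_of_polyp", True))  # 4 for a sessile polyp
```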
Inflammatory polyp These are polyps that are associated with inflammatory conditions such as ulcerative colitis and Crohn's disease. Prevention Diet and lifestyle are believed to play a large role in whether colorectal polyps form. Studies show a protective association between consumption of cooked green vegetables, brown rice, legumes, and dried fruit and a decreased incidence of colorectal polyps. Diagnosis Colorectal polyps can be detected using a faecal occult blood test, flexible sigmoidoscopy, colonoscopy, virtual colonoscopy, digital rectal examination, barium enema or a pill camera. Malignant potential is associated with the degree of dysplasia, the type of polyp, and the size of the polyp. By type, tubular adenomas carry roughly a 5% risk of cancer, tubulovillous adenomas a 20% risk, and villous adenomas a 40% risk. By size, polyps under 1 cm carry under a 1% risk of cancer, polyps of 1–2 cm a 10% risk, and polyps over 2 cm a 50% risk. Normally an adenoma that is greater than 0.5 cm is treated. NICE classification In colonoscopy, colorectal polyps can be classified using the NICE (Narrow-band Imaging International Colorectal Endoscopic) classification. Treatment Polyps can be removed during a colonoscopy or sigmoidoscopy using a wire loop that cuts the stalk of the polyp and cauterises it to prevent bleeding. Many "defiant" polyps (large, flat, and otherwise laterally spreading adenomas) may be removed endoscopically by a technique called endoscopic mucosal resection (EMR), which involves injection of fluid underneath the lesion to lift it and thus facilitate endoscopic resection. Saline may be used to generate the lift, though some injectable solutions, such as SIC 8000, may be more effective. Minimally invasive surgery is indicated for polyps that cannot be removed endoscopically because they are too large or in unfavorable locations, such as the appendix. These techniques may be employed as an alternative to the more invasive colectomy. Follow-up United States guidelines recommend a schedule of follow-up surveillance.
Mammary tumor
A mammary tumor is a neoplasm originating in the mammary gland. It is a common finding in older female dogs and cats that are not spayed, but mammary tumors are found in other animals as well. The mammary glands in dogs and cats are associated with their nipples and extend from the underside of the chest to the groin on both sides of the midline. There are many differences between mammary tumors in animals and breast cancer in humans, including tumor type, malignancy, and treatment options. The prevalence in dogs is about three times that in women. In dogs, mammary tumors are the second most common tumor overall (after skin tumors) and the most common tumor in female dogs, with a reported incidence of 3.4%. Multiple studies have documented that spaying female dogs when young greatly decreases their risk of developing mammary neoplasia when aged. Compared with female dogs left intact, those spayed before puberty have 0.5% of the risk, those spayed after one estrous cycle have 8.0% of the risk, and dogs spayed after two estrous cycles have 26.0% of the risk of developing mammary neoplasia later in life (a worked example of these relative risks appears below). Overall, unspayed female dogs have a seven times greater risk of developing mammary neoplasia than do those that are spayed. While the benefit of spaying decreases with each estrous cycle, some benefit has been demonstrated in female dogs even up to 9 years of age. There is a much lower risk (about 1 percent) in male dogs and a risk in cats about half that of dogs. In dogs Causes The exact causes for the development of canine mammary tumors are not fully understood. However, hormones of the estrous cycle seem to be involved. Female dogs that are not spayed or that are spayed later than the first heat cycle are more likely to develop mammary tumors. Dogs have an overall reported incidence of mammary tumors of 3.4 percent. Dogs spayed before their first heat have 0.5 percent of this risk, and dogs spayed after just one heat cycle have 8 percent of this risk. The tumors are often multiple. The average age of dogs with mammary tumors is ten to eleven years old. Obesity at one year of age and eating red meat have also been associated with an increased risk for these tumors, as has the feeding of high-fat homemade diets. There are several hypotheses on the molecular mechanisms involved in the development of canine mammary tumors, but a specific genetic mutation has not been identified. Biology Historically, about 50 percent of mammary tumors in dogs were found to be malignant, although, taking tumor behavior into account, one study has estimated true malignancy in mammary tumors to be 21 to 22 percent. Adenomas and fibroadenomas make up the benign types. Malignant mammary tumors are divided into sarcomas, carcinosarcomas, inflammatory carcinomas (usually anaplastic carcinomas), and carcinomas (including adenocarcinomas), which are the most common. Inflammatory carcinomas are fast-growing tumors with bruising, edema, and pain, and can also cause disseminated intravascular coagulation. They are the most malignant type of canine mammary tumor. Malignant tumors are also subdivided histopathologically into those that show blood vessel wall invasion and those that do not. Without blood vessel wall invasion there is a better prognosis. Dogs with noninvasive adenocarcinomas have an average survival time of two years, while dogs with invasive adenocarcinomas have an average survival time of one year.
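To make the spay-timing figures above concrete: the quoted percentages are fractions of the baseline risk, not absolute risks. A back-of-the-envelope sketch in Python (assuming, purely for illustration, that the reported 3.4% overall incidence approximates the baseline lifetime risk of an intact female dog):

# Illustrative assumption only: overall reported incidence used as the
# baseline lifetime risk of an intact female dog.
baseline_risk = 0.034

relative_risk = {
    "spayed before puberty": 0.005,           # 0.5% of the baseline risk
    "spayed after one estrous cycle": 0.08,   # 8.0% of the baseline risk
    "spayed after two estrous cycles": 0.26,  # 26.0% of the baseline risk
    "left intact": 1.0,
}

for timing, fraction in relative_risk.items():
    print(f"{timing}: ~{baseline_risk * fraction:.3%} lifetime risk")

On these assumptions, spaying before puberty corresponds to roughly 0.02% absolute lifetime risk, versus about 3.4% for a dog left intact.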
Tumor size also affects the prognosis, in that dogs with tumors greater than five centimeters have a greater chance of lymph node metastasis. Tumor type is also important. Sarcomas and carcinosarcomas carry an average survival time of nine to twelve months. Inflammatory carcinomas have a very poor prognosis and have usually metastasized by the time of diagnosis. Metastasis for any malignant mammary tumor is usually to the regional lymph nodes and lungs. The molecular carcinogenesis of canine mammary tumors is not completely understood. However, the increasing information on the molecular pathways involved in the carcinogenesis of this canine tumor has the potential to complement and refine the current diagnostic and therapeutic approach to this tumor type. Furthermore, current data show that significant similarities and differences exist between canine and human mammary tumors at the molecular level. Diagnosis and treatment The appearance and location of the tumor are enough to identify it as a mammary tumor. Biopsy will give the type and invasiveness of the tumor. In addition, newer studies have shown that certain gene expression patterns are associated with malignant behaviour of canine mammary tumors. Surgical removal is the treatment of choice, but chest x-rays should be taken first to rule out metastasis. Removal should be with wide margins to prevent recurrence, taking the whole mammary gland if necessary. Because 40 to 50 percent of dog mammary tumors have estrogen receptors, spaying is recommended by many veterinarians. A recent study showed a better prognosis in dogs spayed at the time of surgery or shortly before. However, several other studies found no improvement of disease outcome when spaying was performed after the tumor had developed. Chemotherapy is rarely used. Breeds at increased risk include the Chihuahua, Poodle, Brittany Spaniel, English Setter, Pointer, Fox Terrier, Boston Terrier, Cocker Spaniel, and Lhasa Apso. In cats Mammary tumors are the third most common neoplasia in cats, following lymphoid and skin cancers. The incidence of mammary tumors in cats is reduced by 91 percent in cats spayed prior to six months of age and by 86 percent in cats spayed prior to one year, according to one study. Siamese cats and Japanese breeds seem to have increased risk, and obesity also appears to be a factor in tumor development. Malignant tumors make up 80 to 96 percent of mammary tumors in cats, almost all adenocarcinomas. Male cats may also develop mammary adenocarcinoma, albeit rarely, and the clinical course is similar to that in female cats. As in dogs, tumor size is an important prognostic factor, although for tumors less than three centimeters the individual size is less predictive. According to one study, cats with tumors less than three centimeters had an average survival time of 21 months, and cats with tumors greater than three centimeters had an average survival time of 12 months. About 10 percent of cat mammary tumors have estrogen receptors, so spaying at the time of surgery has little effect on recurrence or survival time. Metastasis tends to be to the lungs and lymph nodes, and rarely to bone. Diagnosis and treatment are similar to those in the dog. There is a better prognosis with bilateral radical surgery (removing both mammary chains) than with more conservative surgery. Doxorubicin has shown some promise in treatment. In rats Most mammary tumors in rats are benign fibroadenomas, which are also the most common tumor in the rat. Less than 10 percent are adenocarcinomas. They occur in male and female rats.
The tumors can be large and occur anywhere on the trunk. There is a good prognosis with surgery. Spayed rats have a decreased risk of developing mammary tumors. In mice Most mammary tumors in mice are adenocarcinomas. They can be caused by viral infection. Recurrence rates are high, and therefore the prognosis is poor. There is frequently local tissue invasion and metastasis to the lungs. A well-known tumor virus of the mouse is the mouse mammary tumor virus, which may be the most common cause of this tumor in mice. In other animals Ferrets: Mammary tumors are rare in ferrets. They tend to appear as a soft, dark-colored lump. Most seem to be benign and occur most frequently in neutered males. Surgery is recommended. Guinea pigs: Mammary tumors in guinea pigs occur in males and females. Most are benign, but 30 percent are adenocarcinomas. They usually do not metastasize, but aggressive surgery is necessary to prevent recurrence. Hamsters and gerbils: Mammary tumors tend to be benign in hamsters and malignant in gerbils. Hedgehogs: Mammary gland adenocarcinoma is the most common tumor of the hedgehog.
Fasciculation
A fasciculation, or muscle twitch, is a spontaneous, involuntary muscle contraction and relaxation involving fine muscle fibers. Fasciculations are common, with as many as 70% of people experiencing them. They can be benign or associated with more serious conditions. When no cause or pathology is identified, they are diagnosed as benign fasciculation syndrome. Diagnosis The most effective way to detect fasciculations may be surface electromyography (EMG). Surface EMG is more sensitive than needle electromyography and clinical observation in the detection of fasciculation in people with amyotrophic lateral sclerosis. Deeper areas of contraction can be detected by needle EMG testing, though fasciculations can happen in any skeletal muscle in the body. Fasciculations arise as a result of spontaneous depolarization of a lower motor neuron, leading to the synchronous contraction of all the skeletal muscle fibers within a single motor unit. An example of normal spontaneous depolarization is the constant contraction of cardiac muscle, causing the heart to beat. Usually, intentional movement of the involved muscle causes fasciculations to cease immediately, but they may return once the muscle is at rest again. Fasciculations must also be distinguished from tics. Small twitches of the upper or lower eyelid, for example, are not tics, because they do not involve a whole muscle; rather, they are twitches of a few muscle fibre bundles and are not suppressible. Causes Fasciculations have a variety of causes, the majority of which are benign, but they can also be due to disease of the motor neurons. They are encountered by up to 70% of all healthy people, though for most they are quite infrequent. In some cases, the presence of fasciculations can be annoying and interfere with quality of life. If a neurological examination is otherwise normal and EMG testing does not indicate any additional pathology, a diagnosis of benign fasciculation syndrome is usually made. Risk factors Risk factors for benign fasciculations are age, stress, fatigue, and strenuous exercise. Fasciculations can be caused by anxiety, caffeine or alcohol, and thyroid disease. Magnesium deficiency is a common cause of fasciculation. Other factors may include the use of anticholinergic drugs over long periods. In particular, these include ethanolamines such as diphenhydramine (brand names Benadryl, Dimedrol, Daedalon and Nytol), used as an antihistamine and sedative, and dimenhydrinate (brand names Dramamine, Driminate, Gravol, Gravamin, Vomex, and Vertirosan), used for nausea and motion sickness. Persons with benign fasciculation syndrome (BFS) may experience paraesthesia (especially numbness) shortly after taking such medication; fasciculation episodes begin as the medication wears off. Stimulants can cause fasciculations directly. These include caffeine, pseudoephedrine (Sudafed), amphetamines, and the asthma bronchodilator salbutamol (brand names Proventil, Combivent, Ventolin). Medications used to treat attention deficit hyperactivity disorder (ADHD) often contain stimulants as well and are a common cause of benign fasciculations. Since asthma and ADHD are much more serious than the fasciculations themselves, this side effect may have to be tolerated by the patient after consulting a physician or pharmacist. The depolarizing neuromuscular blocker succinylcholine causes fasciculations.
It is a normal side effect of the drug's administration and can be prevented with a small dose of a nondepolarizing neuromuscular blocker prior to the administration of succinylcholine, often 10% of a nondepolarizing NMB's induction dose. Even if a drug such as caffeine causes fasciculations, that does not necessarily mean it is the only cause. For example, a very slight magnesium deficiency by itself (see above) might not be enough for fasciculations to occur, but when combined with caffeine, the two factors together could be enough. Treatment Reducing stress and anxiety is another useful treatment. There is no proven treatment for fasciculations in people with ALS. Among patients with ALS, fasciculation frequency is not associated with the duration of ALS and is independent of the degree of limb weakness and limb atrophy. No prediction of ALS disease duration can be made based on fasciculation frequency alone. Epidemiology Fasciculations are observed more often in males, and clinicians are overrepresented in study samples. See also Blepharospasm Carnitine palmitoyltransferase II deficiency Myokymia
Lymphedema
Lymphedema, also known as lymphoedema and lymphatic edema, is a condition of localized swelling caused by a compromised lymphatic system. The lymphatic system functions as a critical portion of the body's immune system and returns interstitial fluid to the bloodstream. Lymphedema is most frequently a complication of cancer treatment or parasitic infections, but it can also be seen in a number of genetic disorders. Though the condition is incurable and progressive, a number of treatments may improve symptoms. Tissues with lymphedema are at high risk of infection because the lymphatic system has been compromised. While there is no cure, treatment may improve outcomes. Treatment commonly includes compression therapy, good skin care, exercise, and manual lymphatic drainage (MLD), which together are known as combined decongestive therapy. Diuretics are not useful. Signs and symptoms The most common manifestation of lymphedema is soft tissue swelling (edema). As the disorder progresses, worsening edema and skin changes including discoloration, verrucous (wart-like) hyperplasia, hyperkeratosis, papillomatosis, dermal thickening, and ulcers may be seen. Additionally, there is an increased risk of skin infection, such as erysipelas. Complications When the lymphatic impairment becomes so great that the lymph fluid exceeds the lymphatic system's ability to transport it, an abnormal amount of protein-rich fluid collects in the tissues. Left untreated, this stagnant, protein-rich fluid causes tissue channels to increase in size and number, reducing oxygen availability. This interferes with wound healing and provides a rich culture medium for bacterial growth that can result in infections, cellulitis, lymphangitis, lymphadenitis, and, in severe cases, skin ulcers. It is vital for lymphedema patients to be aware of the symptoms of infection and to seek immediate treatment, since recurrent infections or cellulitis, in addition to their inherent danger, further damage the lymphatic system and set up a vicious circle. In rare cases, lymphedema may lead to a form of cancer called lymphangiosarcoma, although the mechanism of carcinogenesis is not understood. Lymphedema-associated lymphangiosarcoma is called Stewart–Treves syndrome. Lymphangiosarcoma most frequently occurs in cases of long-standing lymphedema. The incidence of angiosarcoma is estimated to be 0.45% in patients living five years after radical mastectomy. Lymphedema is also associated with a low-grade form of cancer called retiform hemangioendothelioma (a low-grade angiosarcoma). Lymphedema can be disfiguring and may result in a poor body image, which can cause psychological distress. Complications of lymphedema can cause difficulties in activities of daily living. Causes Lymphedema may be inherited (primary) or caused by injury to the lymphatic vessels (secondary). Lymph node damage It is most frequently seen after lymph node dissection, surgery and/or radiation therapy, in which damage to the lymphatic system is caused during the treatment of cancer, most notably breast cancer. In many patients with cancer, this condition does not develop until months or even years after therapy has concluded. Lymphedema may also be associated with accidents or certain diseases or problems that may inhibit the lymphatic system from functioning properly. In tropical endemic areas of the world, a common cause of secondary lymphedema is filariasis, a parasitic infection.
It can also be caused by damage to the lymphatic system from infections such as cellulitis. Primary lymphedema may be congenital or arise sporadically. Multiple syndromes are associated with primary lymphedema, including Turner syndrome, Milroy's disease, and Klippel–Trénaunay–Weber syndrome. It is generally thought to occur as a result of absent or malformed lymph nodes and/or lymphatic channels. Lymphedema may be present at birth, develop at the onset of puberty (praecox), or not become apparent for many years into adulthood (tarda). In men, lower-limb primary lymphedema is most common, occurring in one or both legs. Some cases of lymphedema may be associated with other vascular abnormalities. Secondary lymphedema affects both men and women. In women, it is most prevalent in the upper limbs after breast cancer surgery, in particular after axillary lymph node dissection, occurring in the arm on the side of the body on which the surgery was performed. Breast and trunk lymphedema can also occur but go unrecognised, as there is swelling in the area after surgery and its symptoms (peau d'orange and/or an inverted nipple) can be confused with post-surgery fat necrosis. In Western countries, secondary lymphedema is most commonly due to cancer treatment. Between 38% and 89% of breast cancer patients have lymphedema due to axillary lymph node dissection and/or radiation. Unilateral lymphedema occurs in up to 41% of patients after gynecologic cancer. For men, a 5–66% incidence of lymphedema has been reported in treated patients, with the incidence depending on whether staging or radical removal of lymph glands was done in addition to radiotherapy. Head and neck lymphedema can be caused by surgery or radiation therapy for tongue or throat cancer. It may also occur in the lower limbs or groin after surgery for colon, ovarian or uterine cancer in which removal of lymph nodes or radiation therapy is required. Surgery or treatment for prostate, colon and testicular cancers may result in secondary lymphedema, particularly when lymph nodes have been removed or damaged. The onset of secondary lymphedema in patients who have had cancer surgery has also been linked to aircraft flight (likely due to decreased cabin pressure or relative immobility). For cancer survivors, therefore, wearing a prescribed and properly fitted compression garment may help decrease swelling during air travel. Some cases of lower-limb lymphedema have been associated with the use of tamoxifen, due to the blood clots and deep vein thrombosis (DVT) that can be associated with this medication. Resolution of the blood clots or DVT is needed before lymphedema treatment can be initiated. Infectious causes include lymphatic filariasis. At birth Hereditary lymphedema is a primary lymphedema: swelling that results from abnormalities in the lymphatic system that are present from birth. Swelling may be present in a single affected limb, several limbs, the genitalia, or the face. It is sometimes diagnosed prenatally by a nuchal scan or postnatally by lymphoscintigraphy. The most common form is Meige disease, which usually presents at puberty. Another form of hereditary lymphedema is Milroy's disease, caused by mutations in the VEGFR3 gene. Hereditary lymphedema is frequently syndromic and is associated with Turner syndrome, lymphedema–distichiasis syndrome, yellow nail syndrome, and Klippel–Trénaunay–Weber syndrome. One defined genetic cause for hereditary lymphedema is GATA2 deficiency.
This deficiency is a grouping of several disorders caused by a common defect, namely familial or sporadic inactivating mutations in one of the two parental GATA2 genes. These autosomal dominant mutations cause a reduction, i.e. a haploinsufficiency, in the cellular levels of the gene's product, GATA2. The GATA2 protein is a transcription factor critical for the embryonic development, maintenance, and functionality of blood-forming, lymphatic-forming, and other tissue-forming stem cells. In consequence of these mutations, cellular levels of GATA2 are deficient, and individuals develop hematological, immunological, lymphatic, and/or other disorders over time. GATA2 deficiency-induced defects in the lymphatic vessels and valves underlie the development of lymphedema, which is primarily located in the lower extremities but may also occur in other places such as the face or testes (i.e. hydrocele). This form of the deficiency, when coupled with sensorineural hearing loss, which may also be due to faulty development of the lymphatic system, is sometimes termed Emberger syndrome. Primary lymphedema has a quoted incidence of approximately 1–3 per 10,000 births, with a female-to-male preponderance of 3.5:1. In North America, the incidence of primary lymphedema is approximately 1.15 per 100,000 births. Compared to secondary lymphedema, primary lymphedema is relatively rare. Inflammatory lymphedema Bilateral lower extremity inflammatory lymphedema (BLEIL) is a distinct type of lymphedema occurring in a setting of acute, prolonged standing, such as in new recruits during basic training. The possible underlying mechanisms are thought to be venous congestion and inflammatory vasculitis. Physiology Lymph is formed from the fluid that filters out of the blood circulation and contains proteins, cellular debris, bacteria, etc. The collection of this fluid is carried out by the initial lymph collectors, which are blind-ended, endothelial-lined vessels with fenestrated openings that allow fluids and particles as large as cells to enter. Once inside the lumen of the lymphatic vessels, the fluid is guided along increasingly larger vessels, first with rudimentary valves to prevent backflow, which later develop into complete valves similar to venous valves. Once the lymph enters the fully valved lymphatic vessels, it is pumped by a rhythmic peristaltic-like action of smooth muscle cells within the lymphatic vessel walls. This peristaltic action is the primary driving force moving lymph within its vessel walls. The frequency and power of contraction are regulated by the sympathetic nervous system. Lymph movement can be influenced by the pressure of nearby muscle contraction, arterial pulse pressure and the vacuum created in the chest cavity during respiration, but these passive forces contribute only a minor percentage of lymph transport. The fluids collected are pumped into continually larger vessels and through lymph nodes, which remove debris and police the fluid for dangerous microbes. The lymph ends its journey in the thoracic duct or right lymphatic duct, which drain into the blood circulation. Diagnosis Diagnosis is generally based on signs and symptoms, with testing used to rule out other potential causes. An accurate diagnosis and staging may help with management. A swollen limb can result from different conditions that require different treatments. Diagnosis of lymphedema is currently based on history, physical exam, and limb measurements.
Imaging studies such as lymphoscintigraphy and indocyanine green lymphography are only required when surgery is being considered. However, the ideal method for lymphedema staging to guide the most appropriate treatment is controversial because of several different proposed protocols. Lymphedema can occur in both the upper and lower extremities, and in some cases, the head and neck. Assessment of the extremities begins with a visual inspection. Color, presence of hair, visible veins, size, and any sores or ulcerations are noted. Lack of hair may indicate an arterial circulation problem. If swelling is present, the extremity's circumference is measured for reference as time continues. In early stages of lymphedema, elevating the limb may reduce or eliminate the swelling. Palpation of the wrist or ankle can determine the degree of swelling; assessment includes a check of the pulses. The axillary or inguinal nodes may be enlarged due to the swelling. Enlargement of the nodes lasting more than three weeks may indicate infection or other illnesses, such as sequelae of breast cancer surgery, requiring further medical attention. Diagnosis or early detection of lymphedema is difficult. The first signs may be subjective observations such as a feeling of heaviness in the affected extremity. These may be symptomatic of early-stage lymphedema, where accumulation of lymph is mild and not detectable by changes in volume or circumference. As lymphedema progresses, definitive diagnosis is commonly based upon an objective measurement of differences between the affected or at-risk limb and the opposite unaffected limb, e.g. in volume or circumference. No generally accepted criterion is definitively diagnostic, although a volume difference of 200 ml between limbs or a 4-cm difference (at a single measurement site or at set intervals along the limb) is often used. Bioimpedance measurement (which measures the amount of fluid in a limb) offers greater sensitivity than existing methods. Chronic venous stasis changes can mimic early lymphedema, but the changes in venous stasis are more often bilateral and symmetric. Lipedema can also mimic lymphedema; however, lipedema characteristically spares the feet, beginning abruptly at the medial malleoli (ankle level). As a part of the initial work-up before diagnosing lymphedema, it may be necessary to exclude other potential causes of lower extremity swelling such as kidney failure, hypoalbuminemia, congestive heart failure, protein-losing nephropathy, pulmonary hypertension, obesity, pregnancy and drug-induced edema. Classification According to the Fifth WHO Expert Committee on Filariasis, the most common method of classification of lymphedema is as follows (the same classification method can be used for both primary and secondary lymphedema). The International Society of Lymphology (ISL) staging system is based solely on subjective symptoms, making it prone to substantial observer bias. Imaging modalities have been suggested as useful adjuncts to the ISL staging to clarify the diagnosis. The lymphedema expert Ming-Huei Cheng developed Cheng's Lymphedema Grading tool to assess the severity of extremity lymphedema based on objective limb measurements, providing appropriate options for management. I. Grading Grade 1: Spontaneously reversible on elevation. Mostly pitting edema. Grade 2: Not spontaneously reversible on elevation. Mostly non-pitting edema.
Grade 3: Gross increase in volume and circumference compared with Grade 2 lymphedema, with eight stages of severity given below based on clinical assessments. II. Staging As described by the Fifth WHO Expert Committee on Filariasis, and endorsed by the American Society of Lymphology, the staging system helps to identify the severity of lymphedema. With the assistance of medical imaging, such as MRI or CT, staging can be established by the physician, and therapeutic or medical interventions may be applied: Stage 0: The lymphatic vessels have sustained some damage that is not yet apparent. Transport capacity is sufficient for the amount of lymph being removed. Lymphedema is not present. Stage 1: Swelling increases during the day and disappears overnight as the patient lies flat in bed. Tissue is still at the pitting stage: when pressed by the fingertips, the affected area indents and reverses with elevation. Usually, upon waking in the morning, the limb or affected area is normal or almost normal in size. Treatment is not necessarily required at this point. Stage 2: Swelling is not reversible overnight and does not disappear without proper management. The tissue now has a spongy consistency and is considered non-pitting: when pressed by the fingertips, the affected area bounces back without indentation. Fibrosis found in Stage 2 lymphedema marks the beginning of the hardening of the limbs and increasing size. Stage 3: Swelling is irreversible, and usually the limb(s) or affected area becomes increasingly large. The tissue is hard (fibrotic) and unresponsive; some patients consider undergoing reconstructive surgery, called "debulking". This remains controversial, however, since the risks may outweigh the benefits, and the further damage done to the lymphatic system may in fact make the lymphedema worse. Stage 4: The size and circumference of the affected limb(s) become noticeably large. Bumps, lumps, or protrusions (also called knobs) on the skin begin to appear. Stage 5: The affected limb(s) become grossly large; one or more deep skin folds are prevalent among patients in this stage. Stage 6: Small elongated or rounded knobs cluster together, giving mossy-like shapes on the limb. Mobility of the patient becomes increasingly difficult. Stage 7: The person becomes "handicapped" and is unable to independently perform daily routine activities such as walking, bathing and cooking. Assistance from the family and health care system is needed. Grades Lymphedema can also be categorized by its severity (usually referenced to a healthy extremity): Grade 1 (mild edema): Involves the distal parts such as a forearm and hand or a lower leg and foot. The difference in circumference is less than 4 cm, and other tissue changes are not yet present. Grade 2 (moderate edema): Involves an entire limb or the corresponding quadrant of the trunk. The difference in circumference is 4–6 cm. Tissue changes, such as pitting, are apparent. The patient may experience erysipelas. Grade 3a (severe edema): Lymphedema is present in one limb and its associated trunk quadrant. The circumferential difference is greater than 6 cm. Significant skin alterations, such as cornification or keratosis, cysts and/or fistulae, are present. Additionally, the patient may experience repeated attacks of erysipelas. Grade 3b (massive edema): The same symptoms as grade 3a, except that two or more extremities are affected.
Grade 4 (gigantic edema): In this stage of lymphedema, the affected extremities are huge, due to almost complete blockage of the lymph channels. Differential Lymphedema should not be confused with edema arising from venous insufficiency, which is caused by compromise of venous drainage rather than lymphatic drainage. However, untreated venous insufficiency can progress into a combined venous/lymphatic disorder. Treatment While there is no cure, treatment may improve outcomes. This commonly includes compression therapy, good skin care, exercise, and manual lymphatic drainage (MLD), which together are known as combined decongestive therapy. MLD is most effective in mild to moderate disease. In breast cancer-related lymphedema, MLD is safe and may offer added benefit to compression bandages for reducing swelling. Most people with lymphedema can be medically managed with conservative treatment. Diuretics are not useful. Surgery is generally only used in those who are not improved with other measures. Compression Garments Once a person is diagnosed with lymphedema, compression becomes imperative in the management of the condition. Garments are often intended to be worn all day but may be taken off for sleeping, unless otherwise prescribed. Elastic compression garments are worn on the affected limb following complete decongestive therapy to maintain edema reduction. Inelastic garments provide containment and reduction. Available styles, options, and prices vary widely. A professional garment fitter or certified lymphedema therapist can help determine the best option for the patient. Bandaging Compression bandaging, also called wrapping, is the application of layers of padding and short-stretch bandages to the involved areas. Short-stretch bandages are preferred over long-stretch bandages (such as those normally used to treat sprains), as the long-stretch bandages cannot produce the proper therapeutic tension necessary to safely reduce lymphedema and may in fact end up producing a tourniquet effect. Compression bandages provide resistance that assists in pumping fluid out of the affected area during exercise. This counter-force results in increased lymphatic drainage and therefore a decrease in the size of the swollen area. Intermittent pneumatic compression therapy Intermittent pneumatic compression therapy (IPC) utilizes a multi-chambered pneumatic sleeve with overlapping cells to promote movement of lymph fluid. Pump therapy should be used in addition to other treatments such as compression bandaging and manual lymph drainage. Pump therapy has been used extensively in the past to help with controlling lymphedema. In some cases, pump therapy helps soften fibrotic tissue and therefore potentially enables more efficient lymphatic drainage. However, reports link pump therapy to an increased incidence of edema proximal to the affected limb, such as genital edema arising after pump therapy of the lower limb. Current literature suggests that the use of IPC treatment in conjunction with Kinesiotape (KT) is more effective in the overall reduction of lymphedema, as well as in increasing shoulder range of motion, than the traditional treatment of IPC paired with complete decongestive therapy. Kinesiotape is an elastic cotton strip with an acrylic adhesive that is commonly used to relieve the discomfort and disability associated with sports injuries; in the context of lymphedema, it increases the space between the dermis and the muscle, which increases the opportunity for lymphatic fluid to flow out naturally.
The use of IPC treatments with KT tape, together with subsequent lymphatic drainage, has been shown to significantly reduce the circumference of affected limbs in patients with lymphedema secondary to breast cancer after mastectomy. Exercise In those with lymphedema, or at risk of developing lymphedema following breast cancer treatment, resistance training did not increase swelling and decreased it in some, in addition to other potential beneficial effects on cardiovascular health. Moreover, resistance training and other forms of exercise were not associated with an increased risk of developing lymphedema in people who previously received breast cancer-related treatment. Compression garments should be worn during exercise (with the possible exception of swimming). Physical therapy treatment of patients with lymphedema may include trigger point release, soft tissue massage, postural improvement, patient education on condition management, strengthening, and stretching exercises. Exercises may increase in intensity and difficulty over time, beginning with passive movements to increase range of motion and progressing towards the use of external weights and resistance in various postures. Surgery The treatment of lymphedema is usually conservative; however, surgery is proposed for some cases. Suction-assisted lipectomy (SAL), also known as liposuction for lymphedema, may help improve chronic non-pitting edema, if present. The procedure removes fat and protein and is done along with continued compression therapy. Vascularized lymph node transfers (VLNT) and lymphovenous bypass are supported by tentative evidence as of 2017 but are associated with a number of complications. Laser therapy Low-level laser therapy (LLLT) was cleared by the US Food and Drug Administration (FDA) for the treatment of lymphedema in November 2006. According to the US National Cancer Institute, LLLT may be effective in reducing lymphedema in some women. Two cycles of laser treatment were found to reduce the volume of the affected arm in approximately one-third of people with postmastectomy lymphedema at three months post-treatment. Epidemiology Lymphedema affects approximately 200 million people worldwide.
Gastroesophageal reflux disease
Gastroesophageal reflux disease (GERD) or gastro-oesophageal reflux disease (GORD) is a chronic upper gastrointestinal disease in which stomach content persistently and regularly flows up into the esophagus, resulting in symptoms and/or complications. Symptoms include dental corrosion, dysphagia, heartburn, odynophagia, regurgitation, non-cardiac chest pain, and extraesophageal symptoms such as chronic cough, hoarseness, reflux-induced laryngitis, or asthma. In the long term, when not treated, complications such as esophagitis, esophageal stricture, and Barrett's esophagus may arise. Risk factors include obesity, pregnancy, smoking, hiatal hernia, and taking certain medications. Medications that may cause or worsen the disease include benzodiazepines, calcium channel blockers, tricyclic antidepressants, NSAIDs, and certain asthma medicines. Acid reflux is due to poor closure of the lower esophageal sphincter, which is at the junction between the stomach and the esophagus. Diagnosis among those who do not improve with simpler measures may involve gastroscopy, upper GI series, esophageal pH monitoring, or esophageal manometry. Treatment options include lifestyle changes, medications, and sometimes surgery for those who do not improve with the first two measures. Lifestyle changes include not lying down for three hours after eating, lying down on the left side, raising the pillow or bedhead height, losing weight, and stopping smoking. Foods that may precipitate GERD symptoms, and could be avoided, include coffee, alcohol, chocolate, fatty foods, acidic foods, and spicy foods. Medications include antacids, H2 receptor blockers, proton pump inhibitors, and prokinetics. In the Western world, between 10% and 20% of the population is affected by GERD. It is highly prevalent in North America, with 18% to 28% of the population suffering from the condition. Occasional gastroesophageal reflux without troublesome symptoms or complications is even more common. The classic symptoms of GERD were first described in 1925, when Friedenwald and Feldman commented on heartburn and its possible relationship to a hiatal hernia. In 1934 gastroenterologist Asher Winkelstein described reflux and attributed the symptoms to stomach acid. Signs and symptoms Adults The most common symptoms of GERD in adults are an acidic taste in the mouth, regurgitation, and heartburn. Less common symptoms include pain with swallowing/sore throat, increased salivation (also known as water brash), nausea, chest pain, coughing, and globus sensation. Acid reflux can induce asthma attack symptoms, such as shortness of breath, cough, and wheezing, in those with underlying asthma. GERD sometimes causes injury to the esophagus. These injuries may include one or more of the following: Reflux esophagitis – inflammation of the esophageal epithelium, which can cause ulcers near the junction of the stomach and esophagus. Esophageal strictures – persistent narrowing of the esophagus caused by reflux-induced inflammation. Barrett's esophagus – intestinal metaplasia (change of the epithelial cells from squamous to intestinal columnar epithelium) of the distal esophagus. Esophageal adenocarcinoma – a form of cancer. GERD sometimes causes injury of the larynx (LPR). Other complications can include aspiration pneumonia. Children and babies GERD may be difficult to detect in infants and children, since they cannot describe what they are feeling and indicators must be observed. Symptoms may vary from typical adult symptoms.
GERD in children may cause repeated vomiting, effortless spitting up, coughing, and other respiratory problems, such as wheezing. Inconsolable crying, refusing food, crying for food and then pulling off the bottle or breast only to cry for it again, failure to gain adequate weight, bad breath, and burping are also common. Children may have one symptom or many; no single symptom is universal in all children with GERD. Of the estimated 4 million babies born in the US each year, up to 35% may have difficulties with reflux in the first few months of their lives, known as spitting up. About 90% of infants will outgrow their reflux by their first birthday. Mouth Acid reflux into the mouth can cause breakdown of the enamel, especially on the inside surface of the teeth. A dry mouth, an acid or burning sensation in the mouth, bad breath, and redness of the palate may occur. Other less common symptoms of GERD include difficulty in swallowing, water brash (flooding of the mouth with saliva), chronic cough, hoarse voice, nausea, and vomiting. Signs of enamel erosion are the appearance of smooth, silky-glazed, sometimes dull enamel surfaces with the absence of perikymata, together with intact enamel along the gum margin. It will be evident in people with restorations, as tooth structure typically dissolves much faster than the restorative material, causing the restoration to seem as if it "stands above" the surrounding tooth structure. Barrett's esophagus GERD may lead to Barrett's esophagus, a type of intestinal metaplasia, which is in turn a precursor condition for esophageal cancer. The risk of progression from Barrett's to dysplasia is uncertain but is estimated at 20% of cases. Due to the risk of chronic heartburn progressing to Barrett's, EGD every five years is recommended for people with chronic heartburn or who take drugs for chronic GERD. Causes Acid reflux is due to poor closure of the lower esophageal sphincter, which is at the junction between the stomach and the esophagus. Factors that can contribute to GERD: Hiatal hernia, which increases the likelihood of GERD due to mechanical and motility factors. Obesity: increasing body mass index is associated with more severe GERD. In a large series of 2,000 patients with symptomatic reflux disease, it has been shown that 13% of changes in esophageal acid exposure are attributable to changes in body mass index. Factors that have been linked with GERD, but not conclusively: Obstructive sleep apnea. Gallstones, which can impede the flow of bile into the duodenum and can affect the ability to neutralize gastric acid. In 1999, a review of existing studies found that, on average, 40% of GERD patients also had H. pylori infection. The eradication of H. pylori can lead to an increase in acid secretion, leading to the question of whether H. pylori-infected GERD patients are any different from non-infected GERD patients. A double-blind study, reported in 2004, found no clinically significant difference between these two types of patients with regard to subjective or objective measures of disease severity. Diagnosis The diagnosis of GERD is usually made when typical symptoms are present. Reflux can be present in people without symptoms, and the diagnosis requires both symptoms or complications and reflux of stomach content. Other investigations may include esophagogastroduodenoscopy (EGD). Barium swallow X-rays should not be used for diagnosis. Esophageal manometry is not recommended for use in diagnosis, being recommended only prior to surgery.
Ambulatory esophageal pH monitoring may be useful in those who do not improve after PPIs and is not needed in those in whom Barrett's esophagus is seen. Investigation for H. pylori is not usually needed. The current gold standard for diagnosis of GERD is esophageal pH monitoring. It is the most objective test to diagnose reflux disease and allows monitoring of GERD patients in their response to medical or surgical treatment. One practice for diagnosis of GERD is short-term treatment with proton-pump inhibitors, with improvement in symptoms suggesting a positive diagnosis. Short-term treatment with proton-pump inhibitors may help predict abnormal 24-hour pH monitoring results among patients with symptoms suggestive of GERD. Endoscopy Endoscopy, looking down into the stomach with a fibre-optic scope, is not routinely needed if the case is typical and responds to treatment. It is recommended when people either do not respond well to treatment or have alarm symptoms, including dysphagia, anemia, blood in the stool (detected chemically), wheezing, weight loss, or voice changes. Some physicians advocate either once-in-a-lifetime or 5- to 10-yearly endoscopy for people with longstanding GERD, to evaluate the possible presence of dysplasia or Barrett's esophagus. Biopsies performed during gastroscopy may show: Edema and basal hyperplasia (nonspecific inflammatory changes). Lymphocytic inflammation (nonspecific). Neutrophilic inflammation (usually due to reflux or Helicobacter gastritis). Eosinophilic inflammation (usually due to reflux): the presence of intraepithelial eosinophils may suggest a diagnosis of eosinophilic esophagitis (EE) if eosinophils are present in high enough numbers; fewer than 20 eosinophils per high-power microscopic field in the distal esophagus, in the presence of other histologic features of GERD, is more consistent with GERD than EE. Goblet cell intestinal metaplasia, or Barrett's esophagus. Elongation of the papillae. Thinning of the squamous cell layer. Dysplasia. Carcinoma. Reflux changes that are not erosive in nature lead to "nonerosive reflux disease". Severity Severity may be documented with the Johnson–DeMeester scoring system: 0, none; 1, minimal (occasional episodes); 2, moderate (medical therapy visits); 3, severe (interference with daily activities). Differential diagnosis Other causes of chest pain, such as heart disease, should be ruled out before making the diagnosis. Another kind of acid reflux, which causes respiratory and laryngeal signs and symptoms, is called laryngopharyngeal reflux (LPR) or "extraesophageal reflux disease" (EERD). Unlike GERD, LPR rarely produces heartburn and is sometimes called silent reflux. Differential diagnosis of GERD can also include dyspepsia, peptic ulcer disease, esophageal and gastric cancer, and food allergies. Treatment The treatments for GERD may include food choices, lifestyle changes, medications, and possibly surgery. Initial treatment is frequently with a proton-pump inhibitor such as omeprazole. In some cases, a person with GERD symptoms can manage them by taking over-the-counter drugs. This is often safer and less expensive than taking prescription drugs. Some guidelines recommend trying to treat symptoms with an H2 antagonist before using a proton-pump inhibitor because of cost and safety concerns.
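The step-up approach just described can be summarized as a simple escalation rule. A minimal sketch in Python (the names and the exact ordering are a simplification inferred from this article, not a clinical algorithm or medical advice):

# Escalation order suggested by the text: antacids and H2 antagonists are
# tried before proton-pump inhibitors; a partially effective once-daily PPI
# may be moved to twice daily.
STEP_UP_ORDER = [
    "antacid (with or without alginic acid)",
    "H2 receptor blocker (e.g. ranitidine)",
    "proton-pump inhibitor (e.g. omeprazole), once daily",
    "proton-pump inhibitor, twice daily",
]

def next_step(current_step, symptoms_controlled):
    # Stay at the current step while symptoms are controlled; otherwise escalate.
    if symptoms_controlled:
        return STEP_UP_ORDER[current_step]
    if current_step + 1 < len(STEP_UP_ORDER):
        return STEP_UP_ORDER[current_step + 1]
    return "consider further evaluation (e.g. endoscopy, surgical options)"

print(next_step(1, symptoms_controlled=False))
# -> proton-pump inhibitor (e.g. omeprazole), once daily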
Medical Nutrition Therapy and Lifestyle Changes Medical nutrition therapy plays an essential role in managing the symptoms of the disease by preventing reflux, preventing pain and irritation, and decreasing gastric secretions. Some foods, such as chocolate, mint, high-fat food, and alcohol, have been shown to relax the lower esophageal sphincter, increasing the risk of reflux. It is also recommended to lose weight if overweight or obese; avoid bedtime snacks or lying down immediately after meals; consume meals 2–3 hours before bedtime; elevate the head of the bed on 6-inch blocks; avoid smoking; avoid wearing tight clothing that increases pressure in the stomach; avoid spices, citrus juices, tomatoes, and soft drinks; consume small, frequent meals; and drink liquids between meals. Some evidence suggests that reduced sugar intake and increased fiber intake can help. Although moderate exercise may improve symptoms in people with GERD, vigorous exercise may worsen them. Breathing exercises may relieve GERD symptoms. Medications The primary medications used for GERD are proton-pump inhibitors, H2 receptor blockers, and antacids with or without alginic acid. The use of acid suppression therapy is a common response to GERD symptoms, and many people get more of this kind of treatment than their case merits. The overuse of acid suppression is a problem because of the side effects and costs. Proton-pump inhibitors Proton-pump inhibitors (PPIs), such as omeprazole, are the most effective, followed by H2 receptor blockers, such as ranitidine. If a once-daily PPI is only partially effective, it may be used twice a day. PPIs should be taken one half to one hour before a meal. There is no significant difference between PPIs. When these medications are used long term, the lowest effective dose should be taken. They may also be taken only when symptoms occur in those with frequent problems. H2 receptor blockers lead to roughly a 40% improvement. Antacids The evidence for antacids is weaker, with a benefit of about 10% (NNT=13), while a combination of an antacid and alginic acid (such as Gaviscon) may improve symptoms by 60% (NNT=4). Metoclopramide (a prokinetic) is not recommended either alone or in combination with other treatments due to concerns about adverse effects. The benefit of the prokinetic mosapride is modest. Other agents Sucralfate has similar effectiveness to H2 receptor blockers; however, sucralfate needs to be taken multiple times a day, thus limiting its use. Baclofen, an agonist of the GABAB receptor, while effective, has the similar issue of needing frequent dosing, in addition to greater adverse effects compared with other medications. Surgery The standard surgical treatment for severe GERD is the Nissen fundoplication. In this procedure, the upper part of the stomach is wrapped around the lower esophageal sphincter to strengthen the sphincter, prevent acid reflux, and repair a hiatal hernia. It is recommended only for those who do not improve with PPIs. Quality of life is improved in the short term compared with medical therapy, but there is uncertainty in the benefits of surgery versus long-term medical management with proton pump inhibitors.
When comparing different fundoplication techniques, partial posterior fundoplication surgery is more effective than partial anterior fundoplication surgery, and partial fundoplication has better outcomes than total fundoplication. Esophagogastric dissociation is an alternative procedure that is sometimes used to treat neurologically impaired children with GERD. Preliminary studies have shown it may have a lower failure rate and a lower incidence of recurrent reflux. In 2012 the U.S. Food and Drug Administration (FDA) approved a device called the LINX, which consists of a series of metal beads with magnetic cores that are placed surgically around the lower esophageal sphincter, for those with severe symptoms that do not respond to other treatments. Improvement of GERD symptoms is similar to that with the Nissen fundoplication, although there are no data regarding long-term effects. Compared with Nissen fundoplication procedures, the LINX procedure has shown a reduction in complications, such as gas bloat syndrome, that commonly occur. Adverse responses include difficulty swallowing, chest pain, vomiting, and nausea. The device is contraindicated in patients who are or may be allergic to titanium, stainless steel, nickel, or ferrous iron materials. A warning advises that the device should not be used by patients who could be exposed to, or undergo, magnetic resonance imaging (MRI), because of serious injury to the patient and damage to the device. In those with symptoms that do not improve with PPIs, surgery known as transoral incisionless fundoplication may help. Benefits may last for up to six years. Special populations Pregnancy GERD is a common condition that develops during pregnancy but usually resolves after delivery. The severity of symptoms tends to increase throughout the pregnancy. In pregnancy, dietary modifications and lifestyle changes may be attempted, but often have little effect. Some lifestyle changes that can be implemented are elevating the head of the bed, eating small portions of food at regularly scheduled intervals, reducing fluid intake with meals, avoiding eating within 3 hours of bedtime, and refraining from lying down after eating. Calcium-based antacids are recommended if these changes are not effective; aluminum- and magnesium hydroxide-based antacids are also safe. Antacids that contain sodium bicarbonate or magnesium trisilicate should be avoided in pregnancy. Sucralfate has been studied in pregnancy and proven to be safe, as are ranitidine and PPIs. Babies Babies may see relief with smaller, more frequent feedings, more frequent burping during feedings, holding the baby in an upright position for 30 minutes after feeding, keeping the baby's head elevated while lying on the back, removing milk and soy from the mother's diet, or feeding the baby milk protein-free formula. They may also be treated with medicines such as ranitidine or proton pump inhibitors. Proton pump inhibitors, however, have not been found to be effective in this population, and there is a lack of evidence for safety. The role of an occupational therapist with an infant with GERD includes positioning during and after feeding. One technique is called "the log roll technique", which is practiced when changing an infant's clothing or diapers. Placing an infant on their back while having their legs lifted is not recommended, since it causes the acid to flow back up the esophagus.
Instead, the occupational therapist would suggest rolling the child onto the side, keeping the shoulders and hips aligned, to avoid acid rising up the baby's esophagus. Another technique is feeding the baby on their side in a more upright position instead of lying flat on their back. The final positioning technique used for infants is to keep them on their tummy or upright for 20 minutes after feeding. Epidemiology In Western populations, GERD affects approximately 10% to 20% of the population, and 0.4% newly develop the condition. For instance, an estimated 3.4 million to 6.8 million Canadians have GERD. The prevalence rate of GERD in developed nations is also tightly linked with age, with adults aged 60 to 70 being the most commonly affected. In the United States, 20% of people have symptoms in a given week and 7% every day. No data support sex predominance with regard to GERD. History An obsolete treatment is vagotomy ("highly selective vagotomy"), the surgical removal of vagus nerve branches that innervate the stomach lining. This treatment has been largely replaced by medication. Vagotomy by itself tended to worsen contraction of the pyloric sphincter of the stomach and delayed stomach emptying. Historically, vagotomy was combined with pyloroplasty or gastroenterostomy to counter this problem. Research A number of endoscopic devices have been tested to treat chronic heartburn. Endocinch puts stitches in the lower esophageal sphincter (LES) to create small pleats to help strengthen the muscle. However, long-term results were disappointing, and the device is no longer sold by Bard. The Stretta procedure uses electrodes to apply radio-frequency energy to the LES. A 2015 systematic review and meta-analysis, conducted in response to the systematic review (without meta-analysis) by SAGES, did not support the claim that Stretta was an effective treatment for GERD. A 2012 systematic review found that it improves GERD symptoms. The NDO Surgical Plicator creates a plication, or fold, of tissue near the gastroesophageal junction, and fixates the plication with a suture-based implant. The company ceased operations in mid-2008, and the device is no longer on the market. Transoral incisionless fundoplication, which uses a device called Esophyx, may be effective. See also Acid perfusion test Esophageal motility disorder Esophageal motility study
Neoplasm
A neoplasm is a type of abnormal and excessive growth of tissue. The process that occurs to form or produce a neoplasm is called neoplasia. The growth of a neoplasm is uncoordinated with that of the normal surrounding tissue, and it persists in growing abnormally even if the original trigger is removed. This abnormal growth usually forms a mass; when it does, it may be called a tumor. ICD-10 classifies neoplasms into four main groups: benign neoplasms, in situ neoplasms, malignant neoplasms, and neoplasms of uncertain or unknown behavior. Malignant neoplasms are also simply known as cancers and are the focus of oncology. Prior to the abnormal growth of tissue, as in neoplasia, cells often undergo an abnormal pattern of growth, such as metaplasia or dysplasia. However, metaplasia or dysplasia does not always progress to neoplasia and can occur in other conditions as well. The word is from Ancient Greek νέος- neo- ("new") and πλάσμα plasma ("formation, creation"). Types A neoplasm can be benign, potentially malignant, or malignant (cancer). Benign tumors include uterine fibroids, osteophytes and melanocytic nevi (skin moles). They are circumscribed and localized and do not transform into cancer. Potentially malignant neoplasms include carcinoma in situ. They are localized and do not invade and destroy, but in time may transform into a cancer. Malignant neoplasms are commonly called cancer. They invade and destroy the surrounding tissue, may form metastases and, if untreated or unresponsive to treatment, will generally prove fatal. Secondary neoplasm refers to any of a class of cancerous tumor that is either a metastatic offshoot of a primary tumor, or an apparently unrelated tumor that increases in frequency following certain cancer treatments such as chemotherapy or radiotherapy. Rarely, there can be a metastatic neoplasm with no known site of the primary cancer, and this is classed as a cancer of unknown primary origin. Clonality Neoplastic tumors are often heterogeneous and contain more than one type of cell, but their initiation and continued growth are usually dependent on a single population of neoplastic cells. These cells are presumed to be monoclonal – that is, derived from the same cell and all carrying the same genetic or epigenetic anomaly – which is evidence of clonality. For lymphoid neoplasms, e.g. lymphoma and leukemia, clonality is proven by the amplification of a single rearrangement of their immunoglobulin gene (for B cell lesions) or T cell receptor gene (for T cell lesions). The demonstration of clonality is now considered to be necessary to identify a lymphoid cell proliferation as neoplastic. It is tempting to define neoplasms as clonal cellular proliferations, but the demonstration of clonality is not always possible. Therefore, clonality is not required in the definition of neoplasia. Neoplasm vs. tumor The word tumor or tumour comes from the Latin word for swelling, which is one of the cardinal signs of inflammation. The word originally referred to any form of swelling, neoplastic or not. In modern English, tumor is used as a synonym for neoplasm (a solid or fluid-filled cystic lesion that may or may not be formed by an abnormal growth of neoplastic cells) that appears enlarged in size. Some neoplasms do not form a tumor; these include leukemia and most forms of carcinoma in situ. Tumor is also not synonymous with cancer. While cancer is by definition malignant, a tumor can be benign, precancerous, or malignant.
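As a concrete illustration of the four ICD-10 groups listed in the introduction, the short sketch below maps a code to its group using the standard chapter II code blocks (C00–C97 malignant, D00–D09 in situ, D10–D36 benign, D37–D48 uncertain or unknown behavior). It is an illustrative outline only, and the function name is hypothetical rather than part of any standard library.

def classify_icd10_neoplasm(code: str) -> str:
    """Map an ICD-10 neoplasm code (e.g. 'C34', 'D05') to its behavior group."""
    letter, number = code[0].upper(), int(code[1:3])
    if letter == "C":          # C00-C97: malignant neoplasms
        return "malignant"
    if letter == "D":
        if number <= 9:        # D00-D09: in situ neoplasms
            return "in situ"
        if number <= 36:       # D10-D36: benign neoplasms
            return "benign"
        if number <= 48:       # D37-D48: uncertain or unknown behavior
            return "uncertain or unknown behavior"
    raise ValueError(f"{code} is not an ICD-10 neoplasm code")

For example, classify_icd10_neoplasm("D05") returns "in situ", matching the placement of carcinoma in situ in the potentially malignant group above.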
The terms mass and nodule are often used synonymously with tumor. Generally speaking, however, the term tumor is used generically, without reference to the physical size of the lesion. More specifically, the term mass is often used when the lesion measures at least 20 millimeters (mm) in its greatest dimension, while the term nodule is usually used when the lesion measures less than 20 mm in its greatest dimension (25.4 mm = 1 inch). Causes Tumors in humans occur as a result of accumulated genetic and epigenetic alterations within single cells, which cause the cell to divide and expand uncontrollably. A neoplasm can be caused by an abnormal proliferation of tissues, which can be caused by genetic mutations. Not all types of neoplasm cause a tumorous overgrowth of tissue, however (examples include leukemia and carcinoma in situ), and similarities between neoplastic growths and regenerative processes, e.g. dedifferentiation and rapid cell proliferation, have been pointed out. Tumor growth has been studied using mathematics and continuum mechanics. Vascular tumors such as hemangiomas and lymphangiomas (formed from blood or lymph vessels) are thus modeled as amalgams of a solid skeleton formed by sticky cells and an organic liquid filling the spaces in which cells can grow. Under this type of model, mechanical stresses and strains can be dealt with and their influence on the growth of the tumor and the surrounding tissue and vasculature elucidated. Recent findings from experiments that use this model show that active growth of the tumor is restricted to its outer edges, and that stiffening of the underlying normal tissue inhibits tumor growth as well. Benign conditions that are not associated with an abnormal proliferation of tissue (such as sebaceous cysts) can also present as tumors, but they have no malignant potential. Breast cysts (as occur commonly during pregnancy and at other times) are another example, as are other encapsulated glandular swellings (thyroid, adrenal gland, pancreas). Encapsulated hematomas, encapsulated necrotic tissue (from an insect bite, foreign body, or other noxious mechanism), keloids (discrete overgrowths of scar tissue) and granulomas may also present as tumors. Discrete localized enlargements of normal structures (ureters, blood vessels, intrahepatic or extrahepatic biliary ducts, pulmonary inclusions, or gastrointestinal duplications) due to outflow obstructions or narrowings, or to abnormal connections, may also present as tumors. Examples are arteriovenous fistulae or aneurysms (with or without thrombosis), biliary fistulae or aneurysms, sclerosing cholangitis, cysticercosis or hydatid cysts, intestinal duplications, and pulmonary inclusions as seen with cystic fibrosis. It can be dangerous to biopsy certain types of tumor in which leakage of their contents would potentially be catastrophic. When such types of tumor are encountered, diagnostic modalities such as ultrasound, CT scans, MRI, angiograms, and nuclear medicine scans are employed prior to (or during) biopsy or surgical exploration/excision in an attempt to avoid such severe complications. Malignant neoplasms DNA damage DNA damage is considered to be the primary underlying cause of malignant neoplasms known as cancers. Its central role in progression to cancer is illustrated in the figure in this section, in the box near the top. (The central features of DNA damage, epigenetic alterations and deficient DNA repair in progression to cancer are shown in red.) DNA damage is very common.
Naturally occurring DNA damage (mostly due to cellular metabolism and the properties of DNA in water at body temperature) arises at a rate of more than 60,000 new instances, on average, per human cell, per day (see also the article DNA damage (naturally occurring)). Additional DNA damage can arise from exposure to exogenous agents. Tobacco smoke causes increased exogenous DNA damage, and this damage is the likely cause of lung cancer due to smoking. UV light from solar radiation causes DNA damage that is important in melanoma. Helicobacter pylori infection produces high levels of reactive oxygen species that damage DNA and contribute to gastric cancer. Bile acids, at high levels in the colons of humans eating a high-fat diet, also cause DNA damage and contribute to colon cancer. Katsurano et al. indicated that macrophages and neutrophils in an inflamed colonic epithelium are the source of the reactive oxygen species causing the DNA damage that initiates colonic tumorigenesis. Some sources of DNA damage are indicated in the boxes at the top of the figure in this section. Individuals with a germ line mutation causing deficiency in any of 34 DNA repair genes (see the article DNA repair-deficiency disorder) are at increased risk of cancer. Some germ line mutations in DNA repair genes cause up to a 100% lifetime chance of cancer (e.g., p53 mutations). These germ line mutations are indicated in a box at the left of the figure, with an arrow indicating their contribution to DNA repair deficiency. About 70% of malignant neoplasms have no hereditary component and are called "sporadic cancers". Only a minority of sporadic cancers have a deficiency in DNA repair due to mutation in a DNA repair gene. However, a majority of sporadic cancers have deficiency in DNA repair due to epigenetic alterations that reduce or silence DNA repair gene expression. For example, of 113 sequential colorectal cancers, only four had a missense mutation in the DNA repair gene MGMT, while the majority had reduced MGMT expression due to methylation of the MGMT promoter region (an epigenetic alteration). Five reports present evidence that between 40% and 90% of colorectal cancers have reduced MGMT expression due to methylation of the MGMT promoter region. Similarly, out of 119 cases of mismatch repair-deficient colorectal cancers that lacked DNA repair gene PMS2 expression, PMS2 was deficient in 6 due to mutations in the PMS2 gene, while in 103 cases PMS2 expression was deficient because its pairing partner MLH1 was repressed due to promoter methylation (PMS2 protein is unstable in the absence of MLH1). In the other 10 cases, loss of PMS2 expression was likely due to epigenetic overexpression of the microRNA miR-155, which down-regulates MLH1. In further examples, epigenetic defects were found at frequencies between 13% and 100% for the DNA repair genes BRCA1, WRN, FANCB, FANCF, MGMT, MLH1, MSH2, MSH4, ERCC1, XPF, NEIL1 and ATM. These epigenetic defects occurred in various cancers (e.g. breast, ovarian, colorectal and head and neck). Two or three deficiencies in the expression of ERCC1, XPF or PMS2 occurred simultaneously in the majority of the 49 colon cancers evaluated by Facista et al. Epigenetic alterations causing reduced expression of DNA repair genes are shown in a central box at the third level from the top of the figure in this section, and the consequent DNA repair deficiency is shown at the fourth level.
When the expression of DNA repair genes is reduced, DNA damage accumulates in cells at a higher than normal level, and this excess damage causes increased frequencies of mutation or epimutation. Mutation rates increase strongly in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR). During repair of DNA double-strand breaks, or repair of other DNA damage, incompletely cleared sites of repair can cause epigenetic gene silencing. DNA repair deficiencies (level 4 in the figure) cause increased DNA damage (level 5 in the figure), which results in increased somatic mutations and epigenetic alterations (level 6 in the figure). Field defects, normal-appearing tissue with multiple alterations (discussed in the section below), are common precursors to the development of the disordered and improperly proliferating clone of tissue in a malignant neoplasm. Such field defects (second level from the bottom of the figure) may have multiple mutations and epigenetic alterations. Once a cancer is formed, it usually has genome instability. This instability is likely due to reduced DNA repair or excessive DNA damage. Because of such instability, the cancer continues to evolve and to produce subclones. For example, a renal cancer sampled in 9 areas had 40 ubiquitous mutations (present in all areas of the cancer), 59 mutations shared by some but not all areas, and 29 “private” mutations present in only one of the areas, demonstrating tumor heterogeneity. Field defects Various other terms have been used to describe this phenomenon, including "field effect", "field cancerization", and "field carcinogenesis". The term "field cancerization" was first used in 1953 to describe an area or "field" of epithelium that has been preconditioned by (at that time) largely unknown processes so as to predispose it towards the development of cancer. Since then, the terms "field cancerization" and "field defect" have been used to describe pre-malignant tissue in which new cancers are likely to arise. Field defects are important in progression to cancer. However, in most cancer research, as pointed out by Rubin, "the vast majority of studies in cancer research has been done on well-defined tumors in vivo, or on discrete neoplastic foci in vitro. Yet there is evidence that more than 80% of the somatic mutations found in mutator phenotype human colorectal tumors occur before the onset of terminal clonal expansion." Similarly, Vogelstein et al. point out that more than half of the somatic mutations identified in tumors occurred in a pre-neoplastic phase (in a field defect), during growth of apparently normal cells. Likewise, epigenetic alterations present in tumors may have occurred in pre-neoplastic field defects. An expanded view of the field effect has been termed the "etiologic field effect", which encompasses not only molecular and pathologic changes in pre-neoplastic cells but also the influences of exogenous environmental factors and molecular changes in the local microenvironment on neoplastic evolution from tumor initiation to patient death. In the colon, a field defect probably arises by natural selection of a mutant or epigenetically altered cell among the stem cells at the base of one of the intestinal crypts on the inside surface of the colon. A mutant or epigenetically altered stem cell may replace the other nearby stem cells by natural selection. Thus, a patch of abnormal tissue may arise.
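This succession, in which a fitter mutant lineage sweeps through a crypt and patches grow within patches, can be made concrete with a toy simulation. The sketch below is illustrative only and is not drawn from the article or its sources; the crypt size, mutation rate and fitness advantage are invented parameters chosen purely to make the dynamic visible.

import random

CRYPT_SIZE = 100        # stem cells per crypt (assumed)
MUTATION_RATE = 1e-3    # chance that a division adds a driver mutation (assumed)
FITNESS_GAIN = 0.2      # relative advantage per acquired mutation (assumed)

def simulate(generations: int = 10_000, seed: int = 1) -> list:
    """Moran-style model: each step one cell dies and is replaced by the
    offspring of a cell chosen with probability proportional to fitness."""
    random.seed(seed)
    crypt = [0] * CRYPT_SIZE   # each cell is tagged with its mutation count
    for _ in range(generations):
        weights = [(1 + FITNESS_GAIN) ** m for m in crypt]
        parent = random.choices(range(CRYPT_SIZE), weights=weights)[0]
        child = crypt[parent]
        if random.random() < MUTATION_RATE:
            child += 1          # this division produced a fitter mutant
        crypt[random.randrange(CRYPT_SIZE)] = child
    return crypt

crypt = simulate()
print(max(crypt), sum(crypt) / len(crypt))   # most-mutated lineage vs crypt average

Runs of such a model typically show one lineage accumulating mutations and displacing the others, a crude analogue of the patch-within-a-patch succession described above.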
The figure in this section includes a photo of a freshly resected and lengthwise-opened segment of the colon showing a colon cancer and four polyps. Below the photo, there is a schematic diagram of how a large patch of mutant or epigenetically altered cells may have formed, shown by the large area in yellow in the diagram. Within this first large patch in the diagram (a large clone of cells), a second such mutation or epigenetic alteration may occur, so that a given stem cell acquires an advantage compared with the other stem cells within the patch; this altered stem cell may then expand clonally, forming a secondary patch, or sub-clone, within the original patch. This is indicated in the diagram by four smaller patches of different colors within the large yellow original area. Within these new patches (sub-clones), the process may be repeated multiple times, indicated by the still smaller patches within the four secondary patches (with still different colors in the diagram), which clonally expand until stem cells arise that generate either small polyps or else a malignant neoplasm (cancer). In the photo, an apparent field defect in this segment of a colon has generated four polyps (labeled with their sizes, 6 mm, 5 mm, and two of 3 mm) and a cancer about 3 cm across in its longest dimension. These neoplasms are also indicated, in the diagram below the photo, by 4 small tan circles (polyps) and a larger red area (cancer). The cancer in the photo occurred in the cecal area of the colon, where the colon joins the small intestine (labeled) and where the appendix occurs (labeled). The fat in the photo is external to the outer wall of the colon. In the segment of colon shown here, the colon was cut open lengthwise to expose its inner surface and to display the cancer and polyps occurring within the inner epithelial lining. If the general process by which sporadic colon cancers arise is the formation of a pre-neoplastic clone that spreads by natural selection, followed by the formation of internal sub-clones within the initial clone, and sub-sub-clones inside those, then colon cancers generally should be associated with, and be preceded by, fields of increasing abnormality reflecting the succession of premalignant events. The most extensive region of abnormality (the outermost yellow irregular area in the diagram) would reflect the earliest event in the formation of a malignant neoplasm. In experimental evaluation of specific DNA repair deficiencies in cancers, many specific DNA repair deficiencies were also shown to occur in the field defects surrounding those cancers. The Table, below, gives examples for which the DNA repair deficiency in a cancer was shown to be caused by an epigenetic alteration, and the somewhat lower frequencies with which the same epigenetically caused DNA repair deficiency was found in the surrounding field defect. Some of the small polyps in the field defect shown in the photo of the opened colon segment may be relatively benign neoplasms. In one study, of polyps less than 10 mm in size that were found during colonoscopy and followed with repeat colonoscopies for 3 years, 25% were unchanged in size, 35% regressed or shrank, and 40% grew. Genome instability Cancers are known to exhibit genome instability, or a "mutator phenotype". The protein-coding DNA within the nucleus is about 1.5% of the total genomic DNA.
Within this protein-coding DNA (called the exome), an average cancer of the breast or colon can have about 60 to 70 protein-altering mutations, of which about 3 or 4 may be “driver” mutations and the remainder “passenger” mutations. However, the average number of DNA sequence mutations in the entire genome (including non-protein-coding regions) within a breast cancer tissue sample is about 20,000. In an average melanoma tissue sample (melanomas have a higher exome mutation frequency), the total number of DNA sequence mutations is about 80,000. This compares with the very low frequency of about 70 new mutations in the entire genome between generations (parent to child) in humans. The high frequencies of mutations in the total nucleotide sequences within cancers suggest that often an early alteration in the field defect giving rise to a cancer (e.g. the yellow area in the diagram in the preceding section) is a deficiency in DNA repair. The large field defects surrounding colon cancers (extending to about 10 cm on each side of a cancer) were shown by Facista et al. to frequently have epigenetic defects in 2 or 3 DNA repair proteins (ERCC1, XPF or PMS2) throughout the entire area of the field defect. Deficiencies in DNA repair cause increased mutation rates. A deficiency in DNA repair itself can allow DNA damage to accumulate, and error-prone translesion synthesis past some of that damage may give rise to mutations. In addition, faulty repair of this accumulated DNA damage may give rise to epimutations. These new mutations or epimutations may provide a proliferative advantage, generating a field defect. Although the mutations/epimutations in DNA repair genes do not themselves confer a selective advantage, they may be carried along as passengers in cells when the cells acquire additional mutations/epimutations that do provide a proliferative advantage. Etymology The term neoplasm is a synonym of tumor. Neoplasia denotes the process of the formation of neoplasms/tumors, and the process is referred to as a neoplastic process. The word neoplastic itself comes from Greek neo ("new") and plastic ("formed, molded"). The term tumor derives from the Latin noun tumor ("a swelling"), ultimately from the verb tumēre ("to swell"). In the British Commonwealth, the spelling tumour is commonly used, whereas in the U.S. the word is usually spelled tumor. In its medical sense, tumor has traditionally meant an abnormal swelling of the flesh. The Roman medical encyclopedist Celsus (c. 30 BC–38 AD) described the four cardinal signs of acute inflammation as tumor, dolor, calor, and rubor (swelling, pain, increased heat, and redness). (His treatise, De Medicina, printed in 1478, was the first medical book to be printed following the invention of the movable-type printing press.) In contemporary English, the word tumor is often used as a synonym for a cystic (liquid-filled) growth or solid neoplasm (cancerous or non-cancerous), with other forms of swelling often referred to as "swellings". Related terms occur commonly in the medical literature, where the nouns tumefaction and tumescence (derived from the adjective tumescent) are current medical terms for non-neoplastic swelling. This type of swelling most often results from inflammation caused by trauma, infection, and other factors. Tumors may be caused by conditions other than an overgrowth of neoplastic cells, however. Cysts (such as sebaceous cysts) are also referred to as tumors, even though they have no neoplastic cells.
This is standard in medical-billing terminology (especially when billing for a growth whose pathology has yet to be determined). See also Somatic evolution in cancer List of biological development disorders Epidemiology of cancer Pleomorphism References External links
Autosensitization dermatitis
Autosensitization dermatitis presents with the development of widespread dermatitis, or dermatitis distant from a local inflammatory focus, a process referred to as autoeczematization. See also Id reaction List of cutaneous conditions References External links
Human skin
The human skin is the outer covering of the body and is the largest organ of the integumentary system. The skin has up to seven layers of ectodermal tissue guarding muscles, bones, ligaments and internal organs. Human skin is similar to that of most other mammals, and it is very similar to pig skin. Though nearly all human skin is covered with hair follicles, it can appear hairless. There are two general types of skin: hairy and glabrous (hairless) skin. The adjective cutaneous literally means "of the skin" (from Latin cutis, skin). Because it interfaces with the environment, skin plays an important immune role in protecting the body against pathogens and excessive water loss. Its other functions are insulation, temperature regulation, sensation, synthesis of vitamin D, and the protection of vitamin B folates. Severely damaged skin will try to heal by forming scar tissue, which is often discoloured and depigmented. In humans, skin pigmentation (affected by melanin) varies among populations, and skin type can range from dry to non-dry and from oily to non-oily. Such skin variety provides a rich and diverse habitat for bacteria, which number roughly 1000 species from 19 phyla on the human skin. Structure Human skin shares anatomical, physiological, biochemical and immunological properties with other mammalian lines, especially pig skin. Pig skin shares similar epidermal and dermal thickness ratios with human skin; pig and human skin share similar hair follicle and blood vessel patterns; biochemically, the dermal collagen and elastic content is similar in pig and human skin; and pig skin and human skin have similar physical responses to various growth factors. Skin has mesodermal cells and pigmentation, such as melanin provided by melanocytes, which absorb some of the potentially dangerous ultraviolet radiation (UV) in sunlight. It also contains DNA repair enzymes that help reverse UV damage, such that people lacking the genes for these enzymes have high rates of skin cancer. One form predominantly produced by UV light, malignant melanoma, is particularly invasive, spreads quickly, and can often be deadly. Human skin pigmentation varies among populations in a striking manner, which has led to the classification of people(s) on the basis of skin colour. In terms of surface area, the skin is the second largest organ in the human body (the inside of the small intestine is 15 to 20 times larger). For the average adult human, the skin has a surface area of 1.5–2.0 square metres (16–22 sq ft). The thickness of the skin varies considerably over all parts of the body, and between men and women and the young and the old. An example is the skin on the forearm, which is on average 1.3 mm thick in the male and 1.26 mm thick in the female. One average square inch (6.5 cm²) of skin holds 650 sweat glands, 20 blood vessels, 60,000 melanocytes, and more than 1,000 nerve endings. The average human skin cell is about 30 micrometres (μm) in diameter, but there are variants; a skin cell usually ranges from 25 to 40 μm, depending on a variety of factors. Skin is composed of three primary layers: the epidermis, the dermis and the hypodermis. Epidermis The epidermis ("epi" coming from the Greek meaning "over" or "upon") is the outermost layer of the skin. It forms the waterproof, protective wrap over the body's surface which also serves as a barrier to infection, and is made up of stratified squamous epithelium with an underlying basal lamina.
The epidermis contains no blood vessels, and cells in the deepest layers are nourished almost exclusively by diffused oxygen from the surrounding air, and to a far lesser degree by blood capillaries extending to the outer layers of the dermis. The main cells that make up the epidermis are keratinocytes, with melanocytes, Langerhans cells and Merkel cells also present. The epidermis can be subdivided into the following strata (beginning with the outermost layer): corneum, lucidum (only in the palms of the hands and bottoms of the feet), granulosum, spinosum, and basale. Cells are formed through mitosis at the basale layer. The daughter cells (see cell division) move up the strata, changing shape and composition as they die due to isolation from their blood source. The cytoplasm is released and the protein keratin is inserted. The cells eventually reach the corneum and slough off (desquamation). This process is called "keratinization". This keratinized layer of skin is responsible for keeping water in the body and keeping other harmful chemicals and pathogens out, making skin a natural barrier to infection. The epidermis also helps the skin regulate body temperature. Layers The skin has up to seven layers of ectodermal tissue and guards the underlying muscles, bones, ligaments and internal organs. The epidermis is divided into several layers, in which cells are formed through mitosis at the innermost layers. They move up the strata, changing shape and composition as they differentiate and become filled with keratin. After reaching the top layer, the stratum corneum, they are eventually sloughed off, or desquamated. This process is called keratinization and takes place within weeks. It was previously believed that the stratum corneum was "a simple, biologically inactive, outer epidermal layer comprising a fibrillar lattice of dead keratin". It is now understood that this is not true, and that the stratum corneum should be considered to be a live tissue. While it is true that the stratum corneum is mainly composed of terminally differentiated keratinocytes called corneocytes that are anucleated, these cells remain alive and metabolically functional until desquamated. Sublayers The epidermis is divided into the following five sublayers, or strata: the stratum corneum, stratum lucidum, stratum granulosum, stratum spinosum, and stratum basale (also called the "stratum germinativum"). Blood capillaries are found beneath the epidermis and are linked to an arteriole and a venule. Arterial shunt vessels may bypass the network in the ears, the nose and the fingertips. Genes and proteins expressed in the epidermis About 70% of all human protein-coding genes are expressed in the skin. Almost 500 genes have an elevated pattern of expression in the skin. There are fewer than 100 genes that are specific for the skin, and these are expressed in the epidermis. An analysis of the corresponding proteins shows that these are mainly expressed in keratinocytes and have functions related to squamous differentiation and cornification. Dermis The dermis is the layer of skin beneath the epidermis that consists of connective tissue and cushions the body from stress and strain. The dermis is tightly connected to the epidermis by a basement membrane. It also harbours many nerve endings that provide the sense of touch and heat.
It contains the hair follicles, sweat glands, sebaceous glands, apocrine glands, lymphatic vessels and blood vessels. The blood vessels in the dermis provide nourishment and waste removal for its own cells as well as for the stratum basale of the epidermis. The dermis is structurally divided into two areas: a superficial area adjacent to the epidermis, called the papillary region, and a deep, thicker area known as the reticular region. Papillary region The papillary region is composed of loose areolar connective tissue. It is named for its finger-like projections, called papillae, which extend toward the epidermis. The papillae provide the dermis with a "bumpy" surface that interdigitates with the epidermis, strengthening the connection between the two layers of skin. In the palms, fingers, soles, and toes, the influence of the papillae projecting into the epidermis forms contours in the skin's surface. These epidermal ridges occur in patterns (see: fingerprint) that are genetically and epigenetically determined and are therefore unique to the individual, making it possible to use fingerprints or footprints as a means of identification. Reticular region The reticular region lies deep to the papillary region and is usually much thicker. It is composed of dense irregular connective tissue, and receives its name from the dense concentration of collagenous, elastic, and reticular fibres that weave throughout it. These protein fibres give the dermis its properties of strength, extensibility, and elasticity. Also located within the reticular region are the roots of the hairs, sebaceous glands, sweat glands, receptors, nails, and blood vessels. Tattoo ink is held in the dermis. Stretch marks, often from pregnancy and obesity, are also located in the dermis. Subcutaneous tissue The subcutaneous tissue (also hypodermis and subcutis) is not part of the skin, but lies below the dermis of the cutis. Its purpose is to attach the skin to underlying bone and muscle as well as to supply it with blood vessels and nerves. It consists of loose connective tissue, adipose tissue and elastin. The main cell types are fibroblasts, macrophages and adipocytes (subcutaneous tissue contains 50% of body fat). Fat serves as padding and insulation for the body. Skin colour Human skin shows a wide variety of colours, from the darkest brown to the lightest pinkish-white hues. Human skin shows higher variation in colour than that of any other single mammalian species; this variation is the result of natural selection. Skin pigmentation in humans evolved primarily to regulate the amount of ultraviolet radiation (UVR) penetrating the skin, controlling its biochemical effects. The actual skin colour of different humans is affected by many substances, although the single most important substance determining it is the pigment melanin. Melanin is produced within the skin in cells called melanocytes, and it is the main determinant of the skin colour of darker-skinned humans. The skin colour of people with light skin is determined mainly by the bluish-white connective tissue under the dermis and by the haemoglobin circulating in the veins of the dermis. The red colour underlying the skin becomes more visible, especially in the face, when arterioles dilate as a consequence of physical exercise or the stimulation of the nervous system (anger, fear). There are at least five different pigments that determine the colour of the skin. These pigments are present at different levels and places.
Melanin: brown in colour, and present in the basal layer of the epidermis. Melanoid: resembles melanin, but is present diffusely throughout the epidermis. Carotene: yellow to orange in colour, and present in the stratum corneum and in the fat cells of the dermis and superficial fascia. Hemoglobin (also spelled haemoglobin): found in blood; not a pigment of the skin, but it develops a purple colour. Oxyhemoglobin: also found in blood; not a pigment of the skin, but it develops a red colour. There is a correlation between the geographic distribution of UV radiation (UVR) and the distribution of indigenous skin pigmentation around the world. Areas that receive higher amounts of UVR, generally located closer to the equator, tend to have darker-skinned populations. Areas that are far from the tropics and closer to the poles have a lower intensity of UVR, which is reflected in lighter-skinned populations. In the same population, it has been observed that adult human females are considerably lighter in skin pigmentation than males. Females need more calcium during pregnancy and lactation, and vitamin D, which is synthesized from sunlight, helps in absorbing calcium. For this reason it is thought that females may have evolved to have lighter skin, in order to help their bodies absorb more calcium. The Fitzpatrick scale is a numerical classification schema for human skin colour, developed in 1975 as a way to classify the typical response of different types of skin to ultraviolet (UV) light; it ranges from type I (always burns, never tans) to type VI (never burns). Ageing As skin ages, it becomes thinner and more easily damaged. Intensifying this effect is the decreasing ability of skin to heal itself as a person ages. Among other things, skin ageing is noted by a decrease in volume and elasticity. There are many internal and external causes of skin ageing. For example, ageing skin receives less blood flow and lower glandular activity. A validated comprehensive grading scale has categorized the clinical findings of skin ageing as laxity (sagging), rhytids (wrinkles), and the various facets of photoageing, including erythema (redness), telangiectasia, dyspigmentation (brown discolouration), solar elastosis (yellowing), keratoses (abnormal growths), and poor texture. Cortisol causes degradation of collagen, accelerating skin ageing. Anti-ageing supplements are used to treat skin ageing. Photoageing Photoageing has two main concerns: an increased risk for skin cancer and the appearance of damaged skin. In younger skin, sun damage will heal faster, since the cells in the epidermis have a faster turnover rate, while in the older population the skin becomes thinner and the epidermal turnover rate for cell repair is lower, which may result in the dermis layer being damaged. Types Though most human skin is covered with hair follicles, some parts can be hairless. There are two general types of skin: hairy and glabrous (hairless) skin. The adjective cutaneous means "of the skin" (from Latin cutis, skin). Functions Skin performs the following functions: Protection: an anatomical barrier from pathogens and damage between the internal and external environment in bodily defence; Langerhans cells in the skin are part of the adaptive immune system, and perspiration contains lysozyme, which breaks the bonds within the cell walls of bacteria. Sensation: contains a variety of nerve endings that react to heat and cold, touch, pressure, vibration, and tissue injury; see somatosensory system and haptics.
Heat regulation: the skin contains a blood supply far greater than its own requirements, which allows precise control of energy loss by radiation, convection and conduction. Dilated blood vessels increase perfusion and heat loss, while constricted vessels greatly reduce cutaneous blood flow and conserve heat. Control of evaporation: the skin provides a relatively dry and semi-impermeable barrier to fluid loss. Loss of this function contributes to the massive fluid loss in burns. Aesthetics and communication: others see our skin and can assess our mood, physical state and attractiveness. Storage and synthesis: acts as a storage centre for lipids and water, as well as a means of synthesis of vitamin D by the action of UV on certain parts of the skin. Excretion: sweat contains urea; however, its concentration is 1/130th that of urine, so excretion by sweating is at most a secondary function to temperature regulation. Absorption: the cells comprising the outermost 0.25–0.40 mm of the skin are "almost exclusively supplied by external oxygen", although the "contribution to total respiration is negligible". In addition, medicine can be administered through the skin, by ointments or by means of an adhesive patch, such as the nicotine patch, or by iontophoresis. The skin is an important site of transport in many other organisms. Water resistance: the skin acts as a water-resistant barrier so essential nutrients are not washed out of the body. Skin flora The human skin is a rich environment for microbes. Around 1000 species of bacteria from 19 bacterial phyla have been found. Most come from only four phyla: Actinomycetota (51.8%), Bacillota (24.4%), Pseudomonadota (16.5%), and Bacteroidota (6.3%). Propionibacteria and Staphylococci species were the main species in sebaceous areas. There are three main ecological areas: moist, dry and sebaceous. In moist places on the body, Corynebacteria together with Staphylococci dominate. In dry areas, there is a mixture of species, dominated by Betaproteobacteria and Flavobacteriales. Ecologically, sebaceous areas had greater species richness than moist and dry ones. The areas with the least similarity between people in species composition were the spaces between fingers, the spaces between toes, the axillae, and the umbilical cord stump. The most similar were beside the nostril, the nares (inside the nostril), and on the back. Reflecting upon the diversity of human skin, researchers studying the human skin microbiome have observed: "hairy, moist underarms lie a short distance from smooth dry forearms, but these two niches are likely as ecologically dissimilar as rainforests are to deserts." The NIH conducted the Human Microbiome Project to characterize the human microbiota, which includes that on the skin, and the role of this microbiome in health and disease. Microorganisms like Staphylococcus epidermidis colonize the skin surface. The density of skin flora depends on the region of the skin. The disinfected skin surface gets recolonized from bacteria residing in the deeper areas of the hair follicle, the gut and urogenital openings. Clinical significance Diseases of the skin include skin infections and skin neoplasms (including skin cancer). Dermatology is the branch of medicine that deals with conditions of the skin. The skin is also valuable for the diagnosis of other conditions, since many medical signs show through the skin. Skin color affects the visibility of these signs, a source of misdiagnosis among unaware medical personnel.
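The per-area figures quoted in the Structure section can be sanity-checked with a short back-of-the-envelope calculation. The sketch below is illustrative only; the 1.8 m² body surface area is an assumed midpoint of the 1.5–2.0 m² range quoted earlier, and the resulting total is an estimate rather than a figure from the article's sources.

GLANDS_PER_SQ_INCH = 650     # sweat glands per average square inch (from above)
SQ_CM_PER_SQ_INCH = 6.5      # one square inch is about 6.5 cm^2
BODY_AREA_M2 = 1.8           # assumed midpoint of the 1.5-2.0 m^2 range

glands_per_cm2 = GLANDS_PER_SQ_INCH / SQ_CM_PER_SQ_INCH   # = 100 per cm^2
total_glands = glands_per_cm2 * BODY_AREA_M2 * 10_000     # 1 m^2 = 10,000 cm^2
print(f"{total_glands / 1e6:.1f} million sweat glands")   # -> about 1.8 million

The result, on the order of two million sweat glands for a whole body, shows that the quoted per-square-inch density and total surface area are mutually consistent.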
Society and culture Hygiene and skin care The skin supports its own ecosystems of microorganisms, including yeasts and bacteria, which cannot be removed by any amount of cleaning. Estimates place the number of individual bacteria on the surface of 6.5 square centimetres (1 sq in) of human skin at 50 million, though this figure varies greatly over the average 1.9 square metres (20 sq ft) of human skin. Oily surfaces, such as the face, may contain over 78 million bacteria per square centimetre (500 million per square inch). Despite these vast quantities, all of the bacteria found on the skin's surface would fit into a volume the size of a pea. In general, the microorganisms keep one another in check and are part of a healthy skin. When the balance is disturbed, there may be an overgrowth and infection, such as when antibiotics kill microbes, resulting in an overgrowth of yeast. The skin is continuous with the inner epithelial lining of the body at the orifices, each of which supports its own complement of microbes. Cosmetics should be used carefully on the skin because these may cause allergic reactions. Each season requires suitable clothing in order to facilitate the evaporation of sweat. Sunlight, water and air play an important role in keeping the skin healthy. Oily skin Oily skin is caused by overactive sebaceous glands that produce a substance called sebum, a naturally healthy skin lubricant. Consumption of a high-glycemic-index diet and of dairy products (except for cheese) increases IGF-1 generation, which in turn increases sebum production. Overwashing the skin does not cause sebum overproduction, but may cause dryness. When the skin produces excessive sebum, it becomes heavy and thick in texture, known as oily skin. Oily skin is typified by shininess, blemishes and pimples. The oily-skin type is not necessarily bad, since such skin is less prone to wrinkling or other signs of ageing, because the oil helps to keep needed moisture locked into the epidermis (the outermost layer of skin). The negative aspect of the oily-skin type is that oily complexions are especially susceptible to clogged pores, blackheads, and buildup of dead skin cells on the surface of the skin. Oily skin can be sallow and rough in texture and tends to have large, clearly visible pores everywhere except around the eyes and neck. Permeability Human skin has a low permeability; that is, most foreign substances are unable to penetrate and diffuse through the skin. The skin's outermost layer, the stratum corneum, is an effective barrier to most inorganic nanosized particles. This protects the body from external particles such as toxins by not allowing them to come into contact with internal tissues. However, in some cases it is desirable to allow particles entry to the body through the skin. Potential medical applications of such particle transfer have prompted developments in nanomedicine and biology to increase skin permeability. One application of transcutaneous particle delivery could be to locate and treat cancer. Nanomedical researchers seek to target the epidermis and other layers of active cell division, where nanoparticles can interact directly with cells that have lost their growth-control mechanisms (cancer cells). Such direct interaction could be used to more accurately diagnose properties of specific tumours, or to treat them by delivering drugs with cellular specificity. Nanoparticles Nanoparticles 40 nm in diameter and smaller have been successful in penetrating the skin.
Research confirms that nanoparticles larger than 40 nm do not penetrate the skin past the stratum corneum. Most particles that do penetrate will diffuse through skin cells, but some will travel down hair follicles and reach the dermis layer. The permeability of skin relative to different shapes of nanoparticles has also been studied. Research has shown that spherical particles have a better ability to penetrate the skin compared with oblong (ellipsoidal) particles, because spheres are symmetric in all three spatial dimensions. One study compared the two shapes and recorded data showing spherical particles located deep in the epidermis and dermis, whereas ellipsoidal particles were mainly found in the stratum corneum and epidermal layers. Nanorods are used in experiments because of their unique fluorescent properties, but have shown mediocre penetration. Nanoparticles of different materials have revealed the skin's permeability limitations. In many experiments, gold nanoparticles 40 nm in diameter or smaller are used, and these have been shown to penetrate to the epidermis. Titanium oxide (TiO2), zinc oxide (ZnO), and silver nanoparticles are ineffective in penetrating the skin past the stratum corneum. Cadmium selenide (CdSe) quantum dots have proven to penetrate very effectively when they have certain properties. Because CdSe is toxic to living organisms, the particle must be covered in a surface group. An experiment comparing the permeability of quantum dots coated in polyethylene glycol (PEG), PEG-amine, and carboxylic acid concluded that the PEG and PEG-amine surface groups allowed the greatest penetration of particles. The carboxylic acid-coated particles did not penetrate past the stratum corneum. Increasing permeability Scientists previously believed that the skin was an effective barrier to inorganic particles, and that damage from mechanical stressors was the only way to increase its permeability. Recently, simpler and more effective methods for increasing skin permeability have been developed. Ultraviolet radiation (UVR) slightly damages the surface of skin and causes a time-dependent defect allowing easier penetration of nanoparticles. The UVR's high energy causes a restructuring of cells, weakening the boundary between the stratum corneum and the epidermal layer. The damage to the skin is typically measured by the transepidermal water loss (TEWL), though it may take 3–5 days for the TEWL to reach its peak value. When the TEWL reaches its highest value, the maximum density of nanoparticles is able to permeate the skin. While increased permeability after UVR exposure can lead to an increase in the number of particles that permeate the skin, the specific permeability of skin after UVR exposure relative to particles of different sizes and materials has not been determined. There are other methods to increase nanoparticle penetration by skin damage: tape stripping is the process in which tape is applied to skin then lifted to remove the top layer of skin; skin abrasion is done by shaving the top 5–10 μm off the surface of the skin; chemical enhancement applies chemicals such as polyvinylpyrrolidone (PVP), dimethyl sulfoxide (DMSO), and oleic acid to the surface of the skin to increase permeability; and electroporation increases skin permeability by the application of short pulses of electric fields. The pulses are high voltage and on the order of milliseconds when applied.
Charged molecules penetrate the skin more frequently than neutral molecules after the skin has been exposed to electric field pulses. Results have shown that molecules on the order of 100 μm can easily permeate electroporated skin. Applications A large area of interest in nanomedicine is the transdermal patch, because of the possibility of a painless application of therapeutic agents with very few side effects. Transdermal patches have been limited to administering a small number of drugs, such as nicotine, because of the limitations in the permeability of the skin. The development of techniques that increase skin permeability has led to more drugs that can be applied via transdermal patches, and to more options for patients. Increasing the permeability of skin allows nanoparticles to penetrate and target cancer cells. Nanoparticles, along with multi-modal imaging techniques, have been used as a way to diagnose cancer non-invasively. Skin with high permeability allowed quantum dots, with an antibody attached to the surface for active targeting, to successfully penetrate and identify cancerous tumours in mice. Tumour targeting is beneficial because the particles can be excited using fluorescence microscopy and emit light energy and heat that will destroy cancer cells. Sunblock and sunscreen Sunblock and sunscreen are different important skin-care products, though both offer protection from the sun. Sunblock is opaque and stronger than sunscreen, since it is able to block most UVA/UVB rays and radiation from the sun, and does not need to be reapplied several times in a day. Titanium dioxide and zinc oxide are two of the important ingredients in sunblock. Sunscreen is more transparent once applied to the skin, and also has the ability to protect against UVA/UVB rays, although the sunscreen's ingredients break down at a faster rate once exposed to sunlight, and some of the radiation is able to penetrate to the skin. In order for sunscreen to be more effective, it is necessary to reapply it consistently and to use one with a higher sun protection factor. Diet Vitamin A, also known as retinoids, benefits the skin by normalizing keratinization, downregulating sebum production (which contributes to acne), and reversing and treating photodamage, striae, and cellulite. Vitamin D and analogues are used to downregulate the cutaneous immune system and epithelial proliferation while promoting differentiation. Vitamin C is an antioxidant that regulates collagen synthesis, forms barrier lipids, regenerates vitamin E, and provides photoprotection. Vitamin E is a membrane antioxidant that protects against oxidative damage and also provides protection against harmful UV rays. Several scientific studies have confirmed that changes in baseline nutritional status affect skin condition. The Mayo Clinic lists foods it states help the skin: fruits and vegetables, whole grains, dark leafy greens, nuts, and seeds. See also Acid mantle Anthropodermic bibliopegy Artificial skin Callus, thick area of skin List of cutaneous conditions Cutaneous structure development Fingerprint, skin on fingertips Hyperpigmentation, about excess skin colour Intertriginous Meissner's corpuscle Pacinian corpuscle Polyphenol antioxidant Skin cancer Skin lesion Skin repair References External links Media related to Human skin at Wikimedia Commons MedlinePlus: Skin Conditions (National Library of Medicine)
Salicylate poisoning
Salicylate poisoning, also known as aspirin poisoning, is the acute or chronic poisoning with a salicylate such as aspirin. The classic symptoms are ringing in the ears, nausea, abdominal pain, and a fast breathing rate. Early on, these may be subtle, while larger doses may result in fever. Complications can include swelling of the brain or lungs, seizures, low blood sugar, or cardiac arrest. While usually due to aspirin, other possible causes include oil of wintergreen and bismuth subsalicylate. Excess doses can be taken either on purpose or accidentally. Small amounts of oil of wintergreen can be toxic. Diagnosis is generally based on repeated blood tests measuring aspirin levels and blood gases. While a type of graph has been created to try to assist with diagnosis, its general use is not recommended. In overdose, maximum blood levels may not occur for more than 12 hours. Efforts to prevent poisoning include child-resistant packaging and a lower number of pills per package. Treatment may include activated charcoal, intravenous sodium bicarbonate with dextrose and potassium chloride, and dialysis. Giving dextrose may be useful even if the blood sugar is normal. Dialysis is recommended in those with kidney failure, decreased level of consciousness, blood pH less than 7.2, or high blood salicylate levels. If a person requires intubation, a fast respiratory rate may need to be maintained. The toxic effects of salicylates have been described since at least 1877. In 2004, more than 20,000 cases with 43 deaths were reported in the United States. About 1% of those with an acute overdose die, while chronic overdoses may have severe outcomes. Older people are at higher risk of toxicity for any given dose. Signs and symptoms Salicylate toxicity has potentially serious consequences, sometimes leading to significant morbidity and death. Patients with mild intoxication frequently have nausea and vomiting, abdominal pain, lethargy, ringing in the ears, and dizziness. More significant signs and symptoms occur in more severe poisonings and include high body temperature, fast breathing rate, respiratory alkalosis, metabolic acidosis, low blood potassium, low blood glucose, hallucinations, confusion, seizure, cerebral edema, and coma. The most common cause of death following an aspirin overdose is cardiopulmonary arrest, usually due to pulmonary edema. High doses of salicylate can cause salicylate-induced tinnitus. Severity The severity of toxicity depends on the amount of aspirin taken. Pathophysiology High levels of salicylates stimulate peripheral chemoreceptors and the central respiratory centers in the medulla, causing increased ventilation and a respiratory alkalosis. The increased pH secondary to hyperventilation with respiratory alkalosis causes an increase in lipolysis and ketogenesis, which causes the production of lactate and organic keto-acids (such as beta-hydroxybutyrate). The accumulation of these organic acids can cause an acidosis with an increased anion gap, as well as a decreased buffering capacity of the body. Salicylate toxicity also causes an uncoupling of oxidative phosphorylation and a decrease in citric acid cycle activity in the mitochondria. This decrease in the aerobic production of adenosine triphosphate (ATP) is accompanied by an increase in the anaerobic production of ATP through glycolysis, which leads to glycogen depletion and hypoglycemia.
The inefficient ATP production through anaerobic metabolism causes the body to shift to a predominantly catabolic mode of energy production, which consists of increased oxygen consumption, increased heat production (often manifesting as sweating), liver glycogen utilization, and increased carbon dioxide production. This increased catabolism, accompanied by hyperventilation, can lead to severe insensible water losses, dehydration, and hypernatremia. Acute aspirin or salicylate overdose or poisoning can cause an initial respiratory alkalosis, though metabolic acidosis ensues thereafter. The acid-base, fluid, and electrolyte abnormalities observed in salicylate toxicity can be grouped into three broad phases: Phase I is characterized by hyperventilation resulting from direct respiratory center stimulation, leading to respiratory alkalosis and compensatory alkaluria. Potassium and sodium bicarbonate are excreted in the urine. This phase may last as long as 12 hours. Phase II is characterized by paradoxical aciduria in the presence of continued respiratory alkalosis, which occurs when sufficient potassium has been lost through the kidneys. This phase may begin within hours and may last 12–24 hours. Phase III is characterized by dehydration, hypokalemia, and progressive metabolic acidosis. This phase may begin 4–6 hours after ingestion in a young infant, or 24 hours or more after ingestion in an adolescent or adult. Diagnosis The acutely toxic dose of aspirin is generally considered to be greater than 150 mg per kg of body mass. Moderate toxicity occurs at doses up to 300 mg/kg, severe toxicity occurs between 300 and 500 mg/kg, and a potentially lethal dose is greater than 500 mg/kg (these bands are restated in the sketch below). Chronic toxicity may occur following doses of 100 mg/kg per day for two or more days. Monitoring of biochemical parameters such as electrolytes and solutes, liver and kidney function, urinalysis, and complete blood count is undertaken, along with frequent checking of salicylate and blood sugar levels. Arterial blood gas assessments typically find respiratory alkalosis early in the course of the overdose, due to hyperstimulation of the respiratory center, and this may be the only finding in a mild overdose. An anion-gap metabolic acidosis occurs later in the course of the overdose, especially if it is a moderate to severe overdose, due to the increase in protons (acidic contents) in the blood. The diagnosis of poisoning usually involves measurement of plasma salicylate, the active metabolite of aspirin, by automated spectrophotometric methods. Plasma salicylate levels generally range from 30–100 mg/L (3–10 mg/dL) after usual therapeutic doses, 50–300 mg/L in patients taking high doses, and 700–1400 mg/L following acute overdose. Patients may undergo repeated testing until their peak plasma salicylate level can be estimated. Optimally, plasma levels should be assessed four hours after ingestion and then every two hours after that, to allow calculation of the maximum level, which can then be used as a guide to the degree of toxicity expected. Patients may also be treated according to their individual symptoms. Prevention Efforts to prevent poisoning include child-resistant packaging and a lower number of pills per package. Treatment There is no antidote for salicylate poisoning. Initial treatment of an overdose involves resuscitation measures such as maintaining an adequate airway and adequate circulation, followed by gastric decontamination by administering activated charcoal, which adsorbs the salicylate in the gastrointestinal tract.
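The dose bands given under Diagnosis lend themselves to a small worked example. The following sketch simply restates those thresholds in code; it is illustrative only and not a clinical tool, and the function name is a hypothetical label rather than anything from the cited literature.

def salicylate_dose_severity(dose_mg: float, weight_kg: float) -> str:
    """Classify an acute ingestion against the mg/kg bands given under Diagnosis."""
    mg_per_kg = dose_mg / weight_kg
    if mg_per_kg < 150:
        return "below the acutely toxic range"
    if mg_per_kg <= 300:
        return "moderate toxicity"
    if mg_per_kg <= 500:
        return "severe toxicity"
    return "potentially lethal"

# Example: forty 325 mg tablets ingested by a 70 kg adult
print(salicylate_dose_severity(40 * 325, 70))   # 13,000/70 = about 186 mg/kg -> moderate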
Stomach pumping is no longer routinely used in the treatment of poisonings, but it is sometimes considered if the patient has ingested a potentially lethal amount less than one hour before presentation. Inducing vomiting with syrup of ipecac is not recommended. Repeated doses of activated charcoal have been proposed to be beneficial in cases of salicylate poisoning, especially after ingestion of enteric-coated and extended-release salicylic acid formulations, which are able to remain in the gastrointestinal (GI) tract for longer periods of time. Repeated doses of activated charcoal are also useful to re-adsorb salicylates in the GI tract that may have desorbed from the previous administration of activated charcoal. The initial dose of activated charcoal is most useful if given within 2 hours of the initial ingestion. Contraindications to the use of activated charcoal include altered mental status (due to the risk of aspiration), GI bleeding (often due to salicylates), and poor gastric motility. Whole bowel irrigation using the laxative polyethylene glycol can be useful to induce the gastrointestinal elimination of salicylates, particularly if there is a partial or diminished response to activated charcoal. Alkalinization of the urine and plasma, by giving a bolus of sodium bicarbonate and then adding sodium bicarbonate to maintenance fluids, is an effective method to increase the clearance of salicylates from the body. Alkalinization of the urine causes salicylates to be trapped in the renal tubules in their ionized form, from which they are readily excreted in the urine; it increases urinary salicylate excretion 18-fold. Alkalinization of the plasma decreases the lipid-soluble form of salicylates, facilitating their movement out of the central nervous system. Oral sodium bicarbonate is contraindicated in salicylate toxicity, as it can cause dissociation of salicylate tablets in the GI tract and subsequently increased absorption. Intravenous fluids Intravenous fluids containing dextrose, such as dextrose 5% in water (D5W), are recommended to keep a urinary output between 1 and 1.5 millilitres per kilogram per hour. Sodium bicarbonate is given in a significant aspirin overdose (salicylate level greater than 35 mg/dL at 6 hours after ingestion) regardless of the serum pH, as it enhances the elimination of aspirin in the urine. It is given until a urine pH between 7.5 and 8.0 is achieved. Dialysis Hemodialysis can be used to enhance the removal of salicylate from the blood, usually in those who are severely poisoned. Examples of severe poisoning include people with high salicylate blood levels: 7.25 mmol/L (100 mg/dL) in acute ingestions or 40 mg/dL in chronic ingestions; significant neurotoxicity (agitation, coma, convulsions); kidney failure; pulmonary edema; or cardiovascular instability.
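The equivalence between the two units quoted for the acute threshold can be checked with a one-line conversion. This is a sketch, assuming only the molar mass of salicylic acid (about 138 g/mol); it is not drawn from the article's sources.

MOLAR_MASS_SALICYLATE = 138.12   # g/mol for salicylic acid (equivalently mg/mmol)

mg_per_dl = 100
mg_per_l = mg_per_dl * 10                       # 1 L = 10 dL
mmol_per_l = mg_per_l / MOLAR_MASS_SALICYLATE   # mg/L divided by mg/mmol
print(round(mmol_per_l, 2))                     # -> 7.24, matching the quoted 7.25 mmol/L

The two thresholds therefore agree to within rounding.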
Hemodialysis also has the advantage of correcting electrolyte and acid-base abnormalities while removing salicylate. Salicylic acid has a small size (low molecular mass), a low volume of distribution (it is more water soluble), low tissue binding, and is largely free (not protein bound) at toxic levels in the body, all of which make it easily removable from the body by hemodialysis. Indications for dialysis include: a salicylate level higher than 90 mg/dL; severe acid-base imbalance; severe cardiac toxicity; acute respiratory distress syndrome; cerebral involvement or neurological signs and symptoms; a rising serum salicylate level despite alkalinization and multidose activated charcoal, or failure of standard approaches to treatment; and fluid overload with inability to tolerate fluids. Epidemiology Acute salicylate toxicity usually occurs after an intentional ingestion by younger adults, often with a history of psychiatric disease or previous overdose, whereas chronic toxicity usually occurs in older adults who experience inadvertent overdose while ingesting salicylates therapeutically over longer periods of time. During the latter part of the 20th century, the number of poisonings from salicylates declined, mainly because of the increased popularity of other over-the-counter analgesics such as paracetamol (acetaminophen). Fifty-two deaths involving single-ingredient aspirin were reported in the United States in 2000; however, in all but three of these cases, the reason for the ingestion of lethal doses was intentional, predominantly suicidal. History Aspirin poisoning has controversially been cited as a possible cause of the high mortality rate during the 1918 flu pandemic, which killed 50 to 100 million people. See also NSAID hypersensitivity reactions Reye syndrome Salicylate sensitivity References External links Reingardiene, D; Lazauskas, R (2006). "[Acute salicylate poisoning]". Medicina (Kaunas, Lithuania). 42 (1): 79–83. PMID 16467617.
Anorexia (symptom)
Anorexia is a medical term for a loss of appetite. While in non-scientific publications the term is often used interchangeably with anorexia nervosa, many possible causes exist for a loss of appetite, some of which may be harmless, while others indicate a serious clinical condition or pose a significant risk. Anorexia is a symptom, not a diagnosis, and is not to be confused with the mental health disorder anorexia nervosa. Because the term anorexia is often used as a short form of anorexia nervosa, to avoid confusion a provider must clarify to a patient whether they are referring simply to a decreased appetite or to the mental health disorder. Anyone can manifest anorexia as a loss of appetite, regardless of their gender, age, or weight. The symptom also occurs in other animals, such as cats, dogs, cattle, goats, and sheep. In these species, anorexia may be referred to as inappetence. As in humans, loss of appetite can be due to a range of diseases and conditions, as well as environmental and psychological factors. Etymology The term is from Ancient Greek: ανορεξία (ἀν-, without + όρεξις, spelled órexis, meaning appetite). Common manifestations Anorexia manifests simply as a decreased appetite or a loss of appetite. This can present as not feeling hungry or lacking the desire to eat. Sometimes people do not even notice they lack an appetite until they begin to lose weight from eating less. In other cases, it can be more noticeable, such as when a person becomes nauseated from just the thought of eating. Any form of decreased appetite that leads to changes in the body (such as weight loss or muscle loss) that is not done intentionally as part of dieting is clinically significant. Physiology of anorexia Appetite stimulation and suppression is a complex process involving many different parts of the brain and body through the use of various hormones and signals. Appetite is thought to be stimulated by interplay between peripheral signals to the brain (taste, smell, sight, gut hormones) as well as the balance of neurotransmitters and neuropeptides in the hypothalamus. Examples of these signals or hormones include neuropeptide Y, leptin, ghrelin, insulin, serotonin, and orexins (also called hypocretins). Anything that causes an imbalance of these signals or hormones can lead to the symptom of anorexia. While it is known that these signals and hormones help control appetite, the complicated mechanisms regarding a pathological increase or decrease in appetite are still being explored.
Common causes Acute radiation syndrome Addison's disease Alcoholism Alcohol withdrawal Anemia Anorexia nervosa Anxiety Appendicitis Babesiosis Benzodiazepine withdrawal Bipolar disorder Cancer Cannabinoid hyperemesis syndrome Cannabis withdrawal Celiac disease Chronic kidney disease Chronic pain Common cold Constipation COPD COVID-19 Crohn's disease Dehydration Dementia Depression Ebola Fatty liver disease Fever Food poisoning Gastroparesis Hepatitis HIV/AIDS Hypercalcemia Hyperglycemia Hypervitaminosis D Hypothyroidism and sometimes hyperthyroidism Irritable bowel syndrome Ketoacidosis Kidney failure Low blood pressure Mania Metabolic disorders, particularly urea cycle disorders MELAS syndrome Nausea Opioid use disorder Pancreatitis Pernicious anemia Psychosis Schizophrenia Side effect of drugs Stimulant use disorder Stomach flu Stress Sickness behavior Superior mesenteric artery syndrome Syndrome of inappropriate antidiuretic hormone secretion Tuberculosis Thalassemia Ulcerative colitis Uremia Vitamin B12 deficiency Zinc deficiency Infection: Anorexia of infection is part of the acute phase response (APR) to infection. The APR can be triggered by lipopolysaccharides and peptidoglycans from bacterial cell walls, bacterial DNA, double-stranded viral RNA, and viral glycoproteins, which can trigger production of a variety of proinflammatory cytokines. These can have an indirect effect on appetite by a number of means, including via peripheral afferents from their sites of production in the body and by enhancing production of leptin from fat stores. Inflammatory cytokines can also signal to the central nervous system more directly by specialized transport mechanisms through the blood–brain barrier, via circumventricular organs (which are outside the barrier), or by triggering production of eicosanoids in the endothelial cells of the brain vasculature. Ultimately, the control of appetite by this mechanism is thought to be mediated by the same factors normally controlling appetite, such as neurotransmitters (serotonin, dopamine, histamine, norepinephrine, corticotropin releasing factor, neuropeptide Y, and α-melanocyte-stimulating hormone). Drugs Stimulants, such as ephedrine, amphetamine, methamphetamine, MDMA, cathinone, methylphenidate, nicotine, cocaine, caffeine, etc. Narcotics, such as heroin, morphine, codeine, hydrocodone, oxycodone, etc. Antidepressants can have anorexia as a side effect, primarily selective serotonin reuptake inhibitors (SSRIs) such as fluoxetine. Byetta, a type II diabetes drug, can cause moderate nausea and loss of appetite. Abruptly stopping appetite-increasing drugs, such as cannabis and corticosteroids. Chemicals that are members of the phenethylamine group. (Individuals with anorexia nervosa may seek them to suppress appetite.) Topiramate may cause anorexia as a side effect. Other drugs may be used to intentionally cause anorexia in order to help a patient with preoperative fasting prior to general anesthesia. It is important to avoid food before surgery to mitigate the risk of pulmonary aspiration, which can be fatal. Other During the post-operative recovery period for a tonsillectomy or adenoidectomy, it is common for adult patients to experience a lack of appetite until their throat significantly heals (usually 10–14 days). Altitude sickness Significant emotional pain caused by an event (rather than a mental disorder) can cause an individual to temporarily lose all interest in food.
Several twelve-step programs, including Overeaters Anonymous, tackle psychological issues members believe lead to forms of deprivation Psychological stress Experiencing grotesque or unappealing thoughts or conversations, or viewing similar images Being in the presence of unappealing things such as waste matter, dead organisms, or bad smells Complications Complications of anorexia may result from poor food intake. Poor food intake can lead to dehydration, electrolyte imbalances, anemia and nutritional deficiencies. These imbalances will worsen the longer that food is avoided. Sudden cardiac death Anorexia is a relatively common condition that can cause dangerous electrolyte imbalances, leading to acquired long QT syndrome, which can result in sudden cardiac death. This can develop over a prolonged period of time, and the risk is further heightened when feeding resumes after a period of abstaining from consumption. Refeeding syndrome Care must be taken when a patient begins to eat after prolonged starvation to avoid the potentially fatal complications of refeeding syndrome. The initial signs of refeeding syndrome are minimal, but it can rapidly progress to death. Thus, the reinitiation of food or oral intake is usually started slowly and requires close observation under supervision by trained healthcare professionals. This is usually done in a hospital or nutritional rehabilitation center. Management Anorexia can be treated with the help of orexigenic drugs. "Anorexia" vs "anorexic" vs anorexia nervosa Anorexic is a description of somebody with the stereotypical thin, frail, malnourished appearance. The appearance is classically associated with anorexia, although only in rare cases do patients end up becoming anorexic. An anorexic or anorectic is also a description given to substances that cause anorexia for weight loss purposes. Anorexia nervosa is an eating disorder characterized by food restriction due to the strong desire to remain thin. It is considered a mental health diagnosis in which people see themselves as obese regardless of their weight or appearance. The person does not necessarily exhibit anorexia as a symptom in their quest to restrict food intake. References
Crystalluria
Crystalluria refers to crystals found in the urine when performing a urine test. Crystalluria is often considered a benign condition and one of the side effects of sulfonamides and penicillins. The main reason for the identification of urinary crystals is to detect the presence of the relatively few abnormal types that may represent a disease. Clinical significance It can be an indication of urolithiasis. It may be relevant when specific abnormal types of crystals (cystine, cholesterol, leucine, tyrosine, etc.) are present, which may be a sign of metabolic or liver disorders such as cystinuria. It can also lead to the formation of stones. References
Adhesion (medicine)
Adhesions are fibrous bands that form between tissues and organs, often as a result of injury during surgery. They may be thought of as internal scar tissue that connects tissues not normally connected. Pathophysiology Adhesions form as a natural part of the body's healing process after surgery, in a similar way that a scar forms. The term "adhesion" is applied when the scar extends from within one tissue across to another, usually across a virtual space such as the peritoneal cavity. Adhesion formation post-surgery typically occurs when two injured surfaces are close to one another. According to the "classical paradigm" of adhesion formation, the pathogenesis starts with inflammation and activation of the coagulation system, which causes fibrin deposits onto the damaged tissues. The fibrin then connects the two adjacent structures where damage of the tissues occurred. The fibrin acts like a glue to seal the injury and builds the fledgling adhesion, said at this point to be "fibrinous". In body cavities such as the peritoneal, pericardial, and synovial cavities, a family of fibrinolytic enzymes may act to limit the extent of the initial fibrinous adhesion, and may even dissolve it. In many cases, however, the production or activity of these enzymes is compromised because of inflammation following injury or infection, and the fibrinous adhesion persists. A more recent study suggested that the formation of "fibrinous" adhesions is preceded by the aggregation of cavity macrophages, which can act like extravascular platelets in the abdominal cavity. If this is allowed to happen, tissue repair cells such as macrophages, fibroblasts, and blood vessel cells penetrate into the fibrinous adhesion and lay down collagen and other matrix substances to form a permanent fibrous adhesion. In 2002, Giuseppe Martucciello's research group showed a possible role could be played by microscopic foreign bodies (FB) inadvertently contaminating the operative field during surgery. These data suggested that two different stimuli are necessary for adhesion formation: a direct lesion of the mesothelial layers and a solid substrate foreign body. While some adhesions do not cause problems, others may prevent muscle and other tissues and organs from moving freely, sometimes causing organs to become twisted or pulled from their normal positions. Regions affected Adhesive capsulitis In the case of adhesive capsulitis of the shoulder (also known as frozen shoulder), adhesions grow between the shoulder joint surfaces, restricting motion. Abdominal adhesions Abdominal adhesions (or intra-abdominal adhesions) are most commonly caused by abdominal surgical procedures. The adhesions start to form within hours of surgery and may cause internal organs to attach to the surgical site or to other organs in the abdominal cavity. Adhesion-related twisting and pulling of internal organs may result in complications such as abdominal pain or intestinal obstruction. Small bowel obstruction (SBO) is a significant consequence of post-surgical adhesions. An SBO may be caused when an adhesion pulls or kinks the small intestine and prevents the flow of content through the digestive tract. Obstruction may occur 20 years or more after the initial surgical procedure, if a previously benign adhesion allows the small bowel to twist spontaneously around itself and obstruct. Without immediate medical attention, SBO is an emergent, possibly fatal, condition.
According to statistics provided by the National Hospital Discharge Survey, approximately 2,000 people die every year in the US from obstruction due to adhesions. Depending on the severity of the obstruction, a partial obstruction may relieve itself with conservative medical intervention. Many obstructive events, however, require surgery to loosen or dissolve the offending adhesion(s) or to resect the affected small intestine. Pelvic adhesions Pelvic adhesions are a form of abdominal adhesions in the pelvis. In women they typically affect reproductive organs and thus are of concern in reproduction or as a cause of chronic pelvic pain. Other than surgery, endometriosis and pelvic inflammatory disease are typical causes. Surgery inside the uterine cavity (e.g., suction dilation and curettage, myomectomy, endometrial ablation) may result in Asherman's syndrome (also known as intrauterine adhesions or intrauterine synechiae), a cause of infertility. The impairment of reproductive performance from adhesions may happen through many mechanisms, all of which usually stem from the distortion of the normal tubo-ovarian relationship. This distortion may prevent an ovum from traveling to the fimbriated end of the Fallopian tube. A meta-analysis in 2012 concluded that there is only little evidence for the surgical principle that using less invasive techniques, introducing fewer foreign bodies, or causing less ischemia reduces the extent and severity of adhesions in pelvic surgery. Pericardial adhesions Adhesions forming between the heart and the sternum after cardiac surgery place the heart at risk of catastrophic injury during re-entry for a subsequent procedure. Peridural adhesions Adhesions and scarring, in the form of epidural fibrosis, may occur after spinal surgery, restricting the free movement of nerve roots, causing tethering and leading to pain. Peritendinous adhesions Adhesions and scarring occurring around tendons after hand surgery restrict the gliding of tendons in their sheaths and compromise digital mobility. Association with surgical procedures Applying adhesion barriers during surgery may help to prevent the formation of adhesions. There are two methods approved by the U.S. Food and Drug Administration (FDA) for adhesion prevention: Interceed and Seprafilm. One study found that Seprafilm is twice as effective at preventing adhesion formation when compared to surgical technique alone. Surgical humidification therapy may also minimise the incidence of adhesion formation. Laparoscopic surgery has a reduced risk for creating adhesions. Steps may be taken during surgery to help prevent adhesions, such as handling tissues and organs gently, using starch-free and latex-free gloves, not allowing tissues to dry out, and shortening surgery time. Unfortunately, adhesions are unavoidable in surgery, and the main treatment for adhesions is more surgery. Besides intestinal obstruction caused by adhesions, which may be seen on an X-ray, there are no diagnostic tests available to accurately diagnose an adhesion. Abdominal surgery A study showed that more than 90% of people develop adhesions following open abdominal surgery and that 55–100% of women develop adhesions following pelvic surgery. Adhesions from prior abdominal or pelvic surgery may obscure visibility and access at subsequent abdominal or pelvic surgery.
In a very large study (29,790 participants) published in the British medical journal The Lancet, 35% of patients who underwent open abdominal or pelvic surgery were readmitted to the hospital an average of two times after their surgery due to adhesion-related or adhesion-suspected complications. Over 22% of all readmissions occurred in the first year after the initial surgery. Adhesion-related complexity at reoperation adds significant risk to subsequent surgical procedures. Certain organs and structures in the body are more prone to adhesion formation than others. The omentum is particularly susceptible to adhesion formation; one study found that 92% of post-operative adhesions were found in the omentum. It appears that the omentum is the chief organ responsible for "spontaneous" adhesion formation (i.e. with no prior history of surgery). In another study, 100% of spontaneous adhesion formations were associated with the omentum. One method to reduce the formation of adhesions following abdominal surgery is hydroflotation, in which the organs are separated from one another by being floated in a solution. Carpal tunnel surgery The long-term use of a wrist splint during recovery from carpal tunnel surgery may cause adhesion formation. For that reason, it is advised that wrist splints be used only for short-term protection in work environments; otherwise, splints do not improve grip strength, lateral pinch strength, or bowstringing. Beyond adhesions, they may also cause stiffness or flexibility problems. Types There are three general types of adhesions: filmy, vascular, and cohesive; however, their pathophysiology is similar. Filmy adhesions usually do not pose problems. Vascular adhesions are problematic. References External links eMedicineHealth: Adhesions, General and After Surgery Smith, Orla M., "Getting adhesions unstuck", Science, November 30, 2018, volume 362, issue 6418, pp. 1014–1016
Renal osteodystrophy
Renal osteodystrophy (including adynamic bone disease) is currently defined as an alteration of bone morphology in patients with chronic kidney disease (CKD). It is one measure of the skeletal component of the systemic disorder of chronic kidney disease–mineral and bone disorder (CKD-MBD). The term "renal osteodystrophy" was coined in 1943, 60 years after an association was identified between bone disease and kidney failure. The traditional types of renal osteodystrophy have been defined on the basis of turnover and mineralization as follows: 1) mild, slight increase in turnover and normal mineralization; 2) osteitis fibrosa, increased turnover and abnormal mineralization; 3) osteomalacia, decreased turnover and abnormal mineralization; 4) adynamic, decreased turnover and acellularity; and 5) mixed, increased turnover with abnormal mineralization. A Kidney Disease: Improving Global Outcomes report has suggested that bone biopsies in patients with CKD should be characterized by determining bone turnover, mineralization, and volume (TMV system). On the other hand, CKD-MBD is defined as a systemic disorder of mineral and bone metabolism due to CKD manifested by either one or a combination of the following: 1) abnormalities of calcium, phosphorus, PTH, or vitamin D metabolism; 2) abnormalities in bone turnover, mineralization, volume, linear growth, or strength (renal osteodystrophy); and 3) vascular or other soft-tissue calcification. Signs and symptoms Renal osteodystrophy may exhibit no symptoms; if it does show symptoms, they include: Bone pain Joint pain Bone deformation Bone fracture The broader concept of chronic kidney disease–mineral and bone disorder (CKD-MBD) is not only associated with fractures but also with cardiovascular calcification, poor quality of life and increased morbidity and mortality in CKD patients (the so-called bone-vascular axis). These clinical consequences are acquiring such importance that scientific working groups (such as the ERA-EDTA CKD-MBD Working Group) and international initiatives are trying to promote research in the field, including basic, translational and clinical research. Pathogenesis Renal osteodystrophy has classically been described as the result of hyperparathyroidism secondary to hyperphosphatemia combined with hypocalcemia, both of which are due to decreased excretion of phosphate by the damaged kidneys. Low activated vitamin D3 levels are a result of the damaged kidneys' inability to convert vitamin D3 into its active form, calcitriol, and result in further hypocalcemia. High levels of fibroblast growth factor 23 now seem to be the most important cause of decreased calcitriol levels in CKD patients. In CKD, the excessive production of parathyroid hormone increases the bone resorption rate and leads to histologic bone signs of secondary hyperparathyroidism. However, in other situations, the initial increase in parathyroid hormone and bone remodeling may be slowed excessively by a multitude of factors, including age, ethnic origin, sex, and treatments such as vitamin D, calcium salts, calcimimetics, steroids, and so forth, leading to low bone turnover or adynamic bone disease. Both high and low bone turnover diseases are currently observed equally in CKD patients treated by dialysis, and all types of renal osteodystrophy are associated with an increased risk of skeletal fractures, reduced quality of life, and poor clinical outcomes.
Diagnosis Renal osteodystrophy is usually diagnosed after treatment for end-stage kidney disease begins; however, CKD-MBD starts early in the course of CKD. In advanced stages, blood tests will indicate decreased calcium and calcitriol (vitamin D) and increased phosphate and parathyroid hormone levels. In earlier stages, serum calcium and phosphate levels are normal at the expense of high parathyroid hormone and fibroblast growth factor-23 levels. X-rays will also show bone features of renal osteodystrophy (subperiosteal bone resorption, chondrocalcinosis at the knees and pubic symphysis, osteopenia and bone fractures) but may be difficult to differentiate from other conditions. Since the diagnosis of these bone abnormalities cannot be obtained correctly by current clinical, biochemical, and imaging methods (including measurement of bone-mineral density), bone biopsy has been, and still remains, the gold-standard analysis for assessing the exact type of renal osteodystrophy. Differential diagnosis To confirm the diagnosis, renal osteodystrophy must be characterized by determining bone turnover, mineralization, and volume (TMV system) by bone biopsy. All forms of renal osteodystrophy should also be distinguished from other bone diseases which may equally result in decreased bone density (related or unrelated to CKD): osteoporosis osteopenia osteomalacia brown tumor, which should be considered the top-line diagnosis if a mass-forming lesion is present. Treatment Treatment for renal osteodystrophy includes the following: calcium and/or native vitamin D supplementation restriction of dietary phosphate (especially inorganic phosphate contained in additives) phosphate binders such as calcium carbonate, calcium acetate, sevelamer hydrochloride or carbonate, lanthanum carbonate, sucroferric oxyhydroxide, ferric citrate among others active forms of vitamin D (calcitriol, alfacalcidol, paricalcitol, maxacalcitol, doxercalciferol, among others) cinacalcet renal transplantation haemodialysis five times a week, which is thought to be of benefit parathyroidectomy for symptomatic, medication-refractory end-stage disease Prognosis Recovery from renal osteodystrophy has been observed following kidney transplantation; otherwise, renal osteodystrophy remains a chronic condition under a conventional hemodialysis schedule. Nevertheless, it is important to consider that the broader concept of CKD-MBD, which includes renal osteodystrophy, is not only associated with bone disease and increased risk of fractures but also with cardiovascular calcification, poor quality of life and increased morbidity and mortality in CKD patients (the so-called bone-vascular axis). Indeed, bone may now be considered a new endocrine organ at the heart of CKD-MBD. References External links Renal Osteodystrophy - NKUDIC, NIH
Bowel infarction
Bowel infarction or gangrenous bowel represents an irreversible injury to the intestine resulting from insufficient blood flow. It is considered a medical emergency because it can quickly result in life-threatening infection and death. Any cause of bowel ischemia, the earlier, reversible form of injury, may ultimately lead to infarction if uncorrected. The causes of bowel ischemia or infarction include primary vascular causes (for example, mesenteric ischemia) and other causes of bowel obstruction. Causes Primary vascular causes of bowel infarction, also known as mesenteric ischemia, are due to blockages in the arteries or veins that supply the bowel. Types of mesenteric ischemia are generally separated into acute and chronic processes, because this helps determine treatment and prognosis. Bowel obstruction is most often caused by intestinal adhesions, which frequently form after abdominal surgeries, or by chronic infections such as diverticulitis, hepatitis, and inflammatory bowel disease. The condition may be difficult to diagnose, as the symptoms may resemble those of other bowel disorders. Bowel volvulus describes a specific form of bowel obstruction in which the intestine and/or mesentery are twisted, resulting in ischemia. Management An infarcted or dead intestinal segment is a serious medical problem because intestines contain non-sterile contents within the lumen. Although the fecal content and high bacterial loads of the intestine are normally safely contained, progressive ischemia causes tissue breakdown and inevitably leads to bacteria spreading to the bloodstream. Untreated bowel infarction quickly leads to life-threatening infection and sepsis, and may be fatal. The only treatment for bowel infarction is immediate surgical repair and, ultimately, removal of the dead bowel segment. Patients who have undergone extensive resection of the small bowel may develop malabsorption, indicating the need for dietary supplements. See also Adhesions Ischemic colitis Volvulus References
Adrenal insufficiency
Adrenal insufficiency is a condition in which the adrenal glands do not produce adequate amounts of steroid hormones, primarily cortisol, but may also include impaired production of aldosterone (a mineralocorticoid), which regulates sodium conservation, potassium secretion, and water retention. Craving for salt or salty foods due to the urinary losses of sodium is common. Addison's disease and congenital adrenal hyperplasia can manifest as adrenal insufficiency. If not treated, adrenal insufficiency may result in abdominal pain, vomiting, muscle weakness and fatigue, depression, low blood pressure, weight loss, kidney failure, changes in mood and personality, and shock (adrenal crisis). An adrenal crisis may occur if the body is subjected to stress, such as an accident, injury, surgery, or severe infection; death may quickly follow. Adrenal insufficiency can also occur when the hypothalamus or the pituitary gland does not make adequate amounts of the hormones that assist in regulating adrenal function. This is called secondary or tertiary adrenal insufficiency and is caused by lack of production of ACTH in the pituitary or lack of CRH in the hypothalamus, respectively. Types There are three major types of adrenal insufficiency. Primary adrenal insufficiency is due to impairment of the adrenal glands. 80% of cases are due to an autoimmune disease called Addison's disease or autoimmune adrenalitis. One subtype is called idiopathic, meaning of unknown cause. It can also be due to congenital adrenal hyperplasia or an adenoma (tumor) of the adrenal gland. Other causes include infections (TB, CMV, histoplasmosis, paracoccidioidomycosis), vascular causes (hemorrhage from sepsis, adrenal vein thrombosis, HIT), deposition diseases (hemochromatosis, amyloidosis, sarcoidosis), and drugs (azole antifungals, etomidate (even one dose), rifampin, anticonvulsants). Secondary adrenal insufficiency is caused by impairment of the pituitary gland or hypothalamus. Its principal causes include pituitary adenoma, which can suppress production of adrenocorticotropic hormone (ACTH) and lead to adrenal deficiency unless the endogenous hormones are replaced; exogenous steroids, including inhaled steroids such as Flovent; and Sheehan's syndrome, which is associated with impairment of only the pituitary gland. Tertiary adrenal insufficiency is due to hypothalamic disease and a decrease in the release of corticotropin-releasing hormone (CRH). Causes can include brain tumors and sudden withdrawal from long-term exogenous steroid use (which is the most common cause overall). Signs and symptoms Signs and symptoms include: hypoglycemia, hyperpigmentation, dehydration, weight loss, and disorientation. Additional signs and symptoms include weakness, tiredness, dizziness, low blood pressure that falls further when standing (orthostatic hypotension), cardiovascular collapse, muscle aches, nausea, vomiting, and diarrhea. These problems may develop gradually and insidiously. Addison's disease can present with tanning of the skin that may be patchy or even all over the body. Characteristic sites of tanning are skin creases (e.g. of the hands) and the inside of the cheek (buccal mucosa). Goitre and vitiligo may also be present. Eosinophilia may also occur. Hyponatremia is a sign of secondary insufficiency. Causes Causes of acute adrenal insufficiency are mainly sudden withdrawal of long-term corticosteroid therapy, Waterhouse–Friderichsen syndrome, and stress in people with underlying chronic adrenal insufficiency.
The latter is termed critical illness–related corticosteroid insufficiency. For chronic adrenal insufficiency, the major contributors are autoimmune adrenalitis (Addison's disease), tuberculosis, AIDS, and metastatic disease. Minor causes of chronic adrenal insufficiency are systemic amyloidosis, fungal infections, hemochromatosis, and sarcoidosis. Autoimmune adrenalitis may be part of type 2 autoimmune polyglandular syndrome, which can include type 1 diabetes, hyperthyroidism, and autoimmune thyroid disease (also known as autoimmune thyroiditis, Hashimoto's thyroiditis, and Hashimoto's disease). Hypogonadism may also present with this syndrome. Other diseases that are more common in people with autoimmune adrenalitis include premature ovarian failure, celiac disease, and autoimmune gastritis with pernicious anemia. X-linked recessive adrenoleukodystrophy can also cause adrenal insufficiency. Adrenal insufficiency can also result when a patient has a craniopharyngioma, a histologically benign tumor that can damage the pituitary gland and so cause the adrenal glands not to function. This would be an example of secondary adrenal insufficiency syndrome. Causes of adrenal insufficiency can be categorized by the mechanism through which they cause the adrenal glands to produce insufficient cortisol. These are adrenal dysgenesis (the gland has not formed adequately during development), impaired steroidogenesis (the gland is present but is biochemically unable to produce cortisol) or adrenal destruction (disease processes leading to glandular damage). Corticosteroid withdrawal Use of high-dose steroids for more than a week begins to produce suppression of the person's adrenal glands because the exogenous glucocorticoids suppress release of hypothalamic corticotropin-releasing hormone (CRH) and pituitary adrenocorticotropic hormone (ACTH). With prolonged suppression, the adrenal glands atrophy (physically shrink), and can take months to recover full function after discontinuation of the exogenous glucocorticoid. During this recovery time, the person is vulnerable to adrenal insufficiency during times of stress, such as illness, due to both adrenal atrophy and suppression of CRH and ACTH release. Use of steroid joint injections may also result in adrenal suppression after discontinuation. Adrenal dysgenesis All causes in this category are genetic, and generally very rare. These include mutations to the SF1 transcription factor, congenital adrenal hypoplasia due to DAX-1 gene mutations and mutations to the ACTH receptor gene (or related genes, such as in the triple A or Allgrove syndrome). DAX-1 mutations may cluster in a syndrome with glycerol kinase deficiency and a number of other symptoms when DAX-1 is deleted together with a number of other genes. Impaired steroidogenesis To form cortisol, the adrenal gland requires cholesterol, which is then converted biochemically into steroid hormones. Interruptions in the delivery of cholesterol include Smith–Lemli–Opitz syndrome and abetalipoproteinemia. Of the synthesis problems, congenital adrenal hyperplasia is the most common (in its various forms: 21-hydroxylase, 17α-hydroxylase, 11β-hydroxylase and 3β-hydroxysteroid dehydrogenase deficiencies), followed by lipoid CAH due to deficiency of StAR, and mitochondrial DNA mutations. Some medications interfere with steroid synthesis enzymes (e.g. ketoconazole), while others accelerate the normal breakdown of hormones by the liver (e.g. rifampicin, phenytoin).
Adrenal destruction Autoimmune adrenalitis is the most common cause of Addison's disease in the industrialised world. Autoimmune destruction of the adrenal cortex is caused by an immune reaction against the enzyme 21-hydroxylase (a phenomenon first described in 1992). This may be isolated or occur in the context of autoimmune polyendocrine syndrome (APS type 1 or 2), in which other hormone-producing organs, such as the thyroid and pancreas, may also be affected. Adrenal destruction is also a feature of adrenoleukodystrophy (ALD), and can occur when the adrenal glands are involved in metastasis (seeding of cancer cells from elsewhere in the body, especially lung), hemorrhage (e.g. in Waterhouse–Friderichsen syndrome or antiphospholipid syndrome), particular infections (tuberculosis, histoplasmosis, coccidioidomycosis), or the deposition of abnormal protein in amyloidosis. Pathophysiology Hyponatremia can be caused by glucocorticoid deficiency. Low levels of glucocorticoids lead to systemic hypotension (one of the effects of cortisol is to increase peripheral resistance), which results in a decrease in stretch of the arterial baroreceptors of the carotid sinus and the aortic arch. This removes the tonic vagal and glossopharyngeal inhibition on the central release of ADH: high levels of ADH will ensue, which subsequently lead to an increase in water retention and hyponatremia. In contrast to mineralocorticoid deficiency, glucocorticoid deficiency does not cause a negative sodium balance (in fact, a positive sodium balance may occur). Diagnosis The best diagnostic tool to confirm adrenal insufficiency is the ACTH stimulation test; however, if a patient is suspected to be experiencing an acute adrenal crisis, immediate treatment with IV corticosteroids is imperative and should not be delayed for any testing, as the patient's health can deteriorate rapidly and result in death without replacement of the corticosteroids. Dexamethasone should be used as the corticosteroid if the plan is to do the ACTH stimulation test at a later time, as it is the only corticosteroid that will not affect the test results. If testing is not performed during a crisis, then labs to be run should include: random cortisol, serum ACTH, aldosterone, renin, potassium and sodium. A CT of the adrenal glands can be used to check for structural abnormalities of the adrenal glands. An MRI of the pituitary can be used to check for structural abnormalities of the pituitary. However, in order to check the functionality of the hypothalamic–pituitary–adrenal (HPA) axis, the entire axis must be tested by way of an ACTH stimulation test, a CRH stimulation test and perhaps an insulin tolerance test (ITT). In order to check for Addison's disease, the autoimmune type of primary adrenal insufficiency, labs should be drawn to check for 21-hydroxylase autoantibodies.
Treatment Adrenal crisis Intravenous fluids Intravenous steroid (Solu-Cortef/injectable hydrocortisone), later hydrocortisone, prednisone or methylprednisolone tablets Rest Cortisol deficiency (primary and secondary) Hydrocortisone (Cortef) Prednisone (Deltasone) Prednisolone (Delta-Cortef) Methylprednisolone (Medrol) Dexamethasone (Decadron) Hydrocortisone granules in capsules for opening (Alkindi) Mineralocorticoid deficiency (low aldosterone) Fludrocortisone acetate (to balance sodium and potassium and increase water retention) See also Addison's disease – primary adrenocortical insufficiency Cushing's syndrome – overproduction of cortisol Insulin tolerance test – another test used to identify subtypes of adrenal insufficiency Adrenal fatigue (hypoadrenia) – a term used in alternative medicine to describe a believed exhaustion of the adrenal glands References Further reading Bornstein, Stefan R.; Allolio, Bruno; Arlt, Wiebke; Barthel, Andreas; Don-Wauchope, Andrew; Hammer, Gary D.; Husebye, Eystein S.; Merke, Deborah P.; Murad, M. Hassan; Stratakis, Constantine A.; Torpy, David J. (February 2016). "Diagnosis and Treatment of Primary Adrenal Insufficiency: An Endocrine Society Clinical Practice Guideline". The Journal of Clinical Endocrinology & Metabolism. 101 (2): 364–389. doi:10.1210/jc.2015-1710. PMC 4880116. PMID 26760044.
Hypertrophic cardiomyopathy
Hypertrophic cardiomyopathy (HCM, or HOCM when obstructive) is a condition in which the heart becomes thickened without an obvious cause. The parts of the heart most commonly affected are the interventricular septum and the ventricles. This results in the heart being less able to pump blood effectively and also may cause electrical conduction problems. People who have HCM may have a range of symptoms. People may be asymptomatic, or may have fatigue, leg swelling, and shortness of breath. It may also result in chest pain or fainting. Symptoms may be worse when the person is dehydrated. Complications may include heart failure, an irregular heartbeat, and sudden cardiac death. HCM is most commonly inherited from a person's parents in an autosomal dominant pattern. It is often due to mutations in certain genes involved with making heart muscle proteins. Other inherited causes of left ventricular hypertrophy may include Fabry disease and Friedreich's ataxia; certain medications such as tacrolimus may also cause left ventricular hypertrophy. Other considerations for causes of an enlarged heart are athlete's heart and hypertension (high blood pressure). Making the diagnosis of HCM often involves a family history or pedigree, an electrocardiogram, echocardiogram, and stress testing. Genetic testing may also be done. HCM can be distinguished from other inherited causes of cardiomyopathy by its autosomal dominant pattern, whereas Fabry disease is X-linked and Friedreich's ataxia is inherited in an autosomal recessive pattern. Treatment may depend on symptoms and other risk factors. Medications may include the use of beta blockers or disopyramide. An implantable cardiac defibrillator may be recommended in those with certain types of irregular heartbeat. Surgery, in the form of a septal myectomy or heart transplant, may be done in those who do not improve with other measures. With treatment, the risk of death from the disease is less than one percent per year. HCM affects about one in 500 people. Rates in men and women are about equal. People of all ages may be affected. The first modern description of the disease was by Donald Teare in 1958. Signs and symptoms The clinical course of HCM is variable. Many people are asymptomatic or mildly symptomatic, and many of those carrying disease genes for HCM do not have clinically detectable disease. The symptoms of HCM include shortness of breath due to stiffening and decreased blood filling of the ventricles, exertional chest pain (sometimes known as angina) due to reduced blood flow to the coronary arteries, uncomfortable awareness of the heart beat (palpitations), as well as disruption of the electrical system running through the abnormal heart muscle, lightheadedness, weakness, fainting and sudden cardiac death. Shortness of breath is largely due to increased stiffness of the left ventricle (LV), which impairs filling of the ventricles, but also leads to elevated pressure in the left ventricle and left atrium, causing back pressure and interstitial congestion in the lungs. Symptoms are not closely related to the presence or severity of an outflow tract gradient. Often, symptoms mimic those of congestive heart failure (especially activity intolerance and dyspnea), but treatment of each is different.
Beta blockers are used in both cases, but treatment with diuretics, a mainstay of CHF treatment, will exacerbate symptoms in hypertrophic obstructive cardiomyopathy by decreasing ventricular preload volume and thereby increasing outflow resistance (there is less blood to push aside the thickened obstructing tissue). Major risk factors for sudden death in individuals with HCM include a prior history of cardiac arrest or ventricular fibrillation, spontaneous sustained ventricular tachycardia, abnormal exercise blood pressure and non-sustained ventricular tachycardia, unexplained syncope, a family history of premature sudden death, and a left ventricular wall thickness greater than 15 mm to 30 mm on echocardiogram. "Spike and dome" pulse and "triple ripple apical impulse" are two other signs that can be discovered on physical examination. Genetics Familial hypertrophic cardiomyopathy is inherited as an autosomal dominant trait and is attributed to mutations in one of a number of genes that encode for the sarcomere proteins. Currently, about 50–60% of people with a high index of clinical suspicion for HCM will have a mutation identified in at least one of nine sarcomeric genes. Approximately 40% of these mutations occur in the β-myosin heavy chain gene on chromosome 14 q11.2-3, and approximately 40% involve the cardiac myosin-binding protein C gene. Since HCM is typically an autosomal dominant trait, children of a single HCM parent have a 50% chance of inheriting the disease-causing mutation. Whenever such a mutation is identified, family-specific genetic testing can be used to identify relatives at risk for the disease, although clinical severity and age of onset cannot be predicted. An insertion/deletion polymorphism in the gene encoding for angiotensin-converting enzyme (ACE) alters the clinical phenotype of the disease. The D/D (deletion/deletion) genotype of ACE is associated with more marked hypertrophy of the left ventricle and may be associated with a higher risk of adverse outcomes. Some mutations may be more harmful than others (e.g., those in the β-myosin heavy chain). For example, troponin T mutations were originally associated with a 50% mortality before the age of 40. However, a more recent and larger study found a similar risk to other sarcomeric protein mutations. The age at disease onset of HCM with MYH7 mutations is earlier, and they lead to more severe symptoms. Moreover, mutations in troponin C can alter the Ca2+ sensitivity of force development in cardiac muscle; these mutations are named after the amino acid that was changed and the location at which the change occurred, such as A8V, A31S, C84Y and D145E. Diagnosis A diagnosis of hypertrophic cardiomyopathy is based upon a number of features of the disease process. While echocardiography, cardiac catheterization, or cardiac MRI are used in the diagnosis of the disease, other important considerations include ECG, genetic testing (although not primarily used for diagnosis), and any family history of HCM or unexplained sudden death in otherwise healthy individuals. In about 60 to 70% of the cases, cardiac MRI shows thickening of more than 15 mm of the lower part of the ventricular septum. T1-weighted imaging may identify scarring of cardiac tissues, while T2-weighted imaging may identify edema and inflammation of cardiac tissue, which are associated with acute clinical signs of chest pain and fainting episodes. Pulsus bisferiens may occasionally be found during examination.
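The Variants and Cardiac catheterization subsections that follow quantify obstruction as the peak systolic pressure difference between the left ventricle and the ascending aorta, with obstruction defined at a gradient of ≥ 30 mmHg. A minimal sketch of that arithmetic is given below; the pressure values and names are illustrative, not measured data.

OBSTRUCTION_THRESHOLD_MMHG = 30.0  # peak LVOT gradient defining obstruction

def lvot_gradient(lv_systolic_mmhg: float, aortic_systolic_mmhg: float) -> float:
    # The gradient is the excess of peak left ventricular systolic
    # pressure over peak ascending aortic systolic pressure.
    return lv_systolic_mmhg - aortic_systolic_mmhg

def is_obstructive(gradient_mmhg: float) -> bool:
    return gradient_mmhg >= OBSTRUCTION_THRESHOLD_MMHG

# Illustrative catheter readings: LV 180 mmHg, aorta 120 mmHg
gradient = lvot_gradient(180.0, 120.0)
print(gradient, is_obstructive(gradient))  # -> 60.0 True
# In the Brockenbrough-Braunwald-Morrow sign described below, a post-PVC
# beat raises LV pressure while aortic pressure falls, widening this gradient.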
Variants Depending on whether the distortion of normal heart anatomy causes an obstruction of the outflow of blood from the left ventricle of the heart, HCM can be classified as obstructive or non-obstructive. The obstructive variant of HCM is hypertrophic obstructive cardiomyopathy (HOCM), also historically known as idiopathic hypertrophic subaortic stenosis (IHSS) or asymmetric septal hypertrophy (ASH). The diagnosis of left ventricular outflow tract obstruction is usually made by echocardiographic assessment and is defined as a peak left ventricular outflow tract gradient of ≥ 30 mmHg. Another, non-obstructive variant of HCM is apical hypertrophic cardiomyopathy (AHC), also called Yamaguchi syndrome. It was first described in individuals of Japanese descent. Cardiac catheterization Upon cardiac catheterization, catheters can be placed in the left ventricle and the ascending aorta to measure the pressure difference between these structures. In normal individuals, during ventricular systole, the pressure in the ascending aorta and the left ventricle will equalize, and the aortic valve is open. In individuals with aortic stenosis or with HCM with an outflow tract gradient, there will be a pressure gradient (difference) between the left ventricle and the aorta, with the left ventricular pressure higher than the aortic pressure. This gradient represents the degree of obstruction that has to be overcome in order to eject blood from the left ventricle. The Brockenbrough–Braunwald–Morrow sign is observed in individuals with HCM with an outflow tract gradient. This sign can be used to differentiate HCM from aortic stenosis. In individuals with aortic stenosis, after a premature ventricular contraction (PVC), the following ventricular contraction will be more forceful, and the pressure generated in the left ventricle will be higher. Because of the fixed obstruction that the stenotic aortic valve represents, the post-PVC ascending aortic pressure will increase as well. In individuals with HCM, however, the degree of obstruction will increase more than the force of contraction will increase in the post-PVC beat. The result of this is that the left ventricular pressure increases and the ascending aortic pressure decreases, with an increase in the LVOT gradient. While the Brockenbrough–Braunwald–Morrow sign is most dramatically demonstrated using simultaneous intra-cardiac and intra-aortic catheters, it can be seen on routine physical examination as a decrease in the pulse pressure in the post-PVC beat in individuals with HCM. Screening Although HCM may be asymptomatic, affected individuals may present with symptoms ranging from mild to critical heart failure and sudden cardiac death at any point from early childhood to seniority. HCM is the leading cause of sudden cardiac death in young athletes in the United States, and the most common genetic cardiovascular disorder. One study found that the incidence of sudden cardiac death in young competitive athletes declined in the Veneto region of Italy by 89% since the 1982 introduction of routine cardiac screening for athletes, from an unusually high starting rate. As of 2010, however, studies have shown that the incidence of sudden cardiac death, among all people with HCM, has declined to one percent or less.
Screen-positive individuals who are diagnosed with cardiac disease are usually told to avoid competitive athletics. HCM can be detected with an echocardiogram (ECHO) with 80%+ accuracy, which can be preceded by screening with an electrocardiogram (ECG) to test for heart abnormalities. Cardiac magnetic resonance imaging (CMR), considered the gold standard for determining the physical properties of the left ventricular wall, can serve as an alternative screening tool when an echocardiogram provides inconclusive results. For example, the identification of segmental lateral ventricular hypertrophy cannot be accomplished with echocardiography alone. Also, left ventricular hypertrophy may be absent in children under thirteen years of age. This undermines the results of echocardiograms in pre-adolescents. Researchers, however, have studied asymptomatic carriers of an HCM-causing mutation through the use of CMR and have been able to identify crypts in the interventricular septal tissue in these people. It has been proposed that the formation of these crypts is an indication of myocyte disarray and altered vessel walls that may later result in the clinical expression of HCM. A possible explanation for this is that the typical gathering of family history focuses only on whether sudden death occurred or not. It fails to acknowledge the age at which relatives had sudden cardiac death, as well as the frequency of the cardiac events. Furthermore, given the several factors necessary to be considered at risk for sudden cardiac death, while most of the factors do not have strong predictive value individually, there exists ambiguity regarding when to implement special treatment. United States There are several potential challenges associated with routine screening for HCM in the United States. First, the U.S. athlete population of 15 million is almost twice as large as Italy's estimated athlete population. Second, these events are rare, with fewer than 100 deaths in the U.S. due to HCM in competitive athletes per year, or about 1 death per 220,000 athletes. Lastly, genetic testing would provide a definitive diagnosis; however, due to the numerous HCM-causing mutations, this method of screening is complex and is not cost-effective. Therefore, genetic testing in the United States is limited to individuals who exhibit clear symptoms of HCM and their family members. This ensures that the test is not wasted on detecting other causes of ventricular hypertrophy (due to its low sensitivity), and that family members of the individual are educated on the potential risk of being carriers of the mutant gene(s). Canada Canadian genetic testing guidelines and recommendations for individuals diagnosed with HCM are as follows: The main purpose of genetic testing is for screening family members. According to the results, at-risk relatives may be encouraged to undergo extensive testing. Genetic testing is not meant for confirming a diagnosis. If the diagnosed individual has no relatives who are at risk, then genetic testing is not required. Genetic testing is not intended for risk assessment or treatment decisions. Evidence only supports clinical testing in predicting the progression and risk of developing complications of HCM. For individuals suspected of having HCM: Genetic testing is not recommended for determining other causes of left ventricular hypertrophy (such as "athlete's heart", hypertension, and cardiac amyloidosis).
HCM may be differentiated from other hypertrophy-causing conditions using clinical history and clinical testing. Treatment Asymptomatic people A significant number of people with hypertrophic cardiomyopathy do not have any symptoms and will have a normal life expectancy, although they should avoid particularly strenuous activities or competitive athletics. Asymptomatic people should be screened for risk factors for sudden cardiac death. In people with resting or inducible outflow obstructions, situations that will cause dehydration or vasodilation (such as the use of vasodilatory or diuretic blood pressure medications) should be avoided. Septal reduction therapy is not recommended in asymptomatic people. Medications The primary goal of medications is to relieve symptoms such as chest pain, shortness of breath, and palpitations. Beta blockers are considered first-line agents, as they can slow down the heart rate and decrease the likelihood of ectopic beats. For people who cannot tolerate beta blockers, nondihydropyridine calcium channel blockers such as verapamil can be used, but these are potentially harmful in people who also have low blood pressure or severe shortness of breath at rest. These medications also decrease the heart rate, though their use in people with severe outflow obstruction, elevated pulmonary artery wedge pressure, and low blood pressure should be undertaken with caution. Dihydropyridine calcium channel blockers should be avoided in people with evidence of obstruction. For people whose symptoms are not relieved by the above treatments, disopyramide can be considered for further symptom relief. Diuretics can be considered for people with evidence of fluid overload, though they should be used cautiously in those with evidence of obstruction. People who continue to have symptoms despite drug therapy can consider more invasive therapies. Intravenous phenylephrine (or another pure vasoconstricting agent) can be used in the acute setting of low blood pressure in those with obstructive hypertrophic cardiomyopathy who do not respond to fluid administration. Mavacamten was approved for medical use in the United States in April 2022. Surgical septal myectomy Surgical septal myectomy is an open-heart operation done to relieve symptoms in people who remain severely symptomatic despite medical therapy. It has been performed successfully since the early 1960s. Surgical septal myectomy uniformly decreases left ventricular outflow tract obstruction and improves symptoms, and in experienced centers it has a surgical mortality of less than 1% as well as an 85% success rate. It involves a median sternotomy (general anesthesia, opening the chest, and cardiopulmonary bypass) and removing a portion of the interventricular septum. Surgical myectomy resection that focuses just on the subaortic septum, to increase the size of the outflow tract and reduce Venturi forces, may be inadequate to abolish systolic anterior motion (SAM) of the anterior leaflet of the mitral valve. With this limited resection, the residual mid-septal bulge still redirects flow posteriorly; SAM persists because flow still gets behind the mitral valve. It is only when the deeper portion of the septal bulge is resected that flow is redirected anteriorly away from the mitral valve, abolishing SAM. With this in mind, a modification of the Morrow myectomy termed extended myectomy, with mobilization and partial excision of the papillary muscles, has become the excision of choice.
In people with particularly large redundant mitral valves, anterior leaflet plication may be added to complete the separation of the mitral valve and outflow. Complications of septal myectomy surgery include possible death, arrhythmias, infection, incessant bleeding, septal perforation/defect, and stroke. Alcohol septal ablation Alcohol septal ablation, introduced by Ulrich Sigwart in 1994, is a percutaneous technique that involves injection of alcohol into one or more septal branches of the left anterior descending artery. This is a catheter technique with results similar to the surgical septal myectomy procedure but is less invasive, since it does not involve general anaesthesia and opening of the chest wall and pericardium (which are done in a septal myectomy). In a select population with symptoms secondary to a high outflow tract gradient, alcohol septal ablation can reduce the symptoms of HCM. In addition, older individuals and those with other medical problems, for whom surgical myectomy would pose increased procedural risk, would likely benefit from the less-invasive septal ablation procedure. When performed properly, an alcohol septal ablation induces a controlled heart attack, in which the portion of the interventricular septum that involves the left ventricular outflow tract is infarcted and will contract into a scar. There is debate over which people are best served by surgical myectomy, alcohol septal ablation, or medical therapy. Mitral clip Since 2013, mitral clips have been implanted via catheter as a new strategy to correct the motion of the mitral valve in people with severe obstructive HCM. The device fastens together the mitral valve leaflets to improve the heart's blood outflow. The mitral clip has not yet established the long-term reliability of septal myectomy or alcohol septal ablation, but HCM specialists are increasingly offering the clip as a less-invasive treatment option. Implantable pacemaker or defibrillator The use of a pacemaker has been advocated in a subset of individuals, in order to cause asynchronous contraction of the left ventricle. Since the pacemaker activates the interventricular septum before the left ventricular free wall, the gradient across the left ventricular outflow tract may decrease. This form of treatment has been shown to provide less relief of symptoms and less of a reduction in the left ventricular outflow tract gradient when compared to surgical myectomy. Technological advancements have also led to the development of a dual-chamber pacemaker, which is only turned on when needed (in contrast to a regular pacemaker, which provides a constant stimulus). Although the dual-chamber pacemaker has been shown to decrease ventricular outflow tract obstruction, experimental trials have found only a few individuals with improved symptoms. Researchers suspect that these reports of improved symptoms are due to a placebo effect. The procedure includes an incision on the anterolateral area below the clavicle. Two leads are then inserted: one into the right atrium and the other into the right ventricular apex, via the subclavian veins. Once in place, they are secured and attached to the generator, which will remain inside the fascia, anterior to the pectoral muscle.
Complications of this procedure include infection and malfunction of the electrical leads or generator, which would require replacement. For people with HCM who exhibit one or more of the major risk factors for sudden cardiac death, an implantable cardioverter-defibrillator (ICD) or a combination pacemaker/ICD all-in-one unit may be recommended as an appropriate precaution. In 2014, the European Society of Cardiology suggested a practical risk score to calculate that risk (a sketch of its general form is given after this section). Cardiac transplantation In cases that are unresponsive to all other forms of treatment, cardiac transplantation is one option. It is also the only treatment available for end-stage heart failure. However, transplantation must occur before the onset of complications such as pulmonary hypertension, kidney malfunction, and thromboembolism in order for it to be successful. Studies have indicated a seven-year survival rate of 94% in people with HCM after transplantation. Prognosis A systematic review from 2002 concluded: "Overall, HCM confers an annual mortality rate of about 1%... HCM may be associated with important symptoms and premature death but more frequently with no or relatively mild disability and normal life expectancy." Children Even though hypertrophic cardiomyopathy (HCM) may be present early in life and is most likely congenital, it is one of the least common cardiac malformations encountered in pediatric cardiology, largely because the presentation of symptoms is usually absent, incomplete, or delayed into adulthood. Most of the current information pertaining to HCM arises from studies in adult populations, and the implication of these observations for the pediatric population is often uncertain. Nonetheless, recent studies in pediatric cardiology have revealed that HCM accounts for 42% of childhood cardiomyopathies, with an annual incidence rate of 0.47 per 100,000 children. Further, in asymptomatic cases, sudden death is considered one of the most feared complications associated with the disease in select pediatric populations. Consequently, the recommended practice is to screen children of affected individuals throughout childhood to detect cardiac abnormalities at an early stage, in the hope of preventing further complications of the disease. Generally, the diagnosis of HCM in a pediatric population is made during assessment for murmur, congestive heart failure, physical exhaustion, and genetic testing of children of affected individuals. Specifically, echocardiography (ECHO) has been used as a definitive noninvasive diagnostic tool in nearly all children. ECHO assesses cardiac ventricular size, wall thickness, systolic and diastolic function, and outflow obstruction; it has thus been chosen as an ideal means to detect excessive wall thickening of cardiac muscle in HCM. For children with HCM, treatment strategies aim to reduce disease symptoms and lower the risk of sudden death. Due to the heterogeneity of the disease, treatment is usually modified according to individual needs. β-blockers improve left ventricular filling and relaxation and thereby improve symptoms. In some children, β-blockers (e.g., propranolol) were shown to be effective in reducing the risk of sudden death. Further, calcium channel blockers (verapamil) and antiarrhythmic drugs may be used as adjunct therapy to β-blockers in symptomatic children. Nonetheless, further testing is needed to determine their definitive benefits.
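The 2014 ESC risk score mentioned above combines clinical and echocardiographic predictors into a single five-year probability of sudden cardiac death using a Cox proportional-hazards model of the general form P = 1 − S₀^exp(PI), where PI is a weighted sum of the predictors. The Python sketch below illustrates only this general shape: the coefficient values, the baseline survival constant, and the example patient are illustrative placeholders, not the published model, and nothing here is suitable for clinical use; the validated coefficients are given in the original 2014 publication.

```python
import math

# Illustrative placeholder weights -- NOT the published HCM Risk-SCD
# coefficients; consult the original 2014 ESC publication for those.
WEIGHTS = {
    "max_wall_thickness_mm": 0.16,       # linear term
    "max_wall_thickness_mm_sq": -0.003,  # quadratic term (risk plateaus)
    "left_atrial_diameter_mm": 0.026,
    "max_lvot_gradient_mmhg": 0.004,
    "family_history_scd": 0.46,          # 1 if present, else 0
    "nsvt": 0.83,                        # non-sustained VT (1/0)
    "unexplained_syncope": 0.72,         # 1 if present, else 0
    "age_years": -0.018,
}
BASELINE_SURVIVAL = 0.998  # placeholder 5-year baseline survival S0

def five_year_scd_risk(p: dict) -> float:
    """Return the modeled 5-year probability of sudden cardiac death."""
    pi = (
        WEIGHTS["max_wall_thickness_mm"] * p["max_wall_thickness_mm"]
        + WEIGHTS["max_wall_thickness_mm_sq"] * p["max_wall_thickness_mm"] ** 2
        + WEIGHTS["left_atrial_diameter_mm"] * p["left_atrial_diameter_mm"]
        + WEIGHTS["max_lvot_gradient_mmhg"] * p["max_lvot_gradient_mmhg"]
        + WEIGHTS["family_history_scd"] * p["family_history_scd"]
        + WEIGHTS["nsvt"] * p["nsvt"]
        + WEIGHTS["unexplained_syncope"] * p["unexplained_syncope"]
        + WEIGHTS["age_years"] * p["age_years"]
    )
    # Cox model: risk = 1 - S0 ** exp(prognostic index)
    return 1.0 - BASELINE_SURVIVAL ** math.exp(pi)

# Hypothetical example patient (values chosen only for illustration)
patient = {
    "max_wall_thickness_mm": 22,
    "left_atrial_diameter_mm": 44,
    "max_lvot_gradient_mmhg": 30,
    "family_history_scd": 1,
    "nsvt": 0,
    "unexplained_syncope": 0,
    "age_years": 35,
}
print(f"Modeled 5-year SCD risk: {five_year_scd_risk(patient):.1%}")
```

In the ESC guideline, thresholds on this probability (roughly 4% and 6% at five years) are used to grade how strongly an ICD is recommended; those cut-offs are mentioned here only as context for how such a score is applied.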
Other animals Cats Feline hypertrophic cardiomyopathy (HCM) is the most common heart disease in domestic cats; the disease process and genetics are believed to be similar to the disease in humans. In Maine Coon cats, HCM has been confirmed as an autosomal dominant inherited trait, and HCM is a problem in numerous other cat breeds. The first genetic mutation (in cardiac myosin binding protein C) responsible for feline HCM was discovered in 2005 in Maine Coon cats. A test for this mutation (A31P) is available. About one-third of Maine Coon cats tested for the mutation are either heterozygous or homozygous for it, although many of the cats that are heterozygous have no overt evidence of the disease on an echocardiogram (low penetrance). Some Maine Coon cats with clinical evidence of hypertrophic cardiomyopathy test negative for this mutation, strongly suggesting that another cause exists in the breed. The cardiac myosin binding protein C mutation identified in Maine Coon cats has not been found in any other breed of cat with HCM, but more recently another myosin binding protein C mutation has been identified in Ragdoll cats with HCM. As in humans, feline HCM is not present at birth but develops over time. It has been identified in cats as young as 6 months of age and at least as old as 7 years of age. Clinically, cats with hypertrophic cardiomyopathy commonly have systolic anterior motion (SAM) of the mitral valve. Cats with severe HCM often develop left heart failure (pulmonary edema; pleural effusion) because of severe diastolic dysfunction of the left ventricle. They may also develop a left atrial thrombus that embolizes, most commonly, to the terminal aorta, creating acute pain and rear limb paralysis (see below). Sudden death can also occur but appears to be uncommon. Ultrasound of the heart (echocardiography) is necessary to diagnose HCM in cats. Measurement of circulating cardiac biomarkers, such as N-terminal proBNP (NT-proBNP) and troponin I (TnI), may be used in cats to strengthen the suspicion of cardiac disease. A point-of-care test for feline NT-proBNP is available and can be used at the veterinary clinic when echocardiography cannot be performed. There is no cure for feline HCM. Many but not all cats have a heart murmur, and many cats that have a heart murmur do not have HCM. Frequently the first signs that a cat has HCM are tachypnea/dyspnea due to heart failure, or acute pain and paralysis due to systemic thromboembolism. While medication is commonly given to cats with HCM that have no clinical signs, no medication has been shown to be helpful at this stage, and it has been shown that an ACE inhibitor is not beneficial until heart failure is present (at which time a diuretic is most beneficial). Diltiazem generally produces no demonstrable benefit. Atenolol is commonly administered when severe systolic anterior motion of the mitral valve is present. Feline arterial thromboembolism (FATE) is a relatively common and devastating complication of feline HCM and other feline cardiomyopathies. The thrombus generally forms in the left atrium, most commonly the left auricle. The formation is thought to be primarily due to blood flow stasis. Classically, the thromboembolism lodges at the iliac trifurcation of the aorta, occluding either one or both of the common iliac arteries. Because this split is called the saddle, and is the most frequent location for the thrombus, FATE is commonly known as saddle thrombus.
Clinically this presents as a cat with complete loss of function in one or both hind limbs. The hind limbs are cold and the cat is in considerable pain. Emboli may, rarely, lodge in other locations, most commonly the right front limb and the renal arteries. Clopidogrel is used to try to prevent left atrial thrombus formation in cats with HCM and a large left atrium. The FATCAT study at Purdue University demonstrated that it is superior to aspirin for the prevention of a second thrombus in cats that have already experienced a clot. Thrombolytic agents (e.g., tissue plasminogen activator) have been used with some success to break down an existing aortic thromboembolism, but their cost is high and the outcome appears to be no better than giving the cat time (48–72 hours) to break down its own clot. Pain management is extremely important. The prognosis for cats with FATE is often poor, as they are likely to have significant HCM already and a recurrent bout of FATE is likely. For this reason, euthanasia is often a valid consideration. Gorillas In July 2013, Rigo, a 42-year-old western lowland gorilla, resident in Melbourne Zoo and father of Mzuri, the first gorilla born by artificial insemination, died unexpectedly as a result of HCM. The condition is not uncommon in male gorillas over the age of 30, and in many cases there is no sign of the disease until the individual's sudden death.
Linear IgA bullous dermatosis
Linear IgA bullous dermatosis is a rare immune-mediated blistering skin disease frequently associated with medication exposure, especially vancomycin, with men and women being equally affected. It was first described by Tadeusz Chorzelski in 1979 and may be divided into two types: Adult linear IgA disease is an acquired, autoimmune blistering disease that may present with a clinical pattern of vesicles indistinguishable from dermatitis herpetiformis, or with vesicles and bullae in a bullous pemphigoid-like appearance. This disease can often be difficult to treat, even with usually effective medications such as rituximab. Childhood linear IgA disease (also known as "chronic bullous disease of childhood") is an acquired, self-limited bullous disease that may begin by the time the patient is age 2 to 3 and usually remits by age 13. See also Skin lesion List of cutaneous conditions List of target antigens in pemphigoid List of immunofluorescence findings for autoimmune bullous conditions
Burping
Burping (also called belching and eructation) is the release of gas from the upper digestive tract (esophagus and stomach) of animals through the mouth. It is usually audible. In humans, burping can be caused by normal eating processes, or as a side effect of other medical conditions. There is a range of levels of social acceptance for burping: within certain contexts and cultures, burping is acceptable, while in others it is offensive or unacceptable. Failure to burp can cause pain or other negative effects. Humans are not the only animals that burp: it is very common among other mammals. In particular, burping by domesticated ruminants, such as cows or sheep, is a major contributor to methane emissions, which cause climate change and add to the negative environmental impact of animal agriculture. Significant research is being done to find mitigation strategies for ruminant burping, e.g., modifying the animals' diets with Asparagopsis taxiformis (red seaweed). Causes Burping is usually caused by swallowing air when eating or drinking and subsequently expelling it, in which case the expelled gas is mainly a mixture of nitrogen and oxygen. Burps can be caused by drinking beverages containing carbon dioxide, such as beer and soft drinks, in which case the expelled gas is mainly carbon dioxide. Diabetes drugs such as metformin and exenatide can cause burping, especially at higher doses. This often resolves in a few weeks. Burping combined with other symptoms such as dyspepsia, nausea and heartburn may be a sign of an ulcer or hiatal hernia, and should be reviewed by a physician. Other causes of burping include food allergies, gallbladder diseases, H. pylori, acid reflux disease and gastritis. Complications In microgravity environments, burping is frequently associated with regurgitation. With reduced gravity, the stomach contents are more likely to rise up into the esophagus when the gastroesophageal sphincter is relaxed, along with the expelled air. Disorders Inability to burp is uncommon. Chest pain associated with burping can occur, but is rare. Retrograde cricopharyngeus dysfunction (R-CPD) involves the cricopharyngeus muscle not being able to relax. R-CPD was first identified in 2015, when a user described the symptoms on Reddit; several other users found the post and mentioned that they had a similar condition. Common symptoms include gurgling noises, bloating and flatulence; less prominent but common symptoms include potentially painful hiccups, nausea, constipation, hypersalivation, and shortness of breath. 80% of patients were successfully treated with Botox after a single injection. If the injection is unsuccessful, an alternative is to undergo partial cricopharyngeal myotomy. Society and culture Acceptance Some South Asian cultures view burping as acceptable in particular situations. For example, a burping guest can be a sign to the host that the meal satisfied them and they are full. In Japan, burping during a meal is considered bad manners. Burping during a meal is also considered unacceptable in Western cultures, such as in North America and Europe. In Middle Eastern countries, it is not acceptable to burp out loud in public, and one should silence one's burp, or at least attempt to do so. Despite virtually no scientific research on the subject, small online communities exist for burping as a sexual fetish.
Online, both men and women of any sexual orientation anecdotally report some attraction to burping, with what appear to be psychological and/or behavioural overlaps with other sexual fetishes, including body inflation, feederism, vorarephilia, and farting fetishes. Anecdotally, loudness appears to be an important element for burp fetishists. Despite being a rather uncommon fetish, it follows a well-known general pattern of sexual behaviour in which hearing influences sexual arousal and response, with one account noting that "it is the noise made rather than the action itself that appears to be what is sexualized and/or interpreted by the fetishist as sexually pleasurable and arousing". Infants Babies are likely to accumulate gas in the stomach while feeding and experience considerable discomfort (and agitation) until assisted. Burping an infant involves placing the child in a position conducive to gas expulsion (for example against the adult's shoulder, with the infant's stomach resting on the adult's chest) and then lightly patting the lower back. Because burping can cause vomiting, a "burp cloth" or "burp pad" is sometimes employed on the shoulder to protect clothing. Contest The Guinness World Record for the loudest burp is 109.9 dB, set by Paul Hunn at Butlins Bognor Regis, United Kingdom, on 23 August 2009. This is louder than a jackhammer at a distance of 1 m (3 ft 3 in). Burped speech It is possible to voluntarily induce burping by swallowing air and then expelling it, and, by manipulating the vocal tract, to produce burped speech. While this is often employed as a means of entertainment or competition, it can also act as an alternative means of vocalisation for people who have undergone a laryngectomy, with the burp replacing laryngeal phonation. This is known as esophageal speech. Other animals Many other mammals, such as cows, dogs and sheep, also burp. Ruminants Much of the gas expelled is produced as a byproduct of the ruminant's digestive process. These gases notably include a large volume of methane, produced exclusively by a narrow cohort of methanogenic archaea in the animal's gut; Escherichia coli (E. coli) and other bacteria lack the enzymes and cofactors required for methane production. A lactating cow produces about 322 g of methane per day, i.e. more than 117 kg per year, through burping and exhalation (a worked check of this figure is sketched below), making commercially farmed cows a major (37%) contributor to anthropogenic methane emissions, and hence to the greenhouse effect. About 95% of this gas is emitted through burping. This has led scientists at the Commonwealth Scientific and Industrial Research Organisation in Perth, Australia, to develop an anti-methanogen vaccine to minimize methane in cow burps. One reason why cows burp so much is that they are often fed foods that their digestive systems cannot fully process, such as corn and soy. Some farmers have reduced burping in their cows by feeding them alfalfa and flaxseed, which are closer to the grasses that they had eaten in the wild before they were domesticated. The failure to burp successfully can be fatal. This is particularly common among domesticated ruminants that are allowed to gorge themselves on spring clover or alfalfa. The condition, known as ruminal tympany, is a high-pressure buildup of gas in the stomachs and requires immediate treatment to expel the gas, usually the insertion of a flexible rubber hose down the esophagus, or in extreme cases the lancing of the animal's side with a trocar and cannula.
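The per-cow methane figures quoted above can be cross-checked with simple arithmetic, as in the short sketch below; the herd size used at the end is a hypothetical illustration, not a figure from the source.

```python
# Cross-check of the per-cow methane figures quoted above.
DAILY_METHANE_G = 322   # grams of methane per lactating cow per day
BURP_FRACTION = 0.95    # share of the gas emitted through burping

annual_kg = DAILY_METHANE_G * 365 / 1000
print(f"Annual methane per cow: {annual_kg:.1f} kg")        # ~117.5 kg
print(f"Of which burped: {annual_kg * BURP_FRACTION:.1f} kg")

# Hypothetical illustration: scaling to a 500-cow dairy herd
herd_size = 500
print(f"Herd of {herd_size}: {annual_kg * herd_size / 1000:.1f} tonnes/year")
```

Running this confirms the article's conversion: 322 g/day over 365 days is about 117.5 kg of methane per cow per year.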
Birds There is no documented evidence that birds burp, though ornithologists believe that there is nothing which physiologically prevents them from doing so. However, since the microbiota of birds do not include the same set of gas-producing bacteria that mammals have to aid in digestion, gas rarely builds up in the gastrointestinal tracts of birds. See also Flatulence Hiccup Penelope and the Humongous Burp
Cardiac fibrosis
Cardiac fibrosis commonly refers to the excess deposition of extracellular matrix in the cardiac muscle, but the term may also refer to an abnormal thickening of the heart valves due to inappropriate proliferation of cardiac fibroblasts. Fibrotic cardiac muscle is stiffer and less compliant and is seen in the progression to heart failure. The description below focuses on a specific mechanism of valvular pathology, but there are other causes of valve pathology and of fibrosis of the cardiac muscle. Fibrocyte cells normally secrete collagen and function to provide structural support for the heart. When over-activated, this process causes thickening and fibrosis of the valve, with white tissue building up primarily on the tricuspid valve, but also occurring on the pulmonary valve. The thickening and loss of flexibility eventually may lead to valvular dysfunction and right-sided heart failure. Types The following are types of myocardial fibrosis: Interstitial fibrosis, which is nonspecific and has been described in congestive heart failure, hypertension, and normal aging. Subepicardial fibrosis, also nonspecific, which is associated with non-infarction diagnoses such as myocarditis and non-ischemic cardiomyopathy. Replacement fibrosis, which indicates an older infarction. Connection with excess blood serotonin (5-HT) Certain diseases such as neuroendocrine tumors of the small intestine (also known by the obsolete term carcinoid), which sometimes release large amounts of 5-hydroxytryptamine, commonly known as 5-HT or serotonin, into the blood, may produce a characteristic pattern of mostly right-sided cardiac fibrosis which can be identified with echocardiography. Cardiac fibrosis is a significant source of morbidity and mortality in patients with functional neuroendocrine tumors. This pathology has also been seen in certain East African tribes who eat foods (matoke, a green banana) containing excess amounts of serotonin. Connection with direct serotonergic agonist drugs An elevated prevalence of cardiac fibrosis and related valvopathies was found to be associated with use of a number of unrelated drugs following long-term statistical analysis once the drugs had been on the market for some time. The cause of this was unknown at the time, but eventually it was realised that all the implicated drugs acted as agonists at 5-HT2B receptors in the heart in addition to their intended sites of action elsewhere in the body. The precise mechanisms involved remain elusive, however, as while the cardiotoxicity shows some dose–response relationship, it does not always develop, and consistent daily use over an extended period tends to be most strongly predictive of development of valvopathy. The drugs most classically associated with the condition are weight loss drugs such as fenfluramine and chlorphentermine, and antiparkinson drugs such as pergolide and cabergoline, which are prescribed for chronic use. The heart valve changes seen with moderate and intermittent use can result in permanent damage and life-threatening heart problems if use of the causative drug is increased or continued; however, longitudinal studies of former patients suggest that the damage will heal over time, to some extent at least.
Anorectics Some appetite suppressant drugs, such as fenfluramine (marketed as Pondimin, and in combination with phentermine commonly referred to as fen-phen), chlorphentermine, and aminorex (along with its analogue 4-Methylaminorex, which has seen sporadic use as a recreational drug), induce a similar pattern of cardiac fibrosis (and pulmonary hypertension), apparently by overstimulating 5-HT2B receptors on the cardiac fibroblast cells. These drugs consequently tend to cause an increased risk of heart valve damage and subsequent heart failure, which eventually led to them being withdrawn from the market. Antimigraine drugs Certain antimigraine drugs that target serotonin receptors as vasoconstrictive agents have long been known to be associated with pulmonary hypertension and Raynaud's phenomenon (both vasoconstrictive effects), as well as retroperitoneal fibrosis (a fibrotic cell/fibrocyte proliferation effect, thought to be similar to cardiac valve fibrosis). These drugs include ergotamine and methysergide, both of which can also cause cardiac fibrosis. Antiparkinson drugs Certain antiparkinson drugs, although targeted at dopaminergic receptors, cross-react with serotonergic 5-HT2B receptors as well, and have been reported to cause cardiac fibrosis. These drugs include pergolide and cabergoline. Antihypertensive drugs Guanfacine may be a 5-HT2B agonist, based on the results of theoretical modeling and high-throughput screening. Pergolide Pergolide was an antiparkinson medication that fell into decreasing use after it was reported in 2003 to be associated with cardiac fibrosis. In March 2007, pergolide was withdrawn from the U.S. market due to serious valvular damage that was shown in two independent studies. Cabergoline Like pergolide, cabergoline has been linked to cardiac damage. Among similar antiparkinsonian drugs, cabergoline exhibits the same type of serotonin receptor binding as pergolide. Although lisuride, a related drug, also binds to the 5-HT2B receptor, it acts as an antagonist rather than as an agonist. In January 2007, cabergoline (Dostinex) was also reported to be associated with valvular proliferation heart damage. Recreational drugs Several serotonergic recreational drugs, including the empathogens MDA and MDMA ("ecstasy"), and some hallucinogens such as DOI and Bromo-DragonFLY, have been shown to act as 5-HT2B agonists in vitro, but how significant this may be as a risk factor associated with their recreational use is unclear. The piperazine derivative mCPP (a major metabolite of trazodone) is a 5-HT2B agonist in animal models, but actually behaves as a 5-HT2B antagonist in humans. MDMA One study of human users of MDMA ("ecstasy") found that they had heart valve changes suggestive of early cardiac fibrosis which were not present in non-MDMA-using controls, suggesting that MDMA use has the potential to cause this kind of heart damage. On the other hand, there is as yet no statistical evidence to establish or negate significant increases in rates of cardiac valvopathies in current or former MDMA users. Absent studies on point, it may be speculated that, as with other 5-HT2B agonists, development of heart valve damage may be dependent on the frequency and duration of use and the total cumulative exposure over time. If that is the case, then the heaviest users are likely to face the greatest risk of heart damage.
Other serotonergic pharmacologics in question The SSRI antidepressants raise blood serotonin levels, and thus may carry the same risks, though it is thought that the risk is substantially lower with such drugs. The amino acid L-tryptophan also raises blood serotonin, and may present the same risk as well; though, again, the risk is considered to be low. However, the tryptophan derivative 5-HTP (5-hydroxytryptophan), used in the treatment of depression, raises blood serotonin levels considerably. It has yet to be reported to be associated with valve disease or other fibrosis, but for the previous theoretical reasons it has been suggested as a possible danger. When 5-HTP is used in medicine, it is generally administered along with carbidopa, which prevents the peripheral decarboxylation of 5-HTP to serotonin and so ensures that only brain serotonin levels are increased, without producing peripheral side effects. However, 5-HTP is also sold without carbidopa as a dietary supplement, and may carry increased risks when taken by itself. In non-human great apes Cardiac fibrosis is common in non-human great apes in human care. The term idiopathic myocardial fibrosis was coined to emphasize that this disease is likely different from the forms of cardiac fibrosis in humans described above. The etiology is not known, though vitamin D deficiency is a suspected cause, at least in chimpanzees. Possible treatments The most obvious treatment for cardiac valve fibrosis, or fibrosis in other locations, consists of stopping the stimulatory drug or the production of serotonin. In the case of a functional neuroendocrine tumor, somatostatin analogs such as octreotide are used to reduce the production of serotonin by tumor cells, which often highly express inhibitory somatostatin receptors. Surgical tricuspid valve replacement, sometimes combined with a pulmonary valve replacement, can be necessary in some patients. Resveratrol, a compound found in red wine, has been found to slow the development of cardiac fibrosis. More sophisticated approaches to countering cardiac fibrosis, such as microRNA inhibition (of miR-21, for example), are being tested in animal models.