Management of borderline personality disorder

The mainstay of management of borderline personality disorder is psychotherapy in its various forms; medications have been found to be of little use.
Psychotherapy
There has traditionally been skepticism about the psychological treatment of personality disorders, but several specific types of psychotherapy for BPD have developed in recent years. There is growing evidence for the role of psychotherapy in the treatment of people with BPD, with indications that both comprehensive and non-comprehensive psychotherapeutic interventions may have a beneficial effect. Supportive therapy alone may enhance self-esteem and mobilize the existing strengths of individuals with BPD. Specific psychotherapies may involve sessions over several months or, as is particularly common for personality disorders, several years. Psychotherapy can often be conducted either with individuals or with groups. Group therapy can aid the learning and practice of interpersonal skills and self-awareness by individuals with BPD, though drop-out rates may be problematic.
Dialectical behavioral therapy
University of Washington psychology professor Marsha Linehan is credited with developing the first empirically supported standard treatment for BPD, termed dialectical behavioral therapy (DBT). DBT grew dramatically in popularity among mental health professionals following the publication of Linehan's treatment manuals for DBT in 1993. DBT was originally developed as an intervention for patients who meet criteria for BPD and particularly those who are highly suicidal.
DBT draws its principles from behavioral science (including cognitive-behavioral techniques), dialectical philosophy and Zen practice. The treatment emphasizes balancing acceptance and change (hence dialectic), with the overall goal of helping patients not just survive but build a life worth living. Treatment is delivered in four stages, with self-harm and other life-threatening issues taking priority. In the second stage, patients are encouraged to experience the painful emotions that they have been avoiding. Stage three addresses problems of living such as career and marital problems. Finally, stage four focuses on helping clients feel complete and reducing feelings of emptiness and boredom.
DBT encompasses four modes of therapy:
The first mode is traditional individual therapy between a single therapist and client.
The second mode of therapy is skills training; a core component of DBT is learning new behavioral skills, including mindfulness, interpersonal effectiveness (e.g. assertiveness and social skill), coping adaptively with distress and crises, and identifying and regulating emotional reactions.
The third mode of therapy used is skills generalization, which focuses on helping clients integrate the skills taught in DBT into real-life situations. This usually involves coaching in the form of telephone contact outside of normal therapy hours. The calls are usually brief interactions focused on helping clients apply specific skills to circumstances they are experiencing.
The fourth mode of therapy is the use of a consultation team designed to support the therapists. These teams have several important functions including reducing therapist burnout, providing therapy for the therapists, improving empathy for clients and providing ongoing consultations for client difficulties.
The goal of all DBT treatment approaches is to reduce the ineffective action tendencies linked to dysregulated emotions. DBT is based on a biosocial theory of personality functioning in which the core problem is seen as the breakdown of the patient's cognitive, behavioral and emotional regulation systems when experiencing intense emotions. The etiology of BPD is seen as a biological predisposition toward emotional dysregulation combined with a perceived invalidating social environment.
Several randomized controlled trials comparing DBT to other forms of cognitive-behavioral treatments have favored the use of DBT to treat borderline patients. Specifically, DBT has been found to significantly reduce self-injury, suicidal behavior, impulsivity, self-rated anger and the use of crisis services among borderline patients. These reductions have been found even when controlling for other treatment factors such as therapist experience, affordability of treatment, gender of therapist and the number of hours spent in individual therapy. A meta-analysis found DBT to be moderately effective; however, none of the studied therapies (including CBT) "fulfilled the criteria for empirically supported treatment." Its additional efficacy in the overall treatment of BPD is less clear, and future research is needed to isolate the specific components of DBT that are most effective in treating BPD. Furthermore, little research has examined the efficacy of DBT in treating male and minority patients with BPD. Training nurses in the use of DBT has been found to replace therapeutic pessimism with a more optimistic understanding and outlook.
Schema therapy
Schema therapy (also called schema-focused therapy) is an integrative approach based on cognitive-behavioral or skills-based techniques along with object relations and gestalt approaches. It directly targets deeper aspects of emotion, personality and schemas (fundamental ways of categorizing and reacting to the world). The treatment also focuses on the relationship with the therapist (including a process of "limited re-parenting"), daily life outside of therapy and traumatic childhood experiences. It was developed by Jeffrey Young and became established in the 1990s. Limited recent research suggests it is significantly more effective than transference-focused psychotherapy, with half of individuals with borderline personality disorder assessed as having achieved full recovery after four years and two-thirds showing clinically significant improvement. Another very small trial has also suggested efficacy.
Cognitive behavioral therapy
Cognitive behavioral therapy (CBT) is the most widely used and established psychological treatment for mental disorders, but has appeared less successful in BPD, due partly to difficulties in developing a therapeutic relationship and treatment adherence. Approaches such as DBT and schema-focused therapy developed partly as an attempt to expand and add to traditional CBT, which uses a limited number of sessions to target specific maladaptive patterns of thought, perception and behavior. A recent study did find a number of sustained benefits of CBT, in addition to treatment as usual, after an average of 16 sessions over one year.
Psychoanalysis
With the DSM-IV, the term took on two orientations: one psychiatric, the other behavioral and embedded in a psychoanalytic psychopathology. According to this split, the diagnosis denotes either a set of symptoms to be eradicated or a particular type of patient encountered by psychoanalysts.
Psychodynamic psychotherapy generally
Psychodynamic psychotherapy (PP) encompasses several types of psychotherapy derived from psychoanalysis, with durations ranging from 10 to 25 sessions (short-term psychodynamic psychotherapy) to over 200 sessions. The main emphases of these approaches differ considerably, but they share treatment principles grounded in modern psychoanalytic theory and mainly focus on one or several target problems. Results of meta-analysis show that psychodynamic psychotherapy has large effects in the treatment of personality disorders, and indicate that it produces long-term changes in personality disorders.
Transference-focused psychotherapy
Transference-focused psychotherapy (TFP) is a form of psychoanalytic therapy dating to the 1960s, rooted in the conceptions of Otto Kernberg on BPD and its underlying structure (borderline personality organization). Unlike in the case of traditional psychoanalysis, the therapist plays a very active role in TFP. In session the therapist works on the relationship between the patient and the therapist. The main focus is on the patient's emotions concerning their relationship with the therapist and the therapist's use of psychodynamic techniques (e.g., interpretation). The therapist will try to explore and clarify aspects of this relationship so the underlying object relations dyads become clear. Some limited research on TFP suggests it may reduce some symptoms of BPD by affecting certain underlying processes, and that TFP in comparison to dialectical behavioral therapy and supportive therapy results in increased reflective functioning (the ability to realistically think about how others think) and a more secure attachment style. Furthermore, TFP has been shown to be as effective as DBT in improvement of suicidal behavior, and has been more effective than DBT in alleviating anger and in reducing verbal or direct assaultive behavior. Limited research suggests that TFP appears to be less effective than schema-focused therapy, while being more effective than no treatment.
Cognitive analytic therapy
Cognitive analytic therapy combines cognitive and psychoanalytic approaches and has been adapted for use with individuals with BPD with mixed results.
Mentalization-based treatment
Mentalization-based treatment, developed by Peter Fonagy and Anthony Bateman, rests on the assumption that people with BPD have a disturbance of attachment due to problems in the early childhood parent-child relationship. Fonagy and Bateman hypothesize that inadequate parental mirroring and attunement in early childhood lead to a deficit in mentalization, "the capacity to think about mental states as separate from, yet potentially causing actions"; in other words, the capacity to intuitively understand the thoughts, intentions and motivations of others, and the connections between one's own thoughts, feelings and actions. Mentalization failure is thought to underlie BPD patients' problems with impulse control, mood instability and difficulties sustaining intimate relationships. Mentalization-based treatment aims to develop patients' self-regulation capacity through a psychodynamically informed multi-modal treatment program that incorporates group psychotherapy and individual psychotherapy in a therapeutic community, partial hospitalization or outpatient context. In a randomized controlled trial, a group of BPD patients received 18 months of intensive partial-hospitalization MBT followed by 18 months of group psychotherapy, and were followed up over five years. The treatment group showed significant benefits across a range of measures, including number of suicide attempts, reduced time in hospital and reduced use of medication.
Marital or family therapy
Marital therapy can be helpful in stabilizing the marital relationship and in reducing marital conflict and stress that can worsen BPD symptoms. Family therapy or family psychoeducation can help educate family members regarding BPD, improve family communication and problem solving, and provide support to family members in dealing with their loved one's illness.
Two patterns of family involvement can help clinicians plan family interventions: overinvolvement and neglect. Borderline patients from overinvolved families are often actively struggling with dependency, responding with denial or with anger at their parents.
Interest in the use of psychoeducation and skills training approaches for families with borderline members is growing.
Medication
The UK's National Institute for Health and Clinical Excellence (NICE) in 2009 advises against the use of medication for treating borderline personality disorder, recommending that medication be considered only for comorbid conditions. A Cochrane review from 2006 arrived at the same conclusion, but a 2010 update found that some pharmacological interventions (second-generation antipsychotics, mood stabilisers and dietary supplementation with omega-3 fatty acids) might provide beneficial effects.
However, the authors warned that total BPD severity is not significantly influenced by any drug and that the evidence generated by the review was based on single study effect estimates. No promising results were available for the core BPD symptoms of chronic feelings of emptiness, identity disturbance and abandonment.
Antidepressants
Selective serotonin reuptake inhibitor (SSRI) antidepressants have been shown in randomized controlled trials to improve the attendant symptoms of anxiety and depression, such as anger and hostility, associated with BPD in some patients. According to Listening to Prozac, it takes a higher dose of an SSRI to treat mood disorders associated with BPD than depression alone. It also takes about three months for benefit to appear, compared to the three to six weeks for depression.
Antipsychotics
The newer atypical antipsychotics are claimed to have an improved adverse effect profile compared with the typical antipsychotics. Antipsychotics are also sometimes used to treat distortions in thinking or false perceptions. One meta-analysis of two randomized controlled trials, four non-controlled open-label studies and eight case reports has suggested that several atypical antipsychotics, including olanzapine, clozapine, quetiapine and risperidone, may help BPD patients with psychotic-like, impulsive or suicidal symptoms. However, antipsychotics have numerous adverse effects, notably tardive dyskinesia (TD), and atypical antipsychotics are known for often causing considerable weight gain, with associated health complications.
Mood stabilizers
Mood stabilizers are anticonvulsant drugs used both for epilepsy and to reduce mood swings in patients with excessive and often dangerous mood variability. The goal of the anticonvulsants is often to bring certain areas of the brain to equilibrium and to control outbursts and seizures. Mood stabilizers (used primarily to treat bipolar disorder) such as lithium or lamotrigine may be of some use for depressed or labile periods, as well as rapid changes in mood. A randomized controlled trial reported by Lieb (2010) found that the mood stabilizer valproate semisodium produced a significant decrease in interpersonal conflicts and depression. Topiramate also showed a significant decrease in interpersonal issues and depression, and lamotrigine a significant decrease in impulsivity and anger-related behaviors, while carbamazepine showed no significant effects in patients with BPD. Mood stabilizers are often used to treat comorbid disorders in BPD patients. There is currently no medication with an overall significant effect on BPD as a whole.
Services and recovery
Individuals with BPD sometimes use mental health services extensively. People with this diagnosis accounted for about 20 percent of psychiatric hospitalizations in one survey. The majority of BPD patients continue to use outpatient treatment in a sustained manner for several years, but the number using the more restrictive and costly forms of treatment, such as inpatient admission, declines with time. Experience of services varies. Assessing suicide risk can be a challenge for mental health services: patients typically have a chronically elevated risk of suicide, much above that of the general population, together with a history of multiple attempts when in crisis, and they themselves tend to underestimate the lethality of self-injurious behaviours.
Particular difficulties have been observed in the relationship between care providers and individuals diagnosed with BPD. A majority of psychiatric staff report finding individuals with BPD moderately to extremely difficult to work with, and more difficult than other client groups. On the other hand, those with the diagnosis of BPD have reported that the term "BPD" felt like a pejorative label rather than a helpful diagnosis, that self-destructive behaviour was incorrectly perceived as manipulative, and that they had limited access to care. Attempts are made to improve public and staff attitudes.
Combining pharmacotherapy and psychotherapy
In practice, psychotherapy and medication may often be combined, but there are limited data on clinical practice. Efficacy studies often assess the effectiveness of interventions when added to "treatment as usual" (TAU), which may involve general psychiatric services, supportive counselling, medication and psychotherapy.
One small study, which excluded individuals with a comorbid Axis 1 disorder, has indicated that outpatients undergoing dialectical behavioral therapy and taking the antipsychotic olanzapine show significantly more improvement on some measures related to BPD, compared to those undergoing DBT and taking a placebo pill, although they also experienced weight gain and raised cholesterol. Another small study found that patients who had undergone DBT and then took fluoxetine (Prozac) showed no significant improvements, whereas those who underwent DBT and then took a placebo pill did show significant improvements.
Difficulties in therapy
There can be unique challenges in the treatment of BPD, such as hospital care. In psychotherapy, a client may be unusually sensitive to rejection and abandonment and may react negatively (e.g., by harming themselves or withdrawing from treatment) if they sense this. In addition, clinicians may emotionally distance themselves from individuals with BPD for self-protection or due to the stigma associated with the diagnosis, leading to a self-fulfilling prophecy and a cycle of stigmatization to which both patient and therapist can contribute.
Some psychotherapies, including DBT, were developed partly to overcome problems with interpersonal sensitivity and maintaining a therapeutic relationship. Adherence to medication regimens is also a problem, due in part to adverse effects, with drop-out rates of between 50 percent and 88 percent in medication trials. Comorbid disorders, particularly substance use disorders, can complicate attempts to achieve remission.
Other strategies
Psychotherapies and medications form a part of the overall context of mental health services and psychosocial needs related to BPD. The evidence base is limited for both, and some individuals may forego them or not benefit (enough) from them. It has been argued that diagnostic categorization can have limited utility in directing therapeutic work in this area, and that in some cases it is only with reference to past and current relationships that "borderline" behavior can be understood as partly adaptive and how people can best be helped.
Numerous other strategies may be used, including alternative medicine techniques (see List of branches of alternative medicine); exercise and physical fitness, including team sports; occupational therapy techniques, including creative arts; and structure and routine in daily life, particularly through employment, which can foster feelings of competence (e.g. self-efficacy), provide a social role and, through being valued by others, boost self-esteem.
Group-based psychological services encourage clients to socialize and participate in both solitary and group activities. These may be in day centers. Therapeutic communities are an example of this, particularly in Europe; although their usage has declined, many have specialised in the treatment of severe personality disorder.
Psychiatric rehabilitation services aimed at helping people with mental health problems reduce psychosocial disability, engage in meaningful activities and avoid stigma and social exclusion may be of value to people who have BPD. There are also many mutual-support or co-counseling groups run by and for individuals with BPD. Services, or individual goals, are increasingly based on a recovery model that supports and emphasizes an individual's personal journey and potential.
Data indicate that the diagnosis of BPD is more variable over time than the DSM implies. Substantial percentages (for example around a third, depending on criteria) of people diagnosed with BPD achieve remission within a year or two. A longitudinal study found that, six years after being diagnosed with BPD, 56 percent showed good psychosocial functioning, compared to 26 percent at baseline. Although vocational achievement was more limited even compared to those with other personality disorders, those whose symptoms had remitted were significantly more likely to have a good relationship with a spouse/partner and at least one parent, good work/school performance, a sustained work/school history, good global functioning and good psychosocial functioning.
Human variability

Human variability, or human variation, is the range of possible values for any characteristic, physical or mental, of human beings.
Frequently debated areas of variability include cognitive ability, personality, physical appearance (body shape, skin color, etc.) and immunology.
Variability is partly heritable and partly acquired (nature vs. nurture debate).
As the human species exhibits sexual dimorphism, many traits show significant variation not just between populations but also between the sexes.
Sources of human variability
Human variability is attributed to a combination of environmental and genetic sources. Few of the traits characterizing human variability are controlled by simple Mendelian inheritance; most are polygenic or are determined by a complex combination of genetics and environment.
Many genetic differences (polymorphisms) have little effect on health or reproductive success but help to distinguish one population from another; such markers are useful to researchers in population genetics who study ancient migrations and the relationships between population groups.
Environmental factors
Climate and disease
Climate and disease are other important environmental factors. Climate helps determine which human variations are better adapted to survive with few restrictions and hardships. For example, people who live with heavy exposure to sunlight tend to have a darker skin tone: UV radiation degrades folate (folic acid), so evolution has favoured darker skin with more melanin, which protects folate stores and helps ensure smooth and successful child development. Conversely, people who live farther from the equator have a lighter skin tone, reflecting the need to absorb more of the scarcer sunlight so the body can produce enough vitamin D for survival.
Blackfoot disease, caused by arsenic pollution of water and food sources, gives people blackened, charcoal-like skin in the lower limbs; it is an example of how disease can affect human variation. Another disease that can affect human variation is syphilis, a sexually transmitted disease. Syphilis does not affect appearance until the middle stage of the disease, when rashes develop across the body.
Nutrition
Phenotypic variation is the joint product of one's genetics and one's surrounding environment. This means that a significant portion of human variability can be influenced by human behavior. Nutrition and diet play a substantial role in determining phenotype because they are arguably the most controllable environmental factors that create epigenetic changes: they can be changed or altered relatively easily, as opposed to other environmental factors such as location.
If people are reluctant to change their diets, consuming harmful foods can have chronic negative effects on variability. One such instance occurs when certain chemicals or carcinogens are consumed through one's diet, which can have adverse effects on individual phenotype. For example, bisphenol A (BPA) is a known endocrine disruptor that mimics the hormone estradiol and can be found in various plastic products. BPA leaches into food or drinks when the plastic containing it is heated. When these contaminated substances are consumed, especially often and over long periods of time, one's risk of diabetes and cardiovascular disease increases. BPA also has the potential to alter "physiological weight control patterns." Examples such as this demonstrate that preserving a healthy phenotype rests largely on nutritional decision-making.
The concept that nutrition and diet affect phenotype extends to what the mother eats during pregnancy, which can have drastic effects on the phenotype of the child. A recent study by researchers at the MRC International Nutrition Group shows that "methylation machinery can be disrupted by nutrient deficiencies and that this can lead to disease" susceptibility in newborn babies. This is because methyl groups have the ability to silence certain genes. Deficiencies of various nutrients can thus permanently change the epigenetics of the baby.
Genetic factors
Genetic variation in humans may mean any variance in phenotype which results from heritable allele expression, mutations, and epigenetic changes. While human phenotypes may seem diverse, individuals actually differ by only about 1 in every 1,000 base pairs, and this variation is primarily the result of inherited genetic differences. Pure consideration of alleles is often referred to as Mendelian genetics, or more properly classical genetics, and involves the assessment of whether a given trait is dominant or recessive and thus at what rates it will be inherited. The color of one's eyes was long believed to occur with a pattern of brown-eye dominance, with blue eyes being a recessive characteristic resulting from a past mutation. However, it is now understood that eye color is controlled by various genes, and thus may not follow as distinct a pattern as previously believed. The trait is still the result of variance in genetic sequence between individuals as a result of inheritance from their parents. Common traits which may be linked to genetic patterns are earlobe attachment, hair color, and hair growth patterns.
In terms of evolution, genetic mutations are the origins of differences in alleles between individuals. Mutations may also occur within a person's lifetime and, if they occur in the germ line, be passed down from parent to offspring. In some cases, mutations result in genetic diseases, such as cystic fibrosis, which is caused by a mutation of the CFTR gene that is recessively inherited from both parents. In other cases, mutations may be harmless or phenotypically unnoticeable. Biological traits can be treated as manifestations of either a single locus or multiple loci, and are labelled monogenic or polygenic, respectively. Concerning polygenic traits, it may be essential to be mindful of inter-genetic interactions, or epistasis. Although epistasis is a significant genetic source of biological variation, only additive interactions are heritable, as other epistatic interactions involve recondite inter-genetic relationships. Epistatic interactions themselves vary further in their dependence on the outcomes of recombination and crossing over.
The ability of genes to be expressed may also be a source of variation between individuals and result in changes to phenotype. This may be the result of epigenetics, which is founded upon an organism's phenotypic plasticity, a plasticity that can itself be heritable. Epigenetic change may result from methylation of gene sequences, which blocks expression, or from changes to histone protein structuring in response to environmental or biological cues. Such alterations influence how genetic material is handled by the cell and to what extent certain DNA sections are expressed; together they compose the epigenome. The division between what counts as a genetic source of biological variation and what does not becomes somewhat arbitrary as we approach aspects of biological variation such as epigenetics. Indeed, gene-specific expression and its inheritance may depend on environmental influences.
Cultural factors
Archaeological findings, such as those indicating that the Middle Stone Age and the Acheulean (each identified as a specific 'cultural phase' of humanity with a number of characteristics) lasted substantially longer in some places or 'ended' at times over 100,000 years apart, highlight significant spatiotemporal cultural variability in, and the complexity of, the sociocultural history and evolution of humanity. In some cases cultural factors may be intertwined with genetic and environmental factors.
Measuring variation
Scientific
Measurement of human variation can fall under the purview of several scholarly disciplines, many of which lie at the intersection of biology and statistics. The methods of biostatistics, the application of statistical methods to the analysis of biological data, and bioinformatics, the application of information technologies to the analysis of biological data, are utilized by researchers in these fields to uncover significant patterns of variability. Some fields of scientific research include the following:
Demography is a branch of statistics and sociology concerned with the statistical study of populations, especially humans. A demographic analysis can measure various metrics of a population, most commonly metrics of size and growth, diversity in culture, ethnicity, language, religious belief, political belief, etc. Biodemography is a subfield which specifically integrates biological understanding into demographics analysis.
In the social sciences, social research is conducted and collected data is analyzed under statistical methods. The methodologies of this research can be divided into qualitative and quantitative designs. Some example subdisciplines include:
Anthropology, the study of human societies. Comparative research in subfields of anthropology may yield results on human variation with respect to the subfield's topic of interest.
Psychology, the study of behavior from a mental perspective. Psychological research makes extensive use of experiments and analyses, grouped into quantitative or qualitative research methods.
Sociology, the study of behavior from a social perspective. Sociological research can be conducted in either quantitative or qualitative formats, depending on the nature of data collected and the subfield of sociology under which the research falls. Analysis of this data is subject to quantitative or qualitative methods. Computational sociology is also a method of producing useful data for studies of social behavior.
Anthropometry
Anthropometry is the study of the measurements of different parts of the human body. Common measurements include height, weight, organ size (brain, stomach, penis, vagina), and other bodily metrics such as waist–hip ratio. Each measurement can vary significantly between populations; for instance, the average height of males of European descent is 178 cm ± 7 cm and that of females of European descent is 165 cm ± 7 cm, while the average height of Nilotic Dinka males is 181.3 cm.
Applications of anthropometry include ergonomics, biometrics, and forensics. Knowing the distribution of body measurements enables designers to build better tools for workers. Anthropometry is also used when designing safety equipment such as seat belts. In biometrics, measurements of fingerprints and iris patterns can be used for secure identification purposes.
Measuring genetic variation
Human genomics and population genetics are the study of the human genome and variome, respectively. Studies in these areas may concern the patterns and trends in human DNA. The Human Genome Project and The Human Variome Project are examples of large scale studies of the entire human population to collect data which can be analyzed to understand genomic and genetic variation in individuals, respectively.
The Human Genome Project is the largest scientific project in the history of biology. At a cost of $3.8 billion and over a period of 13 years from 1990 to 2003, the project sequenced the approximately 3 billion base pairs of human DNA and catalogued its 20,000 to 25,000 genes. The project made the data available to all scientific researchers and developed analytical tools for processing this information. A particular finding regarding human variability made possible by the Human Genome Project is that any two individuals share 99.9% of their nucleotide sequences.
The Human Variome Project is a similar undertaking with the goal of identification and categorization of the set of human genetic variation, specifically variations which are medically pertinent. This project will also provide a data repository for further research and analysis of disease. The Human Variome Project was launched in 2006 and is being run by an international community of researchers and representatives, including collaborators from the World Health Organization and the United Nations Educational, Scientific, and Cultural Organization.
Genetic drift
Genetic drift is one mechanism by which variability arises in populations. Unlike natural selection, genetic drift occurs when allele frequencies fluctuate randomly over time rather than as a result of selection. Over a long history, this can cause significant shifts in the underlying genetic distribution of a population. Genetic drift can be modelled with the Wright-Fisher model. In a population of N individuals carrying 2N genes, there are two alleles with frequencies p and q. If the previous generation had an allele with frequency p, then the probability that the next generation has k copies of that allele is:

P(k) = \binom{2N}{k} p^k q^{2N-k}
Over time, one allele is fixed when its frequency reaches 1 and the frequency of the other allele reaches 0. The probability that an allele is eventually fixed equals its current frequency: for two alleles with frequencies p and q, the probability that the first allele is fixed is p. Conditional on fixation, the expected number of generations for an allele with frequency p to become fixed is approximately:

\bar{t}_{\text{fix}}(p) = -\frac{4 N_e (1-p) \ln(1-p)}{p}

where N_e is the effective population size.
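These results are easy to check numerically. The following Python sketch (an illustration written for this article; the function name and parameter values are ours, not from the source) simulates the Wright-Fisher model described above, resampling the allele count binomially each generation, and compares the observed fixation probability and mean conditional fixation time with the theoretical values.

```python
import math
import random

def run_to_absorption(n_individuals, p0, rng):
    """Simulate one Wright-Fisher replicate until the tracked allele is
    fixed (frequency 1) or lost (frequency 0).

    A diploid population of N individuals carries 2N gene copies; each
    generation the new allele count is a binomial draw whose success
    probability is the current allele frequency.
    Returns (fixed, generations_elapsed).
    """
    two_n = 2 * n_individuals
    count = round(p0 * two_n)
    generations = 0
    while 0 < count < two_n:
        p = count / two_n
        # Binomial sampling as a sum of 2N Bernoulli(p) trials.
        count = sum(1 for _ in range(two_n) if rng.random() < p)
        generations += 1
    return count == two_n, generations

rng = random.Random(42)
N, p0, replicates = 50, 0.2, 500
results = [run_to_absorption(N, p0, rng) for _ in range(replicates)]

fix_prob = sum(fixed for fixed, _ in results) / replicates
fix_times = [gens for fixed, gens in results if fixed]
mean_fix_time = sum(fix_times) / len(fix_times)
# For the idealised Wright-Fisher model, N_e equals the census size N.
theory_time = -4 * N * (1 - p0) * math.log(1 - p0) / p0

print(f"fixation probability: {fix_prob:.3f} (theory: {p0})")
print(f"mean generations to fixation: {mean_fix_time:.0f} (theory: {theory_time:.0f})")
```

With only 500 replicates the estimates are noisy, but the observed fixation probability should fall close to the starting frequency of 0.2, and the conditional fixation time close to the diffusion approximation of roughly 179 generations.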
Single-nucleotide polymorphism
Single-nucleotide polymorphisms, or SNPs, are variations of a single nucleotide. SNPs can occur in coding or non-coding regions of genes and on average occur once every 300 nucleotides, which implies on the order of ten million SNP sites across the roughly 3-billion-base-pair human genome. SNPs in coding regions can cause synonymous, missense, and nonsense mutations. SNPs have been shown to be correlated with drug responses and with the risk of diseases such as sickle-cell anemia, Alzheimer's disease, and cystic fibrosis, among others.
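To make the coding-region categories concrete, the following sketch (our own illustration, not from the source) classifies a single-base substitution in a codon as synonymous, missense, or nonsense by translating the codon before and after the change with the standard genetic code.

```python
# Standard genetic code for DNA codons; '*' marks a stop codon.
CODON_TABLE = {
    'TTT': 'F', 'TTC': 'F', 'TTA': 'L', 'TTG': 'L',
    'CTT': 'L', 'CTC': 'L', 'CTA': 'L', 'CTG': 'L',
    'ATT': 'I', 'ATC': 'I', 'ATA': 'I', 'ATG': 'M',
    'GTT': 'V', 'GTC': 'V', 'GTA': 'V', 'GTG': 'V',
    'TCT': 'S', 'TCC': 'S', 'TCA': 'S', 'TCG': 'S',
    'CCT': 'P', 'CCC': 'P', 'CCA': 'P', 'CCG': 'P',
    'ACT': 'T', 'ACC': 'T', 'ACA': 'T', 'ACG': 'T',
    'GCT': 'A', 'GCC': 'A', 'GCA': 'A', 'GCG': 'A',
    'TAT': 'Y', 'TAC': 'Y', 'TAA': '*', 'TAG': '*',
    'CAT': 'H', 'CAC': 'H', 'CAA': 'Q', 'CAG': 'Q',
    'AAT': 'N', 'AAC': 'N', 'AAA': 'K', 'AAG': 'K',
    'GAT': 'D', 'GAC': 'D', 'GAA': 'E', 'GAG': 'E',
    'TGT': 'C', 'TGC': 'C', 'TGA': '*', 'TGG': 'W',
    'CGT': 'R', 'CGC': 'R', 'CGA': 'R', 'CGG': 'R',
    'AGT': 'S', 'AGC': 'S', 'AGA': 'R', 'AGG': 'R',
    'GGT': 'G', 'GGC': 'G', 'GGA': 'G', 'GGG': 'G',
}

def classify_snp(codon, pos, alt_base):
    """Classify a coding SNP that replaces the base at `pos` (0-2) of
    `codon` with `alt_base`: 'synonymous' if the amino acid is unchanged,
    'nonsense' if it introduces a stop codon, otherwise 'missense'."""
    ref_aa = CODON_TABLE[codon]
    alt_codon = codon[:pos] + alt_base + codon[pos + 1:]
    alt_aa = CODON_TABLE[alt_codon]
    if alt_aa == ref_aa:
        return 'synonymous'
    if alt_aa == '*':
        return 'nonsense'
    return 'missense'

print(classify_snp('GAA', 2, 'G'))  # GAA -> GAG, Glu -> Glu: synonymous
print(classify_snp('GAA', 0, 'A'))  # GAA -> AAA, Glu -> Lys: missense
print(classify_snp('TAC', 2, 'A'))  # TAC -> TAA, Tyr -> stop: nonsense
```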
DNA fingerprinting
DNA profiling constructs a DNA fingerprint from a DNA sample extracted from body tissue or fluid. The sample is segmented using restriction enzymes, and each segment is marked with probes and exposed on X-ray film, where the segments form patterns of black bars: the DNA fingerprint. DNA fingerprints are used in conjunction with other methods in federal programs such as CODIS (the Combined DNA Index System) to help identify individuals.
Mitochondrial DNA
Mitochondrial DNA is passed only from mother to child. The first human population studies based on mitochondrial DNA were performed by restriction enzyme analyses (RFLPs) and revealed differences between four ethnic groups (Caucasian, Amerindian, African, and Asian). Differences in mtDNA patterns have also been shown between communities of different geographic origin within the same ethnic group.
Alloenzymic variation
Alloenzymic variation identifies protein variants of the same gene that arise from amino acid substitutions in proteins. After tissue is ground to release the cytoplasm, wicks are used to absorb the resulting extract, which is placed in a slit cut into a starch gel. A low current run across the gel creates a positive and a negative end, and the proteins are separated by charge and size, with smaller and more highly charged molecules moving more quickly across the gel. This technique underestimates true genetic variability: if a substituted amino acid carries the same charge as the original, no difference in migration appears. It is estimated that approximately one-third of true genetic variation goes undetected by this technique.
Structural variation
Structural variation, which can include insertions, deletions, duplications, and mutations in DNA. Within the human population, about 13% of the human genome is defined as structurally variant.
Phenotypic variation
Phenotypic variation, which accounts for both genetic and epigenetic factors that affect what characteristics are shown. For applications such as organ donations and matching, phenotypic variation of blood type, tissue type, and organ size are considered.
Civic
Measurement of human variation may also be initiated by governmental parties. A government may conduct a census, the systematic recording of an entire population of a region. The data may be used for calculating metrics of demography such as sex, gender, age, education, employment, etc.; this information is utilized for civic, political, economic, industrial, and environmental assessment and planning.
Commercial
Commercial motivation for understanding variation in human populations arises from the competitive advantage of tailoring products and services for a specific target market. A business may undertake some form of market research in order to collect data on customer preference and behavior and implement changes which align with the results.
Social significance and valuation
Both individuals and entire societies and cultures place values on different aspects of human variability; however, values can change as societies and cultures change. Not all people agree on the values or relative rankings, and neither do all societies and cultures. Nonetheless, nearly all human differences have a social value dimension. Examples of variations which may be given different values in different societies include skin color and/or body structure. Race and sex have a strong value difference, while handedness has a much weaker value difference. The values given to different traits among human variability are often influenced by what phenotypes are more prevalent locally. Local valuation may affect social standing, reproductive opportunities, or even survival.
Differences may vary or be distributed in various ways. Some, like height for a given sex, vary in close to a "normal" or Gaussian distribution. Other characteristics (e.g., skin color) vary continuously in a population, but the continuum may be socially divided into a small number of distinct categories. Then, there are some characteristics that vary bimodally (for example, handedness), with fewer people in intermediate categories.
Classification and evaluation of traits
When an inherited difference of body structure or function is severe enough to cause a significant hindrance in certain perceived abilities, it is termed a genetic disease, but even this categorization has fuzzy edges. There are many instances in which the degree of negative value of a human difference depends completely on the social or physical environment. For example, in a society with a large proportion of deaf people (such as Martha's Vineyard in the 19th century), it was possible to deny that deafness is a disability. Another example of social renegotiation of the value assigned to a difference is reflected in the controversy over the management of ambiguous genitalia, especially whether abnormal genital structure has enough negative consequences to warrant surgical correction.
Furthermore, many genetic traits may be advantageous in certain circumstances and disadvantageous in others. Being a heterozygote or carrier of the sickle-cell disease gene confers some protection against malaria, apparently enough to maintain the gene in populations of malarial areas. In a homozygous dose it is a significant disability.
Each trait has its own advantages and disadvantages, but sometimes a trait that is found desirable may not be favorable in terms of certain biological factors such as reproductive fitness, and traits that are not highly valued by the majority of people may be favorable in biological terms. For example, women now tend to have fewer pregnancies on average than before, and net worldwide fertility rates are dropping. As a result, multiple births make a relatively larger difference to offspring count than they once did: when the average number of pregnancies and children was higher, multiple births made only a slight relative difference in the number of children, but with fewer pregnancies, multiple births can make the relative difference large. As a hypothetical illustration, if couple 1 has ten children and couple 2 has eight, the women in both couples having undergone eight pregnancies, the difference in relative fertility is small. But if couple 1 has three children and couple 2 has one child, with the woman in each couple undergoing one pregnancy (couple 1 having had triplets), the difference in proportion of offspring count becomes much larger. A trait in women known to greatly increase the chance of multiple births is tallness (presumably the chance is further increased when the woman is very tall relative to both women and men). Yet very tall women are not viewed as a desirable phenotype by the majority of people, and the phenotype of very tall women has not been highly favored in the past. Nevertheless, values placed on traits can change over time.
Such an example is homosexuality. In Ancient Greece, what in present terms would be called homosexuality, primarily between a man and a young boy, was not uncommon and was not outlawed. Homosexuality later came to be widely condemned, but attitudes towards it have eased in modern times.
Acknowledgement and study of human differences does have a wide range of uses, such as tailoring the size and shape of manufactured items. See Ergonomics.
Controversies of sociocultural and personal implications
Possession of above average amounts of some abilities is valued by most societies. Some of the traits that societies try to measure by perception are intellectual aptitude in the form of ability to learn, artistic prowess, strength, endurance, agility, and resilience.
Each individual's distinctive differences, even the negatively valued or stigmatized ones, are usually considered an essential part of self-identity.
Membership or status in a social group may depend on having specific values for certain attributes. It is not unusual for people to deliberately try to amplify or exaggerate differences, or to conceal or minimize them, for a variety of reasons. Examples of practices designed to minimize differences include tanning, hair straightening, skin bleaching, plastic surgery, orthodontia, and growth hormone treatment for extreme shortness. Conversely, male-female differences are enhanced and exaggerated in most societies.
In some societies, such as the United States, circumcision is practiced on a majority of males, and sex reassignment is performed on intersex infants, with substantial emphasis on cultural and religious norms. Circumcision is highly controversial: although it offers health benefits, such as a lower chance of urinary tract infections, STDs, and penile cancer, it is a drastic procedure that is not medically mandatory, and it is argued that the decision should be left until the child is old enough to decide for himself. Similarly, sex reassignment surgery offers psychiatric health benefits to transgender people but is seen as unethical by some Christians, especially when performed on children.
Much controversy surrounds the assigning or distinguishing of some variations, especially since differences between groups in a society or between societies is often debated as part of either a person's "essential" nature or a socially constructed attribution. For example, there has long been a debate among sex researchers on whether sexual orientation is due to evolution and biology (the "essentialist" position), or a result of mutually reinforcing social perceptions and behavioral choices (the "constructivist" perspective). The essentialist position emphasizes inclusive fitness as the reason homosexuality has not been eradicated by natural selection. Gay or lesbian individuals have not been greatly affected by evolutionary selection because they may help the fitness of their siblings and siblings' children, thus increasing their own fitness through inclusive fitness and maintaining evolution of homosexuality. Biological theories for same gender sexual orientation include genetic influences, neuroanatomical factors, and hormone differences but research so far has not provided any conclusive results. In contrast, the social constructivist position argues that sexuality is a result of culture and has originated from language or dialogue about sex. Mating choices are the product of cultural values, such as youth and attractiveness, and homosexuality varies greatly between cultures and societies. In this view, complexities, such as sexual orientation changing during the course of one's lifespan, are accounted for.
Controversy also surrounds the boundaries of "wellness", "wholeness," or "normality." In some cultures, differences in physical appearance, mental ability, and even sex can exclude one from traditions, ceremonies, or other important events, such as religious service. For example, in India, menstruation is not only a taboo subject but also traditionally considered shameful. Depending on beliefs, a woman who is menstruating is not allowed to cook or enter spiritual areas because she is "impure" and "cursed". There has been large-scale renegotiation of the social significance of variations which reduce the ability of a person to do one or more functions in western culture. Laws have been passed to alleviate the reduction of social opportunity available to those with disabilities. The concept of "differently abled" has been pushed by those persuading society to see limited incapacities as a human difference of less negative value.
Ideologies of superiority and inferiority
The extreme exercise of social valuation of human difference lies in the definition of "human" itself. Differences between humans can lead to an individual's "nonhuman" status, in the sense of withholding identification, charity, and social participation. Views of these variations can change enormously between cultures over time. For example, nineteenth-century European and American ideas of race and eugenics culminated in the attempts of the Nazi-led German society of the 1930s to deny not just reproduction, but life itself, to a variety of people with "differences" attributed in part to biological characteristics. Hitler and Nazi leaders wanted to create a "master race" consisting only of Aryans (blue-eyed, blonde-haired, and tall individuals), discriminating against and attempting to exterminate those who did not fit this ideal.
Contemporary controversy continues over "what kind of human" a fetus or child with a significant disability is. On one end are people who would argue that Down syndrome is not a disability but a mere "difference," and on the other those who consider it such a calamity as to assume that such a child is better off "not born". For example, in India and China, being female is widely considered such a negatively valued human difference that female infanticide occurs on a scale that severely affects the proportion of the sexes.
See also
Anthropometry
Human genetic variation
Human physical appearance
Mendelian traits in humans
Quantitative trait locus
Human behaviour genetics
Big Five personality traits
Analytical psychology

Analytical psychology (German: analytische Psychologie, sometimes translated as analytic psychology and also referred to as Jungian analysis) is a term coined by Carl Jung, a Swiss psychiatrist, to describe research into his new "empirical science" of the psyche. It was designed to distinguish it from Freud's psychoanalytic theories as their seven-year collaboration on psychoanalysis was drawing to an end between 1912 and 1913. The evolution of his science is contained in his monumental opus, the Collected Works, written over sixty years of his lifetime.
The history of analytical psychology is intimately linked with the biography of Jung. At the start, it was known as the "Zurich school", whose chief figures were Eugen Bleuler, Franz Riklin, Alphonse Maeder and Jung, all centred in the Burghölzli hospital in Zurich. It was initially a theory concerning psychological complexes until Jung, upon breaking with Sigmund Freud, turned it into a generalised method of investigating archetypes and the unconscious, as well as into a specialised psychotherapy.
Analytical psychology, or "complex psychology" (from the German Komplexe Psychologie), is the foundation of many developments in the study and practice of psychology as well as in other disciplines. Jung has many followers, and some of them are members of national societies around the world. They collaborate professionally on an international level through the International Association for Analytical Psychology (IAAP) and the International Association for Jungian Studies (IAJS). Jung's propositions have given rise to a multidisciplinary literature in numerous languages.
Among widely used concepts specific to analytical psychology are anima and animus, archetypes, the collective unconscious, complexes, extraversion and introversion, individuation, the Self, the shadow and synchronicity. The Myers–Briggs Type Indicator (MBTI) is loosely based on another of Jung's theories on psychological types. A lesser known idea was Jung's notion of the Psychoid to denote a hypothesised immanent plane beyond consciousness, distinct from the collective unconscious, and a potential locus of synchronicity.
The three current schools of post-Jungian analytical psychology, the classical, the archetypal and the developmental, can be said to correspond to the developing yet overlapping aspects of Jung's lifelong explorations, even though he expressly did not want to start a school of "Jungians". Hence, as Jung proceeded from a clinical practice which was mainly traditionally science-based and steeped in rationalist philosophy, anthropology and ethnography, his enquiring mind simultaneously took him into more esoteric spheres such as alchemy, astrology, gnosticism, metaphysics, myth and the paranormal, without his ever abandoning an allegiance to science, as his long-lasting collaboration with Wolfgang Pauli attests. His wide-ranging progression suggests to some commentators that, over time, his analytical psychotherapy, informed by his intuition and teleological investigations, became more of an "art".
The findings of Jungian analysis and the application of analytical psychology to contemporary preoccupations such as social and family relationships, dreams and nightmares, work–life balance, architecture and urban planning, politics and economics, conflict and warfare, and climate change are illustrated in several publications and films.
Origins
Jung began his career as a psychiatrist in Zürich, Switzerland. Already employed at the Burghölzli hospital in 1901, in his academic dissertation for the medical faculty of the University of Zurich he took the risk of using his experiments on somnambulism and the visions of his mediumistic cousin, Helly Preiswerk. The work was entitled, "On the Psychology and Pathology of So-Called Occult Phenomena". It was accepted but caused great upset among his mother's family. Under the direction of psychiatrist Eugen Bleuler, he also conducted research with his colleagues using a galvanometer to evaluate the emotional sensitivities of patients to lists of words during word association. Jung has left a description of his use of the device in treatment. His research earned him a worldwide reputation and numerous honours, including honorary Doctorates from Clark and Fordham Universities in 1909 and 1910 respectively. Other honours followed later.
In 1907, Jung travelled to meet Sigmund Freud in Vienna, Austria; they had begun corresponding a year earlier. At that stage, Jung, aged thirty-two, had a much greater international renown than the forty-nine-year-old neurologist. For a further six years, the two scholars worked and travelled to the United States together. In 1911, they founded the International Psychoanalytical Association, of which Jung was the first president. However, early in the collaboration, Jung had already observed that Freud would not tolerate ideas that were different from his own.
Unlike most modern psychologists, Jung did not believe in restricting himself to the scientific method as a means to understanding the human psyche. He saw dreams, myths, coincidence, and folklore as empirical evidence to further understanding and meaning. So although the unconscious cannot be studied by using direct methods, it acts as a useful working hypothesis, according to Jung. As he said, "The beauty about the unconscious is that it is really unconscious." Hence, the unconscious is 'untouchable' by experimental researches, or indeed any possible kind of scientific or philosophical reach, precisely because it is unconscious.
The break with Freud
It was the publication of a book by Jung that provoked the break with psychoanalysis and led to the founding of analytical psychology. In 1912 Jung met "Miss Miller", brought to his notice by the work of Théodore Flournoy, whose case gave further substance to his theory of the collective unconscious. The study of her visions supplied the material for the reasoning he developed in Psychology of the Unconscious (re-published as Symbols of Transformation in 1952) (C.W. Vol. 5). At this, Freud muttered about "heresy". It was the second part of the work that brought the divergence to light. Freud mentioned to Ernest Jones that it was on page 174 of the original German edition that Jung, according to him, had "lost his way": the extract where Jung enlarged on his conception of the libido. The sanction was immediate: Jung was officially banned from the Vienna psychoanalytic circle from August 1912. From that date the psychoanalytic movement split into two camps, with Freud's partisans on one side, Karl Abraham being delegated to write a critical notice about Jung and Ernest Jones acting as defender of Freudian orthodoxy, and on the other side Jung's partisans, including Leonhard Seif, Franz Riklin, Johan van Ophuijsen and Alphonse Maeder.
Jung's innovative ideas with a new formulation of psychology and lack of contrition sealed the end of the Jung-Freud friendship in 1913. From then, the two scholars worked independently on personality development: Jung had already termed his approach analytical psychology (1912), while the approach Freud had founded is referred to as the Psychoanalytic School, (psychoanalytische Schule).
Jung's postulated unconscious was quite different from the model proposed by Freud, despite the great influence that the founder of psychoanalysis had had on him. In particular, tensions manifested between him and Freud because of various disagreements, including those concerning the nature of the libido. Jung de-emphasized the importance of sexual development as an instinctual drive and focused on the collective unconscious: the part of the unconscious that contains memories and ideas which Jung believed were inherited from generations of ancestors. While he accepted that libido was an important source for personal growth, unlike Freud, Jung did not consider that libido alone was responsible for the formation of the core personality. Due to the particular hardships Jung had endured growing up, he believed his personal development and that of everyone was influenced by factors unrelated to sexuality.
The overarching aim in life, according to Jungian psychology, is the fullest possible actualisation of the "Self" through individuation. Jung defines the "self" as "not only the centre but also the whole circumference which embraces both conscious and unconscious; it is the centre of this totality, just as the ego is the centre of the conscious mind". Central to this process of individuation is the individual's continual encounter with the elements of the psyche by bringing them into consciousness. People experience the unconscious through symbols encountered in all aspects of life: in dreams, art, religion, and the symbolic dramas enacted in relationships and life pursuits. Essential to the process is the merging of the individual's consciousness with the collective unconscious through a huge range of symbols. By bringing conscious awareness to bear on what is unconscious, such elements can be integrated with consciousness when they "surface". To proceed with the individuation process, individuals need to be open to the parts of themselves beyond their own ego, which is the "organ" of consciousness. In a famous dictum, Jung said, "the Self, like the unconscious, is an a priori existent out of which the ego evolves. It is ... an unconscious prefiguration of the ego. It is not I who create myself, rather I happen to myself."
It follows that the aim of (Jungian) psychotherapy is to assist the individual in establishing a healthy relationship with the unconscious, so that the conscious mind is neither excessively out of balance in relation to it, as in neurosis (a state that can result in depression, anxiety, and personality disorders), nor so flooded by it that psychosis and mental breakdown threaten. One method Jung applied to his patients between 1913 and 1916 was active imagination: a way of encouraging them to give themselves over to a form of meditation that releases apparently random images from the mind, so as to bridge unconscious contents into awareness.
"Neurosis" in Jung's view results from the build up of psychological defences the individual unconsciously musters in an effort to cope with perceived attacks from the outside world, a process he called a "complex", although complexes are not merely defensive in character. The psyche is a self-regulating adaptive system. People are energetic systems, and if the energy is blocked, the psyche becomes sick. If adaptation is thwarted, the psychic energy stops flowing and becomes rigid. This process manifests in neurosis and psychosis. Jung proposed that this occurs through maladaptation of one's internal realities to external ones. The principles of adaptation, projection, and compensation are central processes in Jung's view of psyche's attempts to adapt.
Innovations of Jungian analysis
Philosophical and epistemological foundations
Philosophy
Jung was principally a follower of the American philosopher William James, founder of pragmatism, whom he met during his trip to the United States in 1909. He also encountered other figures associated with James, such as John Dewey and the anthropologist Franz Boas. According to historian Sonu Shamdasani, pragmatism was Jung's favoured route to placing his psychology on a sound scientific footing. His theories rest on the observation of phenomena and are, in Jung's own account, phenomenology; psychologism, in his view, was suspect. Throughout his writings, Jung sees in empirical observation not only a precondition of an objective method but also respect for an ethical code which should guide the psychologist, as he stated in a letter to Joseph Goldbrunner.
According to the Italo-French psychoanalyst Luigi Aurigemma, Jung's reasoning is also marked by Immanuel Kant, and more generally by German rationalist philosophy. His lectures are evidence of his assimilation of Kantian thought, especially the Critique of Pure Reason and the Critique of Practical Reason. Aurigemma characterises Jung's thinking as "epistemological relativism" because it does not postulate any belief in the metaphysical; in fact, Jung uses Kant's teleology to bridle his thinking and to guard himself from straying into metaphysical excursions. For the French historian of psychology Françoise Parot, on the other hand, and contrary to this alleged rationalist vein, Jung is "heir" to the mystics (Meister Eckhart, Hildegard of Bingen, Augustine of Hippo) and, particularly in the way he conceptualised the unconscious, to the romantics, whether scientists such as Carl Gustav Carus and Gotthilf Heinrich von Schubert, or philosophers and writers in the line of Nietzsche, Goethe, and Schopenhauer. His typology, meanwhile, is profoundly indebted to Carl Spitteler.
Scientific heritage
As a trained psychiatrist, Jung had a grounding in the state of science in his day. He regularly refers to the experimental psychology of Wilhelm Wundt; the Word Association Test he designed with Franz Riklin is a direct application of Wundt's theory. Notwithstanding the great debt of analytical psychology to Sigmund Freud, Jung borrowed concepts from other theories of his time. For instance, the expression "abaissement du niveau mental" comes directly from the French psychologist Pierre Janet, whose courses Jung attended during his studies in France in 1901. Jung always acknowledged how much Janet had influenced his career.
Jung owed the concept of "participation mystique" to the French ethnologist Lucien Lévy-Bruhl. He used it to illustrate a fact he found surprising: that some native peoples can experience relations that defy logic, as in the case of the South American tribe he met during his travels, whose men maintained that they were scarlet macaws. Finally, his use of the English expression "pattern of behaviour", which he treated as synonymous with the term archetype, is drawn from British studies in ethology.
The principal contribution to analytical psychology nevertheless remains that of Freud's psychoanalysis, from which Jung took a number of concepts, especially the method of inquiring into the unconscious through free association. The thinking of individual analysts was also integrated into his project, among them Sándor Ferenczi (Jung refers to his notion of "affect") and Ludwig Binswanger with his Daseinsanalysis. Jung also affirmed Freud's contribution to our knowledge of the psyche as being, without doubt, of the highest importance: it reveals penetrating information about the dark corners of the soul and of the human personality, of the same order as Nietzsche's On the Genealogy of Morality (1887). In this context, Freud was, according to Jung, one of the great cultural critics of the 19th century.
Divergences from psychoanalysis
Jungian analysis is, like psychoanalysis, a method for accessing, experiencing and integrating unconscious material into awareness. It is a search for the meaning of behaviours, feelings and events. There are many channels for extending knowledge of the self: the analysis of dreams is one important avenue; others include expressing feelings about and through art, poetry or other forms of creativity, and the examination of conflicts and repeating patterns in a person's life. A comprehensive description of the process of dream interpretation is complex, in that it is highly specific to the person who undertakes it. Most succinctly, it relies on the associations which the particular dream symbols suggest to the dreamer; these may at times be deemed "archetypal" in so far as they are supposedly common to many people throughout history. Examples could be a hero, an old man or woman, or situations of pursuit, flying or falling.
Whereas Freudian psychoanalysis relies entirely on the development of the transference in the analysand (the person under treatment) to the analyst, Jung initially used the transference and later concentrated more on a dialectical and didactic approach to the symbolic and archetypal material presented by the patient. Moreover, his attitude towards patients departed from what he had observed in Freud's method. Anthony Stevens has explained it thus:
Though [Jung's] initial formulations arose mainly out of his own creative illness, they were also a conscious reaction against the stereotype of the classical Freudian analyst, sitting silent and aloof behind the couch, occasionally emitting ex cathedra pronouncements and interpretations, while remaining totally uninvolved in the patient's guilt, anguish, and need for reassurance and support. Instead, Jung offered the radical proposal that analysis is a dialectical procedure, a two-way exchange between two people, who are equally involved. Although it was a revolutionary idea when he first suggested it, it is a model which has influenced psychotherapists of most schools, though many seem not to realise that it originated with Jung.
In place of Freud's "surgical detachment", Jung demonstrated a more relaxed and warmer welcome in the consulting room. He remained aware nonetheless that exposure to a patient's unconscious contents always posed a certain risk of contagion (he calls it "psychic infection") to the analyst, as experienced in the countertransference. The process of contemporary Jungian analysis depends on the "school of analytical psychology" to which the therapist adheres (see below). The "Zurich School" reflects the approach Jung himself taught, while those influenced by Michael Fordham and associates in London are significantly closer to a Kleinian approach and therefore concerned with analysis of the transference and countertransference as indicators of repressed material, along with the attendant symbols and patterns.
Dream work
Jung's preoccupation with dreams can be dated from 1902. It was only after the break with Freud, however, that he elaborated his view of dreams in Psychology of the Unconscious (published in English in 1916), a view which contrasts sharply with Freud's conceptualisation. While he agrees that dreams are a highway into the unconscious, he enlarges on their functions further than psychoanalysis did. One of the salient differences is the compensatory function they perform by reinstating psychic equilibrium in respect of judgments made during waking life: thus a man consumed by ambition and arrogance may, for example, dream of himself as a small and vulnerable person.
According to Jung, this demonstrates that the man's attitude is excessively self-assured and that he thereby refuses to integrate the inferior aspects of his personality, which are denied by his defensive arrogance. Jung calls this a compensation mechanism, necessary for the maintenance of a healthy mental balance, and he was still writing on the subject shortly before his death in 1961. Unconscious material is expressed in images through the deployment of symbolism which, in Jungian terms, means it has an affective role (in that it can sometimes give rise to a numinous feeling, when associated with an archetypal force) and an intellectual role. Some dreams are personal to the dreamer; others may be collective in origin or "transpersonal" in so far as they relate to existential events. They can be taken to express phases of the individuation process (see below) and may be inspired by literature, art, alchemy or mythology.
Analytical psychology is recognized for its historical and geographical study of myths as a means to deconstruct, with the aid of symbols, the unconscious manifestations of the psyche. Myths are said to represent directly the elements and phenomena arising from the collective unconscious, and though they may be subject to alteration in their detail through time, their significance remains similar. While Jung relies predominantly on Christian or on Western pagan mythology (Ancient Greece and Rome), he holds that the unconscious is driven by mythologies derived from all cultures, and he evinced an interest in Hinduism, Zoroastrianism and Taoism, which all share fundamental images reflected in the psyche. Analytical psychology thus focusses on meaning, based on the hypothesis that human beings are potentially in constant touch with universal and symbolic aspects common to humankind.
Principal concepts
In analytical psychology two distinct types of psychological process may be identified: those deriving from the individual, characterised as "personal" and belonging to a subjective psyche, and those deriving from the collective, linked to the structure of an objective psyche, which may be termed "transpersonal". These processes are both said to be archetypal. Some are regarded as specifically linked to consciousness, such as the animus or anima, the persona or the shadow; others pertain more to the collective sphere. Jung tended to personify the anima and animus, as they are, according to him, always attached to a person and represent an aspect of his or her psyche.
Anima and animus
Jung identified the archetypal anima as the unconscious feminine component of men and the archetypal animus as the unconscious masculine component in women. These are shaped by the contents of the collective unconscious, by others, and by the larger society. However, many modern-day Jungian practitioners do not subscribe to a literal definition, noting that the Jungian concept points to every person having both an anima and an animus; Jung himself considered, for instance, an "animus of the anima" in men, in his work Aion and in an interview. Jung stated that the anima and animus act as guides to the unconscious unified Self, and that forming an awareness of and a connection with the anima or animus is one of the most difficult and rewarding steps in psychological growth. Jung reported that he identified his anima as she spoke to him, as an inner voice, unexpectedly one day.
In cases where the anima or animus complexes are ignored, they vie for attention by projecting themselves onto others. This explains, according to Jung, why we are sometimes immediately attracted to certain strangers: we see our anima or animus in them. Love at first sight is an example of anima and animus projection. Moreover, people who strongly identify with their gender role (e.g. a man who acts aggressively and never cries) have not actively recognized or engaged their anima or animus.
Jung attributed human rational thought to the masculine nature, and the irrational aspect to the feminine (rational being defined as involving judgment, irrational as involving perception). Consequently, irrational moods are the progeny of the male anima shadow, and irrational opinions of the female animus shadow.
Archetypes
The use of archetypes in psychology was advanced by Jung in an essay entitled "Instinct and the Unconscious" in 1919. The first element, the Greek arche, signifies 'beginning, origin, cause, primal source, principle' and, by extension, 'position of a leader, supreme rule and government'. The second element, type, means 'blow, or what is produced by a blow: the imprint of a coin, form, image, prototype, model, order, norm' and, in the figurative modern sense, 'pattern underlying form, primordial form'. In his psychological framework, archetypes are innate, universal or personal prototypes for ideas which may be used to interpret observations. The method he favoured was hermeneutics, which was central in his practice of psychology from the start: he made explicit references to hermeneutics in the Collected Works and during his theoretical development of the notion of archetypes, and although he lacks consistency in his formulations, his theoretical development of archetypes is rich in hermeneutic implications, as noted by Smythe and Baydala (2012). A group of memories and attitudes associated with an archetype can become a complex, e.g. a mother complex may be associated with a particular mother archetype. Jung treated the archetypes as psychological organs, analogous to physical ones in that both are morphological givens which probably arose through evolution.
Archetypes have been regarded as collective as well as individual, and as identifiable in a variety of creative ways. As an example, in his book Memories, Dreams, Reflections, Jung states that he began to see and talk to a manifestation of the anima, and that she taught him how to interpret dreams. As soon as he could interpret on his own, Jung said, she ceased talking to him because she was no longer needed. However, the essentialism inherent in archetypal theory in general, and concerning the anima in particular, has prompted a re-evaluation of Jung's theory in terms of emergence theory. This would emphasise the role of symbols in the construction of affect in the midst of collective human action. In such a reconfiguration, the visceral energy of a numinous experience can be retained even as the problematic theory of archetypes is set aside as having outlived its usefulness.
Collective unconscious
Jung's concept of the collective unconscious has undergone re-interpretation over time. The term "collective unconscious" first appeared in Jung's 1916 essay, "The Structure of the Unconscious". This essay distinguishes between the "personal", Freudian unconscious, filled with fantasies (e.g. sexual) and repressed images, and the "collective" unconscious encompassing the soul of humanity at large.
In "The Significance of Constitution and Heredity in Psychology" (November 1929), Jung wrote:
Given that in his day he lacked the advances of complexity theory, and especially of complex adaptive systems (CAS), it has been argued that his vision of archetypes as a stratum in the collective unconscious corresponds to nodal patterns which go on to shape the characteristic patterns of human imagination and experience, and in that sense "seems a remarkable, intuitive articulation of the CAS model".
Individuation
Individuation is a complex process that involves passing through different stages of growing awareness via the progressive confrontation and integration of personal unconscious elements. It is the central concept of analytical psychology, first introduced in 1916, and it is the objective of Jungian psychotherapy to the extent that it enables the realisation of the Self. Jung started experimenting with individuation after his split with Freud, as he confronted what he described as eruptions from the collective unconscious, driven by a contemporary malaise of spiritual alienation. According to Jung, individuation means becoming an individual and implies becoming one's own self. Unlike individuality, which emphasizes some supposed peculiarity, Jung described individuation as a better and more complete fulfillment of the collective qualities of the human being. In his experience, Jung explained that individuation helped him, "from the therapeutic point of view, to find the particular images that lie behind emotions".
Individuation is, from the first, what the analysand must undergo in order to integrate the other elements of the psyche. This pursuit of wholeness aims to establish the Self, which includes both the rational conscious mind of the ego and the irrational contents of the unconscious, as the new centre of the personality. Prior to individuation, the analysand is carefully assessed to determine whether the ego is strong enough to take the intensity of this process. The elements to be integrated include the persona, which acts as the representative of the person in his or her role in society; the shadow, which contains all that is personally unknown and what the person considers morally reprehensible; and the anima or the animus, which respectively carry feminine and masculine values. For Jung, many unconscious conflicts at the root of neurosis are caused by the difficulty of accepting that such a dynamic can unbalance subjects from their habitual position and confront them with aspects of themselves they were accustomed to ignore. Once individuation is completed, the ego is no longer at the centre of the personality. The process does not, however, lead to complete self-realization; individuation can never be a fixed state, given the unfathomable nature of the depths of the collective unconscious.
Shadow
The shadow is an unconscious complex defined as the repressed, suppressed or disowned qualities of the conscious self. According to Jung, the human being deals with the reality of the shadow in four ways: denial, projection, integration and/or transmutation. Jung himself asserted that "the result of the Freudian method of elucidation is a minute elaboration of man's shadow-side unexampled in any previous age." According to analytical psychology, a person's shadow may have both constructive and destructive aspects. In its more destructive aspects, the shadow can represent those things people do not accept about themselves. For instance, the shadow of someone who identifies as being kind may be harsh or unkind. Conversely, the shadow of a person who perceives himself to be brutal may be gentle. In its more constructive aspects, a person's shadow may represent hidden positive qualities. This has been referred to as the "gold in the shadow". Jung emphasized the importance of being aware of shadow material and incorporating it into conscious awareness to avoid projecting shadow qualities on others.
The shadow in dreams is often represented by dark figures of the same gender as the dreamer.
The shadow may also concern great figures in the history of human thought, or even spiritual masters, who became great because of their shadows or because of their ability to live out their shadows (namely, their unconscious faults) in full without repressing them.
Persona
Just like the anima and animus, the persona (derived from the Latin term for a mask, as would have been worn by actors) is another key concept in analytical psychology. It is the part of the personality which manages an individual's relations with society in the outside world, and it works the same way for both sexes. The persona, which faces the outside world, is contrary to the shadow, which is actually the true personality but is denied by the self. The conscious self identifies primarily with the persona during development in childhood, as the individual builds a psychological framework for dealing with others. Identification with diplomas, social roles, honours and awards, or a career all contribute to the apparent constitution of the persona but do not lead to knowledge of the self. For Jung, the persona has nothing real about it: it can only be a compromise between the individual and society, yielding an illusion of individuality. Individuation consists, in the first instance, of discarding the individual's mask, though not too quickly, as often it is all the patient has as a means of identification. The persona is implicated in a number of symptoms such as compulsive disorders, phobias, shifting moods, and addictions, among others.
Psychological types
Analytical psychology distinguishes several psychological types or temperaments.
Extravert
Introvert
According to Jung, the psyche is an apparatus for adaptation and orientation, and consists of a number of different psychic functions. Among these he distinguishes four basic functions:
Sensation – Perception by means of the sense organs
Intuition – Perceiving in an unconscious way, or the perception of unconscious contents
Thinking – Function of intellectual cognition; the forming of logical conclusions
Feeling – Function of subjective estimation
Complexes
Early in Jung's career he coined the term and described the concept of the "complex". Jung claimed to have discovered the concept during his free association and galvanic skin response experiments. Freud later took up the concept in his Oedipus complex, amongst others. Jung seemed to see complexes as quite autonomous parts of psychological life. It is almost as if Jung were describing separate personalities within what is considered a single individual, but to equate Jung's use of complexes with something along the lines of multiple personality disorder would be a step out of bounds.
Jung saw an archetype as always being the central organizing structure of a complex. For instance, in a "negative mother complex", the archetype of the "negative mother" would be seen to be central to the identity of that complex. This is to say, our psychological lives are patterned on common human experiences. Jung saw the Ego (which Freud wrote about in German literally as the "I", one's conscious experience of oneself) as a complex. If the "I" is a complex, what might be the archetype that structures it? Jung, and many Jungians, might say "the hero", one who separates from the community to ultimately carry the community further.
Synchronicity
Jung first officially used the term synchronicity in 1930, during a conference held in memory of his sinologist friend Richard Wilhelm. It was part of his explanation of the modus operandi of the I Ching. The second reference was made in 1935 in his Tavistock Lectures.
The term denotes the simultaneous occurrence of two events with no causal physical connection, but whose association evokes a meaning for the person experiencing or observing it. The often-cited example of the phenomenon is Jung's own account of a beetle (the common rose-chafer, Cetonia aurata) flying into his consulting room directly after his patient had told him a dream featuring a golden scarab. The concept only makes sense psychologically and cannot be reduced to a verified or scientific fact; for Jung it constituted a working hypothesis, one which has subsequently given rise to many ambiguities. For an overview of the origins of the concept, see Joseph Cambray's "Synchronicity as emergence".
According to Jung, an archetype which has been constellated in the psyche can, under certain circumstances, transgress the boundary between substance and psyche.
Jung had studied such phenomena with the physicist and Nobel Prize winner Wolfgang Pauli, who did not always agree with Jung, and with whom he carried on an extensive correspondence, enriched by the contributions of both specialists in their own fields. Pauli had given a series of lectures to the C. G. Jung Institute, Zürich, of which he had been a member and patron since 1947. The collaboration gave rise to a joint essay, Synchronicity: An Acausal Connecting Principle (1952). The two men saw in the idea of synchronicity a potential way of explaining a particular relationship between "incontrovertible facts" whose occurrence is tied to unconscious and archetypal manifestations. Borrowing the notion from Arthur Schopenhauer, Jung called the underlying state, in which neither matter nor psyche is distinguishable, the unus mundus; for Pauli, by contrast, synchronicity was a limiting concept in two senses, in that it is at once scientific and symbolic, and in his view the phenomenon is dependent on the observer. Nevertheless, both men were in accord that there existed the possibility of a conjunction between physics and psychology, as Jung affirmed in a letter to Pauli.
Marie-Louise von Franz also had a lengthy exchange of letters with Wolfgang Pauli. On Pauli's death in 1958, his widow, Franca, deliberately destroyed all the letters von Franz had sent to her husband, and which he had kept locked inside his writing desk. However, the letters from Pauli to von Franz were all saved and were later made available to researchers and published.
Synchronicity is among the ideas most developed by Jung's followers, notably James Hillman, Roderick Main, Carl Alfred Meier and the British developmental clinician George Bright. It has also been explored in a range of spiritual currents which have sought in it a scientific rigour.
Although synchronicity, as conceived by Jung within the bounds of the science available in his day, has been categorised as pseudoscience, recent developments in complex adaptive systems have been argued to warrant a revision of that view. Critics note that Jung's experiments seeking statistical proof of the theory did not yield satisfactory results; his experiment has also been faulted for not using a true random sampling method, and for relying on dubious statistics and astrological material.
Post-Jungian approaches
Andrew Samuels (1985) has distinguished three traditions or approaches of "post-Jungian" psychology: classical, developmental and archetypal. Further approaches have developed since.
Classical
The classical approach tries to remain faithful to Jung's proposed model, his teachings and the substance of his 20-volume Collected Works, together with recently published works such as the Liber Novus and the Black Books. Prominent advocates of this approach, according to Samuels (1985), include Emma Jung (Jung's wife, an analyst in her own right), Marie-Louise von Franz, Joseph L. Henderson, Aniela Jaffé, Erich Neumann, Gerhard Adler and Jolande Jacobi. Jung regarded Neumann, author of The Origins and History of Consciousness and The Child, as his principal student, the one to advance his theory into a mythology-based approach. Neumann is associated with developing the symbolism and archetypal significance of several myths: the Child, Creation, the Hero, the Great Mother and Transcendence.
Archetypal
One archetypal approach, which James Hillman sometimes called "the imaginal school", was elaborated by him in the late 1960s and early 1970s. Its adherents, according to Samuels (1985), include Gerhard Adler, Irene Claremont de Castillejo, Adolf Guggenbühl-Craig, Murray Stein, and Wolfgang Giegerich. Thomas Moore was also influenced by some of Hillman's work. Working independently, other psychoanalysts have created strong approaches to archetypal psychology: mythopoeticists and psychoanalysts such as Clarissa Pinkola Estés, who believes that ethnic and aboriginal peoples are the originators of archetypal psychology and have long carried the maps for the journey of the soul in their songs, tales, dream-telling, art and rituals, and Marion Woodman, who proposes a feminist viewpoint on archetypal psychology. Some of the mythopoetic and archetypal psychology writers imagine the Self not to be the main archetype of the collective unconscious, as Jung thought, but rather assign each archetype equal value; others, who are modern progenitors of archetypal psychology (such as Estés), think of the Self as the thing that contains and yet is suffused by all other archetypes, each giving life to the other.
Robert L. Moore has explored the archetypal level of the human psyche in a series of five books co-authored with Douglas Gillette, which have played an important role in the men's movement in the United States. Drawing on the language of computing, Moore uses a computer's hard wiring (its fixed physical components) as a metaphor for the archetypal level of the human psyche: personal experiences influence access to the archetypal level, but personalized ego consciousness can be likened to computer software.
In the 21st century, Jordan Peterson is a prominent Jungian psychologist whose extensive work Maps of Meaning is centered on the theory of Jungian archetypes.
Developmental
A major expansion of Jungian theory is credited to Michael Fordham and his wife, Frieda Fordham. It can be considered a bridge between traditional Jungian analysis and Melanie Klein's object relations theory. Judith Hubback and William Goodheart MD are also included in this group. Andrew Samuels (1985) considers J.W.T. Redfearn, Richard Carvalho and himself as representatives of the developmental approach. Samuels notes how this approach differs from the classical by giving less emphasis to the Self and more emphasis to the development of personality; he also notes how, in terms of practice in therapy, it gives more attention to transference and counter-transference than either the classical or the archetypal approaches.
Sandplay therapy
Sandplay is a non-directive, creative form of therapy using the imagination, originally used with children and adolescents, later also with adults. Jung had stressed the importance of finding the image behind the emotion. The use of sand in a suitable tray with figurines and other small toys, farm animals, trees, fences and cars enables a narrative to develop through a series of scenarios. This is said to express an ongoing dialogue between the conscious and the unconscious aspects of the psyche, which in turn activates a healing process whereby the patient and therapist can together view the evolving sense of self.
Jungian Sandplay started as a therapeutic method in the 1950s. Although its origin has been credited to the Swiss Jungian analyst Dora Kalff, it was in fact her mentor and trainer, Dr Margaret Lowenfeld, a British paediatrician, who had developed the Lowenfeld World Technique in the 1930s, inspired by the writer H. G. Wells, in her work with children using a sand tray and figurines. Jung had witnessed a demonstration of the technique while on a visit to the UK in 1937, and Kalff saw in it potential as a further application of analytical psychology. Encouraged by Jung, Kalff developed the new application over a number of years and called it Sandplay. From 1962 she began to train Jungian analysts in the method, including in the United States, Europe and Japan. Both Kalff and Jung believed an image can offer greater therapeutic engagement and insight than words alone. Through the sensory experience of working with sand and objects, and through their symbolic resonance, new areas of awareness can be brought into consciousness, much as dreams, through their frames and storylines, bring material into consciousness as part of an integrating and healing process. The historian of psychology Sonu Shamdasani has also commented on the method.
One of Dora Kalff's trainees was the American concert pianist, Joel Ryce-Menuhin, whose music career was ended by illness and who retrained as a Jungian analyst and exponent of sandplay.
Process-oriented psychology
Process-oriented psychology (also called Process work) is associated with the Zurich-trained Jungian analyst Arnold Mindell. Process work developed in the late 1970s and early 1980s and was originally identified as a "daughter of Jungian psychology". Process work stresses awareness of the "unconscious" as an ongoing flow of experience. This approach expands Jung's work beyond verbal individual therapy to include body experience, altered and comatose states as well as multicultural group work.
The Analytic attitude
Formally, Jungian analysis differs little from psychoanalysis, although variants of each school have developed overlaps and specific divergences through the century or more of their existence. They share a "frame" consisting of regular spatio-temporal meetings, one or more times a week, focusing on the patient's material through dialogue which may involve elaboration, amplification and abreaction; the treatment may last on average three years, sometimes more briefly or far longer. The spatial arrangement between analyst and analysand may differ: they may sit face to face, or the patient may use the couch with the analyst seated behind.
In some approaches alternative modes of expression can take place, such as active imagination, sandplay, drawing or painting, even music, and the session may at times become semi-directed (in contrast to psychoanalytic treatment, which is essentially a non-directive encounter). The patient is at the heart of the therapy, as Marie-Louise von Franz has it in her work Psychotherapy: The Practitioner's Experience, where she recounts Jung's thinking on that point. The transference is sought out (contrary to psychoanalytic treatment, which distinguishes positive and negative transferences), and the interpretation of dreams is one of the central pillars of Jungian psychotherapy. In all other respects, the rules correspond to those of classical psychoanalysis: the analyst examines free associations and tries to be objective and ethical, meaning respectful of the patient's pace and rhythm of unfolding progress. In fact, the task of Jungian analysis is not merely to explore the patient's past, but to connect conscious awareness with the unconscious such that a better adaptation to emotional and social life may ensue.
Neurosis is not a symptom of the re-emergence of a repressed past, but is regarded as the functional, sometimes somatic, incapacity to face certain aspects of lived reality. In Jungian analysis the unconscious is the motivator whose task, in alliance with the analyst, is to bring the patient's shadow into awareness; this is all the more so since unconscious processes enacted in the transference provoke a dependent relationship of the analysand on the analyst, leading to a falling away of the usual defences and references, and requiring the analyst to guarantee the safety of the transference. The responsibilities and accountability of individual analysts and their membership organisations, matters of clinical confidentiality, codes of ethics, and professional relations with the public sphere are explored in a volume edited by Solomon and Twyman, with contributions from Jungian analysts and psychoanalysts. Solomon has characterised the patient-analyst relationship as one in which the analytic attitude is an ethical attitude.
Jungian social, literary and art criticism
Analytical psychology has inspired a number of contemporary academic researchers to revisit some of Jung's own preoccupations with the role of women in society, with philosophy, and with literary and art criticism. Leading figures in these fields include the British-American scholar Susan Rowland, who produced the first feminist revision of Jung, examining the fundamental contributions made to his work by the creative women who surrounded him; she has continued to mine his work, evaluating his influence on modern literary criticism and his standing as a writer. Leslie Gardner has devoted a series of volumes to analytical psychology in 21st-century life, one of which concentrates on the "Feminine Self". Paul Bishop, a British scholar of German, has placed analytical psychology in the context of precursors such as Goethe, Schiller and Nietzsche.
The Franco-Swiss art historian and analytical psychologist Christian Gaillard has examined Jung's place as an artist and art critic in his series of Fay Lectures at Texas A&M University. These scholars draw from Jung's works that apply analytical psychology to literature, such as the lecture "On the Relation of Analytical Psychology to Poetry". In this presentation, delivered in 1922, Jung stated that the psychologist cannot replace the art critic. He rejected Freudian art criticism for reducing complex works of art to the Oedipal fantasies of their creators, stressing the danger of simplifying literature to causes found outside the actual work.
Criticism
Since its inception, analytical psychology has been the object of criticism emanating from the psychoanalytic sphere. Freud himself characterised Jung as a "mystic and a snob". In his introduction to the 2011 edition of Jung's "Lectures on the Theory of Psychoanalysis", given in New York City in 1912, Sonu Shamdasani contends that Freud orchestrated a round of critical reviews of Jung's writings from Karl Abraham, Jung's former colleague at the Burghölzli hospital, and from the early Welsh Freudian Ernest Jones. Such criticisms multiplied during the 20th century, focusing primarily on the "mysticism" in Jung's writings. Other psychoanalysts, including Jungian analysts, objected to the cult of personality around the Swiss psychiatrist. Criticism reached a crescendo over Jung's perceived collusion with Nazism in the build-up to and during World War II, and it remains a recurrent theme; Thomas Kirsch writes: "Successive generations of Jungian analysts and analysands have wrestled with the question of Jung's complex relations to Germany." Other considered evaluations come from Andrew Samuels and from Robert Withers.
One French philosopher considers that the concept of the collective unconscious "shows also how easily one can slip from the psychological unconscious into perspectives from a universe of thought quite alien to the traditional philosophy and science in which this idea arose".
In his Le Livre Rouge de la psychanalyse ("The Red Book of Psychoanalysis"), the French psychoanalyst Alain Amselek criticizes Jung's tendency to be fascinated by the image and to reduce the human being to an archetype. He contends that Jung dwells in a world of ideas and abstractions, in a world of books and old secrets lost in ancient books of spells (grimoires). While Jung claimed to be an empiricist, Amselek finds him an idealist, a pure thinker who has unquestionably demonstrated his intellectual talent for speculation and the invention of ideas. While Amselek considers Jung's epistemology to be in advance of Freud's, he finds that Jung remains stuck in his intellectualism and in a narrow provincial outlook: his hypotheses are determined by the concept of a postulated pre-existing world, and he constantly sought confirmations of it in the old traditions of Western medieval Europe.
More problematic, at times, has been ad hominem criticism from academics outside the field of analytical psychology. One, a Catholic historian of psychiatry, Richard Noll, wrote three volumes but was able to publish only the first two, in 1994 and 1997; Noll argued that analytical psychology is based on a neo-pagan Hellenistic cult. These attacks on Jung and his work prompted the French psychoanalyst Élisabeth Roudinesco to state in a review: "Even if Noll's theses are based on a solid familiarity with the Jungian corpus [...], they deserve to be re-examined, so great is the author's detestation of the object of his study that it diminishes the credibility of his arguments." Another, a French ethnographer and anthropologist, criticized Jung over his alleged misuse of the term archetype and his "suspect motives" in dealings with some of his colleagues.
See also
Active imagination
Michael Fordham
Jungian interpretation of religion
Keirsey Temperament Sorter
Mythopoetic men's movement
Positive disintegration
Socionics
Edward Armstrong Bennet
References
Further reading
Dohe, Carrie B. Jung's Wandering Archetype: Race and Religion in Analytical Psychology. London: Routledge, 2016.
Glinka, Lukasz Andrzej. Aryan Unconscious: Archetype of Discrimination, History & Politics. Great Abington: Cambridge International Science Publishing, 2014.
Roth, Remo F. Return of the World Soul: Wolfgang Pauli, C.G. Jung and the Challenge of Psychophysical Reality [unus mundus], Part 1: The Battle of the Giants. Pari Publishing, 2011.
External links
Archive for Research in Archetypal Symbolism
Website of the Journal of Analytical Psychology, considered the foremost international publication on AP
Harvest a scholarly journal of the Jung Club, London
International Association of Analytical Psychology
International Association for Jungian Studies
Pacifica Graduate Institute, a graduate school offering programs in Jungian and post-Jungian studies
Jung Arena – Analytical psychology books, journals and resources
ADEPAC Colombia, Colombian analytical psychology news, biographies and resources
Psychological operations (United States)
Psychological operations (PSYOP) are operations to convey selected information and indicators to audiences in order to influence their motives and objective reasoning, and ultimately the behavior of governments, organizations, groups, and individuals.
The purpose of United States psychological operations is to induce or reinforce behavior perceived to be favorable to U.S. objectives. They are an important part of the range of diplomatic, informational, military and economic activities available to the U.S., and can be used in both peacetime and conflict. There are three main types: strategic, operational, and tactical. Strategic PSYOP includes informational activities conducted by U.S. government agencies outside the military arena, though many utilize Department of Defense (DOD) assets. Operational PSYOP are conducted across the range of military operations, including during peacetime, in a defined operational area to promote the effectiveness of the joint force commander's (JFC) campaigns and strategies. Tactical PSYOP are conducted in the area assigned to a tactical commander across the range of military operations to support the tactical mission against opposing forces.
PSYOP can encourage popular discontent with the opposition's leadership, and by combining persuasion with a credible threat, degrade an adversary's ability to conduct or sustain military operations. They can also disrupt, confuse, and protract the adversary's decision-making process, undermining command and control. When properly employed, PSYOP have the potential to save the lives of friendly or enemy forces by reducing the adversary's will to fight. By lowering the adversary's morale and then its efficiency, PSYOP can also discourage aggressive actions by creating indifference within its ranks, ultimately leading to surrender.
Psychological operations are also a core capability of information operations (IO), defined in U.S. doctrine as: "The integrated employment of the core capabilities of electronic warfare, computer network operations, psychological operations, military deception, and operations security, in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making while protecting our own."
In 2010, PSYOP was renamed Military Information Support Operations (MISO); the name briefly reverted to PSYOP in August 2014, returned to MISO in 2015, and was changed back to PSYOP in October 2017.
Products
PSYOP involves the careful creation and dissemination of a product message. Three types of products are used to create these messages: White products, used in overt operations, and Gray or Black products, used in covert PSYOP. White, Gray, and Black do not refer to a product's content but to the methods used to carry out the operation, in particular the degree to which its true source is acknowledged or concealed.
For PSYOP to be successful, they must be based in reality. All messages must be consistent and must not contradict each other, as any gap between the product and reality will be quickly noticed. A credible "truth" must be presented which is consistent to all audiences. PSYOP is primarily a component of offensive counterinformation, but it can be used defensively as well.
PSYOP are used in support of special operations, unconventional warfare, and counterinsurgency (COIN) operations, and can support military operations other than war as well as joint operations. Examples include counterterrorism operations, peace operations, noncombatant evacuation, enforcement of sanctions and maritime interception operations, and strikes and raids.
White PSYOP
White PSYOP is openly attributed to its source. It is acknowledged as an official statement or act of the U.S. government, or emanates from a source associated closely enough with the U.S. government to reflect an official viewpoint. The information should be true and factual. It also includes all output identified as coming from U.S. official sources.
Those authorized to engage in white activity directed at foreign audiences include the State Department, USIA, the Foreign Operations Administration (a predecessor of the Agency for International Development), the Defense Department, and other U.S. government departments and agencies as necessary.
Gray PSYOP
The source of the gray PSYOP product is deliberately ambiguous. According to a 2007 document from the Historian of the Department of State:
The true source (U.S. Government) is not revealed to the target audience. The activity engaged in plausibly appears to emanate from a non-official American source, or an indigenous, non-hostile source, or there may be no attribution.
...
Gray information is information whose content is such that the effect will be increased if the hand of the U.S. Government and in some cases any American participation are not revealed. It is simply a means for the U.S. to present viewpoints which are in the interest of U.S. foreign policy, but which will be acceptable or more acceptable to the intended target audience than an official government statement will be.
Black PSYOP
The activity engaged in appears to emanate from a source (government, party, group, organization, person) usually hostile in nature. The interest of the U.S. government is concealed and the U.S. government would deny responsibility. It is best used in support of strategic plans.
Covert PSYOP is not a function of the U.S. military; it is instead used in special operations because of its political sensitivity and the need for higher-level compartmentalization. Further, black PSYOP, to be credible, may need to disclose sensitive material, with the damage caused by such disclosure considered to be outweighed by the impact of successful deception. To achieve maximum results and to prevent the compromise of overt PSYOP, overt and covert operations must be kept separate: personnel involved in one must not be engaged in the other.
Media
PSYOP conveys messages via visual, audio, and audiovisual media. Military psychological operations, at the tactical level, are usually delivered by loudspeaker and face-to-face communication. For more deliberate campaigns, they may use leaflets, radio or television. Strategic operations may use social media, radio or television broadcasts, various publications, airdropped leaflets, or, as part of a covert operation, material placed in foreign news media.
Process
In order to create a successful PSYOP the following must be established:
clearly define the mission so that it aligns with national objectives
prepare a PSYOP estimate of the situation
prepare the plan
media selection
product development
pretesting - determines the probable impact of the PSYOP on the target audience
production and dissemination of PSYOP material
implementation
posttesting - evaluates audience responses
feedback
Before these steps can occur, intelligence analysts must profile potential targets in order to determine which would be most beneficial to target. To do so, analysts determine the vulnerabilities of these groups and what they would be susceptible to, along with the targets' attitudes toward the current situation, their complaints, ethnic origin, frustrations, languages, problems, tensions, motivations, and perceptions. Once the appropriate targets have been determined, the PSYOP can be created.
Psychological operations should be planned carefully, in that even a tactical message, with modern news media, can spread worldwide and be treated as the policy of the United States. The U.S. Army is responsible for military psychological warfare doctrine. See the World War I section for an example of how a tactical leaflet, not properly coordinated, can cause national-level harm.
The message to be delivered can be adapted to tactical situations, but promises made must be consistent with national policy.
U.S. PSYOP forces are generally forbidden to attempt to change the opinions of "U.S. persons" (citizens, residents, or legal entities), in any location globally. However, commanders may use PSYOP forces to provide public information to U.S. audiences during times of disaster or crisis; information support by PSYOP forces to a noncombatant evacuation operation (NEO), providing evacuation information to U.S. and third-country nationals, likewise falls within this exception. The use of PSYOP forces to deliver necessary public information to a U.S. audience was established in relief activities after Hurricane Andrew in 1992, when Tactical Psychological Operations teams (TPTs) were employed to disseminate information by loudspeaker on the locations of relief shelters and facilities.
During such Defense Support of Civil Authorities (DSCA) operations, military public affairs activities, military civil authority information support (CAIS) activities, public information actions, and news media access to the DSCA operational area are subject to approval by the federal department or agency assigned primary responsibility for managing and coordinating a specific emergency support function in the National Response Framework. Since October 2018, PSYOP forces may be employed domestically for such CAIS activities during an emergency under the direction of the Department of Homeland Security.
Psychological operations were a key Battlefield Operating System used extensively to support Unified Task Force (UNITAF) Somalia operations. In order to maximize the PSYOP impact, the United States established a Joint PSYOP Task Force under the supervision of the Director of Operations, integrated PSYOP into all plans and operations, and limited the PSYOP focus to the operational and tactical levels. Psychological operations do not accomplish missions alone; they work best when combined with and integrated into an overall theater campaign plan. In Operation RESTORE HOPE, U.S. psychological operations succeeded in doing exactly that.
An emerging field of strategic psychological operations is the "battle of the narrative": a full-blown battle in the cognitive dimension of the information environment, just as traditional warfare is fought in the physical domains (air, land, sea, space, and cyberspace). One of the foundational struggles of warfare in the physical domains is to shape the environment so that the contest of arms will be fought on advantageous terms. Likewise, a key component of the battle of the narrative is to succeed in establishing the reasons for and potential outcomes of the conflict on terms favorable to one's efforts.
Psychological operations units
The majority of U.S. military psychological operations units are in the Army. White PSYOP, especially at the strategic level, comes from the Voice of America, the United States Information Agency, or regional radio and television. Central Intelligence Agency units are apt to have responsibility, on a strategic level, for black and some gray PSYOP.
In the United States Department of Defense, psychological operations units exist in the Army (the 4th and 8th Psychological Operations Groups) and in the Air Force, with COMMANDO SOLO units under the Air Force Special Operations Command's 193rd Special Operations Wing. The United States Navy also plans and executes limited PSYOP missions.
United States PSYOP units and soldiers of all branches of the military are prohibited by law from conducting PSYOP missions on domestic audiences. While PSYOP soldiers may offer non-PSYOP-related support to domestic military missions, PSYOP can only target foreign audiences; this does not, however, rule out PSYOP targeting foreign audiences of allied nations. The Information Operations Roadmap, made public in January 2006 but originally approved by Defense Secretary Donald Rumsfeld in October 2003, acknowledged that "information intended for foreign audiences, including public diplomacy and PSYOP, increasingly is consumed by our domestic audience and vice-versa."
Army
Until shortly after the start of the war on terror, the Army's Psychological Operations elements were administratively organized alongside Civil Affairs under the U.S. Army Civil Affairs and Psychological Operations Command (USACAPOC), part of the U.S. Army Special Operations Command (USASOC).
In May 2006 USACAPOC was reorganized to fall instead under the Army Reserve command, and all active duty PSYOP elements were placed directly into USASOC. While reserve PSYOP forces no longer belong to USASOC, that command retains control of PSYOP doctrine. Operationally, PSYOP individuals and organizations support Army and joint maneuver forces or interagency organizations.
Army Psychological Operations provide support to operations ranging from strategic planning down to tactical employment.
PSYOP units generally support corps-sized elements. Tactical Psychological Operations Companies typically support division-sized elements, with tactical control exercised through the G-3; brigades are typically supported by a Tactical PSYOP Detachment. The PSYOP commander maintains operational control of PSYOP elements and advises the commander and general staff on the psychological battlespace.
The smallest organizational PSYOP element is the Tactical PSYOP Team (TPT). A TPT generally consists of a PSYOP team chief (Staff Sergeant or Sergeant), an assistant team chief (Sergeant or Specialist), and an additional soldier who serves as a gunner and operates the speaker system (Specialist). A team is equipped with a Humvee fitted with a loudspeaker and often works with a local translator indigenous to the host or occupied country.
Generally, each maneuver battalion-sized element in a theater of war or operational area has at least one TPT attached to it.
All active duty PSYOP soldiers must initially volunteer for Psychological Operations Assessment and Selection, held year-round at Camp Mackall. Upon selection for Psychological Operations, Soldiers then enter the Psychological Operations Qualification Course (POQC) or "Q-Course" consisting of Special Operations Language training, advance cultural and regional studies, MOS specific training, special operations particular training along with a culmination exercise which incorporates and validates the new skillsets attained by the Soldier.
At the conclusion of the POQC, the new PSYOP soldier is typically assigned to either the 4th or the 8th Psychological Operations Group. Certain reserve soldiers serving in units designated as Airborne are also required to attend Airborne training, while language training and Airborne qualification for PSYOP soldiers assigned to non-Airborne units are awarded on a merit and need basis.
A U.S. Army field manual released in January 2013 states that "Inform and Influence Activities" are critical for describing, directing, and leading military operations. Several Army Division leadership staff are assigned to the "planning, integration and synchronization of designated information-related capabilities."
Army units
There are four psychological operations units in the U.S. Army:
2nd Psychological Operations Group
4th Psychological Operations Group (Airborne)
7th Psychological Operations Group
8th Psychological Operations Group (Airborne)
The 4th Psychological Operations Group (Airborne), based at Fort Bragg, was historically the only active duty PSYOP unit remaining in the United States Army following the close of the Vietnam War, until the August 26, 2011, activation of the 8th Psychological Operations Group (Airborne). The 2nd and the 7th Psychological Operations Groups are in the Army Reserve.
Historic units
245th Psychological Operations Company (POC): Dallas, Texas
Reactivated and became the 345th PSYOP Company. Deployed soldiers during Operation Desert Storm (the Gulf War).
The 345th also deployed post-9/11 to Afghanistan working with U.S. Army Special Forces and Conventional Forces.
In 2003 the 345th deployed to Iraq in support of Operation Iraqi Freedom.
Since November 2001, the 345th Tactical Psychological Operations Company (Airborne) has deployed detachments of soldiers to Afghanistan in support of Operation Enduring Freedom (2008-2009), Iraq, and the Horn of Africa.
244th Psychological Operations Company (POC): Austin, Texas
Formed during the Vietnam era as an active component unit on February 10, 1965; transferred to the Reserves in Abilene, Texas, on October 30, 1975.
Deployed soldiers during Operation Desert Storm (the Gulf War). Inactivated September 15, 1994.
Reactivated and became the 344th PSYOP Company. Since September 16, 2008, it has deployed detachments of soldiers to Afghanistan in Operation Enduring Freedom (2010-2011) and to Operation Enduring Freedom - Horn of Africa (2019) from its new location in Austin, Texas.
Navy
Navy psychological operations policy is specified in OPNAVINST 3434.1, "Psychological Operations".
The Navy provides support to Joint PSYOP programs by providing assets (such as broadcast platforms using shortwave and very high frequency (VHF) bands) for the production and dissemination of PSYOP materials. With the ability of naval vessels (especially the larger task forces) to produce audio-visual materials, the Navy can often produce PSYOP products for use in denied areas. Leaflets are dropped using the PDU-5B dispenser unit (also known as the Leaflet Bomb). The Navy coordinates extensively with the Army, as the majority of PSYOP assets reside within USASOC. PSYOP planning and execution are coordinated through the Naval Network Warfare Command (NETWARCOM) and the Naval Information Operations Command (NIOC), both located in Norfolk, VA.
The U.S. Navy possesses the capability to produce audiovisual products in the Fleet Audiovisual Command, Pacific; the Fleet Imagery Command, Atlantic; the Fleet Combat Camera Groups; Naval Imaging Command; various film libraries; and limited capability from ships and aircraft of the fleet. A Naval Reserve PSYOP audiovisual unit supports the Atlantic Fleet. Navy personnel assets have the capability to produce documents, posters, articles, and other material suitable for PSYOP. Administrative capabilities exist ashore and afloat that prepare and produce various quantities of printed materials. Language capabilities exist in naval intelligence and among naval personnel for most European and Asian languages.
The Fleet Tactical Readiness Group provides equipment and technical maintenance support to conduct civil radio broadcasts and broadcast jamming in the amplitude modulation frequency band. This unit is not trained to produce PSYOP products and must be augmented with PSYOP personnel or linguists when necessary. The unit is capable of being fully operational within 48 hours of receipt of tasking. The unit's equipment consists of a 10.6 kW AM band broadcast radio transmitter; a broadcast studio van; antenna tuner; two antennas (a pneumatically raised top-loaded antenna mast and a wire helium balloon antenna); and a 30 kW generator that provides power to the system.
Air Force
The Air National Guard provides support for Psychological Operations using a modified C-130 Hercules aircraft named EC-130 COMMANDO SOLO, operated by the 193d Special Operations Wing. The purpose of COMMANDO SOLO is to provide an aerial platform for broadcast media on both television and radio. The media broadcast is created by various agencies and organizations. As part of the broader function of information operations, COMMANDO SOLO can also jam the enemy's broadcasts to his own people, or his psychological warfare broadcasting.
The Commando Solo aircraft is currently the only stand-off, high-altitude means available to PSYOP forces to disseminate information to large denied areas. Two orbits were established during Operation Iraqi Freedom, the 2003 invasion of Iraq, one in the north and one in the south of the country, both positioned to keep the aircraft out of reach of potential enemy attack. At their operational altitude, and assuming clear channels, these aircraft could transmit radio and TV signals only a limited distance, which did not reach the objective areas near Baghdad. Straightforward physics dictates the range, given the installed power and the antenna configuration and assuming clear channels.
The enhanced altitude capability of the Commando Solo EC-130J (now funded) is increasing transmitter range. While this is an improvement over the EC-130E's capability, it is a small step, since the increase in altitude is only 7,000 feet (less than 50 percent) and the range increase is governed by a square root function (that is, a 14 percent increase in range).
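As a rough illustration of the square-root relationship invoked above (this is the standard line-of-sight radio-horizon approximation, not a formula given in the source; the symbols d for range, h for transmitter altitude, and R for the Earth's radius are introduced here only for illustration):

d \approx \sqrt{2Rh} \qquad\Rightarrow\qquad \frac{d_2}{d_1} = \sqrt{\frac{h_2}{h_1}}

An altitude increase of roughly 30 percent therefore gives \sqrt{1.3} \approx 1.14, i.e. about a 14 percent increase in range, consistent with the figure quoted above.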
A challenge to COMMANDO SOLO is the increasing use of cable television, which will not receive signals from airborne, ground, or any other transmitters that the cable operator does not want to connect to the system. At best, in the presence of cable TV, COMMANDO SOLO may be able to jam enemy broadcasts that are not, themselves, transmitted by cable.
Central Intelligence Agency
Responsibility for psychological operations was assigned to the pre-CIA Office of Policy Coordination (OPC), with oversight by the Department of State. The overall psychological operations of the United States, overt and covert, were to be under the policy direction of the U.S. Department of State during peacetime and the early stages of war.
Since the OPC was consolidated into the CIA, there has been a psychological operations staff, under various names, within what has successively been called the Deputy Directorate of Plans, the Directorate of Operations, and the National Clandestine Service.
Marines
The career field is fairly new. It was only in the summer of 2018 that the Marine Corps created the new Military Occupational Specialty (MOS) with designator 0521. The aim of PSYOP Marines (0521) is to influence target audiences and advise higher officers and partner forces on cultural considerations to ensure mission success. Candidates must complete the Army's Psychological Operations Qualification Course, which includes classes in psychology, sociology, cultural training blocks, language training, and human dynamics training, among other components.
The Commandant of the Marine Corps has placed a renewed emphasis on operating in the information environment. The Marine Corps seeks to meet this demand by building a capability to influence target audiences and to advise staffs and partners on cultural considerations to achieve tactical and operational objectives.

Information Operations (IO) includes all actions taken to affect enemy information and information systems while defending friendly information and information systems. IO is focused on the adversary's key decision-makers and is conducted during all phases of an operation, across the range of military operations, and at every level. IO, in planning and execution, seeks to coordinate and integrate MAGTF actions and communications and enhance all MAGTF operations.

PSYOP conveys selected information and indicators to foreign audiences to influence their emotions, motives, objective reasoning, and ultimately the behavior of foreign organizations, groups, and individuals in a manner favorable to the commander's objectives. PSYOP enables the Marine Corps to understand and achieve effects in the information environment.

PSYOP Marines are primarily responsible for the analysis of cultural factors, as well as the development and distribution of products used to cause informational and psychological effects. PSYOP researches and determines effective methods of influencing foreign populations from a variety of information sources. PSYOP Marines operate and maintain a variety of message dissemination equipment. Interpersonal communication skills, interest in foreign cultures and languages, and the ability to analyze and organize information are required. PSYOP Marines build rapport with key leaders in unfamiliar surroundings.
PSYOP Marines enable the Marine Corps to achieve targeted effects in the information environment (IE) by conducting Military Information Support Operations (MISO), providing Civil Authorities Information Support (CAIS), or supporting Military Deception (MILDEC). MISO are missions that convey selected information and indicators to foreign organizations, groups, and individuals to influence their emotions, motives, objective reasoning, and ultimately their behavior in a manner favorable to the Commander’s objectives.
History
World War I
During World War I, the Propaganda Sub-Section was established under the American Expeditionary Force (AEF) Military Intelligence Branch within the Executive Division of the General Staff in early 1918. Although the Propaganda Sub-Section produced most of the AEF's propaganda, a few of the leaflets originated elsewhere. General Pershing is supposed to have personally composed Leaflet "Y," Austria Is Out of the War, which was run off on First Army presses but distributed by the Propaganda Sub-Section. That Sub-Section, perhaps reflecting some professional jealousy, thought the leaflet sound in principle, but too prolix and a little too "brotherly." Corps and Army presses issued several small leaflet editions containing a "news flash," after the Sub-Section had approved their content. But in one or two cases that approval was not obtained, and in one unfortunate example a leaflet in Romanian committed the Allies and the United States to the union of all Romanians in Austria-Hungary with Romania. Such geopolitics was emphatically not the job of AEF propaganda and had the potential to cause serious embarrassment.
World War II
There was extensive use of psychological operations in World War II, from the strategic to the tactical. National-level white propaganda was the responsibility of the Office of War Information, while black propaganda was most often the responsibility of the Morale Operations branch of the Office of Strategic Services (OSS).
Psychological operations planning started before the U.S. entry into the war, with the creation of the Office of the Coordinator of Inter-American Affairs (OCIAA) under Nelson Rockefeller, with responsibility for psychological operations targeted at Latin America. Special operations and intelligence concerning Latin America were a bureaucratic problem throughout the war: while the OSS eventually held most such responsibilities elsewhere, the FBI maintained its own intelligence system in Latin America.
On 11 July 1941, William Donovan was named the Coordinator of Information (COI), heading an office that subsequently became the OSS. At first, a unit inside COI called the Foreign Information Service, headed by Robert Sherwood, produced white propaganda outside Latin America.
To deal with some of the bureaucratic problems, the Office of War Information (OWI) was created with Elmer Davis as director. FIS, still under Sherwood, became the Overseas Branch of OWI, dealing in white propaganda. The OSS was created at the same time. Donovan obtained considerable help from the British, especially with black propaganda, from the British Political Warfare Executive (PWE), part of the Ministry of Economic Warfare. PWE was a sister organization to the Special Operations Executive (SOE), which conducted guerrilla warfare. The British Secret Intelligence Service (SIS, also known as MI6) was an essentially independent organization. For the U.S., the OSS combined the functions of SIS and SOE with the black propaganda work of PWE.
The OSS Morale Operations (MO) branch was the psychological operations arm of the OSS. In general, its units worked on a theater-by-theater basis, without a great deal of central coordination. It was present in most theaters, with the exception of the Southwest Pacific theater under Douglas MacArthur, who was hostile to the OSS.
The OSS was responsible for strategic propaganda, while the military commanders had operational and tactical responsibility. Dwight Eisenhower was notably supportive of psychological operations, had a psychological warfare organization on the staff of all his commands, and worked with OSS and OWI. The military did theater-level white propaganda, although the black propaganda function varied and was often carried out by joint U.S.-UK organizations.
For the first time in U.S. history, American psywarriors employed electronic psywar in the field, in September 1944. Engineers of the 1st Radio Section of the 1st Mobile Radio Broadcasting Company (MRBC) recorded POW interviews for front-line broadcasts, and reproduced the sound effects of vast numbers of tanks and other motor vehicles for Allied armored units in attempts to mislead German intelligence and lower enemy morale.
Leaflets were delivered principally from aircraft, but also with artillery shells.
Cold War
Radio
The U.S. engaged in major worldwide radio broadcasts to contain communism, through Radio Free Europe and Radio Liberty.
Korea
Psychological operations were used extensively during the Korean War. The first unit, the 1st Loudspeaker and Leaflet Company, was sent to Korea in fall 1950. Especially for the operations directed against troops of the Democratic People's Republic of Korea (DPRK; North Korea), it was essential to work with Republic of Korea (ROK; South Korea) personnel to develop propaganda with the most effective linguistic and cultural context.
Since the war was a United Nations-mandated operation, political sensitivities were high. Rules limited mention of the People's Republic of China and the Soviet Union, at first out of fear that it would increase their intervention and later because it might demoralize ROK civilians; even so, Stalin was depicted and Chinese troops were targeted in leafleting.
Various methods were used to deliver propaganda, with constraints imposed by exceptionally rugged terrain and by the fact that radios were relatively uncommon among DPRK and PRC troops. Loudspeaker teams often had to get dangerously close to enemy positions. Artillery and light aircraft delivered leaflets on the front lines, while heavy bombers dropped leaflets in the rear. Over 2.5 billion leaflets were dropped over North Korea during the war. There was a somewhat artificial distinction made between strategic and tactical leaflets: rather than differentiating by the message, tactical leaflets were those delivered close to the front lines and strategic leaflets were those delivered farther away.
Less direct and immediate correlation between tactical PSYOP efforts and target audience behavior may still be substantiated after the fact, especially by means of polling and interviews. For example, in the Korean War, approximately one-third of the total prisoner of war (POW) population polled by the United Nations (UN) forces claimed to have surrendered at least in part because of the propaganda leaflets. The contributions of PSYOP in the first Persian Gulf War have also been corroborated through POW interviews. Ninety-eight percent of the 87,000 POWs captured either possessed or had seen PSYOP leaflets that provided them with instructions on how to approach U.S. troops to surrender. Fifty-eight percent of the prisoners interviewed claimed to have heard coalition radio broadcasts, and 46 percent believed that the coalition broadcasts were truthful despite coming from their enemy. Again, some portion of the surrenders might have occurred even without PSYOP encouragement; but certainly, there would appear to be a correlation between PSYOP, which offered the enemy a way to escape the onslaught of U.S. military power, and their compliance with those instructions.
One such operation was Operation Moolah. The objective of this psychological operation was to induce Communist pilots to defect to South Korea with a MiG-15 so that the U.S. could analyze the aircraft's capabilities.
Some leafleting of North Korea was resumed after the Korean War, such as in the Cold War Operation Jilli from 1964 to 1968.
Guatemala
The CIA's operation to overthrow the Government of Guatemala in 1954 marked an early zenith in the Agency's long record of covert action. Following closely on two successful operations, one of which was the installation of the Shah as ruler of Iran in August 1953, the Guatemalan operation, known as PBSuccess, was both more ambitious and more thoroughly successful than either precedent. Rather than helping a prominent contender gain power with a few inducements, PBSuccess used an intensive paramilitary and psychological campaign to replace a popular, elected government with a political non-entity. In method, scale, and conception it had no antecedent, and its triumph confirmed the belief of many in the Eisenhower Administration that covert operations offered a safe, inexpensive substitute for armed force in resisting what they declared were Communist inroads in the Third World.
Vietnam
Psychological operations were extensively used in Vietnam, with white propaganda under the United States Information Agency and Military Assistance Command Vietnam, and grey and black propaganda under the Central Intelligence Agency and the Studies and Observation Group.
As early as August 1964, almost one year before the activation of the Joint U.S. Public Affairs Office (JUSPAO), General William Westmoreland told a CA and PSYOP conference that “psychological warfare and civic action are the very essence of the counterinsurgency campaign here in Vietnam…you cannot win this war by military means alone.” Westmoreland’s successor, Creighton Abrams, is known to have sent down guidelines to the 4th Psychological Operations Group that resulted in the drawing up of no fewer than 17 leaflets along those lines. In fact, the interest in PSYOP went all the way up to the Presidency; weekly reports from JUSPAO were sent to the White House, as well as to the Pentagon and the Ambassador in Saigon. In sum, it is a myth that the United States, stubbornly fixated on a World War II-style conventional war, was unaware of the "other war."
The first official American Vietnam PSYOP unit was the 24th Detachment, sent from Fort Bragg to II Corps in September 1965. Later that year, the 24th Detachment became the 245th Company, based at Nha Trang in II Corps. In February 1966, the 245th Company became the II Corps unit belonging to the 6th PSYOP Battalion, which had just arrived from Fort Bragg. 6th PSYOP Battalion was based in downtown Saigon, with subordinate companies of 244th in DaNang (I Corps), 245th in Nha Trang (II Corps), and 246th in Bien Hoa (III Corps). In late 1967, the 19th PSYOP Company was reassigned from the 3rd Special Forces Group at Fort Bragg to be a subordinate company based in the Mekong Delta (IV Corps) of the 6th Battalion. These companies were later enlarged to each become a battalion, and 6th PSYOP Battalion became the 4th PSYOP Group, the headquarters unit in Saigon for the four companies, now battalions.
During the Vietnam era, the organization of the 4th Psychological Operations Group was very different. The four battalions of the group were divided by geographic region rather than area of expertise as they are now.
The 6th PSYOP Battalion was stationed at Bien Hoa and provided services to the tactical units, both American and Vietnamese, and to the various political entities such as provinces and cities in the area of III Corps.
The 7th PSYOP Battalion was stationed in Da Nang and provided service to I Corps.
The 8th PSYOP Battalion was based at Nha Trang, but its B Company, which was its field teams, was based out of Pleiku nearly 100 kilometers away. The 8th Battalion served the II Corps area of Vietnam.
The 10th PSYOP Battalion was stationed in Can Tho and served IV Corps.
The A company of each battalion consisted of a command section, S-1, S-2, S-3, and a PSYOP Development Center (PDC). Additionally, they generally had extensive printing facilities.
The B companies consisted of the field teams that were stationed throughout their respective corps billeted with MACV teams and combat units.
Nicaragua
The CIA wrote a manual for right-wing rebels—the Contras—entitled Psychological Operations in Guerrilla Warfare in order to bolster their fight against the Marxist Sandinistas.
See also CIA activities in Nicaragua
Sweden
Ola Tunander, a Swedish author, claimed that U.S. submarines as well as other vessels "frequently" and "regularly" operated in the territorial waters of neutral Sweden in the early 1980s, including Stockholm harbor, as part of an elaborate psychological warfare operation whose target was the Swedish people. It is claimed that U.S. operations were conducted by the National Underwater Reconnaissance Office (NURO) and that aspects of the operations were coordinated with the secret NATO "stay-behind" network deployed in Sweden (see Strategy of tension and Operation Gladio). It is also claimed that British submarines participated in such secret operations.
Wars after 1989
Panama
Most PSYOP activities and accomplishments in the U.S. invasion of Panama were hardly noticed by either the U.S. public or the general military community. But the special operations community did notice. The lessons learned in Panama were incorporated into standard operating procedures. Where possible, immediate changes were made to capitalize on the PSYOP successes of the Grenada and Panama operations. This led to improved production, performance, and effect in the next contingency, which took place within 6 months after the return of the last PSYOP elements from Panama. Operations [in Iraq] employed PSYOP of an order of magnitude and effectiveness which many credit to the lessons learned from Panama.
The broader scope of information operations in Panama included denying the Noriega regime use of its own broadcasting facilities. A direct action mission removed key parts of the transmitters. After-action reports indicate that this action should have had a much higher priority and been done very early in the operation.
An unusual technique, developed in real time, was termed the "Ma Bell Mission", or, more formally, capitulation missions. A number of Panamanian strongpoints continued to have telephone access. Spanish-speaking Special Forces personnel attached to a combat unit that would otherwise take the strongpoint by force would phone the Panamanian commander and tell him to put away his weapons and assemble his men on the parade ground, or face lethal consequences. Because of the heavy reliance on telephones, these missions were nicknamed "Ma Bell" operations. "During this ten-day period, TF BLACK elements were instrumental in the surrender of 14 cuartels (strongpoints), almost 2,000 troops, and over 6,000 weapons without a single U.S. casualty." Several high-ranking cronies of Manuel Noriega who were on the "most wanted" list were also captured in Ma Bell operations.
Psychological operations are sometimes intimately linked to combat operations, with the use of force driving home the propaganda message. This was the case during the Panamanian operation at Ft. Amador, an installation shared by the U.S. and the Panamanian Defense Forces (PDF). There were U.S. dependents at the installation, but security considerations prevented evacuating them before the attack. Given the concern for U.S. citizens and rules of engagement (ROE) that directed casualties be minimized, PSYOP loudspeaker teams from the 1st Battalion, 4th PSYOP Group became a key asset. When the PDF did not surrender after initial appeals, the message changed, with the tactical commander warning that "resistance was hopeless in the face of overwhelming firepower", and a series of demonstrations took place, escalating from small arms to 105 mm howitzer rounds. Subsequent broadcasts convinced the PDF to give up, and the entire process allowed Ft. Amador to be secured with few casualties and minimal damage.
The 1991 Gulf War
Psychological operations were particularly valuable during the Gulf War due to the reluctance of many in the Iraqi military to engage in combat. Through leaflets and loudspeaker broadcasts, PSYOP forces walked many enemy soldiers through successful surrender.
Coalition forces worked extensively with Saudi, Kuwaiti, and other partners to ensure that psychological operations were culturally and linguistically appropriate.
One unusual technique employed during the Gulf War was dropping leaflets in advance of a strike. These leaflets informed the Iraqi soldiers that they would be bombed the next day by B-52 bombers and urged them to surrender and save their own lives. The subsequent bombings were then performed in a way that did not necessarily maximize casualties. Another set of leaflets was then dropped, saying the promise had been kept and that the survivors should surrender to save themselves. Afterwards, this technique was employed against other units, naming the specific unit that had been bombed the previous day.
Bosnia-Herzegovina
Following the signing of the Dayton Peace Accords in 1995, active duty PSYOP units reinforced with US Army reserve personnel deployed to Bosnia in support of NATO Peace Implementation Forces (IFOR). Elements of the 6th PSYOP Battalion served as the "Headquarters, Coalition Joint IFOR Information Campaign" (IFOR-CJIIC) at Sarajevo, initially operating out of the former Zetra Olympic Stadium. Security at Sarajevo was provided by British, French, Italian and Turkish conventional military forces, who had been operating there under United Nations control until NATO initiated operations. Elements of the 3rd PSYOP Battalion also deployed to Sarajevo and conducted print, radio and television product development. Elements of the 9th PSYOP Battalion deployed to Tuzla in direct support of the 1st Armored Division conducting media dissemination by radio and handbill.
The initial mission was to provide information to military and civilians of all three warring factions (Croat, Bosniak and Serb), helping to restore a peaceful environment with the ultimate goal of saving lives and reducing tensions. The primary means of information dissemination was radio and television, as well as considerable handbill, poster and souvenir distribution such as soccer balls and coloring books. At the start of the mission, PSYOP forces in Sarajevo often came under sniper fire. Although several 6th PSYOP Battalion HMMWV vehicles were damaged by gunfire, no casualties were sustained. Gunfire incidents largely subsided within the first 90 days of the mission.
As the mission continued to develop, PSYOP forces assumed new information support missions focused on educating the civilian population about the considerable danger of landmines and unexploded ordnance littering the countryside. A reporting system similar to 911 in the United States was developed, with the ultimately successful goal of encouraging civilians to report the presence of landmines and unexploded ordnance for safe removal and destruction. The threat was so significant and the civilian casualty rate so alarming that this mission became the major focus. Support was sought and obtained from DC Comics, which produced special editions of Superman comics printed in the Croatian and Serbian dialects, with equal editions printed in the Latin and Cyrillic alphabets for the appropriate audiences. German organizations also contributed with print editions of a children's magazine developed in Germany specifically for this mission called "Mirko", a play on the Serbo-Croatian word "mir", meaning "peace".
By summer of 1996, most PSYOP missions in Bosnia were being assumed by Reserve PSYOP forces.
Second Gulf War
Arguably the most visible image of the 2003 invasion of Iraq was the toppling of a statue of Saddam Hussein in Firdos Square in central Baghdad. This widely reported event led to allegations of American manipulation and staging for mass consumption and pro-US propaganda value. Further claims have been made that the toppling of Saddam's statue was not the natural and spontaneous celebration of the local population in Baghdad, but was carefully orchestrated and overseen by a team within US Army PSYOP. Accusations centered around inaccurate media depictions that included inflation of the number of "locals" who were present and cheering that day, as well as the charge that the "local" Iraqis were not even from the area, but were in fact recruited by American Intelligence, and brought in for the sole purpose of participating in the pre-planned toppling.
Recent controversies
CNN and NPR interns incident
In 2000, it came to light that soldiers from the 4th Psychological Operations Group had been interning at the American news networks Cable News Network (CNN) and National Public Radio (NPR) during the late 1990s. The internships were an attempt to provide PSYOP personnel with expertise developed by the private sector under the Army's "Training with Industry" program. The arrangement caused concern about the influence these soldiers might have on American news, and the internships were terminated.
National Public Radio reported on April 10, 2000:
The U.S. Army's Psychological Operations unit placed interns at CNN and NPR in 1998 and 1999. The placements at CNN were reported in the European press in February of this year and the program was terminated. The NPR placements will be reported this week in TV Guide.
Use of music in the interrogation of prisoners
In 2003, Sergeant Mark Hadsell claimed to have used loud music during the interrogation of Iraqi prisoners.
Other reports of the use of music during interrogation have occasionally plagued PSYOP.
On 9 December 2008 the Associated Press reported that various musicians were coordinating their objections to the use of their music as a technique for softening up captives through an initiative called Zero dB. However, not all musicians have taken issue with the possibility that their music is being used during interrogations; Stevie Benton of the group Drowning Pool commented supportively.
Afghanistan burning bodies incident
On 1 October 2005 in Gumbad, Afghanistan, soldiers from the 173rd Airborne burned, for hygienic reasons, the bodies of two Taliban fighters killed in a firefight the previous day, despite Islamic customs that forbid cremation. The platoon leader also failed to properly notify his battalion commander of the decision prior to burning the bodies; when the battalion commander was notified, he ordered the flaming bodies to be extinguished. An official investigation into the incident found evidence of poor decision-making, poor judgment, poor reporting, and a lack of knowledge of and respect for local Afghan custom and tradition. The infantry officer received a general officer letter of reprimand. Reserve PSYOP soldiers were involved because they heard about the incident and used the information to incite Taliban fighters in another area, where freelance journalist Stephen Dupont was located. Dupont reported that the PSYOP soldiers claimed the bodies were to be burned due to hygiene concerns.
During the War on Terror, U.S. PSYOP teams often broadcast abrasive messages over loudspeakers to try to tempt enemy fighters into direct confrontation, where the Americans have the upper hand. Other times, they use their loudspeaker to convince enemy soldiers to surrender. In the Afghanistan incident, a PSYOP sergeant allegedly broadcast the following message to the Taliban:
Attention, Taliban, you are all cowardly dogs. You allowed your fighters to be laid down facing west and burned. You are too scared to retrieve their bodies. This just proves you are the lady boys we always believed you to be.
Another soldier stated:
You attack and run away like women. You call yourself Talibs but you are a disgrace to the Muslim religion and you bring shame upon your family. Come and fight like men instead of the cowardly dogs you are.
U.S. authorities investigated the incident and the two Reserve PSYOP soldiers received administrative punishment for broadcasting messages which were not approved. Investigators found no evidence that the bodies were burned for a psychological effect. They concluded that the broadcast violated standing policies for the content of loudspeaker messages and urged that all soldiers in the command undergo training on Afghan sensitivities.
Pentagon analysts and the mainstream media
In 2008, The New York Times exposed how analysts portrayed in the U.S. news media as independent and objective were in fact under the tutelage of the Pentagon. According to the NYT: "Hidden behind that appearance of objectivity, though, is a Pentagon information apparatus that has used those analysts in a campaign to generate favorable news coverage of the administration's wartime performance."
2009 congressional delegation to Afghanistan
In February 2011, journalist Michael Hastings reported in Rolling Stone that Lt. Colonel Michael Holmes, the supposed leader of a PSYOP group in Afghanistan, alleged that Lt. Gen. William B. Caldwell, a three-star general in charge of training troops in Afghanistan, ordered Holmes and his group to perform in-depth research on visiting U.S. congressmen in order to spin presentations and visits. According to Holmes, his team was tasked with "illegally providing themes and messages to influence the people and leadership of the United States."
Reported targets included United States Senators John McCain, Joe Lieberman, Jack Reed, Al Franken, Carl Levin, Rep. Steve Israel of the House Appropriations Committee; Adm. Mike Mullen of the Joint Chiefs of Staff; the Czech ambassador to Afghanistan; the German interior minister, and think-tank analysts. Under the 1948 Smith–Mundt Act, such operations may not be used to target Americans. When Holmes attempted to seek counsel and to protest, he was placed under investigation by the military at the behest of General Caldwell's chief of staff.
Caldwell's spokesman, Lt. Col. Shawn Stroud, denied Holmes's assertions, and other unnamed military officials disputed Holmes's claims as false and misleading, saying there are no records of him ever completing any PSYOP training. Subsequently, Holmes conceded that he was not a Psychological Operations officer nor was he in charge of a Psychological Operations unit, and acknowledged that Caldwell's orders were "fairly innocuous." Officials say that Holmes spent his time in theater starting a strategic communications business with Maj. Laural Levine, with whom he conducted an improper relationship in Afghanistan. A former aide said, "At no point did Holmes ever provide a product to Gen. Caldwell". General David Petraeus has since ordered an investigation into the alleged incident.
Internet influence operation
In 2022, Meta and the Stanford Internet Observatory found that for five years people associated with the U.S. military, who tried to conceal their identities, created fake accounts on social media systems including Balatarin, Facebook, Instagram, Odnoklassniki, Telegram, Twitter, VKontakte and YouTube in an influence operation in Central Asia and the Middle East. Their posts, including nearly 300,000 tweets, were primarily in Arabic, Farsi and Russian. They criticized Iran, China and Russia and gave pro-Western narratives. Data suggested the activity was a series of covert campaigns rather than a single operation.
Anti-vaccine propaganda targeted at the Philippines
In 2024, Reuters revealed that the Donald Trump administration had launched a secret PSYOP in 2020 against Chinese COVID-19 vaccines in several Asian countries, mainly the Philippines. The campaign consisted of hundreds of fake social network profiles, manned by PSYOP staff in Florida, that sowed doubts about the Chinese vaccine's efficacy and argued Muslims should reject it because it allegedly contained pork protein. In 2021, a few months after Trump's defeat in the presidential election, the Biden administration cancelled the campaign.
Portrayals in popular culture
The general's daughter from both the novel and blockbuster movie The General's Daughter was a PSYOP officer.
A USACAPOC combat patch (FWS-SSI) can be seen being worn by a soldier in the film X-Men: The Last Stand in the President's command center.
The book The Men Who Stare at Goats and film deal extensively with PSYOP.
The USACAPOC patch can be seen being worn by the characters portrayed by Spike Jonze, Ice Cube, and Mark Wahlberg in the movie Three Kings.
The novel Tree of Smoke by writer Denis Johnson revolves around PSYOP.
In the 9th season of the television series NCIS, Jamie Lee Curtis plays a recurring role as the civilian PSYOP director at the US Department of Defense. In the 15th season, Jacqueline Sloane, a former Army PSYOP officer, was introduced to the main cast; her PSYOP background is regularly shown to affect her.
In the 1979 film Apocalypse Now, during the famous helicopter attack on the beach, actor Robert Duvall, playing LTC Bill Kilgore says over the radio, "Put on psy war op. Make it loud....Shall we dance?", at which point the helicopter mounted loudspeakers start playing Richard Wagner's "Ride of the Valkyries".
In the 1959 Korean War film Pork Chop Hill, the Chinese continuously broadcast propaganda over loudspeakers between battles.
In the 2012 film Safe House, former CIA agent Tobin Frost is portrayed as having extensive psychological warfare expertise.
In the 2016 film The Accountant, the father of the main character is a PSYOP Colonel.
See also
Chieu Hoi
CIA's Special Activities Division
COINTELPRO
Congress for Cultural Freedom
Disinformation
Fake news
Information warfare
Lockheed EC-130
Operation Mockingbird
Pentagon military analyst program
Propaganda
Psychological warfare
Psychological Warfare Division
References
Further reading
Bibliography
External links
iwar.org.uk
U.S. - PSYOP producing mid-eastern kids comic book
Psychological Operations Branch Insignia
Psychological Operations: Lineage and Honors Information
Brain fag syndrome

Brain fag syndrome (BFS) describes a set of symptoms including difficulty concentrating and retaining information, head and/or neck pains, and eye pain. Brain fag is believed to be most common in adolescents and young adults because of the pressures of life during those years. The term, now outdated, was first used in 19th-century Britain before becoming a colonial description of Nigerian high school and university students in the 1960s. Its characterization as a culture-bound syndrome caused by excessive pressure on the young to succeed is disputed by Ayonrinde (2020).
Etymology
The term 'brain fag' presumably stems from the verb meaning of the word "fag", "To cause (a person, animal, or part of the body) to become tired; to fatigue, wear out" chiefly found in British English.
Classification
BFS was classified in the fourth revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) as a culture-bound syndrome. Individuals with symptoms of brain fag must be differentiated from those with the syndrome according to the Brain Fag Syndrome Scale (BFSS); Ola et al. said it would not be "surpris[ing] if BFS was called an equivalent of either depression or anxiety".
Causes
Brain fag typically occurs in people with high levels of anxiety and stress. Morakinyo found that 20 people with BFS had an anxiety-related achievement drive that led to the use of psychostimulants and consequent sleep deprivation, which contributed to cognitive disruption. Omoluabi related BFS to test anxiety.
Treatment
Anumonye reported treatment success with lorazepam. Others found benefits from antidepressants and relaxation exercises.
Epidemiology
BFS has been reported in other African cultures, in Brazil and Argentina, and among Ethiopian Jews. The historically higher reported prevalence among males may reflect the greater number of males in higher education in African countries; studies since the 1990s have not verified gender differences.
Other studies found a possible association with low socioeconomic status, an association with average or higher intelligence, and a high association with neuroticism. Individuals with BFS have been found to have problems with isolation, poor study habits, and the use of psychostimulants as well as physical changes including in muscle tension and heart rate.
History
The condition was first described by R. H. Prince, who named it after the term brain fag used by students who believed their symptoms were attributable to "brain fatigue". However, this term had been used in Europe as far back as 1839. In a detailed historical account, Ayonrinde (2020) illustrates that, contrary to widely held and published belief in diagnostic manuals and psychiatric, social science, and educational texts, the term "brain fag" and associated syndromes of anxiety, affective and somatoform symptoms in student and "brain worker" populations were first described in nineteenth-century Britain (Tunstall, 1850), with dissemination across the British Empire. Ayonrinde concludes that the time has come for the decolonization of brain fag and its African syndromization, in the true spirit of ethical scientific rigor in the twenty-first century.
See also
Burnout
Exhaustion
References
Maslow's hierarchy of needs

Maslow's hierarchy of needs is an idea in psychology proposed by American psychologist Abraham Maslow in his 1943 paper "A Theory of Human Motivation" in the journal Psychological Review. Maslow subsequently extended the idea to include his observations of humans' innate curiosity. His theories parallel many other theories of human developmental psychology, some of which focus on describing the stages of growth in humans. The theory is a classification system intended to reflect the universal needs of society at its base, then proceeding to more acquired emotions. The hierarchy of needs is split between deficiency needs and growth needs, with two key themes within the theory being individualism and the prioritization of needs. While the theory is usually shown as a pyramid in illustrations, Maslow himself never created a pyramid to represent the hierarchy of needs. The hierarchy of needs is both a psychological idea and an assessment tool, particularly in education, healthcare and social work, and it remains a popular framework in sociology research, management training, and higher education.
Moreover, the hierarchy of needs is used to study how humans intrinsically partake in behavioral motivation. Maslow used the terms "physiological", "safety", "belonging and love", "social needs" or "esteem", "self-actualization" and "transcendence" to describe the pattern through which human needs and motivations generally move. According to the theory, this means that for motivation to arise at the next stage, each prior stage must be satisfied by an individual. The hierarchy has been used to explain how effort and motivation are correlated in the context of human behavior. Each of these individual levels contains a certain amount of internal sensation that must be met in order for an individual to complete their hierarchy. The goal in Maslow's hierarchy is to attain the level or stage of self-actualization.
Although widely used and researched, Maslow's hierarchy of needs lacks conclusive supporting evidence, and the validity of the theory remains contested in academia. One criticism of the original theory, since revised in newer versions, is that it requires a lower level to be completely satisfied before an individual moves on to a higher pursuit; there is evidence to suggest that levels continuously overlap each other.
Stages
Maslow's hierarchy of needs is often portrayed in the shape of a pyramid, with the largest, most fundamental needs at the bottom, and the need for self-actualization and transcendence at the top. In other words, the idea is that individuals' most basic needs must be met before they become motivated to achieve higher-level needs. Despite the fact that the ideas behind the hierarchy are Maslow's, the pyramid itself does not exist anywhere in Maslow's original work.
The most fundamental four layers of the pyramid contain what Maslow called "deficiency needs" or "d-needs": esteem, friendship and love, security, and physical needs. If these "deficiency needs" are not met – except for the most fundamental (physiological) need – there may not be a physical indication, but the individual will feel anxious and tense. Deprivation is what causes deficiency, so when one has unmet needs, this motivates them to fulfill what they are being denied. Maslow's idea suggests that the most basic level of needs must be met before the individual will strongly desire (or focus motivation upon) the secondary or higher-level needs. Maslow also coined the term "metamotivation" to describe the motivation of people who go beyond the scope of basic needs and strive for constant betterment.
The human brain is a complex system and has parallel processes running at the same time, thus many different motivations from various levels of Maslow's hierarchy can occur at the same time. Maslow spoke clearly about these levels and their satisfaction in terms such as "relative", "general", and "primarily". Instead of stating that the individual focuses on a certain need at any given time, Maslow stated that a certain need "dominates" the human organism. Thus Maslow acknowledged the likelihood that the different levels of motivation could occur at any time in the human mind, but he focused on identifying the basic types of motivation and the order in which they would tend to be met.
Physiological needs
Physiological needs are the base of the hierarchy. These needs are the biological component for human survival, and according to Maslow's hierarchy of needs they factor into internal motivation. According to Maslow's theory, humans are compelled to satisfy physiological needs first in order to pursue higher levels of intrinsic satisfaction. To advance to higher-level needs in Maslow's hierarchy, physiological needs must be met first; a person who is struggling to meet physiological needs is unlikely to pursue safety, belonging, esteem, and self-actualization.
Physiological needs may include:
Air
Water
Food
Heat
Clothes
Reproduction
Shelter
Sleep
Many of these physiological needs must be met for the human body to remain in homeostasis. Air, for example, is a physiological need; a human being requires air more urgently than higher-level needs, such as a sense of social belonging. Physiological needs are critical to "meet the very basic essentials of life". This allows for cravings such as hunger and thirst to be satisfied and not disrupt the regulation of the body.
Safety needs
Once a person's physiological needs are satisfied, their safety needs take precedence and dominate behavior. In the absence of physical safety (due to war, natural disaster, family violence, childhood abuse, etc.) or of economic safety (due to an economic crisis and lack of work opportunities), these safety needs manifest themselves in ways such as a preference for job security, grievance procedures for protecting the individual from unilateral authority, savings accounts, insurance policies, disability accommodations, and the like. This level is more likely to predominate in children, as they generally have a greater need to feel safe, especially children who have disabilities. Adults are also impacted, typically in economic matters; "adults are not immune to the need of safety". Safety needs include shelter, job security, health, and safe environments. If a person does not feel safe in an environment, they will seek safety before attempting to meet any higher level of survival. This is why "the goal of consistently meeting the need for safety is to have stability in one's life"; stability brings back the concept of homeostasis, which the body needs.
Safety needs include:
Health
Personal security
Emotional security
Financial security
Love and social needs
After physiological and safety needs are fulfilled, the third level of human needs is interpersonal and involves feelings of belongingness. According to Maslow, humans possess an affective need for a sense of belonging and acceptance among social groups, regardless of whether these groups are large or small; being part of a group is crucial, whether it is work, sports, friends or family. The sense of belongingness is "being comfortable with and connection to others that results from receiving acceptance, respect, and love." For example, some large social groups may include clubs, co-workers, religious groups, professional organizations, sports teams, gangs, or online communities. Some examples of small social connections include family members, intimate partners, mentors, colleagues, and confidants. According to Maslow, humans need to love and be loved, both sexually and non-sexually, by others. Many people become susceptible to loneliness, social anxiety, and clinical depression in the absence of this love or belonging element. This need is especially strong in childhood, and it can override the need for safety, as witnessed in children who cling to abusive parents. Deficiencies due to hospitalism, neglect, shunning, ostracism, etc. can adversely affect the individual's ability to form and maintain emotionally significant relationships in general.
Mental health can be a significant factor in an individual's needs and development. When an individual's needs are not met, it can cause depression during adolescence. Individuals who grow up in higher-income families are much less likely to experience depression, because all of their basic needs are met. Studies have shown that when a family goes through financial stress for a prolonged time, depression rates are higher, not only because basic needs are not being met, but because this stress strains the parent-child relationship: the parents are stressed about providing for their children, and they are likely to spend less time at home because they are working more to provide for their family.
Social belonging needs include:
Family
Friendship
Intimacy
Trust
Acceptance
Receiving and giving love and affection
In certain situations, the need for belonging may overcome the physiological and security needs, depending on the strength of the peer pressure. In contrast, for some individuals, the need for self-esteem is more important than the need for belonging; and for others, the need for creative fulfillment may supersede even the most basic needs.
Esteem needs
Esteem is the respect and admiration of a person, but also "self-respect and respect from others". Most people need stable esteem, meaning that which is soundly based on real capacity or achievement. Maslow noted two versions of esteem needs. The "lower" version of esteem is the need for respect from others, and may include a need for status, recognition, fame, prestige, and attention. The "higher" version is the need for self-respect, and can include a need for strength, competence, mastery, self-confidence, independence, and freedom. This "higher" version follows the guideline that the "hierarchies are interrelated rather than sharply separated", meaning that esteem and the subsequent levels are not strictly separated; instead, the levels are closely related.
Esteem comes from day-to-day experiences which provide a learning opportunity that allows us to discover ourselves. This is incredibly important for children, which is why giving them "the opportunity to discover they are competent and capable learners" is crucial. To boost this, adults must provide opportunities for children to have successful and positive experiences to give children a greater "sense of self". Adults, especially parents and educators must create and ensure an environment for children that is supportive and provides them with opportunities that "helps children see themselves as respectable, capable individuals". It can also be found that "Maslow indicated that the need for respect or reputation is most important for children ... and precedes real self-esteem or dignity", which reflects the two aspects of esteem: for oneself and others.
Extended hierarchy of needs
Cognitive needs
It has been suggested that Maslow's hierarchy of needs can be extended, after esteem needs, with two more categories: cognitive needs and aesthetic needs. Cognitive needs crave meaning, information, comprehension and curiosity, creating a will to learn and attain knowledge. From an educational viewpoint, Maslow wanted humans to have the intrinsic motivation to become educated people. People have cognitive needs such as creativity, foresight, curiosity, and meaning. Individuals who enjoy activities that require deliberation and brainstorming have a greater need for cognition; individuals who are unmotivated to participate in such activities, by contrast, have a low need for cognition.
Aesthetic needs
After cognitive needs are met, a person progresses to aesthetic needs: the desire to beautify one's life. This consists of having the ability to appreciate the beauty of the world around oneself on a day-to-day basis. According to Maslow's theories, in order to progress toward self-actualization, humans require beautiful imagery or novel and aesthetically pleasing experiences. Humans must immerse themselves in nature's splendor while paying close attention to their surroundings in order to extract the world's beauty. One accomplishes this by making one's environment pleasant to look at or be around, for example by discovering personal style choices that feel representative of oneself and make the environment a place one fits well into. This higher level of need to connect with nature results in a sense of intimacy with nature and all that is endearing. Aesthetic needs also relate to beautifying oneself: improving one's physical appearance through the ways one chooses to dress and express oneself, following personal beauty and grooming standards and ideas.
Self-actualization
"What a man can be, he must be." This quotation forms the basis of the perceived need for self-actualization. This level of need refers to the realization of one's full potential. Maslow describes this as the desire to accomplish everything that one can, to become the most that one can be. People may have a strong, particular desire to become an ideal parent, succeed athletically, or create paintings, pictures, or inventions. To understand this level of need, a person must not only succeed in the previous needs but master them. Self-actualization can be described as a value-based system when discussing its role in motivation. Self-actualization is understood as the goal or explicit motive, and the previous stages in Maslow's hierarchy fall in line to become the step-by-step process by which self-actualization is achievable; an explicit motive is the objective of a reward-based system that is used to intrinsically drive the completion of certain values or goals. Individuals who are motivated to pursue this goal seek and understand how their needs, relationships, and sense of self are expressed through their behavior. Self-actualization needs include:
Partner acquisition
Parenting
Utilizing and developing talents and abilities
Pursuing goals
Transcendence needs
Maslow later subdivided the triangle's top to include self-transcendence, also known as spiritual needs. Spiritual needs differ from other types of needs in that they can be met on multiple levels; when met, they produce feelings of integrity and raise things to a higher plane of existence. In his later years, Maslow explored a further dimension of motivation while criticizing his original vision of self-actualization. He suggested that a person who transcends remains rooted in their own culture yet is able to look beyond it to other viewpoints and ideas. By these later ideas, one finds the fullest realization in giving oneself to something beyond oneself, for example in altruism or spirituality. He equated this with the desire to reach the infinite. "Transcendence refers to the very highest and most inclusive or holistic levels of human consciousness, behaving and relating, as ends rather than means, to oneself, to significant others, to human beings in general, to other species, to nature, and to the cosmos."
History
Maslow's hierarchy of needs was created as Maslow "studied and observed monkeys [...] noticing their unusual pattern of behavior that addressed priorities based on individual needs".
Some Indigenous academics have speculated that his theories, including the hierarchy, may have been influenced by the teachings and philosophy of the Blackfeet tribe, where he spent several weeks doing fieldwork in 1938; however, while this idea has gained attention on social media, there is no evidence to suggest he borrowed or stole ideas for his hierarchy of needs, which he only first published in 1943.
Maslow's idea was further described in his 1954 book Motivation and Personality.
At the time of its original publication in 1943, there was no empirical evidence to support the theory.
Criticism
Maslow's hierarchy of needs has widespread influence outside academia, perhaps because it explains things "that most humans immediately recognize in themselves and others". Still, academically, Maslow's idea is heavily contested. Although recent research appears to validate the existence of universal human needs, as well as shared ordering of the way in which people seek and satisfy needs, the exact hierarchy proposed by Maslow is called into question. The most common criticism is that the theory implies that different individuals with similar backgrounds, at similar junctures in their respective lives and faced with the same situation, would make the same decisions. Instead, a common observation is that humans are driven by unique sets of motivations, and their behavior cannot be reliably predicted from Maslowian principles. Maslow has also been criticized for originally theorizing that people generally move from the bottom of the pyramid to the top during their lifetime, when in fact most people move up and down the pyramid constantly. However, Maslow later revised this model, proposing that the pyramid is not the same for each individual, that it is not a rigid linear process, and that individuals can have various needs at the same time or shift between levels.
Methodology
Maslow studied people such as Albert Einstein, Jane Addams, Eleanor Roosevelt, and Baruch Spinoza, rather than mentally ill or neurotic people, writing that "the study of crippled, stunted, immature, and unhealthy specimens can yield only a cripple psychology and a cripple philosophy".
Ranking
Global ranking
In a 1976 review of Maslow's hierarchy of needs, little evidence was found for the specific ranking of needs that Maslow described or for the existence of a definite hierarchy at all. This refutation was claimed to be supported by the majority of longitudinal data and cross-sectional studies at the time, with the limited support for Maslow's hierarchy criticized due to poor measurement criteria and selection of control groups.
In 1984, the order in which the hierarchy is arranged was criticized as being ethnocentric by Geert Hofstede. In turn, Hofstede's work was criticized by others. Maslow's hierarchy of needs was argued as failing to illustrate and expand upon the difference between the social and intellectual needs of those raised in individualistic societies and those raised in collectivist societies. The needs and drives of those in individualistic societies tend to be more self-centered than those in collectivist societies, focusing on the improvement of the self, with self-actualization being the apex of self-improvement. In collectivist societies, the needs of acceptance and community will outweigh the needs for freedom and individuality.
Criticisms towards the theory have also been expressed on the lack of consideration towards individualism and collectivism in the context of spirituality.
Sex ranking
The position and value of sex within Maslow's hierarchy have been a source of criticism. Maslow's hierarchy places sex in the physiological needs category, alongside food and breathing. Some critics argue that this placement of sex neglects the emotional, familial, and evolutionary implications of sex within the community, although others point out that this critique could apply to all of the basic needs. However, Maslow himself acknowledged that the satisfaction of sexual desire was likely linked to other social motives as well. Furthermore, it is recognized that physiological needs such as sex and hunger can be related to higher-order motivations.
Hierarchy changes by circumstance
The higher-order (self-esteem and self-actualization) and lower-order (physiological, safety, and love) need classification of Maslow's hierarchy of needs is not universal and may vary across cultures due to individual differences and availability of resources in the region or geopolitical entity/country.
In a 1997 study, exploratory factor analysis (EFA) of a thirteen-item scale showed there were two particularly important levels of needs in the US during the peacetime of 1993 to 1994: survival (physiological and safety) and psychological (love, self-esteem, and self-actualization). In 1991, a retrospective peacetime measure was established and collected during the Persian Gulf War, and US citizens were asked to recall the importance of needs from the previous year. Once again, only two levels of needs were identified; therefore, people have the ability and competence to recall and estimate the importance of needs. For citizens in the Middle East (Egypt and Saudi Arabia), three levels of needs regarding importance and satisfaction surfaced during the 1990 retrospective peacetime. These three levels were completely different from those of US citizens.
Changes regarding the importance and satisfaction of needs from the retrospective peacetime to wartime due to stress varied significantly across cultures (the US vs. the Middle East). For the US citizens, there was only one level of needs, since all needs were considered equally important. With regards to satisfaction of needs during the war, in the US there were three levels: physiological needs, safety needs, and psychological needs (social, self-esteem, and self-actualization). During the war, the satisfaction of physiological needs and safety needs were separated into two independent needs, while during peacetime, they were combined as one. For the people of the Middle East, the satisfaction of needs changed from three levels to two during wartime.
A study of the ordering of needs in Asia found differences between the ordering of lower and higher order needs. For instance, community (related to belongingness and considered a lower order need in Maslow's hierarchy) was found to be the highest order need across Asia, followed closely by self-acceptance and growth.
A 1981 study looked at how Maslow's hierarchy might vary across age groups. A survey asked participants of varying ages to rate a set number of statements from most important to least important. The researchers found that children had higher physical need scores than the other groups, the love need emerged from childhood to young adulthood, the esteem need was highest among the adolescent group, young adults had the highest self-actualization level, and security needs were highest in old age, although rated comparably across all age groups. The authors argued that this suggested Maslow's hierarchy may be limited as a theory of developmental sequence, since the sequence of the love need and the self-esteem need should be reversed according to age.
See also
ERG theory, further expands and explains Maslow's theory
First World problem reflects on trivial concerns in the context of more pressing needs
Fundamental human needs, Manfred Max-Neef's model
Functional prerequisites
Human givens, a theory in psychotherapy that offers descriptions of the nature, needs, and innate attributes of humans
Need theory, David McClelland's model
Positive disintegration
Self-determination theory, Edward L. Deci's and Richard Ryan's model
Frotteurism

Frotteurism is a paraphilic interest in rubbing, usually one's pelvic area or erect penis, against a non-consenting person for sexual pleasure. It may involve touching any part of the body, including the genital area. A person who practices frotteuristic acts is known as a frotteur.
Toucherism is sexual arousal based on grabbing or rubbing one's hands against an unsuspecting (and non-consenting) person. It usually involves touching breasts, buttocks or genital areas, often while quickly walking across the victim's path. Some psychologists consider toucherism a manifestation of frotteurism, while others distinguish the two. In clinical medicine, treatment of frotteuristic disorder involves cognitive behavioral therapy coupled with the administration of an SSRI.
Etymology and history
Frotteuristic acts were probably first interpreted as signs of a psychological disorder by French psychiatrist Valentin Magnan, who described three acts of "frottage" in an 1890 study. "Frottage" derives from the French verb frotter, meaning "to rub". Frotteur is a French noun literally meaning "one who rubs". It was popularized by German sexologist Richard von Krafft-Ebing in his book Psychopathia Sexualis, borrowing from Magnan's French terminology. Clifford Allen later coined frotteurism in his 1969 textbook of sexual disorders.
The Diagnostic and Statistical Manual of Mental Disorders called this sexual disorder by the name frottage until the third edition (DSM III-R), but changed to frotteurism in the fourth edition, and now uses frotteuristic disorder in the fifth edition. Nevertheless, the term frottage still remains in some law codes where it is synonymous with the term frotteurism.
Symptoms and classification
The professional handbook of the American Psychiatric Association (APA), the Diagnostic and Statistical Manual of Mental Disorders, fifth edition, lists the following diagnostic criteria for frotteuristic disorder.
Over a period of at least 6 months, recurrent and intense sexual arousal from touching or rubbing against a nonconsenting person, as manifested by fantasies, urges, or behaviors.
The individual has acted on these sexual urges with a nonconsenting person, or the sexual urges or fantasies cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.
If the individual has not acted on their interest and experiences no distress or impairment, they are considered to have a frotteuristic sexual interest, but not frotteuristic disorder. Some sexologists distinguish between frotteurism (as pelvic rubbing) and toucherism (as groping with hands), but the DSM does not. Sexologist Kurt Freund described frotteurism and toucherism as courtship disorders that occur at the tactile stage of human courtship.
Prevalence and legality
The prevalence of frotteurism is unknown. The DSM estimates that 10–14% of men seen in clinical settings for paraphilias or hypersexuality have frotteuristic disorder, indicating that the population prevalence is lower. However, frotteuristic acts, as opposed to frotteuristic disorder, may occur in up to 30% of men in the general population. The majority of frotteurs are male and the majority of victims are female, although female on male, female on female, and male on male frotteurs exist. This activity is often done in circumstances where the victim cannot easily respond, in a public place such as a crowded train or concert. Marco Vassi's story "Subway Dick" is an example of such acts in a train.
Usually, such nonconsensual sexual contact is viewed as a criminal offense: a form of sexual assault albeit often classified as a misdemeanor with minor legal penalties. Conviction may result in a sentence or psychiatric treatment.
See also
Eve teasing
Forcible touching
Groping
Masturbation
Non-penetrative sex
Sexual harassment
Butsukari otoko
Medical genetics

Medical genetics is the branch of medicine that involves the diagnosis and management of hereditary disorders. Medical genetics differs from human genetics in that human genetics is a field of scientific research that may or may not apply to medicine, while medical genetics refers to the application of genetics to medical care. For example, research on the causes and inheritance of genetic disorders would be considered within both human genetics and medical genetics, while the diagnosis, management, and counselling of people with genetic disorders would be considered part of medical genetics.
In contrast, the study of typically non-medical phenotypes such as the genetics of eye color would be considered part of human genetics, but not necessarily relevant to medical genetics (except in situations such as albinism). Genetic medicine is a newer term for medical genetics and incorporates areas such as gene therapy, personalized medicine, and the rapidly emerging new medical specialty, predictive medicine.
Scope
Medical genetics encompasses many different areas, including clinical practice of physicians, genetic counselors, and nutritionists, clinical diagnostic laboratory activities, and research into the causes and inheritance of genetic disorders. Examples of conditions that fall within the scope of medical genetics include birth defects and dysmorphology, intellectual disabilities, autism, mitochondrial disorders, skeletal dysplasia, connective tissue disorders, cancer genetics, and prenatal diagnosis. Medical genetics is increasingly becoming relevant to many common diseases. Overlaps with other medical specialties are beginning to emerge, as recent advances in genetics are revealing etiologies for morphologic, endocrine, cardiovascular, pulmonary, ophthalmologic, renal, psychiatric, and dermatologic conditions. The medical genetics community is increasingly involved with individuals who have undertaken elective genetic and genomic testing.
Subspecialties
In some ways, many of the individual fields within medical genetics are hybrids between clinical care and research. This is due in part to recent advances in science and technology (for example, see the Human Genome Project) that have enabled an unprecedented understanding of genetic disorders.
Clinical genetics
Clinical genetics is a medical specialty with particular attention to hereditary disorders. Branches of clinical genetics include:
1. Prenatal genetics
Couples at risk of having a child with a genetic disorder, seen before conception or during pregnancy
High risk prenatal screening results
Abnormal fetal ultrasound
2. Pediatric genetics
Birth defects
Developmental delay, autism, epilepsy
Short stature and skeletal dysplasia
3. Adult genetics
Cardiomyopathy and cardiac dysrhythmias
Inherited kidney disease
Dementia and neurodegeneration
Connective tissue disease
4. Cancer genetics
Breast/ovarian cancer
Bowel cancer
Endocrine tumors
Examples of genetic syndromes that are commonly seen in the genetics clinic include chromosomal rearrangements (e.g. Down syndrome, 22q11.2 deletion syndrome, Turner syndrome, Williams syndrome), Fragile X syndrome, Marfan syndrome, neurofibromatosis, Huntington disease, familial adenomatous polyposis, and many more.
Training and qualification
In Europe, the training of physicians in Clinical/Medical Genetics is overseen by the Union Européenne des Médecins Spécialistes (UEMS). This organization aims to harmonize and raise the standards of medical specialist training across Europe. The UEMS has established European Training Requirements (ETR) for Medical Genetics to guide the education and training of medical geneticists.
Individuals seeking acceptance into clinical genetics training programs must hold an MD, or in some countries, an MB ChB or MB BS degree. These qualifications ensure that trainees have the foundational medical knowledge required to specialize in Medical Genetics. The optimal training program involves a total of five years: one year of general medical training (the "common trunk", often covering fields such as general practice, pediatrics, obstetrics and gynecology, neurology, psychiatry, and internal medicine) followed by four years of specialized training in Medical Genetics. This specialized training should include at least two years of clinical patient care and at least six months in genetic laboratory diagnostics. Trainees' progress is evaluated through a structured program that begins with observation and progresses to independent practice under supervision, culminating in the ability to manage complex cases independently.
Final certification involves a comprehensive assessment, which may include national examinations or the European Certificate in Medical Genetics and Genomics (ECMGG). This certificate serves as a benchmark for high standards in the specialty across Europe and is increasingly recognized by various national regulatory authorities.
In the United States, physicians who practice clinical genetics are accredited by the American Board of Medical Genetics and Genomics (ABMGG). In order to become a board-certified practitioner of Clinical Genetics, a physician must complete a minimum of 24 months of training in a program accredited by the ABMGG. Individuals seeking acceptance into clinical genetics training programs must hold an M.D. or D.O. degree (or their equivalent) and have completed a minimum of 12 months of training in an ACGME-accredited residency program in internal medicine, pediatrics, obstetrics and gynecology, or other medical specialty.
In Australia and New Zealand, clinical genetics is a three-year advanced training program for those who already have their primary medical qualification (MBBS or MD) and have successfully completed basic training in either paediatric medicine or adult medicine. Training is overseen by the Royal Australasian College of Physicians with the Australasian Association of Clinical Geneticists contributing to authorship of the curriculum via their parent organization, the Human Genetics Society of Australasia.
Metabolic/biochemical genetics
Metabolic (or biochemical) genetics involves the diagnosis and management of inborn errors of metabolism in which patients have enzymatic deficiencies that perturb biochemical pathways involved in metabolism of carbohydrates, amino acids, and lipids. Examples of metabolic disorders include galactosemia, glycogen storage disease, lysosomal storage disorders, metabolic acidosis, peroxisomal disorders, phenylketonuria, and urea cycle disorders.
Cytogenetics
Cytogenetics is the study of chromosomes and chromosome abnormalities. While cytogenetics historically relied on microscopy to analyze chromosomes, new molecular technologies such as array comparative genomic hybridization are now becoming widely used. Examples of chromosome abnormalities include aneuploidy, chromosomal rearrangements, and genomic deletion/duplication disorders.
Molecular genetics
Molecular genetics involves the discovery of and laboratory testing for DNA mutations that underlie many single gene disorders. Examples of single gene disorders include achondroplasia, cystic fibrosis, Duchenne muscular dystrophy, hereditary breast cancer (BRCA1/2), Huntington disease, Marfan syndrome, Noonan syndrome, and Rett syndrome. Molecular tests are also used in the diagnosis of syndromes involving epigenetic abnormalities, such as Angelman syndrome, Beckwith-Wiedemann syndrome, Prader-Willi syndrome, and uniparental disomy.
Mitochondrial genetics
Mitochondrial genetics concerns the diagnosis and management of mitochondrial disorders, which have a molecular basis but often result in biochemical abnormalities due to deficient energy production.
There exists some overlap between medical genetic diagnostic laboratories and molecular pathology.
Genetic counseling
Genetic counseling is the process of providing information about genetic conditions, diagnostic testing, and risks in other family members, within the framework of nondirective counseling. Genetic counselors are non-physician members of the medical genetics team who specialize in family risk assessment and counseling of patients regarding genetic disorders. The precise role of the genetic counselor varies somewhat depending on the disorder.
When working alongside geneticists, genetic counselors often specialize in pediatric genetics, which focuses on developmental abnormalities present in newborns, infants, or children. The major goal of pediatric counseling is to explain the genetic basis of the child's developmental concerns in a compassionate and articulate manner that allows potentially distressed or frustrated parents to understand the information easily. Genetic counselors also typically take a family pedigree, which summarizes the medical history of the patient's family. This aids the clinical geneticist in the differential diagnosis process and helps determine which further steps should be taken to help the patient.
History
Although genetics has its roots back in the 19th century with the work of the Bohemian monk Gregor Mendel and other pioneering scientists, human genetics emerged later. It started to develop, albeit slowly, during the first half of the 20th century. Mendelian (single-gene) inheritance was studied in a number of important disorders such as albinism, brachydactyly (short fingers and toes), and hemophilia. Mathematical approaches were also devised and applied to human genetics. Population genetics was created.
Medical genetics was a late developer, emerging largely after the close of World War II (1945) when the eugenics movement had fallen into disrepute. The Nazi misuse of eugenics sounded its death knell. Shorn of eugenics, a scientific approach could be used and was applied to human and medical genetics. Medical genetics saw an increasingly rapid rise in the second half of the 20th century and continues in the 21st century.
Current practice
The clinical setting in which patients are evaluated determines the scope of practice, diagnostic, and therapeutic interventions. For the purposes of general discussion, the typical encounters between patients and genetic practitioners may involve:
Referral to an out-patient genetics clinic (pediatric, adult, or combined) or an in-hospital consultation, most often for diagnostic evaluation.
Specialty genetics clinics focusing on management of inborn errors of metabolism, skeletal dysplasia, or lysosomal storage diseases.
Referral for counseling in a prenatal genetics clinic to discuss risks to the pregnancy (advanced maternal age, teratogen exposure, family history of a genetic disease), test results (abnormal maternal serum screen, abnormal ultrasound), and/or options for prenatal diagnosis (typically non-invasive prenatal screening, diagnostic amniocentesis or chorionic villus sampling).
Multidisciplinary specialty clinics that include a clinical geneticist or genetic counselor (cancer genetics, cardiovascular genetics, craniofacial or cleft lip/palate, hearing loss clinics, muscular dystrophy/neurodegenerative disorder clinics).
Diagnostic evaluation
Each patient will undergo a diagnostic evaluation tailored to their own particular presenting signs and symptoms. The geneticist will establish a differential diagnosis and recommend appropriate testing. These tests might evaluate for chromosomal disorders, inborn errors of metabolism, or single gene disorders.
Chromosome studies
Chromosome studies are used in the general genetics clinic to determine a cause for developmental delay or intellectual disability, birth defects, dysmorphic features, or autism. Chromosome analysis is also performed in the prenatal setting to determine whether a fetus is affected with aneuploidy or other chromosome rearrangements. Finally, chromosome abnormalities are often detected in cancer samples. A large number of different methods have been developed for chromosome analysis:
Chromosome analysis using a karyotype involves special stains that generate light and dark bands, allowing identification of each chromosome under a microscope.
Fluorescence in situ hybridization (FISH) involves fluorescent labeling of probes that bind to specific DNA sequences, used for identifying aneuploidy, genomic deletions or duplications, characterizing chromosomal translocations and determining the origin of ring chromosomes.
Chromosome painting is a technique that uses fluorescent probes specific for each chromosome to differentially label each chromosome. This technique is more often used in cancer cytogenetics, where complex chromosome rearrangements can occur.
Array comparative genomic hybridization is a newer molecular technique that involves hybridization of an individual DNA sample to a glass slide or microarray chip containing molecular probes (ranging from large ~200kb bacterial artificial chromosomes to small oligonucleotides) that represent unique regions of the genome. This method is particularly sensitive for detection of genomic gains or losses across the genome but does not detect balanced translocations or distinguish the location of duplicated genetic material (for example, a tandem duplication versus an insertional duplication).
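To make the calling logic concrete, here is a minimal Python sketch of how log2 test/reference ratios might be computed and thresholded per probe; the probe names, intensity values, and the ±0.3 cutoff are all invented for illustration and do not reflect any particular array platform or clinical pipeline:

```python
import math

# Hypothetical probe intensities: (probe name, test signal, reference signal).
# All names and values are illustrative only.
probes = [
    ("chr7_probe_01", 2100.0, 1050.0),   # ~2x signal: candidate gain
    ("chr7_probe_02", 1980.0, 1010.0),
    ("chr15_probe_01", 480.0, 1000.0),   # ~0.5x signal: candidate loss
    ("chr2_probe_01", 1010.0, 1000.0),   # balanced
]

THRESHOLD = 0.3  # log2-ratio cutoff, an arbitrary illustrative value

for name, test, ref in probes:
    log2_ratio = math.log2(test / ref)
    if log2_ratio > THRESHOLD:
        call = "gain"
    elif log2_ratio < -THRESHOLD:
        call = "loss"
    else:
        call = "balanced"
    print(f"{name}: log2 ratio = {log2_ratio:+.2f} -> {call}")
```

Real pipelines additionally normalize intensities, segment ratios across neighboring probes, and require several consecutive probes before calling a gain or loss; note also, as stated above, that a balanced translocation leaves every ratio near zero and so is invisible to this method.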
Basic metabolic studies
Biochemical studies are performed to screen for imbalances of metabolites in the bodily fluid, usually the blood (plasma/serum) or urine, but also in cerebrospinal fluid (CSF). Specific tests of enzyme function (either in leukocytes, skin fibroblasts, liver, or muscle) are also employed under certain circumstances. In the US, the newborn screen incorporates biochemical tests to screen for treatable conditions such as galactosemia and phenylketonuria (PKU). Patients suspected to have a metabolic condition might undergo the following tests:
Quantitative amino acid analysis is typically performed using the ninhydrin reaction, followed by liquid chromatography to measure the amount of amino acid in the sample (either urine, plasma/serum, or CSF). Measurement of amino acids in plasma or serum is used in the evaluation of disorders of amino acid metabolism such as urea cycle disorders, maple syrup urine disease, and PKU. Measurement of amino acids in urine can be useful in the diagnosis of cystinuria or renal Fanconi syndrome as can be seen in cystinosis.
Urine organic acid analysis can be either performed using quantitative or qualitative methods, but in either case the test is used to detect the excretion of abnormal organic acids. These compounds are normally produced during bodily metabolism of amino acids and odd-chain fatty acids, but accumulate in patients with certain metabolic conditions.
The acylcarnitine combination profile detects compounds such as organic acids and fatty acids conjugated to carnitine. The test is used for detection of disorders involving fatty acid metabolism, including MCAD.
Pyruvate and lactate are byproducts of normal metabolism, particularly during anaerobic metabolism. These compounds normally accumulate during exercise or ischemia, but are also elevated in patients with disorders of pyruvate metabolism or mitochondrial disorders.
Ammonia is an end product of amino acid metabolism and is converted in the liver to urea through a series of enzymatic reactions termed the urea cycle. Elevated ammonia can therefore be detected in patients with urea cycle disorders, as well as other conditions involving liver failure.
Enzyme testing is performed for a wide range of metabolic disorders to confirm a diagnosis suspected based on screening tests.
Molecular studies
DNA sequencing is used to directly analyze the genomic DNA sequence of a particular gene. In general, only the parts of the gene that code for the expressed protein (exons) and small amounts of the flanking untranslated regions and introns are analyzed. Therefore, although these tests are highly specific and sensitive, they do not routinely identify all of the mutations that could cause disease.
DNA methylation analysis is used to diagnose certain genetic disorders that are caused by disruptions of epigenetic mechanisms such as genomic imprinting and uniparental disomy.
Southern blotting is an early technique based on detection of fragments of DNA separated by size through gel electrophoresis and detected using radiolabeled probes. This test was routinely used to detect deletions or duplications in conditions such as Duchenne muscular dystrophy but is being replaced by high-resolution array comparative genomic hybridization techniques. Southern blotting is still useful in the diagnosis of disorders caused by trinucleotide repeats.
Treatments
Each cell of the body contains the hereditary information (DNA) wrapped up in structures called chromosomes. Since genetic syndromes are typically the result of alterations of the chromosomes or genes, there is no treatment currently available that can correct the genetic alterations in every cell of the body. Therefore, there is currently no "cure" for genetic disorders. However, for many genetic syndromes there is treatment available to manage the symptoms. In some cases, particularly inborn errors of metabolism, the mechanism of disease is well understood and offers the potential for dietary and medical management to prevent or reduce the long-term complications. In other cases, infusion therapy is used to replace the missing enzyme. Current research is actively seeking to use gene therapy or other new medications to treat specific genetic disorders.
Management of metabolic disorders
In general, metabolic disorders arise from enzyme deficiencies that disrupt normal metabolic pathways. For instance, in the hypothetical example:
Normal pathway:

  A --X--> B --Y--> C --Z--> D

With enzyme Z missing or insufficient:

  AAAA --X--> BBBBBB --Y--> CCCCCCCCCC --/--> (no D)
                                 |
                               EEEEE
Compound "A" is metabolized to "B" by enzyme "X", compound "B" is metabolized to "C" by enzyme "Y", and compound "C" is metabolized to "D" by enzyme "Z".
If enzyme "Z" is missing, compound "D" will be missing, while compounds "A", "B", and "C" will build up. The pathogenesis of this particular condition could result from lack of compound "D", if it is critical for some cellular function, or from toxicity due to excess "A", "B", and/or "C", or from toxicity due to the excess of "E" which is normally only present in small amounts and only accumulates when "C" is in excess. Treatment of the metabolic disorder could be achieved through dietary supplementation of compound "D" and dietary restriction of compounds "A", "B", and/or "C" or by treatment with a medication that promoted disposal of excess "A", "B", "C" or "E". Another approach that can be taken is enzyme replacement therapy, in which a patient is given an infusion of the missing enzyme "Z" or cofactor therapy to increase the efficacy of any residual "Z" activity.
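To make the accumulation argument concrete, here is a toy Python sketch of the hypothetical pathway above under simple first-order kinetics; the rate constants, influx, and time step are arbitrary, and real enzyme behavior (e.g. Michaelis–Menten saturation) is deliberately ignored:

```python
# Toy simulation of the hypothetical pathway A --X--> B --Y--> C --Z--> D.
# Setting k_Z = 0 models a complete deficiency of enzyme Z.

def simulate(k_X=0.5, k_Y=0.5, k_Z=0.5, influx_A=1.0, steps=200, dt=0.1):
    A = B = C = D = 0.0
    for _ in range(steps):
        flux_X = k_X * A          # A -> B, catalyzed by enzyme X
        flux_Y = k_Y * B          # B -> C, catalyzed by enzyme Y
        flux_Z = k_Z * C          # C -> D, catalyzed by enzyme Z
        A += (influx_A - flux_X) * dt
        B += (flux_X - flux_Y) * dt
        C += (flux_Y - flux_Z) * dt
        D += flux_Z * dt
    return A, B, C, D

print("normal Z:    A=%.1f B=%.1f C=%.1f D=%.1f" % simulate())
print("deficient Z: A=%.1f B=%.1f C=%.1f D=%.1f" % simulate(k_Z=0.0))
```

With k_Z = 0, no D is ever produced while C grows without bound, which is exactly the pattern the dietary, medication, and enzyme replacement strategies described below aim to correct.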
Diet
Dietary restriction and supplementation are key measures taken in several well-known metabolic disorders, including galactosemia, phenylketonuria (PKU), maple syrup urine disease, organic acidurias and urea cycle disorders. Such restrictive diets can be difficult for the patient and family to maintain, and require close consultation with a nutritionist who has special experience in metabolic disorders. The composition of the diet will change depending on the caloric needs of the growing child and special attention is needed during a pregnancy if a woman is affected with one of these disorders.
Medication
Medical approaches include enhancement of residual enzyme activity (in cases where the enzyme is made but is not functioning properly), inhibition of other enzymes in the biochemical pathway to prevent buildup of a toxic compound, or diversion of a toxic compound to another form that can be excreted. Examples include the use of high doses of pyridoxine (vitamin B6) in some patients with homocystinuria to boost the activity of the residual cystathionine synthase enzyme, administration of biotin to restore activity of several enzymes affected by deficiency of biotinidase, treatment with NTBC in tyrosinemia to inhibit the production of succinylacetone, which causes liver toxicity, and the use of sodium benzoate to decrease ammonia build-up in urea cycle disorders.
Enzyme replacement therapy
Certain lysosomal storage diseases are treated with infusions of a recombinant enzyme (produced in a laboratory), which can reduce the accumulation of the compounds in various tissues. Examples include Gaucher disease, Fabry disease, mucopolysaccharidoses, and glycogen storage disease type II. Such treatments are limited by the ability of the enzyme to reach the affected areas (the blood brain barrier prevents enzyme from reaching the brain, for example), and can sometimes be associated with allergic reactions. The long-term clinical effectiveness of enzyme replacement therapies varies widely among different disorders.
Other examples
Angiotensin receptor blockers in Marfan syndrome and Loeys-Dietz syndrome
Bone marrow transplantation
Gene therapy
Career paths and training
There are a variety of career paths within the field of medical genetics, and naturally the training required for each area differs considerably. The information included in this section applies to the typical pathways in the United States and there may be differences in other countries. US practitioners in clinical, counseling, or diagnostic subspecialties generally obtain board certification through the American Board of Medical Genetics.
Ethical, legal and social implications
Genetic information provides a unique type of knowledge about an individual and his/her family, fundamentally different from a typical laboratory test that provides a "snapshot" of an individual's health status. The unique status of genetic information and inherited disease has a number of ramifications with regard to ethical, legal, and societal concerns.
On 19 March 2015, scientists urged a worldwide ban on clinical use of methods, particularly the use of CRISPR and zinc finger nucleases, to edit the human genome in a way that can be inherited. In April 2015 and April 2016, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR. In February 2016, British scientists were given permission by regulators to genetically modify human embryos by using CRISPR and related techniques on condition that the embryos were destroyed within seven days. In June 2016 the Dutch government was reported to be planning to follow suit with similar regulations which would specify a 14-day limit.
Societies
The more empirical approach to human and medical genetics was formalized by the founding in 1948 of the American Society of Human Genetics. The Society first began annual meetings that year (1948) and its international counterpart, the International Congress of Human Genetics, has met every 5 years since its inception in 1956. The Society publishes the American Journal of Human Genetics on a monthly basis.
Medical genetics is recognized as a distinct medical specialty. In the U.S., medical genetics has its own approved board (the American Board of Medical Genetics) and clinical specialty college (the American College of Medical Genetics). The college holds an annual scientific meeting, publishes a monthly journal, Genetics in Medicine, and issues position papers and clinical practice guidelines on a variety of topics relevant to human genetics.
In Australia and New Zealand, medical geneticists are trained and certified under the auspices of the Royal Australasian College of Physicians, but professionally belong to the Human Genetics Society of Australasia and its special interest group, the Australasian Association of Clinical Geneticists, for ongoing education, networking and advocacy.
Research
The broad range of research in medical genetics reflects the overall scope of this field, including basic research on genetic inheritance and the human genome, mechanisms of genetic and metabolic disorders, translational research on new treatment modalities, and the impact of genetic testing.
Basic genetics research
Basic research geneticists usually undertake research in universities, biotechnology firms and research institutes.
Allelic architecture of disease
Sometimes the link between a disease and an unusual gene variant is more subtle. The genetic architecture of common diseases is an important factor in determining the extent to which patterns of genetic variation influence group differences in health outcomes. According to the common disease/common variant hypothesis, common variants present in the ancestral population before the dispersal of modern humans from Africa play an important role in human diseases. Genetic variants associated with Alzheimer disease, deep venous thrombosis, Crohn disease, and type 2 diabetes appear to adhere to this model. However, the generality of the model has not yet been established and, in some cases, is in doubt. Some diseases, such as many common cancers, appear not to be well described by the common disease/common variant model.
Another possibility is that common diseases arise in part through the action of combinations of variants that are individually rare. Most of the disease-associated alleles discovered to date have been rare, and rare variants are more likely than common variants to be differentially distributed among groups distinguished by ancestry. However, groups could harbor different, though perhaps overlapping, sets of rare variants, which would reduce contrasts between groups in the incidence of the disease.
The number of variants contributing to a disease and the interactions among those variants also could influence the distribution of diseases among groups. The difficulty that has been encountered in finding contributory alleles for complex diseases and in replicating positive associations suggests that many complex diseases involve numerous variants rather than a moderate number of alleles, and the influence of any given variant may depend in critical ways on the genetic and environmental background. If many alleles are required to increase susceptibility to a disease, the odds are low that the necessary combination of alleles would become concentrated in a particular group purely through drift.
Population substructure in genetics research
One area in which population categories can be important considerations in genetics research is in controlling for confounding between population substructure, environmental exposures, and health outcomes. Association studies can produce spurious results if cases and controls have differing allele frequencies for genes that are not related to the disease being studied, although the magnitude of this problem in genetic association studies is subject to debate. Various methods have been developed to detect and account for population substructure, but these methods can be difficult to apply in practice.
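One widely used method of this kind is genomic control, which estimates how much association test statistics are inflated by substructure. The Python sketch below shows the basic computation with invented test statistics; under the null hypothesis, 1-degree-of-freedom chi-square statistics have a median of about 0.455, so a median well above that across many presumably null markers suggests confounding:

```python
from statistics import median

# Invented 1-df chi-square association statistics for presumably null markers.
chi2_stats = [0.02, 0.31, 0.45, 0.58, 0.72, 0.95, 1.30, 1.88, 2.40, 3.10]

NULL_MEDIAN = 0.4549  # median of the chi-square distribution with 1 df

lambda_gc = median(chi2_stats) / NULL_MEDIAN
print(f"genomic inflation factor lambda = {lambda_gc:.2f}")

# A common correction divides each statistic by lambda when lambda > 1.
corrected = [x / lambda_gc for x in chi2_stats] if lambda_gc > 1 else chi2_stats
print(f"largest statistic after correction: {max(corrected):.2f}")
```

Alternatives such as principal-component adjustment model the substructure more directly, at the cost of requiring genome-wide marker data.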
Population substructure also can be used to advantage in genetic association studies. For example, populations that represent recent mixtures of geographically separated ancestral groups can exhibit longer-range linkage disequilibrium between susceptibility alleles and genetic markers than is the case for other populations. Genetic studies can use this admixture linkage disequilibrium to search for disease alleles with fewer markers than would be needed otherwise. Association studies also can take advantage of the contrasting experiences of racial or ethnic groups, including migrant groups, to search for interactions between particular alleles and environmental factors that might influence health.
See also
Full genome sequencing
Inborn error of metabolism
Predictive medicine
DNA Valley
External links
Genetics home reference
The National Human Genome Research Institute hosts an information center
The Phenomizer – A tool for clinical diagnostics in medical genetics
Neuropsychopharmacology

Neuropsychopharmacology, an interdisciplinary science related to psychopharmacology (study of effects of drugs on the mind) and fundamental neuroscience, is the study of the neural mechanisms that drugs act upon to influence behavior. It entails research of mechanisms of neuropathology, pharmacodynamics (drug action), psychiatric illness, and states of consciousness. These studies are instigated at the detailed level involving neurotransmission/receptor activity, biochemical processes, and neural circuitry. Neuropsychopharmacology supersedes psychopharmacology in the areas of "how" and "why", and additionally addresses other issues of brain function. Accordingly, the clinical aspect of the field includes psychiatric (psychoactive) as well as neurologic (non-psychoactive) pharmacology-based treatments. Developments in neuropsychopharmacology may directly impact the studies of anxiety disorders, affective disorders, psychotic disorders, degenerative disorders, eating behavior, and sleep behavior.
History
Drugs such as opium, alcohol, and certain plants have been used for millennia by humans to ease suffering or change awareness, but until the modern scientific era knowledge of how the substances actually worked was quite limited, most pharmacological knowledge being more a series of observations than a coherent model. The first half of the 20th century saw psychology and psychiatry as largely phenomenological, in that behaviors or themes which were observed in patients could often be correlated to a limited variety of factors such as childhood experience, inherited tendencies, or injury to specific brain areas. Models of mental function and dysfunction were based on such observations. Indeed, the behavioral branch of psychology dispensed altogether with what actually happened inside the brain, regarding most mental dysfunction as what could be dubbed "software" errors. In the same era, the nervous system was progressively being studied at the microscopic and chemical level, but there was virtually no mutual benefit with clinical fields—until several developments after World War II began to bring them together. Neuropsychopharmacology may be regarded to have begun in the early 1950s with the discovery of drugs such as MAO inhibitors, tricyclic antidepressants, chlorpromazine (Thorazine), and lithium, which showed some clinical specificity for mental illnesses such as depression and schizophrenia. Until that time, treatments that actually targeted these complex illnesses were practically non-existent. The prominent methods which could directly affect brain circuitry and neurotransmitter levels were the prefrontal lobotomy and electroconvulsive therapy, the latter of which was conducted without muscle relaxants, and both of which often caused the patient great physical and psychological injury.
The field now known as neuropsychopharmacology has resulted from the growth and extension of many previously isolated fields which have met at the core of psychiatric medicine, and engages a broad range of professionals from psychiatrists to researchers in genetics and chemistry. The use of the term has gained popularity since 1990 with the founding of several journals and institutions such as the Hungarian College of Neuropsychopharmacology. This rapidly maturing field shows some degree of flux, as research hypotheses are often restructured based on new information.
Overview
An implicit premise in neuropsychopharmacology with regard to the psychological aspects is that all states of mind, including both normal and drug-induced altered states, and diseases involving mental or cognitive dysfunction, have a neurochemical basis at the fundamental level, and certain circuit pathways in the central nervous system at a higher level. (See also: Neuron doctrine) Thus the understanding of nerve cells or neurons in the brain is central to understanding the mind. It is reasoned that the mechanisms involved can be elucidated through modern clinical and research methods such as genetic manipulation in animal subjects, imaging techniques such as functional magnetic resonance imaging (fMRI), and in vitro studies using selective binding agents on live tissue cultures. These allow neural activity to be monitored and measured in response to a variety of test conditions. Other important observational tools include radiological imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). These imaging techniques are extremely sensitive and can image tiny molecular concentrations on the order of 10⁻¹⁰ M, such as those found with the extrastriatal D1 receptor for dopamine.
One of the ultimate goals is to devise and develop prescriptions of treatment for a variety of neuropathological conditions and psychiatric disorders. More profoundly, though, the knowledge gained may provide insight into the very nature of human thought, mental abilities like learning and memory, and perhaps consciousness itself. A direct product of neuropsychopharmacological research is the knowledge base required to develop drugs which act on very specific receptors within a neurotransmitter system. These "hyperselective-action" drugs would allow the direct targeting of specific sites of relevant neural activity, thereby maximizing the efficacy (or technically the potency) of the drug within the clinical target and minimizing adverse effects. However, there are some cases when some degree of pharmacological promiscuity is tolerable and even desirable, producing more desirable results than a more selective agent would. An example of this is Vortioxetine, a drug which is not particularly selective as a serotonin reuptake inhibitor, having a significant degree of serotonin modulatory activity, but which has demonstrated reduced discontinuation symptoms (and reduced likelihood of relapse) and greatly reduced incidence of sexual dysfunction, without loss in antidepressant efficacy.
The groundwork is currently being paved for the next generation of pharmacological treatments, which will improve quality of life with increasing efficiency. For example, contrary to previous thought, it is now known that the adult brain does to some extent grow new neurons—the study of which, in addition to neurotrophic factors, may hold hope for neurodegenerative diseases like Alzheimer's, Parkinson's, ALS, and types of chorea. All of the proteins involved in neurotransmission are a small fraction of the more than 100,000 proteins in the brain. Thus there are many proteins which are not even in the direct path of signal transduction, any of which may still be a target for specific therapy. At present, novel pharmacological approaches to diseases or conditions are reported at a rate of almost one per week.
Neurotransmission
So far as we know, everything we perceive, feel, think, know, and do is a result of neurons firing and resetting. When a cell in the brain fires, a small chemical and electrical swing called an action potential may affect the firing of as many as a thousand other neurons in a process called neurotransmission. In this way signals are generated and carried through networks of neurons, the bulk electrical effect of which can be measured directly on the scalp by an EEG device.
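A standard textbook caricature of this fire-and-reset behavior is the leaky integrate-and-fire model, sketched below in Python; the voltages, time constant, and injected current are arbitrary illustrative values, and the model is not tied to any specific claim in this article:

```python
# Leaky integrate-and-fire neuron: membrane voltage drifts toward rest,
# integrates injected current, and resets after crossing threshold.
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # mV
TAU_M = 10.0   # membrane time constant, ms
R_M = 10.0     # membrane resistance, MOhm
DT = 0.1       # integration time step, ms

v = V_REST
spike_times = []
for step in range(5000):                          # 500 ms of simulated time
    t = step * DT
    i_inj = 2.0 if 100.0 <= t < 400.0 else 0.0    # nA of injected current
    dv = (-(v - V_REST) + R_M * i_inj) / TAU_M
    v += dv * DT
    if v >= V_THRESH:                             # threshold crossed: "fire"
        spike_times.append(t)
        v = V_RESET                               # "reset"

print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms")
```

Each threshold crossing stands in for an action potential; in a real network, every such spike would perturb the thousands of downstream neurons described above.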
By the last decade of the 20th century, the essential knowledge of all the central features of neurotransmission had been gained. These features are:
The synthesis and storage of neurotransmitter substances
The transport of synaptic vesicles and subsequent release into the synapse
Receptor activation and cascade function
Transport mechanisms (reuptake) and/or enzyme degradation
The more recent advances involve understanding at the organic molecular level: the biochemical action of the endogenous ligands, enzymes, receptor proteins, and so on. The critical changes affecting cell firing occur when the signalling neurotransmitters from one neuron, acting as ligands, bind to receptors of another neuron. Many neurotransmitter systems and receptors are well known, and research continues toward the identification and characterization of a large number of very specific subtypes of receptors. For the six most important neurotransmitters, Glu, GABA, ACh, NE, DA, and 5HT (listed at neurotransmitter), there are at least 29 major subtypes of receptor. Further "sub-subtypes" exist together with variants, totalling in the hundreds for just these six transmitters (see serotonin receptor, for example). It is often found that receptor subtypes have differentiated function, which in principle opens up the possibility of refined intentional control over brain function.
It has long been known that ultimate control over the membrane voltage or potential of a nerve cell, and thus the firing of the cell, resides with the transmembrane ion channels that control the membrane currents via the ions K+, Na+, and Ca2+ and, of lesser importance, Mg2+ and Cl−. The concentration differences between the inside and outside of the cell determine the membrane voltage.
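For reference, the standard Nernst relation (a textbook result, not specific to this article) gives the equilibrium potential that a single permeant ion species would impose on the membrane:

$$E_{\text{ion}} = \frac{RT}{zF}\,\ln\frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}}$$

where R is the gas constant, T the absolute temperature, z the charge of the ion, and F the Faraday constant. At body temperature RT/F is about 26.7 mV, so for K+ with typical concentrations of roughly 140 mM inside and 5 mM outside the cell, E_K ≈ 26.7 mV × ln(5/140) ≈ −89 mV, near the resting potential of many neurons; the actual membrane voltage reflects the weighted contributions of all permeant ions.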
Precisely how these currents are controlled has become much clearer with the advances in receptor structure and G-protein coupled processes. Many receptors are found to be pentameric clusters of five transmembrane proteins (not necessarily the same) or receptor subunits, each a chain of many amino acids. Transmitters typically bind at the junction between two of these proteins, on the parts that protrude from the cell membrane. If the receptor is of the ionotropic type, a central pore or channel in the middle of the proteins will be mechanically moved to allow certain ions to flow through, thus altering the ion concentration difference. If the receptor is of the metabotropic type, G-proteins will cause metabolism inside the cell that may eventually change other ion channels. Researchers are better understanding precisely how these changes occur based on the protein structure shapes and chemical properties.
The scope of this activity has been stretched even further to the very blueprint of life since the clarification of the mechanism underlying gene transcription. The synthesis of cellular proteins from nuclear DNA has the same fundamental machinery for all cells; the exploration of which now has a firm basis thanks to the Human Genome Project which has enumerated the entire human DNA sequence, although many of the estimated 35,000 genes remain to be identified. The complete neurotransmission process extends to the genetic level. Gene expression determines protein structures through type II RNA polymerase. So enzymes which synthesize or breakdown neurotransmitters, receptors, and ion channels are each made from mRNA via the DNA transcription of their respective gene or genes. But neurotransmission, in addition to controlling ion channels either directly or otherwise through metabotropic processes, also actually modulates gene expression. This is most prominently achieved through modification of the transcription initiation process by a variety of transcription factors produced from receptor activity.
Aside from the important pharmacological possibilities of gene expression pathways, the correspondence of a gene with its protein allows the important analytical tool of gene knockout. Living specimens can be created using homologous recombination in which a specific gene cannot be expressed. The organism will then be deficient in the associated protein, which may be a specific receptor. This method avoids chemical blockade, which can produce confusing or ambiguous secondary effects, so that the effects of a lack of receptor can be studied in a purer sense.
Drugs
The inception of many classes of drugs is in principle straightforward: any chemical that can enhance or diminish the action of a target protein could be investigated further for such use. The trick is to find such a chemical that is receptor-specific (cf. "dirty drug") and safe to consume. The 2005 Physicians' Desk Reference lists twice the number of prescription drugs as the 1990 version. Many people by now are familiar with "selective serotonin reuptake inhibitors", or SSRIs which exemplify modern pharmaceuticals. These SSRI antidepressant drugs, such as Paxil and Prozac, selectively and therefore primarily inhibit the transport of serotonin which prolongs the activity in the synapse. There are numerous categories of selective drugs, and transport blockage is only one mode of action. The FDA has approved drugs which selectively act on each of the major neurotransmitters such as NE reuptake inhibitor antidepressants, DA blocker anti-psychotics, and GABA agonist tranquilizers (benzodiazepines).
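As a back-of-the-envelope illustration of why blocking reuptake prolongs synaptic activity, consider a toy model in which transmitter concentration after a release event decays first-order through clearance; all numbers below are invented, and the model ignores diffusion, autoreceptors, and enzymatic breakdown:

```python
import math

# C(t) = C0 * exp(-k * t): blocking the transporter lowers the clearance
# rate k, so the transmitter stays above an "active" level for longer.
C0 = 1.0              # relative concentration immediately after release
ACTIVE_LEVEL = 0.2    # arbitrary threshold for meaningful signaling

def time_above(k):
    """Time (ms) the concentration stays above ACTIVE_LEVEL."""
    return math.log(C0 / ACTIVE_LEVEL) / k

k_baseline = 0.010    # per-ms clearance, transporter fully active
k_inhibited = 0.004   # reduced clearance, transporter partly blocked

print(f"baseline:  active for {time_above(k_baseline):.0f} ms")
print(f"inhibited: active for {time_above(k_inhibited):.0f} ms")
```

The same qualitative logic applies to any transport-blocking drug; selectivity determines which transmitter's time course is stretched.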
New endogenous chemicals are continually identified. Specific receptors have been found for the drugs THC (cannabis) and GHB, with the endogenous transmitters anandamide and GHB. Another recent major discovery occurred in 1999 when orexin, or hypocretin, was found to have a role in arousal, since the lack of orexin receptors mirrors the condition of narcolepsy. Orexin agonism may explain the antinarcoleptic action of the drug modafinil, which was already in use just a year prior.
The next step, which major pharmaceutical companies are currently working hard to develop, are receptor subtype-specific drugs and other specific agents. An example is the push for better anti-anxiety agents (anxiolytics) based on GABAA(α2) agonists, CRF1 antagonists, and 5HT2c antagonists. Another is the proposal of new routes of exploration for antipsychotics such as glycine reuptake inhibitors. Although the capabilities exist for receptor-specific drugs, a shortcoming of drug therapy is the lack of ability to provide anatomical specificity. By altering receptor function in one part of the brain, abnormal activity can be induced in other parts of the brain due to the same type of receptor changes. A common example is the effect of D2 altering drugs (neuroleptics) which can help schizophrenia, but cause a variety of dyskinesias by their action on motor cortex.
Modern studies are revealing details of mechanisms of damage to the nervous system such as apoptosis (programmed cell death) and free-radical disruption. Phencyclidine has been found to cause cell death in striatopallidal cells and abnormal vacuolization in hippocampal and other neurons. The hallucinogen persisting perception disorder (HPPD), also known as post-psychedelic perception disorder, has been observed in patients as long as 26 years after LSD use. The plausible cause of HPPD is damage to the inhibitory GABA circuit in the visual pathway (GABA agonists such as midazolam can decrease some effects of LSD intoxication). The damage may be the result of an excitotoxic response of 5HT2 interneurons. [Note: the vast majority of LSD users do not experience HPPD. Its manifestation may depend as much on individual brain chemistry as on the drug use itself.] As for MDMA, aside from persistent losses of 5HT and SERT, long-lasting reduction of serotonergic axons and terminals is found from short-term use, and regrowth may be of compromised function.
Neural circuits
Many functions of the brain are somewhat localized to associated areas like motor and speech ability. Functional associations of brain anatomy are now being complemented with clinical, behavioral, and genetic correlates of receptor action, completing the knowledge of neural signalling (see also: Human Cognome Project). The signal paths of neurons are hyperorganized beyond the cellular scale into often complex neural circuit pathways. Knowledge of these pathways is perhaps the easiest to interpret, being most recognizable from a systems analysis point of view, as may be seen in the following abstracts.
Almost all drugs with a known potential for abuse have been found to modulate activity (directly or indirectly) in the mesolimbic dopamine system, which includes and connects the ventral tegmental area in the midbrain to the hippocampus, medial prefrontal cortex, and amygdala in the forebrain; as well as the nucleus accumbens in the ventral striatum of the basal ganglia. In particular, the nucleus accumbens (NAc) plays an important role in integrating experiential memory from the hippocampus, emotion from the amygdala, and contextual information from the PFC to help associate particular stimuli or behaviors with feelings of pleasure and reward; continuous activation of this reward indicator system by an addictive drug can also cause previously neutral stimuli to be encoded as cues that the brain is about to receive a reward. This happens via the selective release of dopamine, a neurotransmitter responsible for feelings of euphoria and pleasure. The use of dopaminergic drugs alters the amount of dopamine released throughout the mesolimbic system, and regular or excessive use of the drug can result in a long-term downregulation of dopamine signaling, even after an individual stops ingesting the drug. This can lead the individual to engage in mild to extreme drug-seeking behaviors as the brain begins to regularly expect the increased presence of dopamine and the accompanying feelings of euphoria, but how problematic this is depends highly on the drug and the situation.
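How a previously neutral cue comes to predict reward is commonly modeled with temporal-difference (TD) learning, in which a dopamine-like prediction-error signal gradually transfers value from the reward to the cue that precedes it. The sketch below is a standard textbook TD(0) toy, not a claim about actual neural biophysics; the learning rate, discount factor, and trial count are arbitrary.

```python
# Minimal TD(0) toy: a cue state is reliably followed by a reward state.
# The TD error (delta) is often likened to phasic dopamine signaling;
# all parameter values here are arbitrary illustration choices.

alpha, gamma = 0.1, 0.9          # learning rate, discount factor
V = {"cue": 0.0, "reward": 0.0}  # learned state values

for trial in range(200):
    # At the cue, no reward is delivered yet; value comes from what follows.
    delta_cue = 0.0 + gamma * V["reward"] - V["cue"]
    V["cue"] += alpha * delta_cue
    # At the reward state, a reward of 1.0 arrives and the episode ends.
    delta_reward = 1.0 - V["reward"]
    V["reward"] += alpha * delta_reward

print(V)  # V["cue"] approaches gamma * 1.0: the cue itself now predicts reward
```

After training, the once-neutral cue carries most of the predicted value, mirroring the way drug-associated stimuli can come to signal an expected reward.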
Significant progress has been made on central mechanisms of certain hallucinogenic drugs. It is at this point known with relative certainty that the primary shared effects of a broad pharmacological group of hallucinogens, sometimes called the "classical psychedelics", can be attributed largely to agonism of serotonin receptors. The 5HT2A receptor, which seems to be the most critical receptor for psychedelic activity, and the 5HT2C receptor, which is a significant target of most psychedelics but which has no clear role in hallucinogenesis, appear to act by promoting glutamate release in the frontal cortex, while in the locus coeruleus incoming sensory information is enhanced and spontaneous activity decreases. 5HT2A activity has a net pro-dopaminergic effect, whereas 5HT2C receptor agonism has an inhibitory effect on dopaminergic activity, particularly in the prefrontal cortex. One hypothesis suggests that in the frontal cortex, 5HT2A promotes late asynchronous excitatory postsynaptic potentials, a process antagonized by serotonin itself through 5HT1 receptors, which may explain why SSRIs and other serotonin-affecting drugs do not normally cause a patient to hallucinate. However, the fact that many classical psychedelics do in fact have significant affinity for 5HT1 receptors throws this claim into question. The head twitch response, a test used for assessing classical psychedelic activity in rodents, is produced by serotonin itself only in the presence of β-arrestins, but is triggered by classical psychedelics independent of β-arrestin recruitment. This may better explain the difference between the pharmacology of serotonergic neurotransmission (even if promoted by drugs such as SSRIs) and that of classical psychedelics. Newer findings, however, indicate that binding to the 5HT2A-mGlu2 heterodimer is also necessary for classical psychedelic activity. This, too, may be relevant to the pharmacological differences between the two. While early in the history of psychedelic drug research it was assumed that these hallucinations were comparable to those produced by psychosis and thus that classical psychedelics could serve as a model of psychosis, it is important to note that modern neuropsychopharmacological knowledge of psychosis has progressed significantly since then, and we now know that psychosis shows little similarity to the effects of classical psychedelics in mechanism, reported experience or most other respects aside from the surface similarity of "hallucination".
Circadian rhythm, or sleep/wake cycling, is centered in the suprachiasmatic nucleus (SCN) within the hypothalamus, and is marked by melatonin levels 2,000–4,000% higher during sleep than in the day. A circuit is known to start with melanopsin cells in the eye which stimulate the SCN through glutamate neurons of the hypothalamic tract. GABAergic neurons from the SCN inhibit the paraventricular nucleus, which signals the superior cervical ganglion (SCG) through sympathetic fibers. The output of the SCG stimulates NE receptors (β) in the pineal gland, which produces N-acetyltransferase, causing production of melatonin from serotonin. Inhibitory melatonin receptors in the SCN then provide a negative feedback pathway. Therefore, light inhibits the production of melatonin, which "entrains" the 24-hour cycle of SCN activity. The SCN also receives signals from other parts of the brain, and its (approximately) 24-hour cycle does not only depend on light patterns. In fact, sectioned tissue from the SCN will exhibit a daily cycle in vitro for many days. Additionally, the basal nucleus provides GABAergic inhibitory input to the pre-optic anterior hypothalamus (PAH). When adenosine builds up from the metabolism of ATP throughout the day, it binds to adenosine receptors, inhibiting the basal nucleus. The PAH is then activated, generating slow-wave sleep activity. Caffeine is known to block adenosine receptors, thereby inhibiting sleep among other things.
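The push-pull logic of this circuit (light suppressing melatonin synthesis while adenosine accumulates during waking) can be caricatured in a few lines of simulation. Every rate constant below is arbitrary; the sketch reproduces only the qualitative behavior described above, not any fitted physiological model.

```python
# Caricature of two mechanisms from the text: light inhibits melatonin
# production, and adenosine builds up from ATP metabolism while awake.
# All constants are arbitrary; this is a qualitative sketch, not a model fit.

melatonin, adenosine = 0.0, 0.0
for hour in range(48):                               # two simulated days
    light = 1.0 if 8 <= hour % 24 < 20 else 0.0      # crude 12 h light / 12 h dark
    # Pineal output: driven toward 1.0 in darkness, suppressed by light.
    melatonin += 0.5 * ((1.0 - light) - melatonin)
    # Adenosine: accumulates during waking (lit) hours, cleared during sleep;
    # caffeine would blunt its effect by blocking adenosine receptors.
    adenosine = max(0.0, adenosine + 0.1 * light - 0.2 * adenosine * (1.0 - light))
    if hour % 6 == 0:
        print(f"hour {hour:2d}: light={light:.0f} melatonin={melatonin:.2f} adenosine={adenosine:.2f}")
```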
Research
Research in the field of neuropsychopharmacology encompasses a wide range of objectives. These might include the study of a new chemical compound for potentially beneficial cognitive or behavioral effects, or the study of an old chemical compound in order to better understand its mechanism of action at the cell and neural circuit levels. For example, the addictive stimulant drug cocaine has long been known to act upon the reward system in the brain, increasing dopamine and norepinephrine levels and inducing euphoria for a short time. More recently published studies, however, have gone deeper than the circuit level and found that a particular G-protein coupled receptor complex called A2AR-D2R-Sigma1R is formed in the NAc following cocaine usage; this complex reduces D2R signaling in the mesolimbic pathway and may be a contributing factor to cocaine addiction. Other cutting-edge studies have focused on genetics to identify specific biomarkers that may predict an individual's specific reactions or degree of response to a drug or their tendency to develop addictions in the future. These findings are important because they provide detailed insight into the neural circuitry involved in drug use and help refine old as well as develop new treatment methods for disorders or addictions. Different treatment-related studies are investigating the potential role of peptide nucleic acids in treating Parkinson's disease and schizophrenia, while still others are attempting to establish previously unknown neural correlates underlying certain phenomena.
Research in neuropsychopharmacology comes from a wide range of activities in neuroscience and clinical research. This has motivated the establishment of organizations such as the American College of Neuropsychopharmacology (ACNP), the European College of Neuropsychopharmacology (ECNP), and the Collegium Internationale Neuro-psychopharmacologicum (CINP) to provide a focus for the field.
The ECNP publishes European Neuropsychopharmacology as part of the Reed Elsevier Group, the ACNP publishes the journal Neuropsychopharmacology, and the CINP publishes the International Journal of Neuropsychopharmacology with Cambridge University Press.
In 2002, a comprehensive collected work of the ACNP, "Neuropsychopharmacology: The Fifth Generation of Progress", was compiled. It is one measure of the state of knowledge in 2002, and might be said to represent a landmark in the century-long goal of establishing the basic neurobiological principles which govern the actions of the brain.
Many other journals exist which contain relevant information, such as Neuroscience; some of them are listed at the Brown University Library.
See also
Pharmacology
Neuropharmacology
Psychopharmacology
Psychoactive drug
Notes
References
("4th Gen." and "5th Gen." refer to ACNP, see links)
"The history of HCNP: Exchanging information and catalysing progress", ECNP Newsletter, N7 (2004)
Fujita, M. and Innis, R. B., "In vivo Molecular Imaging: Ligand Development And Research Applications", (5th Gen. Prog.)
Tallman, J. F., "Neuropsychopharmacology at the New Millennium: New Industry Directions", Neuropsychopharmacology 20 (1999)
Bloom, F. E., "Introduction to Preclinical Neuropsychopharmacology", (4th Gen. Prog.)
Watson, S. J. and Cullinan, W. E., "Cytology and Circuitry", (4th Gen. Prog.)
Physicians' Desk Reference, 1990, 2005
Erowid, "The Neuropharmacology of γ-hydroxybutyrate (GHB)" (2004)
Tallman, J. F., Cassella, J., Kehne, J., "Mechanism Of Action Of Anxiolytics", (5th Gen. Prog.)
Depoortère, R., et al., "Neurochemical, Electrophysiological and Pharmacological Profiles of the Selective Inhibitor of the Glycine Transporter-1 SSR504734, a Potential New Type of Antipsychotic", Neuropsychopharmacology 30, pp. 1963–1985 (2005)
Abraham, H. D., Mccann, U. D., Ricaurte, G. A., "Psychedelic Drugs", (5th Gen. Prog.)
Colwell, C. S., "Circadian Rhythms", (4th Gen. Prog.)
Lewy, A. J., "Circadian Phase Sleep And Mood Disorders", (5th Gen. Prog.)
External links
ACNP resources:
American College of Neuropsychopharmacology
Neuropsychopharmacology: The Fifth Generation of Progress
Psychopharmacology: The Fourth Generation of Progress
Organisations:
Collegium Internationale Neuro-psychopharmacologicum – A global organisation dedicated to neuropsychopharmacology
European College of Neuropsychopharmacology
Journals:
Neuropsychopharmacology Journal – Official publication of the American College of Neuropsychopharmacology
European Neuropsychopharmacology – An Elsevier journal
The International Journal of Neuropsychopharmacology – A Cambridge University Press publication
Neuropsychopharmacology and Therapeutics by Ivor Ebenezer (2015), John Wiley & Sons, Chichester, UK.
Neuropsychology
Neuropharmacology
Behavioral neuroscience
Psychopharmacology
Autodidacticism
Autodidacticism (also autodidactism) or self-education (also self-learning, self-study and self-teaching) is the practice of education without the guidance of schoolmasters (i.e., teachers, professors, institutions).
Overview
Autodidacts are self-taught people who learn about a subject through self-study. This educative process may involve or complement formal education. Formal education itself may have a hidden curriculum that requires self-study for the uninitiated.
Generally, autodidacts are individuals who choose the subject they will study, their studying material, and the studying rhythm and time. Autodidacts may or may not have formal education, and their study may be either a complement or an alternative to formal education. Many notable contributions have been made by autodidacts.
The self-learning curriculum is open-ended. One may seek out alternative pathways in education and use these to gain competency; self-study may meet some prerequisite curricular criteria for experiential education or apprenticeship.
Self-education techniques used in self-study can include reading educational textbooks, watching educational videos, listening to educational audio recordings, or visiting infoshops. Learners typically set aside some space as a learning space, where they use critical thinking to develop study skills within the broader learning environment until they reach an academic comfort zone.
Etymology
The term has its roots in the Ancient Greek words αὐτός (autós, "self") and διδακτικός (didaktikós, "teaching"). The related term didacticism defines an artistic philosophy of education.
Terminology
Various terms are used to describe self-education. One such is heutagogy, coined in 2000 by Stewart Hase and Chris Kenyon of Southern Cross University in Australia; others are self-directed learning and self-determined learning. In the heutagogy paradigm, a learner should be at the centre of their own learning. A truly self-determined learning approach also sees the heutagogic learner exploring different approaches to knowledge in order to learn; there is an element of experimentation underpinned by a personal curiosity.
Andragogy "strive[s] for autonomy and self-direction in learning", while Heutagogy "identif[ies] the potential to learn from novel experiences as a matter of course [...] manage their own learning". Ubuntugogy is a type of cosmopolitanism that has a collectivist ethics of awareness concerning the African diaspora.
Modern era
Autodidacticism is sometimes a complement of modern formal education. As a complement to formal education, students may be encouraged to do more independent work. The Industrial Revolution created a new situation for self-directed learners.
Before the twentieth century, only a small minority of people received an advanced academic education. As Joseph Whitworth noted in his influential 1853 report on industry, literacy rates were higher in the United States. However, even in the U.S., most children were not completing high school, and a high school education was necessary to become a teacher. In modern times, a larger percentage of those completing high school also attended college, usually to pursue a professional degree, such as law or medicine, or a divinity degree.
Collegiate teaching was based on the classics (Latin, philosophy, ancient history, theology) until the early nineteenth century. There were few if any institutions of higher learning offering studies in engineering or science before 1800. Institutions such as the Royal Society did much to promote scientific learning, including public lectures. In England, there were also itinerant lecturers offering their service, typically for a fee.
Prior to the nineteenth century, there were many important inventors working as millwrights or mechanics who, typically, had received an elementary education and served an apprenticeship. Mechanics, instrument makers and surveyors had varying amounts of mathematical training. James Watt was a surveyor and instrument maker and is described as being "largely self-educated". Watt, like some other autodidacts of the time, became a Fellow of the Royal Society and a member of the Lunar Society. In the eighteenth century these societies often gave public lectures and were instrumental in teaching chemistry and other sciences with industrial applications which were neglected by traditional universities. Academies also arose to provide scientific and technical training.
Years of schooling in the United States began to increase sharply in the early twentieth century. This phenomenon was seemingly related to increasing mechanization displacing child labor. The automated glass bottle-making machine is said to have done more for education than child labor laws because boys were no longer needed to assist. However, the number of boys employed in this particular industry was not that large; it was mechanization in several sectors of industry that displaced child labor toward education. For males in the U.S. born 1886–90, years of school averaged 7.86, while for those born in 1926–30, years of school averaged 11.46.
One of the most recent trends in education is that the classroom environment should cater to students' individual needs, goals, and interests. This model adopts the idea of inquiry-based learning, in which students are presented with scenarios to identify their own research questions and knowledge regarding the area. As a form of discovery learning, students in today's classrooms are being provided with more opportunity to "experience and interact" with knowledge, which has its roots in autodidacticism.
Successful self-teaching can require self-discipline and reflective capability. Some research suggests that the ability to regulate one's own learning may need to be modeled to some students so that they become active learners, while others learn dynamically via a process outside conscious control. To interact with the environment, a framework has been identified to determine the components of any learning system: a reward function, incremental action value functions and action selection methods. Rewards work best in motivating learning when they are specifically chosen on an individual student basis. New knowledge must be incorporated into previously existing information as its value is to be assessed. Ultimately, these scaffolding techniques, as described by Vygotsky (1978) and problem solving methods are a result of dynamic decision making.
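The three components named above (a reward function, incremental action-value functions, and an action-selection method) are exactly the skeleton of a simple reinforcement-learning agent, so a minimal sketch may make them concrete. The payoffs and parameters below are invented for illustration and do not model any particular learner.

```python
import random

# The three components named in the text, made concrete as a simple
# epsilon-greedy two-action learner. Payoffs and parameters are invented.

def reward(action: int) -> float:
    """Reward function: noisy payoff; action 1 is better on average."""
    return random.gauss([0.2, 0.8][action], 0.1)

Q = [0.0, 0.0]           # incremental action-value estimates
alpha, epsilon = 0.1, 0.1

def select_action() -> int:
    """Action selection: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.randrange(len(Q))
    return max(range(len(Q)), key=lambda a: Q[a])

for step in range(1000):
    a = select_action()
    Q[a] += alpha * (reward(a) - Q[a])   # incremental value update

print([round(q, 2) for q in Q])  # estimates approach the true means 0.2 and 0.8
```

Rewards chosen on an individual basis, as the text notes, would correspond here to shaping the reward function per learner; the incremental update shows how new information is assessed against previously stored values.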
In his book Deschooling Society, philosopher Ivan Illich strongly criticized 20th-century educational culture and the institutionalization of knowledge and learning, arguing that institutional schooling as such is an irretrievably flawed model of education. He advocated instead ad-hoc cooperative networks through which autodidacts could find others interested in teaching themselves a given skill or about a given topic, supporting one another by pooling resources, materials, and knowledge.
Secular and modern societies have given foundations for new systems of education and new kinds of autodidacts. As Internet access has become more widespread, the World Wide Web (explored using search engines such as Google) in general, and websites such as Wikipedia (including parts of it that were included in a book or referenced in a reading list), YouTube, Udemy, Udacity and Khan Academy in particular, have developed as learning centers for many people to actively and freely learn together. Organizations like The Alliance for Self-Directed Education (ASDE) have been formed to publicize and provide guidance for self-directed education. Entrepreneurs like Henry Ford, Steve Jobs, and Bill Gates are considered influential self-teachers.
History
The first philosophical claim supporting an autodidactic program to the study of nature and God was in the philosophical novel Hayy ibn Yaqdhan (Alive son of the Vigilant), whose titular hero is considered the archetypal autodidact. The story is a medieval autodidactic utopia, a philosophical treatise in a literary form, which was written by the Andalusian philosopher Ibn Tufail in the 1160s in Marrakesh. It is a story about a feral boy, an autodidact prodigy who masters nature through instruments and reason, discovers laws of nature by practical exploration and experiments, and gains summum bonum through a mystical meditation and communion with God. The hero rises from his initial state of tabula rasa to a mystical or direct experience of God after passing through the necessary natural experiences. The focal point of the story is that human reason, unaided by society and its conventions or by religion, can achieve scientific knowledge, preparing the way to the mystical or highest form of human knowledge.
Commonly translated as "The Self-Taught Philosopher" or "The Improvement of Human Reason", Ibn-Tufayl's story Hayy Ibn-Yaqzan inspired debates about autodidacticism in a range of historical fields from classical Islamic philosophy through Renaissance humanism and the European Enlightenment. In his book Reading Hayy Ibn-Yaqzan: a Cross-Cultural History of Autodidacticism, Avner Ben-Zaken showed how the text traveled from late medieval Andalusia to early modern Europe and demonstrated the intricate ways in which autodidacticism was contested in and adapted to diverse cultural settings.
Autodidacticism apparently intertwined with struggles over Sufism in twelfth-century Marrakesh; with controversies about the role of philosophy in pedagogy in fourteenth-century Barcelona; with quarrels concerning astrology in Renaissance Florence, in which Pico della Mirandola pleaded for autodidacticism against the intellectual establishment's strong notions of predestination; and with debates pertaining to experimentalism in seventeenth-century Oxford. Pleas for autodidacticism echoed not only within close philosophical discussions; they also surfaced in struggles for control between individuals and establishments.
In the story of Black American self-education, Heather Andrea Williams presents a historical account examining Black Americans' relationship to literacy during slavery, the Civil War, and the first decades of freedom. Many of the personal accounts tell of individuals who had to teach themselves due to racial discrimination in education.
In architecture
Many successful and influential architects, such as Mies van der Rohe, Frank Lloyd Wright, Viollet-le-Duc, and Tadao Ando, were self-taught.
Very few countries allow autodidacticism in architecture today. The practice of architecture and the use of the title "architect" are now protected in most countries.
Self-taught architects have generally studied and qualified in other fields such as engineering or arts and crafts. Jean Prouvé was first a structural engineer. Le Corbusier had an academic qualification in decorative arts. Tadao Ando started his career as a draftsman, and Eileen Gray studied fine arts.
When a political state starts to implement restrictions on the profession, there are issues related to the rights of established self-taught architects. In most countries the legislation includes a grandfather clause, authorising established self-taught architects to continue practicing. In the UK, the legislation allowed self-trained architects with two years of experience to register. In France, it allowed self-trained architects with five years of experience to register. In Belgium, the law allowed experienced self-trained architects in practice to register. In Italy, it allowed self-trained architects with 10 years of experience to register. In the Netherlands, the relevant legislation, along with additional procedures, allowed architects with 10 years of experience, and architects aged 40 or over with 5 years of experience, to access the register.
However, other sovereign states chose to omit such a clause, and many established and competent practitioners were stripped of their professional rights. In the Republic of Ireland, a group named "Architects' Alliance of Ireland" is defending the interests of long-established self-trained architects who were deprived of their rights to practice as per Part 3 of the Irish Building Control Act 2007.
Theoretical research such as Architecture of Change: Sustainability and Humanity in the Built Environment, or older studies such as Vers une Architecture by Le Corbusier, describes the practice of architecture as an environment changing with new technologies, sciences, and legislation. All architects must be autodidacts to keep up to date with new standards, regulations, or methods.
Self-taught architects such as Eileen Gray, Luis Barragán, and many others, created a system where working is also learning, where self-education is associated with creativity and productivity within a working environment.
While he was primarily interested in naval architecture, William Francis Gibbs learned his profession through his own study of battleships and ocean liners. Throughout his life he could be seen examining and changing the designs of ships that were already built, that is, until he started his firm Gibbs and Cox.
Predictors
Openness is the largest predictor of self-directed learning out of the Big Five personality traits, though, in a study, personality only explained 10% of the variance in self-directed learning.
Future role
The role of self-directed learning continues to be investigated in learning approaches, along with other important goals of education, such as content knowledge, epistemic practices and collaboration. As colleges and universities offer distance learning degree programs and secondary schools provide cyber school options for K–12 students, technology provides numerous resources that enable individuals to have a self-directed learning experience. Several studies show these programs function most effectively when the "teacher" or facilitator is a full owner of virtual space to encourage a broad range of experiences to come together in an online format. This allows self-directed learning to encompass both a chosen path of information inquiry, self-regulation methods and reflective discussion among experts as well as novices in a given area. Furthermore, massive open online courses (MOOCs) make autodidacticism easier and thus more common.
A 2016 Stack Overflow poll reported that due to the rise of autodidacticism, 69.1% of software developers appear to be self-taught.
Notable individuals
Some notable autodidacts can be broadly grouped in the following interdisciplinary areas:
Artists and authors
Actors, musicians, and other artists
Architects
Engineers and inventors
Scientists, historians, and educators
Educational materials availability
Most governments provide compulsory education that may nonetheless deny the right to education on the basis of discrimination, and state school teachers may unwittingly indoctrinate students into the ideology of an oppressive community or government via a hidden curriculum.
See also
References
Further reading
External links
African-American society
African Americans and education
Alternative education
Applied learning
Area studies
Black studies
Cybernetics
Education activism
Education theory
Education in Poland during World War II
Education museums in the United States
Espionage
History of education in the United States
Information sensitivity
Learning
Learning methods
Learning to read
Lyceum movement
Methodology
Open content
Pedagogical movements and theories
Philosophical methodology
Philosophy of education
Play (activity)
Pre-emancipation African-American history
Problem solving methods
Research methods
Sampling (statistics)
School desegregation pioneers
Science experiments
Self-care
Teaching
Underground education
United States education law
WikiLeaks
Basic research
Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development.
In addition to fueling innovation, basic research provides insight into the nature around us and allows us to respect its innate value. The development of this respect is what drives conservation efforts; through learning about the environment, conservation efforts can be strengthened using research as a basis. Technological innovations can also arise unintentionally from such work, as when the kingfisher's beak influenced the design of high-speed bullet trains in Japan.
Overview
Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common.
Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future.
History
By country
In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important.
Basic versus applied science
Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation, which described the motivation of the basic researcher as follows: "A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. Creativeness in science is of a cloth with that of the poet or painter." The NSF also conducted a study in which it traced the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. This study found that basic research played a key role in the development of all of the innovations. The amount of basic research that assisted in the production of a given innovation peaked between 20 and 30 years before the innovation itself. While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities.
A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards.
The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science.
See also
Blue skies research
Hard and soft science
Metascience
Normative science
Physics
Precautionary principle
Pure mathematics
Pure Chemistry
References
Further reading
Research
Cognitive test
Cognitive tests are assessments of the cognitive capabilities of humans and other animals. Tests administered to humans include various forms of IQ tests; those administered to animals include the mirror test (a test of visual self-awareness) and the T maze test (which tests learning ability). Such testing is used in psychology and psychometrics, as well as other fields studying human and animal intelligence.
Modern cognitive tests originated through the work of James McKeen Cattell who coined the term "mental tests". They followed Francis Galton's development of physical and physiological tests. For example, Galton measured strength of grip and height and weight. He established an "Anthropometric Laboratory" in the 1880s where patrons paid to have physical and physiological attributes measured. Galton's measurements had an enormous influence on psychology. Cattell continued the measurement approach with simple measurements of perception. Cattell's tests were eventually abandoned in favor of the battery test approach developed by Alfred Binet.
List of human tests
Human tests of cognitive ability measure a wide spectrum of mental abilities. When considering tests of cognitive ability, it is paramount to consider evidence for their reliability, validity, length, and mode of administration (e.g., some assessments require a trained administrator to be present with the respondent). It is also essential to understand which cognitive abilities are measured by each test and sub-test. There are also free, searchable websites with compendia of tests and the constructs they measure. Below are a small sample of some of the best-known measures of cognitive abilities and brief descriptions of their content:
Inductive reasoning tests
Inductive reasoning aptitude: Also known as abstract reasoning tests and diagrammatic-style tests, these examine a person's problem-solving skills. They are used to "measure the ability to work flexibly with unfamiliar information to find solutions." These tests are often presented as a set of patterns or sequences, with the test-taker determining what does or does not belong.
Intelligence quotient
Situational judgement test: A situational judgement test is used to examine how an individual responds to certain situations. Oftentimes these tests include a scenario with multiple responses, with the user selecting which response they feel is the most appropriate given the situation. This is used to assess how the user would respond to certain situations that may arise in the future.
Working memory
Intelligence tests
Kohs block design test: "The Kohs Block Design Test is a non-verbal assessment of executive functioning, useful with the language and hearing impaired"
Mental age
Miller Analogies Test: According to Pearson Assessments, the Miller Analogies Test is used to determine a student's ability to think analytically. The test is 60 minutes long and is used by schools to identify students who are able to think analytically rather than only "memorizing and repeating information".
Otis–Lennon School Ability Test: The OLSAT is a multiple-choice exam administered to students anywhere from Pre-K to 12th grade, used to identify which students are intellectually gifted. Students need to be able to: "Follow directions, detect likenesses and differences, recall words and numbers, classify items, establish sequences, solve arithmetic problems, and complete analogies." The test consists of a mixture of verbal and non-verbal sections, helping inform schools of a student's "verbal, nonverbal, and quantitative ability".
Raven's Progressive Matrices: The Raven's Progressive Matrices is a nonverbal test consisting of 60 multiple choice questions. This test is used to measure the individual's abstract reasoning, and is considered a nonverbal way to test an individual's "fluid intelligence."
Stanford–Binet Intelligence Scales: By measuring the memory, reasoning, knowledge, and processing power of the user, this test is able to determine "an individual's overall intelligence, cognitive ability, and detect any cognitive impairment or learning disabilities." This test measures five factors of cognitive ability, which are as follows: "fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing and working memory."
Wechsler Adult Intelligence Scale: The Wechsler Adult Intelligence Scale (WAIS) is used to determine and assess the intelligence of the participant. This is one of the more common tests used to measure an individual's intelligence quotient. The test has been revised multiple times since its creation, from the original WAIS in 1955 to the WAIS-R in 1981, the WAIS-III in 1996, and most recently the WAIS-IV in 2008. The test helps assess the level of the individual's verbal comprehension, perceptual reasoning, working memory, and processing speed.
Wechsler Intelligence Scale for Children: The Wechsler Intelligence Scale for Children (WISC) is for children within the age range of six to sixteen years old. While this test can be used to help determine a child's intelligence quotient, it is often used to determine a child's cognitive abilities. First introduced in 1949, the WISC is now on its fifth edition (WISC-V), most recently updated in 2014. Similar to the WAIS (Wechsler Adult Intelligence Scale), this test helps assess the level of the individual's verbal comprehension, perceptual reasoning, working memory, and processing speed.
Wechsler Preschool and Primary Scale of Intelligence: The Wechsler Preschool and Primary Scale of Intelligence (WPPSI) is used to assess the cognitive ability of children ages two years and six months old to seven years and seven months old. The current version of the test is the fourth edition (WPPSI-IV). Children between the ages of two years and six months old and three years and 11 months old are tested on the following: "block design, information, object assembly, picture naming, and receptive vocabulary". Children between the ages of four years old and seven years and seven months old are tested on the following: "coding, comprehension, matrix reasoning, picture completion, picture concepts, similarities, symbol search, vocabulary, and word reasoning."
Wonderlic test: The Wonderlic test is a multiple-choice test consisting of 50 questions within a 12-minute time frame. Throughout the test, the questions become progressively more difficult. The test is used to determine not only the individual's intelligence quotient, but also the strengths and weaknesses of the individual. The test consists of questions spanning "English, reading, math, and logic problems". The Wonderlic test is famously used by NFL teams to help gain a better understanding of college prospects during the NFL combine.
Cognitive development tests
Cambridge Neuropsychological Test Automated Battery: The Cambridge Neuropsychological Test Automated Battery (CANTAB) is used to assess the "neuro-cognitive dysfunctions associated with neurologic disorders, pharmacologic manipulations, and neuro-cognitive syndromes." CANTAB is a computer-based program from Cambridge Cognition, and can test for "working memory, learning and executive function; visual, verbal and episodic memory; attention, information processing and reaction time; social and emotion recognition, decision making and response control."
CAT4: The Cognitive Ability Test was developed by GL Education and is used to predict student success through the evaluation of verbal, non-verbal, mathematical, and spatial reasoning. It is being used by many international schools as part of their admissions process.
CDR computerized assessment system: The Cognitive Drug Research computerized assessment system is used to help determine if a drug has "cognitive-impairing properties". It is also used to "ensure that unwanted interactions with alcohol and other medications do not occur, or, if they do, to put them in context."
Cognitive bias
Cognitive pretesting: Cognitive pretests are used to evaluate the "comprehensibility of questions", usually given on a survey. This gives the surveyors a better understanding of how their questions are being perceived, and the "quality of the data" that is gained from the survey.
Draw-a-Person test: The Draw-a-Person test can be used on children, adolescents, and adults. It is most commonly used as a test for children and adolescents to assess their cognitive and intellectual ability by scoring their ability to draw human figures.
Knox Cubes: The Knox Cube Imitation Test (KCIT) is a nonverbal test used to assess intelligence. The creator of the KCIT, Howard A. Knox, described the test as: "Four 1-inch [black] cubes, 4 inches apart, are fastened to a piece of thin boarding. The movements and tapping are done with a smaller cube. The operator moves the cube from left to right facing the subject, and after completing each movement, the latter is asked to do likewise. Line a is tried first, then b, and so on to e. Three trials are given if necessary on lines a, b, c, and d, and five trials if needed on line e. To obtain the correct perspective the subject should be two feet from the cubes. The movements of the operator should be slow and deliberate."
Modern Language Aptitude Test
Multiple choice: The style of multiple choice examination was expanded upon in 1934 when IBM introduced a "test scoring machine" that electronically sensed the location of lead pencil marks on a scanning sheet. This further increased the efficiency of scoring multiple-choice items and created a large-scale educational testing method.
Pimsleur Language Aptitude Battery:
Grade levels: 6, 7, 8, 9, 10, 11, 12
Proficiency level: Beginner
Intended test use: placement, admission, fulfilling a requirement, aptitude
Skills tested: listening, grammar, vocabulary
Test length: 50–60 minutes
Test materials: reusable test booklet, consumable answer sheet, consumable performance chart and report to parents, test administrator manual, audio CD, scoring stencil for test administrator
Test format: multiple choice
Scoring method: number correct
Results reported: percentile, raw score
Administered by: trained testers, classroom teachers, school administrators
Administration time period: prior to foreign language study, at discretion of guidance counselor, school psychologist, or other administration
Porteus Maze test:
A supplement to the Stanford–Binet Intelligence Test.
PMT performance seems to be a valid indicator of planning and behavioral disinhibition across socioeconomic status and culture, can be administered without the use of language, and is inexpensive. The PMT also has a relatively short administration time of 10–15 minutes.
Consensus based assessment
Knowledge organization: Features Ranganathan's PMEST formula (Personality, Matter, Energy, Space and Time), consisting of five fundamental categories whose arrangement is used to establish the facet order.
Knowledge hierarchies
Memory
Iconic memory
Long-term memory
Short-term memory
Semantic memory
Episodic memory
Visual short-term memory
Working memory
Self
Intelligent self-assessment
Rouge test
Mirror test
Metacognition
The Sally–Anne test (the ability to attribute false beliefs to others): This test has been used in psychological research to investigate theory of mind. It has been suggested that lacking a theory of mind may underlie some of the communication difficulties experienced by individuals with autism.
Thought
Mental chronometry
Neuropsychological tests: These are standardized tests which are given in the same manner to all examinees and scored in a similar fashion. The examinees' scores are interpreted by comparing them to those of healthy individuals of a similar demographic background and to standard levels of functioning; a toy example of such norm-referenced scoring follows this list.
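Comparing an examinee's raw score with a demographically matched normative sample usually means converting the raw score to a z-score and percentile. The sketch below shows only the arithmetic; the normative mean, standard deviation, and raw score are invented, and real test norms are rarely this simple.

```python
from statistics import NormalDist

# Norm-referenced scoring: compare a raw score to a matched normative sample.
# The normative statistics and raw score below are invented for illustration.

norm_mean, norm_sd = 50.0, 10.0   # hypothetical normative sample statistics
raw_score = 38.0                  # hypothetical examinee result

z = (raw_score - norm_mean) / norm_sd
percentile = NormalDist().cdf(z) * 100   # assumes roughly normal norms

print(f"z = {z:.2f}, percentile = {percentile:.1f}")
# z = -1.20, percentile ~ 11.5: the examinee scores lower than about 88% of peers
```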
List of animal tests
Memory
Working memory
Delayed-non-matching to place
Trial-unique delayed non-matching to location
Short-term memory
Y-maze
Long-term memory
Morris water maze
Contextual fear conditioning
Attention
5-choice serial reaction time task
Executive control
Reversal learning
Motivation
Fixed-ratio/Progressive ratio task
See also
Cognition
Intelligence quotient
References
Further reading
Psychological tests and scales
Cognitive science
Cognitive science is the interdisciplinary, scientific study of the mind and its processes. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
History
The cognitive sciences began as an intellectual movement in the 1950s, called the cognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima). Modern philosophers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke rejected scholasticism while mostly having never read Aristotle, and they were working with an entirely different set of tools and core concepts than those of the cognitive scientist.
The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks.
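The McCulloch–Pitts abstraction reduces a neuron to a weighted threshold unit: it fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and is silent (0) otherwise. The sketch below implements that classic unit and wires it as a logical AND gate; the weights and threshold are the usual textbook choice, not values drawn verbatim from the original 1943 paper.

```python
# The classic McCulloch-Pitts abstraction: a binary threshold unit.
# Weights and threshold below are standard textbook values for an AND gate.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active for the unit to fire.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron((a, b), weights=(1, 1), threshold=2))
```

Networks of such units can compute Boolean functions, which is what made the model attractive as a bridge between logic and neural structure.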
Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Kurt Gödel, Alonzo Church, Alan Turing, and John von Neumann were instrumental in these developments. The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation.
The first instance of cognitive science experiments being done at an academic institution took place at MIT Sloan School of Management, established by J.C.R. Licklider working within the psychology department and conducting experiments using computer memory as models for human cognition.
In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order.
The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. The founding meeting of the Cognitive Science Society was held at the University of California, San Diego in 1979, which resulted in cognitive science becoming an internationally visible enterprise. In 1972, Hampshire College started the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science. In 1986, the first Cognitive Science Department in the world was founded at the University of California, San Diego.
In the 1970s and early 1980s, as access to computers increased, artificial intelligence research expanded. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI".
Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed to be unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 1980s and 1990s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation. While both connectionism and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower level brain functions, neither is biologically realistic and therefore both suffer from a lack of neuroscientific plausibility. Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific / domain-general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between them and environmental input.
Recent developments in quantum computation, including the ability to run quantum circuits on quantum computers such as IBM Quantum Platform, has accelerated work using elements from quantum mechanics in cognitive models.
Principles
Levels of analysis
A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation, or naturalistic observation. A person could be presented with a phone number and be asked to recall it after some delay of time; then the accuracy of the response could be measured. Another approach to measure cognitive ability would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real-time were available and it were known when each neuron fired it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative. Francisco Varela, in The Embodied Mind: Cognitive Science and Human Experience, argues that "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience". On the classic cognitivist view, this can be provided by a functional level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior.
Marr gave a famous description of three levels of analysis; a toy code analogy illustrating the distinction appears after the list:
The computational theory, specifying the goals of the computation;
Representation and algorithms, giving a representation of the inputs and outputs and the algorithms which transform one into the other; and
The hardware implementation, or how algorithm and representation may be physically realized.
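As a loose software analogy for the distinction between the first two levels: "return the input in ascending order" is a computational-level specification, while insertion sort and merge sort are two different algorithmic-level realizations of that same function; the hardware level would correspond to the physical machine running either one. The sketch below is only an analogy, since Marr's levels concern cognitive systems rather than programs.

```python
# One computational-level specification ("return the input in ascending
# order") realized by two different algorithmic-level procedures.

def insertion_sort(xs):
    out = []
    for x in xs:                      # insert each element into place
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2                # divide, recursively sort, then merge
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert insertion_sort(data) == merge_sort(data) == sorted(data)
print(sorted(data))  # same function at the computational level
```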
Interdisciplinary nature
Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology and biology. Cognitive scientists work collectively in hope of understanding the mind and its interactions with the surrounding world much like other sciences do. The field regards itself as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human cognition. Similarly to the field of psychology, there is some doubt whether there is a unified cognitive science, which has led some researchers to prefer 'cognitive sciences' in plural.
Many, but not all, who consider themselves cognitive scientists hold a functionalist view of the mind: the view that mental states and processes should be explained by their function, that is, by what they do. According to the multiple realizability account of functionalism, even non-human systems such as robots and computers can be ascribed cognition.
Cognitive science: the term
The term "cognitive" in "cognitive science" is used for "any kind of mental operation or structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions of analytic philosophy, where "cognitive" has to do only with formal rules and truth-conditional semantics.
The earliest entries for the word "cognitive" in the OED take it to mean roughly "pertaining to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge. Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.
Scope
Cognitive science is a large field, and covers a wide array of topics on cognition. However, it should be recognized that cognitive science has not always been equally concerned with every topic that might bear relevance to the nature and operation of minds. Classical cognitivists have largely de-emphasized or avoided social and cultural factors, embodiment, emotion, consciousness, animal cognition, and comparative and evolutionary psychologies. However, with the decline of behaviorism, internal states such as affects and emotions, as well as awareness and covert attention became approachable again. For example, situated and embodied cognition theories take into account the current state of the environment as well as the role of the body in cognition. With the newfound emphasis on information processing, observable behavior was no longer the hallmark of psychological theory, but the modeling or recording of mental states.
Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list. See List of cognitive science topics for a list of various aspects of the field.
Artificial intelligence
Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured.
There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemes, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic artificial intelligence. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain.
Attention
Attention is the selection of important information. The human mind is bombarded with millions of stimuli and it must have a way of deciding which of this information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are bombarded with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.
The psychological construct of Attention is sometimes confused with the concept of Intentionality due to some degree of semantic ambiguity in their definitions. At the beginning of experimental research on Attention, Wilhelm Wundt defined this term as "that psychical process, which is operative in the clear perception of the narrow region of the content of consciousness." His experiments showed the limits of Attention in space and time, which were 3–6 letters during an exposure of 1/10 s. Because this notion developed within the framework of its original meaning over a hundred years of research, a definition of Attention should account for the main features initially attributed to the term: it is a process of controlling thought that continues over time. While Intentionality is the power of minds to be about something, Attention is the concentration of awareness on some phenomenon during a period of time, which is necessary to bring the narrow region of the content of consciousness into clear perception and which makes it feasible to control this focus in the mind.
The significance of knowledge about the scope of attention for studying cognition is that it constrains the intellectual functions of cognition, such as apprehension, judgment, reasoning, and working memory. How far the scope of attention develops shapes how the mind perceives, remembers, considers, and evaluates in making decisions: the more details associated with an event the mind can grasp for comparison, association, and categorization, the more closely its apprehension, judgment, and reasoning about the event accord with reality. According to Latvian professor Sandra Mihailova and professor Igor Val Danilov, the more elements of a phenomenon (or phenomena) the mind can keep in the scope of attention simultaneously, the greater the number of reasonable combinations within that event it can achieve, enhancing the probability of better understanding the features and particularity of the phenomenon. For example, three items in the focal point of consciousness yield six possible combinations (3 factorial) and four items yield 24 (4 factorial). The number becomes significant with six items, which allow 720 possible combinations (6 factorial).
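The factorial figures quoted above are easy to check directly. The following minimal Python sketch, an illustration only and not part of the cited work, reproduces the counts for three, four, and six items:

```python
import math

# The number of ordered combinations (permutations) of n items held
# simultaneously in the scope of attention grows as n factorial.
for n in [3, 4, 6]:
    print(f"{n} items -> {math.factorial(n)} possible combinations")
# 3 items -> 6 possible combinations
# 4 items -> 24 possible combinations
# 6 items -> 720 possible combinations
```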
Bodily processes related to cognition
Embodied cognition approaches to cognitive science emphasize the role of body and environment in cognition. This includes both neural and extra-neural bodily processes, and factors that range from affective and emotional processes, to posture, motor control, proprioception, and kinaesthesis, to autonomic processes that involve heartbeat and respiration, to the role of the enteric gut microbiome. It also includes accounts of how the body engages with or is coupled to social and physical environments. 4E (embodied, embedded, extended and enactive) cognition includes a broad range of views about brain-body-environment interaction, from causal embeddedness to stronger claims about how the mind extends to include tools and instruments, as well as the role of social interactions, action-oriented processes, and affordances. 4E theories range from those closer to classic cognitivism (so-called "weak" embodied cognition) to stronger extended and enactive versions that are sometimes referred to as radical embodied cognitive science.
Knowledge and processing of language
The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in theoretical linguistics is discovering what properties language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned? (2) Why is it more difficult for adults to acquire a second language than it is for infants to acquire their first language? (3) How are humans able to understand novel sentences?
The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonetics, phonology, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction.
The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration.
Learning and development
Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place.
A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature and nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience.
Memory
Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes).
Memory is also often grouped into declarative and procedural forms. Declarative memory—grouped into subsets of semantic and episodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Are apples food?", or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory.
Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and on the interrelationship between cognition and memory. For example: what mental processes does a person go through to retrieve a long-lost memory? And what differentiates the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) from recall (retrieving a memory, as in "fill-in-the-blank")?
Perception and action
Perception is the ability to take in information via the senses and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects? (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is to look at how people process optical illusions. The Necker cube is an example of a bistable percept: the cube can be interpreted as being oriented in either of two directions.
The study of haptic (tactile), olfactory, and gustatory stimuli also falls within the domain of perception.
Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action.
Consciousness
Consciousness is the awareness of experiences within oneself. It gives the mind the ability to experience or feel a sense of self.
Research methods
Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory.
Behavioral experiments
In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices are when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant).
Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes and can reveal something about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial rather than parallel processing.
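As a rough illustration of this logic, the toy simulation below generates hypothetical reaction times under a serial and a parallel search model; the base time, per-item cost, and noise level are invented parameters, not empirical values:

```python
import random

def search_rt(set_size, mode, base_ms=300, per_item_ms=40):
    """Toy reaction-time model with made-up parameters."""
    noise = random.gauss(0, 15)  # trial-to-trial variability
    if mode == "serial":
        # each item is inspected in turn: RT grows with set size
        return base_ms + per_item_ms * set_size + noise
    # parallel: all items processed at once, RT roughly flat
    return base_ms + noise

for n in (4, 8, 16):
    print(n, round(search_rt(n, "serial")), round(search_rt(n, "parallel")))
```

A roughly linear increase with set size in the first column, against a flat second column, is the signature that distinguishes serial from parallel search in such data.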
Psychophysical responses. Psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g. the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Some examples include:
sameness judgments for colors, tones, textures, etc.
threshold differences for colors, tones, textures, etc.
Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention. Thus, by monitoring eye movements, we can study what information is being processed at a given time. Eye tracking allows us to study cognitive processes on extremely short time scales. Eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed.
Brain imaging
Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience.
Single-photon emission computed tomography and positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. PET has similar spatial resolution to fMRI, but it has extremely poor temporal resolution.
Electroencephalography. EEG measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. This technique has an extremely high temporal resolution, but a relatively poor spatial resolution.
Functional magnetic resonance imaging. fMRI measures the relative amount of oxygenated blood flowing to different parts of the brain. More oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. This allows us to localize particular functions within different brain regions. fMRI has moderate spatial and temporal resolution.
Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflected by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflect light by different amounts, we can study which areas are more active (i.e., those that have more oxygenated blood). Optical imaging has moderate temporal resolution, but poor spatial resolution. It also has the advantage that it is extremely safe and can be used to study infants' brains.
Magnetoencephalography. MEG measures magnetic fields resulting from cortical activity. It is similar to EEG, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in EEG is. MEG uses SQUID sensors to detect tiny magnetic fields.
Computational modeling
Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon.
Approaches to cognitive modeling can be categorized as: (1) symbolic, on abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, on the neural and associative properties of the human brain; and (3) across the symbolic–subsymbolic border, including hybrid.
Symbolic modeling evolved from the computer science paradigms using the technologies of knowledge-based systems, as well as a philosophical perspective (e.g. "Good Old-Fashioned Artificial Intelligence" (GOFAI)). Such models were developed by the first cognitive researchers and later used in information engineering for expert systems. Since the early 1990s symbolic modeling has been generalized in systemics for the investigation of functional human-like intelligence models, such as personoids, and, in parallel, developed as the SOAR environment. Recently, especially in the context of cognitive decision-making, symbolic cognitive modeling has been extended to the socio-cognitive approach, including social and organizational cognition, interrelated with a sub-symbolic non-conscious layer.
Subsymbolic modeling includes connectionist/neural network models. Connectionism relies on the idea that the mind/brain is composed of simple nodes and that its problem-solving capacity derives from the connections between them. Neural nets are textbook implementations of this approach. Some critics of this approach feel that while such models approach biological reality as a representation of how the system works, they lack explanatory power because, even in systems endowed with simple connection rules, the emergent complexity makes them less interpretable at the connection level than they apparently are at the macroscopic level.
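As a minimal sketch of the connectionist idea, assuming nothing beyond NumPy, the following single-layer perceptron learns the logical OR function purely by adjusting connection weights; it illustrates the style of model, not any specific published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])            # logical OR
w = rng.normal(size=2)                # connection weights
b = 0.0                               # bias "connection"
for _ in range(20):                   # a few passes over the data
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)    # threshold activation
        err = target - pred
        w += 0.1 * err * xi           # simple error-driven weight update
        b += 0.1 * err
print([int(w @ xi + b > 0) for xi in X])  # -> [0, 1, 1, 1]
```

All of the network's "knowledge" of OR resides in two weights and a bias, which is the point of the critics' worry: in larger networks this distributed knowledge becomes hard to interpret connection by connection.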
Other approaches gaining in popularity include (1) dynamical systems theory, (2) mapping symbolic models onto connectionist models (neural-symbolic integration or hybrid intelligent systems), and (3) Bayesian models, which are often drawn from machine learning.
All the above approaches tend either to be generalized into integrated computational models of a synthetic/abstract intelligence (i.e. a cognitive architecture), applied to the explanation and improvement of individual and social/organizational decision-making and reasoning, or to focus on single simulative programs (or microtheories/"middle-range" theories) modelling specific cognitive faculties (e.g. vision, language, categorization).
Neurobiological methods
Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system.
Single-unit recording
Direct brain stimulation
Animal models
Postmortem studies
Key findings
Cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics (related to denotational mathematics), and many theories of artificial intelligence, persuasion and coercion. It has made its presence known in the philosophy of language and epistemology as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits) ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affects cognition, and it has helped to uncover the root causes and results of specific dysfunction, such as dyslexia, anopsia, and hemispatial neglect.
Notable researchers
Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, some familiar names include Daniel Dennett, who writes from a computational systems perspective, John Searle, known for his controversial Chinese room argument, and Jerry Fodor, who advocates functionalism.
Others include David Chalmers, who advocates Dualism and is also known for articulating the hard problem of consciousness, and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought.
In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, and Allen Newell are prominent.
Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, Lawrence Barsalou, Vittorio Guidano, Howard Gardner and Steven Pinker. Anthropologists Dan Sperber, Edwin Hutchins, Bradd Shore, James Wertsch and Scott Atran, have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association.
Computational theories (with models and simulations) have also been developed, by David Rumelhart, James McClelland and Philip Johnson-Laird.
Epistemics
Epistemics is a term coined in 1969 by the University of Edinburgh with the foundation of its School of Epistemics. Epistemics is to be distinguished from epistemology in that epistemology is the philosophical theory of knowledge, whereas epistemics signifies the scientific study of knowledge.
Christopher Longuet-Higgins has defined it as "the construction of formal models of the processes (perceptual, intellectual, and linguistic) by which knowledge and understanding are achieved and communicated."
In his 1978 essay "Epistemics: The Regulative Theory of Cognition", Alvin I. Goldman claims to have coined the term "epistemics" to describe a reorientation of epistemology. Goldman maintains that his epistemics is continuous with traditional epistemology and the new term is only to avoid opposition. Epistemics, in Goldman's version, differs only slightly from traditional epistemology in its alliance with the psychology of cognition; epistemics stresses the detailed study of mental processes and information-processing mechanisms that lead to knowledge or beliefs.
In the mid-1980s, the School of Epistemics was renamed as The Centre for Cognitive Science (CCS). In 1998, CCS was incorporated into the University of Edinburgh's School of Informatics.
Binding problem in cognitive science
One of the core aims of cognitive science is to achieve an integrated theory of cognition. This requires integrative mechanisms explaining how the information processing that occurs simultaneously in spatially segregated (sub-)cortical areas in the brain is coordinated and bound together to give rise to coherent perceptual and symbolic representations. One approach is to solve this "Binding problem" (that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations ("feature binding") to the most complex cognitive representations, like symbol structures ("variable binding")), by means of integrative synchronization mechanisms. In other words, one of the coordinating mechanisms appears to be the temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, described by the Binding-by-synchrony (BBS) Hypothesis from neurophysiology. Connectionist cognitive neuroarchitectures have been developed that use integrative synchronization mechanisms to solve this binding problem in perceptual cognition and in language cognition. In perceptual cognition the problem is to explain how elementary object properties and object relations, like the object color or the object form, can be dynamically bound together or can be integrated to a representation of this perceptual object by means of a synchronization mechanism ("feature binding", "feature linking"). In language cognition the problem is to explain how semantic concepts and syntactic roles can be dynamically bound together or can be integrated to complex cognitive representations like systematic and compositional symbol structures and propositions by means of a synchronization mechanism ("variable binding") (see also the "Symbolism vs. connectionism debate" in connectionism).
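The kind of phase synchronization the BBS hypothesis appeals to is often illustrated with the Kuramoto model of coupled oscillators. The sketch below is a toy stand-in with arbitrary parameters, not a model of real cortical circuits; it shows initially incoherent oscillators self-organizing into near-synchrony once coupling is strong enough:

```python
import numpy as np

rng = np.random.default_rng(1)
n, coupling, dt, steps = 50, 2.0, 0.01, 2000
omega = rng.normal(1.0, 0.2, n)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases

def coherence(theta):
    """Kuramoto order parameter: 1 means perfect synchrony, ~0 incoherence."""
    return abs(np.mean(np.exp(1j * theta)))

print("before:", round(coherence(theta), 2))
for _ in range(steps):
    # each oscillator is pulled toward the phases of the others
    pull = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + coupling * pull)
print("after: ", round(coherence(theta), 2))
```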
However, despite significant advances toward an integrated theory of cognition (specifically regarding the binding problem), the debate on this issue of beginning cognition is still in progress. From the different perspectives noted above, the problem can be reduced to the question of how organisms at the simple-reflexes stage of development overcome the threshold of environmental chaos in sensory stimuli: electromagnetic waves, chemical interactions, and pressure fluctuations. The so-called Primary Data Entry (PDE) thesis raises doubts about the ability of such an organism to overcome this cue threshold on its own. In terms of mathematical tools, the PDE thesis underlines the insuperably high threshold posed by the cacophony of environmental stimuli (the stimulus noise) for young organisms at the onset of life. It argues that temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, that is, any dynamic binding or integration into a representation of a perceptual object by means of a synchronization mechanism, cannot by itself help organisms distinguish the relevant cue (informative stimulus) and so overcome this noise threshold.
See also
Affective science
Cognitive anthropology
Cognitive biology
Cognitive computing
Cognitive ethology
Cognitive linguistics
Cognitive neuropsychology
Cognitive neuroscience
Cognitive psychology
Cognitive science of religion
Computational neuroscience
Computational-representational understanding of mind
Concept mining
Decision field theory
Decision theory
Dynamicism
Educational neuroscience
Educational psychology
Embodied cognition
Embodied cognitive science
Enactivism
Epistemology
Folk psychology
Heterophenomenology
Human Cognome Project
Human–computer interaction
Indiana Archives of Cognitive Science
Informatics (academic field)
List of cognitive scientists
List of psychology awards
Malleable intelligence
Neural Darwinism
Personal information management (PIM)
Qualia
Quantum cognition
Simulated consciousness
Situated cognition
Society of Mind theory
Spatial cognition
Speech–language pathology
Outlines
Outline of human intelligence – topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more.
Outline of thought – topic tree that identifies many types of thoughts, types of thinking, aspects of thought, related fields, and more.
External links
"Cognitive Science" on the Stanford Encyclopedia of Philosophy
Cognitive Science Society
Cognitive Science Movie Index: A broad list of movies showcasing themes in the Cognitive Sciences
List of leading thinkers in cognitive science
Psychology of reasoning
The psychology of reasoning (also known as the cognitive science of reasoning) is the study of how people reason, often broadly defined as the process of drawing conclusions to inform how people solve problems and make decisions. It overlaps with psychology, philosophy, linguistics, cognitive science, artificial intelligence, logic, and probability theory.
Psychological experiments on how humans and other animals reason have been carried out for over 100 years. An enduring question is whether or not people have the capacity to be rational. Current research in this area addresses various questions about reasoning, rationality, judgments, intelligence, relationships between emotion and reasoning, and development.
Everyday reasoning
One of the most obvious areas in which people employ reasoning is with sentences in everyday language. Most experimentation on deduction has been carried out on hypothetical thought, in particular examining how people reason about conditionals, e.g., If A then B. Participants in experiments readily make the modus ponens inference: given the indicative conditional If A then B and the premise A, they conclude B. However, given the indicative conditional and the minor premise for the modus tollens inference, not-B, about half of the participants in experiments conclude not-A and the remainder conclude that nothing follows.
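For the material conditional of classical logic, both inference forms are equally valid, which the brute-force check below verifies by enumerating all truth assignments; the psychological finding is precisely that people's endorsement rates diverge from this logical symmetry. The sketch is illustrative only:

```python
from itertools import product

def implies(a, b):
    return (not a) or b  # material conditional "if A then B"

def valid(premises, conclusion):
    """An argument is valid if the conclusion holds in every
    truth assignment in which all premises hold."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

modus_ponens = valid([implies, lambda a, b: a], lambda a, b: b)
modus_tollens = valid([implies, lambda a, b: not b], lambda a, b: not a)
print(modus_ponens, modus_tollens)  # True True
```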
The ease with which people make conditional inferences is affected by context, as demonstrated in the well-known selection task developed by Peter Wason. Participants are better able to test a conditional in an ecologically relevant context, e.g., "if the envelope is sealed then it must have a 50 cent stamp on it", compared to one with symbolic content, e.g., "if the letter is a vowel then the number is even". Background knowledge can also lead to the suppression of even the simple modus ponens inference. Participants given the conditional "if Lisa has an essay to write then she studies late in the library" and the premise "Lisa has an essay to write" make the modus ponens inference "she studies late in the library", but the inference is suppressed when they are also given a second conditional, "if the library stays open then she studies late in the library". Interpretations of the suppression effect are controversial.
Other investigations of propositional inference examine how people think about disjunctive alternatives, e.g., A or else B, and how they reason about negation, e.g., It is not the case that A and B. Many experiments have been carried out to examine how people make relational inferences, including comparisons, e.g., A is better than B. Such investigations also concern spatial inferences, e.g., A is in front of B, and temporal inferences, e.g., A occurs before B. Other common tasks include categorical syllogisms, used to examine how people reason about quantifiers such as All or Some: for example, if all A are B and some B are C, what (if anything) follows?
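One way to see that nothing of the tempting form follows from such premises is to search small set models for a counterexample. The sketch below, with an assumed three-element domain and a non-emptiness restriction on A, finds a model in which "all A are B" and "some B are C" hold while "some A are C" fails:

```python
from itertools import product

domain = range(3)
for bits in product([0, 1], repeat=9):
    A = {x for x in domain if bits[x]}
    B = {x for x in domain if bits[3 + x]}
    C = {x for x in domain if bits[6 + x]}
    # premises: all A are B, some B are C; tempting conclusion: some A are C
    if A and A <= B and (B & C) and not (A & C):
        print("counterexample:", A, B, C)  # e.g. A={2}, B={1,2}, C={1}
        break
```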
Theories of reasoning
There are several alternative theories of the cognitive processes that human reasoning is based on. One view is that people rely on a mental logic consisting of formal (abstract or syntactic) inference rules similar to those developed by logicians in the propositional calculus. Another view is that people rely on domain-specific or content-sensitive rules of inference. A third view is that people rely on mental models, that is, mental representations that correspond to imagined possibilities. A fourth view is that people compute probabilities.
One controversial theoretical issue is the identification of an appropriate competence model, or a standard against which to compare human reasoning. Initially classical logic was chosen as a competence model. Subsequently, some researchers opted for non-monotonic logic and Bayesian probability. Research on mental models and reasoning has led to the suggestion that people are rational in principle but err in practice. Connectionist approaches towards reasoning have also been proposed.
Despite the ongoing debate about the cognitive processes involved in human reasoning, research suggests that multiple approaches can be useful in modeling human thinking. For instance, studies have found that people's reasoning is often influenced by their prior beliefs, which can be modeled using Bayesian probability theory. Research on mental models has shown that people tend to reason about problems by constructing multiple mental representations of a situation, which can help them identify relevant features and make inferences based on their understanding of the problem. Connectionist approaches to reasoning have also gained attention; these focus on neural network models that can learn from data and generalize to new situations.
Development of reasoning
It is an active question in psychology how, why, and when the ability to reason develops from infancy to adulthood. Jean Piaget's theory of cognitive development posited general mechanisms and stages in the development of reasoning from infancy to adulthood. According to the neo-Piagetian theories of cognitive development, changes in reasoning with development come from increasing working memory capacity, increasing speed of processing, and enhanced executive functions and control. Increasing self-awareness is also an important factor.
In their book The Enigma of Reason, the cognitive scientists Hugo Mercier and Dan Sperber put forward an "argumentative" theory of reasoning, claiming that humans evolved to reason primarily to justify their beliefs and actions and to convince others in a social environment. Key evidence for their theory includes the errors in reasoning that solitary individuals are prone to when their arguments are not criticized, such as logical fallacies, and how groups become much better at cognitive reasoning tasks when they communicate with one another and can evaluate each other's arguments. Sperber and Mercier offer one attempt to resolve the apparent paradox that the confirmation bias is so strong even though the function of reasoning naively appears to be to reach veridical conclusions about the world.
The study of the development of reasoning abilities is an ongoing area of research in psychology, and multiple factors have been proposed to explain how, why, and when reasoning develops from infancy to adulthood. Recent research has suggested that early experiences and social interactions play a critical role in the development of reasoning abilities. For example, studies have shown that infants as young as six months old can engage in basic logical reasoning, such as reasoning about the relationship between objects and their properties. Furthermore, research has highlighted the importance of parental interaction and cognitive stimulation in the development of children's reasoning abilities. Additionally, studies have suggested that cultural factors, such as educational practices and the emphasis on critical thinking, can also influence the development of reasoning skills across different populations.
Different sorts of reasoning
Philip Johnson-Laird, in trying to taxonomize thought, distinguished between goal-directed thinking and thinking without a goal, noting that association plays a role in the latter, as in unrelated reading. He argues that goal-directed reasoning can be classified based on the problem space involved in a solution, citing Allen Newell and Herbert A. Simon.
Inductive reasoning makes broad generalizations from specific cases or observations. In this process of reasoning, general assertions are made based on past specific pieces of evidence. This kind of reasoning allows the conclusion to be false even if the original statement is true. For example, if one observes a college athlete, one makes predictions and assumptions about other college athletes based on that one observation. Scientists use inductive reasoning to create theories and hypotheses. Philip Johnson-Laird distinguished inductive from deductive reasoning, in that the former creates semantic information while the latter does not.
In contrast, deductive reasoning is a basic form of valid reasoning. In this reasoning process a person starts with a known claim or a general belief and asks what follows from these foundations or how these premises will influence other beliefs. In other words, deduction starts with a hypothesis and examines the possibilities to reach a conclusion. Deduction helps people understand why their predictions are wrong and indicates that their prior knowledge or beliefs are off track. An example of deduction can be seen in the scientific method when testing hypotheses and theories. Although the conclusion usually corresponds to, and thereby supports, the hypothesis, there are cases where an argument is logically valid but not sound. For example, the argument "All young girls wear skirts; Julie is a young girl; therefore, Julie wears skirts" is logically valid, but it is not sound because the first premise is not true.
The syllogism is a form of deductive reasoning in which two statements lead to a logical conclusion. With this reasoning, one statement could be "Every A is B" and another could be "This C is A". Those two statements lead to the conclusion that "This C is B". Syllogisms of this kind are used to test deductive reasoning and to check that a conclusion validly follows from its premises. A Syllogistic Reasoning Task was created in a study by Kinga Morsanyi and Simon Handley that examined the intuitive contributions to reasoning. They used this test to assess why "syllogistic reasoning performance is based on an interplay between a conscious and effortful evaluation of logicality and an intuitive appreciation of the believability of the conclusions".
Another form of reasoning is called abductive reasoning. This type is based on creating and testing hypotheses using the best information available. Abductive reasoning produces the kind of daily decision-making that works best with the information present, which often is incomplete. This could involve making educated guesses from observed unexplainable phenomena. This type of reasoning can be seen in the world when doctors make decisions about diagnoses from a set of results or when jurors use the relevant evidence to make decisions about a case.
Apart from the aforementioned types of reasoning, there is also analogical reasoning, which involves comparing and reasoning about two different situations or concepts to draw conclusions about a third. It can be used to make predictions or solve problems by finding similarities between two domains and transferring knowledge from one to the other. For example, a problem-solving approach that works in one domain may be applied to a new, similar problem in a different domain. Analogical reasoning is particularly useful in scientific discovery and problem-solving tasks, as it can help generate hypotheses, create new theories, and develop innovative solutions. However, it can also lead to errors if the similarities between domains are too superficial or if the analogy is based on false assumptions.
Judgment and reasoning
Judgment and reasoning involve thinking through the options, making a judgment or conclusion and finally making a decision. Making judgments involves heuristics, or efficient strategies that usually lead one to the right answers. The most common heuristics used are attribute substitution, the availability heuristic, the representativeness heuristic and the anchoring heuristic – these all aid in quick reasoning and work in most situations. Heuristics allow for errors, a price paid to gain efficiency.
Other errors in judgment, which in turn affect reasoning, include errors in assessing covariation, a relationship between two variables such that the presence and magnitude of one can predict the presence and magnitude of the other. One source of error in assessing covariation is confirmation bias, the tendency to be more responsive to evidence that confirms one's own beliefs. Assessments of covariation can also be pulled off track by neglecting base-rate information, that is, how frequently something occurs in general; people often ignore base rates and rely instead on the other information presented.
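Base-rate neglect is easiest to see in a worked example. The numbers below are hypothetical, but the Bayesian arithmetic is standard: even a fairly accurate test yields mostly false positives when the condition it screens for is rare.

```python
base_rate = 0.01         # P(disease): 1% of the population
hit_rate = 0.90          # P(positive | disease)
false_alarm = 0.09       # P(positive | no disease)

# Bayes' rule: P(disease | positive)
p_positive = hit_rate * base_rate + false_alarm * (1 - base_rate)
p_disease_given_positive = hit_rate * base_rate / p_positive
print(round(p_disease_given_positive, 3))  # ~0.092, i.e. only about 9%
```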
There are more sophisticated judgment strategies that result in fewer errors. People often reason based on availability, but sometimes they look for other, more accurate information to make judgments. This suggests there are two ways of thinking, known as the dual-process model. The first, System I, is fast, automatic, and uses heuristics; it is closer to intuition. The second, System II, is slower, effortful, and more likely to be correct; it is closer to deliberate reasoning.
Pragmatics and reasoning
The inferences people draw are related to factors such as linguistic pragmatics and emotion.
Decision making is often influenced by the emotion of regret and by the presence of risk. When people are presented with options, they tend to select the one that they think they will regret the least. In decisions that involve a large amount of risk, people tend to ask themselves how much dread they would experience were a worst-case scenario to occur, e.g. a nuclear accident, and then use that dread as an indicator of the level of risk.
Antonio Damasio suggests that somatic markers, certain memories that can cause a strong bodily reaction, act as a way to guide decision making as well. For example, when a person is remembering a scary movie and once again becomes tense, their palms might begin to sweat. Damasio argues that when making a decision people rely on their "gut feelings" to assess various options, and this leads them toward decisions that feel more positive and away from those that feel negative. He also argues that the orbitofrontal cortex, located at the base of the frontal lobe just above the eyes, is crucial in the use of somatic markers, because it is the part of the brain that allows people to interpret emotion.
When emotion shapes decisions, the influence is usually based on predictions of the future. When people ask themselves how they would react, they are making inferences about the future. Researchers suggest affective forecasting, the ability to predict one's own emotions, is poor because people tend to overestimate how much they will regret their errors.
Another factor that can influence decision making is linguistic pragmatics, which refers to the use of language in social contexts. Language can be used to convey different levels of politeness, power, and intention, which can all affect how people interpret and respond to messages. For example, if a boss asks an employee to complete a task using a commanding tone, the employee may feel more pressured to complete the task quickly, compared to if the boss asked in a polite tone. Similarly, if someone uses sarcasm or irony, it can be difficult for the listener to discern their true meaning, leading to misinterpretation and potentially poor decision making. In addition to linguistic pragmatics, cultural and social factors can also play a role in decision making. Different cultures may have different norms and values, which can influence how people approach decisions. For example, in collectivistic cultures, decisions may be made based on what is best for the group, whereas in individualistic cultures, decisions may prioritize individual needs and desires. Overall, decision making is a complex process that involves many factors, including emotion, risk, pragmatics, and cultural background. By understanding these factors, individuals can make more informed decisions and better navigate the complexities of the world around them.
Neuroscience of reasoning
Studying reasoning neuroscientifically involves determining the neural correlates of reasoning, often investigated using event-related potentials and functional magnetic resonance imaging.
In fMRI studies, participants are presented with variations of tasks to determine the different cognitive processes required. This is done by cross-referencing where in the brain there is more or less activation (as indexed by the blood-oxygen-level-dependent signal) on the different conditions with what other studies found for those regions. For example, if a condition leads to more activation of the hippocampus, then this may be interpreted as being related to memory retrieval—particularly if the theoretical framing of the task suggests that this is necessary.
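A toy version of this logic, using synthetic data and ignoring the hemodynamic response for brevity, compares a voxel's time series against a simple on/off task regressor; a high correlation flags the voxel as task-related:

```python
import numpy as np

rng = np.random.default_rng(3)
task = np.tile([0.0] * 10 + [1.0] * 10, 6)             # off/on blocks
voxel_active = 0.8 * task + rng.normal(0, 0.5, task.size)
voxel_silent = rng.normal(0, 0.5, task.size)

for name, ts in [("active", voxel_active), ("silent", voxel_silent)]:
    r = np.corrcoef(task, ts)[0, 1]
    print(f"{name} voxel: correlation with task = {r:.2f}")
```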
See also
Bounded rationality
Cognitive psychology
Ecological rationality
Emotional self-regulation
Great Rationality Debate
Heuristics in judgment and decision-making
Naturalistic decision-making
Psychasthenia
Psychasthenia was a psychological disorder characterized by phobias, obsessions, compulsions, or excessive anxiety. The term is no longer in psychiatric diagnostic use, although it still forms one of the ten clinical subscales of the popular self-report personality inventories MMPI and MMPI-2. It is also one of the fifteen scales of the Karolinska Scales of Personality.
MMPI
The MMPI subscale 7 describes psychasthenia as akin to obsessive-compulsive disorder, characterized by excessive doubts, compulsions, obsessions, and unreasonable fears. The psychasthenic has an inability to resist specific actions or thoughts, regardless of their maladaptive nature. In addition to obsessive-compulsive features, the scale taps abnormal fears, self-criticism, difficulties in concentration, and guilt feelings. The scale assesses long-term (trait) anxiety, although it is somewhat responsive to situational stress as well.
The psychasthenic has insufficient control over their conscious thinking and memory, sometimes wandering aimlessly and/or forgetting what they were doing. Thoughts can be scattered and take significant effort to organize, often resulting in sentences that do not come out as intended, therefore making little sense to others. The constant mental effort and characteristic insomnia induces fatigue, which worsens the condition. Symptoms can possibly be greatly reduced with concentration exercises and therapy, depending on whether the condition is psychological or biological.
Earlier conceptions
The term "psychasthenia" was primarily associated with the French psychiatrist Pierre Janet, who divided the neuroses into the psychasthenias and the hysterias. (He discarded the then-common term "neurasthenia" (weak nerves), since it implied a neurological theory where none existed.) Whereas the hysterias involved at their source a narrowing of the field of consciousness, the psychasthenias involved at root a disturbance in the fonction du réel ('function of reality'), a kind of weakness in the ability to attend to, adjust to, and synthesise one's changing experience (cf. executive functions in today's empiricist psychologies). The Swiss psychiatrist Carl Jung later made Janet's hysteric and psychasthenic types the prototypes of his extroverted and introverted personalities.
The German-Swiss psychiatrist Karl Jaspers, following Janet, described psychasthenia as a variety of phenomena "held together by the theoretical concept of a 'diminution of psychic energy'." The psychasthenic person prefers to "withdraw from his fellows and not be exposed to situations in which his abnormally strong 'complexes' rob him of presence of mind, memory and poise." The psychasthenic lacks confidence, is prone to obsessional thoughts, unfounded fears, self-scrutiny and indecision. This state in turn promotes withdrawal from the world and daydreaming, yet this only makes things worse. "The psyche generally lacks an ability to integrate its life or to work through and manage its various experiences; it fails to build up its personality and make any steady development." Jaspers believed that some of Janet's more extreme cases of psychasthenia were cases of schizophrenia. Jaspers differentiated and contrasted psychasthenia with neurasthenia, defining the latter in terms of "irritable weakness" and describing phenomena such as irritability, sensitivity, a painful sensibility, abnormal responsiveness to stimuli, bodily pains, strong experience of fatigue, etc.
Anecdotal evidence
Anecdotal evidence (or anecdata) is evidence based on descriptions and reports of individual, personal experiences or observations, collected in a non-systematic manner.
The word anecdotal covers a variety of forms of evidence. It can refer to personal experiences, self-reported claims, or eyewitness accounts of others, including accounts from fictional sources, making it a broad category that can lead to confusion due to its varied interpretations.
Anecdotal evidence can be true or false but is not usually subjected to the methodology of scholarly method, the scientific method, or the rules of legal, historical, academic, or intellectual rigor, meaning that there are little or no safeguards against fabrication or inaccuracy. However, the use of anecdotal reports in advertising or promotion of a product, service, or idea may be considered a testimonial, which is highly regulated in some jurisdictions.
The persuasiveness of anecdotal evidence compared to that of statistical evidence has been a subject of debate; some studies have argued for the presence of a generalized tendency to overvalue anecdotal evidence, whereas others have emphasized the type of argument as a prerequisite or rejected the conclusion altogether.
Scientific context
In science, definitions of anecdotal evidence include:
"casual observations or indications rather than rigorous or scientific analysis"
"information passed along by word-of-mouth but not documented scientifically"
"evidence that comes from an individual experience. This may be the experience of a person with an illness or the experience of a practitioner based on one or more patients outside a formal research study."
"the report of an experience by one or more persons that is not objectively documented or an experience or outcome that occurred outside of a controlled environment"
Anecdotal evidence may be considered within the scope of the scientific method, as some anecdotal evidence can be both empirical and verifiable, e.g. in the use of case studies in medicine. Other anecdotal evidence, however, does not qualify as scientific evidence, because its nature prevents it from being investigated by the scientific method, for instance in the case of folklore or of intentionally fictional anecdotes. Where only one or a few anecdotes are presented, there is a chance that they may be unreliable due to cherry-picked or otherwise non-representative samples of typical cases. Similarly, psychologists have found that due to cognitive bias people are more likely to remember notable or unusual examples rather than typical ones. Thus, even when accurate, anecdotal evidence is not necessarily representative of a typical experience. Accurate determination of whether an anecdote is typical requires statistical evidence. Misuse of anecdotal evidence in the form of argument from anecdote is an informal fallacy and is sometimes referred to as the "person who" fallacy ("I know a person who..."; "I know of a case where..."), which places undue weight on the experiences of close peers, which may not be typical.
Anecdotal evidence can have varying degrees of formality. For instance, in medicine, published anecdotal evidence by a trained observer (a doctor) is called a case report, and is subjected to formal peer review. Although such evidence is not seen as conclusive, researchers may sometimes regard it as an invitation to more rigorous scientific study of the phenomenon in question. For instance, one study found that 35 of 47 anecdotal reports of drug side-effects were later sustained as "clearly correct."
Anecdotal evidence is considered the least certain type of scientific information. Researchers may use anecdotal evidence for suggesting new hypotheses, but never as validating evidence.
If an anecdote illustrates a desired conclusion rather than a logical conclusion, it is considered a faulty or hasty generalization.
In any case where some factor affects the probability of an outcome, rather than uniquely determining it, selected individual cases prove nothing; e.g. "my grandfather smoked two packs a day until he died at 90" and "my sister never smoked but died of lung cancer". Anecdotes often refer to the exception, rather than the rule: "Anecdotes are useless precisely because they may point to idiosyncratic responses."
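A hypothetical simulation makes the point concrete: even when a factor triples the probability of an outcome, both groups supply abundant individual counterexamples, so any single anecdote is compatible with either conclusion. All rates below are invented for illustration:

```python
import random

random.seed(42)
p_ill = {"smoker": 0.15, "nonsmoker": 0.05}  # made-up illness rates
n = 100_000

for group, p in p_ill.items():
    ill = sum(random.random() < p for _ in range(n))
    print(f"{group}: {ill / n:.1%} ill, {n - ill:,} healthy 'anecdotes'")
# Both groups contain tens of thousands of healthy cases, but the
# threefold difference in rates is visible only in the aggregate.
```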
In medicine, anecdotal evidence is also subject to placebo effects.
Legal
In the legal sphere, anecdotal evidence, if it passes certain legal requirements and is admitted as testimony, is a common form of evidence used in a court of law. Often this form of anecdotal evidence is the only evidence presented at trial. Scientific evidence in a court of law is called physical evidence, but this is much rarer. Anecdotal evidence, with a few safeguards, represents the bulk of evidence in court.
The legal rigors applied to testimony for it to be considered evidence is that it must be given under oath, that the person is only testifying to their own words and actions, and that someone intentionally lying under oath is subject to perjury. However, these rigors do not make testimony in a court of law equal to scientific evidence as there are far less legal rigors. Testimony about another person's experiences or words is called hearsay and is usually not admissible, though there are certain exceptions. However, any hearsay that is not objected to or thrown out by a judge is considered evidence for a jury. This means that trials contain quite a bit of anecdotal evidence, which is considered as relevant evidence by a jury. Eyewitness testimony (which is a form of anecdotal evidence) is considered the most compelling form of evidence by a jury.
Jungian archetypes
Jungian archetypes are a concept from psychology that refers to a universal, inherited idea, pattern of thought, or image that is present in the collective unconscious of all human beings. The psychic counterpart of instinct, archetypes are thought to be the basis of many of the common themes and symbols that appear in stories, myths, and dreams across different cultures and societies. Some examples of archetypes include those of the mother, the child, the trickster, and the flood, among others. The concept of the collective unconscious was first proposed by Carl Jung, a Swiss psychiatrist and psychoanalyst.
According to Jung, archetypes are innate patterns of thought and behavior that strive for realization within an individual's environment. This process of actualization influences the degree of individuation, or the development of the individual's unique identity. For instance, the presence of a maternal figure who closely matches the child's idealized concept of a mother can evoke innate expectations and activate the mother archetype in the child's mind. This archetype is incorporated into the child's personal unconscious as a "mother complex," which is a functional unit of the personal unconscious that is analogous to an archetype in the collective unconscious.
Introduction
Carl Jung rejected the tabula rasa theory of human psychological development, which suggests that people are born as a "blank slate" and their experiences shape their thoughts, behaviors, and feelings. Instead, Jung believed that there are universal experiences that are inherent to the human experience, such as belonging, love, death, and fear. These experiences, which he called the "collective unconscious," are expressed in what he called "archetypes." Jung believed that these archetypes are influenced by evolutionary pressures and manifest in the behaviors and experiences of individuals. He first introduced the concept of primordial images, which he later referred to as archetypes, to explain this idea.
According to Jungian psychology, archetypes are innate potentials that are expressed in human behavior and experiences. They are hidden forms that are activated when they enter consciousness and are shaped by individual and cultural experiences. The concept of archetypes is a key aspect of Jung's theory of the collective unconscious, which suggests that there are universal experiences that are inherent to the human experience. The existence of archetypes can be inferred from various cultural phenomena, such as stories, art, myths, religions, and dreams.
Jung's concept of archetypes was influenced by the theories of Immanuel Kant, Plato, and Arthur Schopenhauer. Jung's idea of archetypes differs from Plato's concept of Ideas in that they are dynamic and constantly seeking expression in an individual's personality and behavior. He believed that these archetypes are activated and given form in the encounter with empirical experiences.
For Jung, "the archetype is the introspectively recognizable form of a priori psychic orderedness". "These images must be thought of as lacking in solid content, hence as unconscious. They only acquire solidity, influence, and eventual consciousness in the encounter with empirical facts."
According to Jungian psychology, archetypes form a common foundation for the experiences of all humans. Each individual builds their own experiences on top of this foundation, influenced by their unique culture, personality, and life events. While there are a relatively small number of innate and amorphous archetypes, they can give rise to a vast array of images, symbols, and behaviors. While the resulting images and forms are consciously recognized, the underlying archetypes are unconscious and cannot be directly perceived.
Jung believed that the form of the archetype was similar to the axial system of a crystal, which determines the structure of the crystal without having a physical existence of its own. The archetype is empty and purely formal, and the specific way in which it is expressed depends on the circumstances in which it is activated. The representations of the archetype are not inherited, only the forms, and they correspond to the instincts. The existence of the instincts and the archetypes cannot be proven unless they manifest themselves concretely.
A study published in the journal Psychological Perspectives in 2017 examined the ways in which Jungian representations are expressed in human experiences.
Early development
Jung's intuition that there was more to the psyche than individual experience may have originated in his childhood. He had dreams that seemed to come from a source outside himself, and one of his earliest memories was of a dream about an underground phallic god. Later in life, Jung's research at Burghölzli Hospital on psychotic patients and his own self-analysis supported his belief in the existence of universal psychic structures that underlie all human experience and behavior. He discovered that the dreams of his patients followed certain patterns and had elements of myths, legends, and fairy tales. Jung initially referred to these as "primordial images", a term he borrowed from Jacob Burckhardt, and later, in 1917, called them "dominants of the collective unconscious".
Jung first coined the term "archetypes" in his 1919 essay "Instinct and the Unconscious". The word is derived from Greek, with the first element, "arche," meaning "beginning, origin, cause, primal source principle," as well as "position of a leader, supreme rule, and government." The second element, "type," means "blow and what is produced by a blow, the imprint of a coin, form, image, prototype, model, order, and norm." In modern usage, the term signifies "pattern underlying form, primordial form."
Later development
In later years, Jung revised and broadened the concept of archetype, conceiving them as psycho-physical patterns existing in the universe, given specific expression by human consciousness and culture. This was part of his attempt to link depth psychology to the larger scientific program of the twentieth century.
Jung proposed that the archetype had a dual nature, existing both in the psyche of an individual and in the world at large. The non-psychic element, or "psychoid" archetype, is a synthesis of instinct and spirit and is not accessible to consciousness. Jung developed this concept in collaboration with the Austrian quantum physicist Wolfgang Pauli, who believed that the psychoid archetype was crucial to understanding the principles of the universe. Jung also saw the psychoid archetype as a continuum that includes what he previously referred to as the "archetypal tendency", or the innate pattern of action.
The archetype is not just a psychic entity, but is more fundamentally a bridge to matter in general. Jung used the term unus mundus to describe the unitary reality that he believed underlies all manifest phenomena, observable or perceivable things that exist in the physical world. He conceived of archetypes as the mediators of the unus mundus, organizing not only ideas in the psyche, but also the fundamental principles of matter and energy in the physical world. The psychoid aspect of the archetype impressed Nobel laureate physicist Wolfgang Pauli, who embraced Jung's concept and believed that the archetype provided a link between physical events and the mind of the scientist studying them. This echoed the position adopted by German astronomer Johannes Kepler. Thus, the archetypes that order our perceptions and ideas are themselves the product of an objective order that transcends both the human mind and the external world.
Ken Wilber developed a theory called the Spectrum of Consciousness that expanded on Jung's archetypes. He argued that Jung did not use the term "archetype" in the same way as the ancient mystics (e.g. Plato and Augustine). Wilber also drew from mystical philosophy to describe a fundamental state of reality from which all subsequent and lower forms emerge. For Wilber, these forms are the actual or real archetypes, emerging from Emptiness, the fundamental state of reality. In Eye to Eye: The Quest for the New Paradigm, Wilber clarified that the lower structures are not the archetypes themselves but are instead given collectively and archetypically. He also explained that levels of forms are part of psychological development, in which a higher order emerges through the differentiation of a preceding level.
Analogies
Jung's analogy of the psyche to the electromagnetic spectrum is a useful way to visualize the different components of the psyche. In this analogy, the visible light spectrum represents consciousness, with the center of the spectrum (corresponding to the color green) representing the conscious mind. The red and blue ends of the spectrum represent unconsciousness, with red representing unconscious urges and the invisible light at the infra-red end of the spectrum corresponding to instincts that are influenced by physical and chemical conditions. For example, the red light in the spectrum might represent the influence of primal instincts and emotional impulses on our behavior, such as the desire for food, shelter, and reproduction. Blue, on the other hand, represents spiritual ideas, and the invisible light at the ultra-violet end of the spectrum represents the influence of archetypes on both living and non-living matter. For example, the blue light in the spectrum might represent the influence of spiritual beliefs and values on our behavior, such as the belief in a higher power or a moral code. The ultra-violet light at the end of the spectrum might represent the influence of universal archetypes, such as the hero, the wise elder, or the trickster, on our thoughts, feelings, and actions. These archetypes are thought to exist beyond the visible spectrum, and can exert their influence on both living and non-living things.
In Jung's analogy, the color violet represents a distinct aspect of the psyche, rather than a combination of other colors or light wavelengths. This color might represent the influence of psychological factors that are not easily explained or understood, such as synchronicities, dreams, and other phenomena that defy rational explanation. Jung suggested that these archetypal structures not only govern the behavior of living organisms, but also have an influence on the behavior of inorganic matter. For instance, the hero archetype might inspire a person to bravely confront a dangerous situation, while the wise elder archetype might guide a person to make wise and compassionate decisions. Similarly, the influence of archetypes might be seen in the natural world, such as the way that rivers and mountains seem to embody certain qualities or energies.
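To make the structure of the analogy explicit, it can be laid out as a simple lookup table. The sketch below is purely illustrative: the band labels and their psychic correspondences paraphrase the analogy described above, and the function name is an invention for the example, not anything drawn from Jung's writings.

```python
# A minimal sketch of Jung's spectrum analogy as a lookup table.
# The band labels and correspondences paraphrase the analogy above;
# they are an illustrative reading aid, not an established model or API.

SPECTRUM_ANALOGY = {
    "infra-red (invisible)": "instincts conditioned by physical and chemical states",
    "red": "unconscious urges and primal emotional impulses",
    "green (centre)": "the conscious mind",
    "blue": "spiritual ideas and values",
    "ultra-violet (invisible)": "archetypes, influencing living and non-living matter",
    "violet": "phenomena that defy rational explanation, e.g. synchronicities",
}


def correspondence(band: str) -> str:
    """Return the psychic correspondence for a spectrum band, if any."""
    return SPECTRUM_ANALOGY.get(band, "no correspondence in the analogy")


for band, meaning in SPECTRUM_ANALOGY.items():
    print(f"{band:>24} -> {meaning}")
```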
Examples
Jung identified various archetypes in human psychology. These include events such as birth, death, and marriage; figures such as the mother, father, and child; and motifs such as the apocalypse and the deluge. Although the number of archetypes is limitless, there are a few particularly notable, recurring archetypal images, "the chief among them being" (according to Jung) "the shadow, the wise old man, the child, the mother ... and her counterpart, the maiden, and lastly the anima in man and the animus in woman". Alternatively he would speak of "the emergence of certain definite archetypes ... the shadow, the animal, the wise old man, the anima, the animus, the mother, the child". The persona, anima and animus, the shadow, and the self are four of the archetypes that fall under the separate systems of the personality.
The father represents the patriarchal qualities of the persona. These qualities may include protection, provision, and wisdom. The father archetype can be seen in many forms, such as kings, chiefs, and the biological father.
The mother represents the nurturing and protective aspect of the female figure. It is often associated with the qualities of love, compassion, and caring. The mother archetype can manifest itself in a variety of forms, such as a biological mother, a maternal figure in a person's life, or even a motherly aspect within one's own personality.
The self designates the whole range of psychic phenomena in people. It expresses the unity of the personality as a whole. According to Jung, this archetype manifests during middle age, the stage at which all systems of the personality have developed and the individual becomes concerned with wholeness and self-fulfilment.
The shadow is a representation of the personal unconscious as a whole and usually embodies the compensating values to those held by the conscious personality. It is the hidden, suppressed side of the persona, and its characteristics stand in direct opposition to it. Thus, the shadow often represents one's dark side: those aspects of oneself that exist but which one does not acknowledge or with which one does not identify. It has also been described as the animalistic and sinister aspect of all people. Although the shadow may seem to be a negative archetype, one that would degrade and destroy the ego, the opposite is true if the shadow is integrated properly. If the shadow is suppressed rather than integrated, negative effects can follow for the individual and those around them.
The anima archetype appears in men and is his primordial image of woman. It represents the man's sexual expectation of women but also is a symbol of a man's feminine possibilities, his contrasexual tendencies. The animus archetype is the analogous image of the masculine qualities that exist within women. In addition, it can also refer to the conscious sense of masculine qualities among males.
Any attempt to give an exhaustive list of the archetypes would be a futile exercise since they tend to combine with each other and interchange qualities, making it difficult to decide where one archetype ends and another begins. For example, qualities of the shadow archetype may be prominent in an archetypal image of the anima or animus. One archetype may also appear in various distinct forms, thus raising the question of whether four or five distinct archetypes should be said to be present or merely four or five forms of a single archetype.
Actualization and complexes
Archetypes seek actualization as the individual lives out their life cycle within the context of their environment. According to Jung, this process is called individuation, which he described as "an expression of that biological process - simple or complicated as the case may be - by which every living thing becomes what it was destined to become from the beginning". It is considered a creative process that activates the unconscious and primordial images through exposure to unexplored potentials of the mind. Archetypes guide the individuation process towards self-realization.
Jung also used the terms "evocation" and "constellation" to explain the process of actualization. Thus for example, the mother archetype is actualized in the mind of the child by the evoking of innate anticipations of the maternal archetype when the child is in the proximity of a maternal figure who corresponds closely enough to its archetypal template. This mother archetype is built into the personal unconscious of the child as a mother complex. Complexes are functional units of the personal unconscious, in the same way that archetypes are units for the collective unconscious.
Stages of life
Archetypes are innate universal pre-conscious psychic dispositions that allow humans to react in a characteristically human manner, forming the substrate from which the basic themes of human life emerge. As components of the collective unconscious, archetypes serve to organize, direct and inform human thought and behavior, and they exert control over the human life cycle.
As we mature, the archetypal plan unfolds through a programmed sequence which Jung called the stages of life. Each stage of life is mediated through a new set of archetypal imperatives which seek fulfillment in action. These may include being parented, initiation, courtship, marriage and preparation for death.
"The archetype is a tendency to form such representations of a motif – representations that can vary a great deal in detail without losing their basic pattern ... They are indeed an instinctive trend". Thus, "the archetype of initiation is strongly activated to provide a meaningful transition ... with a 'rite of passage' from one stage of life to the next": Such stages may include being parented, initiation, courtship, marriage and preparation for death.
General developments
In his book Jung and the Post-Jungians, Andrew Samuels points out some important developments that relate to the concept of Jungian archetypes. Claude Lévi-Strauss was an advocate of structuralism in anthropology and, similar to Jung, was interested in better understanding the nature of collective phenomena. As he worked to understand the structure and meaning of myth, Lévi-Strauss came to the conclusion that present phenomena are transformations of earlier structures or infrastructures, going so far as to state that "the structure of primitive thoughts is present in our minds".
Samuels further points out that, in Noam Chomsky's study of psycholinguistics, there is a pattern of language acquisition in children, or a universal grammar. Chomsky labeled this pattern as the language acquisition device. He also refers to a concept of 'universals' and makes a distinction between the 'formal' universals and the 'substantive' universals, similar to the difference between archetype as such (structure) and archetypal image.
Jean Piaget writes of 'schemata' which are innate and lay a foundation for perceptuo-motor activity and aid in the acquisition of knowledge. Samuels makes the claim that schemata are comparable to archetypes through their innateness, activity, and need for environmental correspondence.
Anthony Stevens argues that the concept of social instincts proposed by Charles Darwin, the faculties of Henri Bergson, and the isomorphs of Wolfgang Köhler are all related to archetypes. All of these concepts relate to the studies of Lévi-Strauss, who believed that "all forms of social life [are] a projection of universal laws responsible for regulating the unconscious activities of the psyche."
Ethology and attachment theory
In Biological theory and the concept of archetypes, Michael Fordham considered that innate release mechanisms in animals may be applicable to humans, especially in infancy. The stimuli which produce instinctive behaviour are selected from a wide field by an innate perceptual system and the behaviour is 'released'. Fordham drew a parallel between some of Lorenz's ethological observations on the hierarchical behaviour of wolves and the functioning of archetypes in infancy.
Anthony Stevens suggests that ethology and analytical psychology are both disciplines trying to comprehend universal phenomena. Ethology shows us that each species is equipped with unique behavioural capacities that are adapted to its environment, and humans are no exception. Stevens claims that archetypes are the "neuropsychic centres responsible for co-ordinating the behavioural and psychic repertoire of our species."
The confusion about the essential quality of archetypes can partly be attributed to Jung's own evolving ideas about them in his writings and to his interchangeable use of the terms "archetype" and "primordial image". Jung was also intent on retaining the raw and vital quality of archetypes as spontaneous outpourings of the unconscious, and on not giving their specific individual and cultural expressions a dry, rigorous, intellectually formulated meaning. In ethological terms, programmed behaviour takes place in the psychological relationship between mother and newborn: the baby's helplessness and its immense repertoire of sign stimuli and approach behaviour trigger a maternal response, while the smell, sound and shape of the mother trigger, for instance, a feeding response.
Biology
Stevens suggests that DNA itself can be inspected for the location and transmission of archetypes. As they are co-terminous with natural life, they should be expected wherever life is found. He suggests that DNA is the replicable archetype of the species.
The Jungian analyst Murray Stein argues that all the various terms used to delineate the messengers – 'templates, genes, enzymes, hormones, catalysts, pheromones, social hormones' – are concepts similar to archetypes. He mentions archetypal figures which represent messengers, such as Hermes, Prometheus or Christ. Continuing to base his arguments on a consideration of biological defence systems, he says that such a system must operate in a whole range of specific circumstances: its agents must be able to go everywhere, the distribution of the agents must not upset the somatic status quo, and, in predisposed persons, the agents will attack the self.
Psychoanalysis
Melanie Klein: Melanie Klein's idea of unconscious phantasy is closely related to Jung's archetype, as both are composed of image and affect and are a priori patternings of psyche whose contents are built from experience.
Jacques Lacan: Lacan went beyond the proposition that the unconscious is a structure that lies beneath the conscious world; the unconscious itself is structured, like a language. This would suggest parallels with Jung. Further, Lacan's Symbolic and Imaginary orders may be aligned with Jung's archetypal theory and personal unconscious respectively. The Symbolic order patterns the contents of the Imaginary in the same way that archetypal structures predispose humans towards certain sorts of experience. If we take the example of parents, archetypal structures and the Symbolic order predispose our recognition of, and relation to them. Lacan's concept of the Real approaches Jung's elaboration of the psychoid unconscious, which may be seen as true but cannot be directly known. Lacan posited that the unconscious is organised in an intricate network governed by association, above all 'metaphoric associations'. The existence of the network is shown by analysis of the unconscious products: dreams, symptoms, and so on.
Wilfred Bion: According to Bion, thoughts precede a thinking capacity. Thoughts in a small infant are indistinguishable from sensory data or unorganised emotion. Bion uses the term proto-thoughts for these early phenomena. Because of their connection to sensory data, proto-thoughts are concrete and self-contained (thoughts-in-themselves), not yet capable of symbolic representations or object relations. The thoughts then function as preconceptions – predisposing psychosomatic entities similar to archetypes. Support for this connection comes from the Kleinian analyst Money-Kyrle's observation that Bion's notion of preconceptions is the direct descendant of Plato's Ideas.
Sigmund Freud: In the Introductory Lectures on Psycho-Analysis (1916–1917) Freud wrote: "There can be no doubt that the source [of the fantasies] lies in the instincts; but it still has to be explained why the same fantasies with the same content are created on every occasion. I am prepared with an answer that I know will seem daring to you. I believe that...primal fantasies, and no doubt a few others as well, are a phylogenetic endowment". His suggestion that primal fantasies are a residue of specific memories of prehistoric experiences has been construed as being aligned with the idea of archetypes. Laplanche and Pontalis point out that all the so-called primal fantasies relate to origins and that "like collective myths they claim to provide a representation of and a 'solution' to whatever constitutes an enigma for the child".
Robert Langs: More recently, adaptive psychotherapist and psychoanalyst Robert Langs has used archetypal theory as a way of understanding the functioning of what he calls the "deep unconscious system". Langs' use of archetypes particularly pertains to issues associated with death anxiety, which Langs takes to be the root of psychic conflict. Like Jung, Langs thinks of archetypes as species-wide, deep unconscious factors.
Neurology
Rossi (1977) suggests that the differing functions and characteristics of the left and right cerebral hemispheres may enable us to locate the archetypes in the right cerebral hemisphere. He cites research indicating that left-hemispheric functioning is primarily verbal and associational, while that of the right is primarily visuospatial and apperceptive. Thus the left hemisphere is equipped as a critical, analytical information processor, while the right hemisphere operates in a 'gestalt' mode: it is better at getting a picture of a whole from a fragment, better at working with confused material, more irrational than the left, and more closely connected to bodily processes. Once archetypal contents are expressed in the form of words, concepts and the language of the ego's left-hemispheric realm, however, they become only representations that 'take their colour' from the individual consciousness. Inner figures such as the shadow, anima and animus would be archetypal processes having their source in the right hemisphere.
Henry (1977) alluded to MacLean's model of the tripartite brain, suggesting that the reptilian brain is an older part of the brain and may contain not only drives but archetypal structures as well. The suggestion is that there was a time when emotional behaviour and cognition were less developed and the older brain predominated. There is an obvious parallel with Jung's idea of the archetypes 'crystallising out' over time.
Literary criticism
Archetypal literary criticism argues that archetypes determine the form and function of literary works, and therefore that a text's meaning is shaped by cultural and psychological myths. Archetypes are the unknowable basic forms personified or concretized in recurring images, symbols, or patterns: motifs such as the quest or the heavenly ascent, recognizable character types such as the trickster or the hero, symbols such as the apple or the snake, or images such as the crucifixion (as in King Kong or Bride of Frankenstein). These are all already laden with meaning when employed in a particular work.
Psychology
Archetypal psychology was developed by James Hillman in the second half of the 20th century. Hillman trained at the Jung Institute and was its director after graduation. Archetypal psychology is in the Jungian tradition and most directly related to analytical psychology and psychodynamic theory, yet departs radically, even from Jung's original concept of what an archetype is. Archetypal psychology relativizes and deliteralizes the ego and focuses on the psyche (or soul) itself and the archai, the deepest patterns of psychic functioning, the "fundamental fantasies that animate all of life". Archetypal psychology is a polytheistic psychology, in that it attempts to recognize the myriad fantasies and myths – gods, goddesses, demigods, mortals and animals – that shape and are shaped by humans' psychological lives. According to Hillman, the ego is just one psychological fantasy that exists within a multitude of other fantasies.
Many archetypes have been used in the treatment of psychological illnesses; Jung's first research was conducted with patients with schizophrenia.
Pedagogy
Archetypal pedagogy was developed by Clifford Mayes. Mayes' work also aims at promoting what he calls archetypal reflectivity in teachers; this is a means of encouraging teachers to examine and work with psychodynamic issues, images, and assumptions as those factors affect their pedagogical practices. More recently the Pearson-Marr Archetype Indicator (PMAI), based on Jung's theories of both archetypes and personality types, has been used for pedagogical applications (not unlike the Myers–Briggs Type Indicator).
Applications of archetype-based thinking
In historical works
Archetypes have been cited by multiple scholars as key figures within both ancient Greek and ancient Roman culture. Examples from ancient history include the epic works the Iliad and the Odyssey. Specifically, the scholar Robert Eisner has argued that the anima concept within Jungian thought exists in prototype form in the goddess characters of those stories, citing Athena in particular as a major influence.
In the context of the medieval period, British writer Geoffrey Chaucer's work The Canterbury Tales has been cited as an instance of the prominent use of Jungian archetypes. The Wife of Bath's Tale in particular within the larger collection of stories features an exploration of the bad mother and good mother concepts. The given tale's plot additionally contains broader Jungian themes around the practice of magic, the use of riddles, and the nature of radical transformation.
In British intellectual and poet John Milton's epic work Paradise Lost, the character of Lucifer features some of the attributes of an archetypal hero, including courage and force of will, yet comes to embody the shadow concept in his corruption of Adam and Eve. Like the two first humans, Lucifer is portrayed as a created being meant to serve the purposes of heaven. However, his rebellion and assertions of pride set him up philosophically as a dark mirror of Adam and Eve's initial moral obedience. As well, the first two people function as each other's anima and animus, their romantic love serving to make each other psychologically complete.
In modern popular culture
Archetypes abound in contemporary artistic expression such as films, literature, music, and video games as they have in creative works of the past. These projections of the collective unconscious serve to embody central societal and developmental struggles in media that entertain as well as instruct. Works made both during and after Jung's lifetime have frequently been subject to academic analysis in terms of their psychological aspects.
The very act of watching movies has important psychological meaning, not just on an individual level but also in terms of sharing mass social attitudes through common experience. Films function as a contemporary form of myth-making. They reflect individuals' responses to themselves as well as to the broader mysteries and wonders of human existence. Jung himself was fascinated by the dynamics of the medium. Film criticism has long applied Jungian thought to different types of analysis, with archetypes being seen as important aspects of storytelling on the silver screen.
A study conducted by scholars Michael A. Faber and John D. Mayer in 2009 found that certain archetypes in richly detailed media sources can be reliably identified by individuals. They stated as well that people's life experiences and personality appeared to give them a kind of psychological resonance with particular creations. Jungian archetypes have additionally been cited as inflecting notions of what appears "cool", particularly in terms of youth culture. Actors such as James Dean and Steve McQueen in particular have been identified as rebellious outcasts embodying a particular sort of Jungian archetype in terms of masculinity.
Contemporary cinema is a rich source of archetypal images, most commonly evidenced for instance in the hero archetype: the one who saves the day and is young and inexperienced, like Luke Skywalker in Star Wars, or older and cynical, like Rick Blaine in Casablanca.
Atticus Finch of To Kill a Mockingbird, named the greatest movie hero of all time by the American Film Institute, fulfills three archetypal roles: the father, the hero, and the idealist. In terms of the first, he has been described as "the purest archetypal father in the movies" on account of his close relationship to his children, instilling in them instincts such as hope.
A classic example of Jungian archetypes can be found in the story of Dr. Jekyll and Mr. Hyde. The shadow, ego, and persona are exemplified through Jekyll's internal struggle with the other facet of his personality, Mr. Hyde. In the original Star Wars Trilogy, the characters Luke Skywalker and Darth Vader represent the archetypes of hero and the shadow, respectively.
In marketing, an archetype is a genre assigned to a brand, based upon symbolism. The idea behind using brand archetypes in marketing is to anchor the brand against an icon already embedded within the consciousness and subconscious of humanity. In the minds of both the brand owner and the public, aligning with a brand archetype makes the brand easier to identify. Twelve archetypes have been proposed for use with branding: Sage, Innocent, Explorer, Ruler, Creator, Caregiver, Magician, Hero, Outlaw, Lover, Jester, and Regular Person.
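The fixed, enumerable nature of this scheme makes it natural to express as an enumeration in code. The sketch below is hypothetical: the archetype names follow the list above, while the sample brand and its positioning are invented purely for illustration.

```python
# The twelve proposed brand archetypes as a simple enumeration.
# The names follow the list above; the sample brand and its alignment
# are hypothetical, invented purely for illustration.

from enum import Enum, auto


class BrandArchetype(Enum):
    SAGE = auto()
    INNOCENT = auto()
    EXPLORER = auto()
    RULER = auto()
    CREATOR = auto()
    CAREGIVER = auto()
    MAGICIAN = auto()
    HERO = auto()
    OUTLAW = auto()
    LOVER = auto()
    JESTER = auto()
    REGULAR_PERSON = auto()


# Hypothetical example: anchoring a fictional brand to an archetype.
brand_positioning = {"TrailMakers Outdoor Co.": BrandArchetype.EXPLORER}

for brand, archetype in brand_positioning.items():
    print(f"{brand} is positioned on the {archetype.name.title()} archetype")
```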
Criticism
Feminist critiques have focused on aspects of archetypal theory that are seen as being reductionistic and providing a stereotyped view of femininity and masculinity.
Carl Jung has also been accused of metaphysical essentialism. Critics argue that his psychology, and particularly his thoughts on spirit, lacks a scientific basis, making it mystical and grounded in assumption rather than empirical investigation.
Another criticism of archetypes is that seeing myths as universal tends to abstract them from the history of their actual creation and their cultural context. Some modern critics state that archetypes reduce cultural expressions to generic, decontextualized concepts, stripped bare of their unique cultural context, reducing a complex reality into something "simple and easy to grasp". Other critics respond that archetypes do nothing more than solidify the cultural prejudices of the myth interpreter, namely modern Westerners. Modern scholarship has characterized archetypes as a Eurocentric and colonialist device used to level the specifics of individual cultures and their stories in the service of grand abstraction. This is demonstrated in the conceptualization of the "Other", which can only be represented by a limited ego fiction despite its "fundamental unfathomability".
Others have accused Jung of a romanticized and prejudicial promotion of 'primitivism' through the medium of archetypal theory. Archetypal theory has been posited as scientifically unfalsifiable, and it has even been questioned whether it is a suitable domain of psychological and scientific inquiry. Jung mentions the demarcation between experimental and descriptive psychological study, seeing archetypal psychology as rooted by necessity in the latter camp, grounded as it was (to a degree) in clinical case-work.
Because Jung's viewpoint was essentially subjectivist, he displayed a somewhat Neo-Kantian perspective of a skepticism for knowing things in themselves and a preference of inner experience over empirical data. This skepticism opened Jung up to the charge of countering materialism with another kind of reductionism, one that reduces everything to subjective psychological explanation and woolly quasi-mystical assertions.
Post-Jungian criticism seeks to contextualize, expand and modify Jung's original discourse on archetypes. Michael Fordham is critical of tendencies to relate imagery produced by patients to historical parallels only (e.g. from alchemy, mythology or folklore). A patient who produces archetypal material with striking alchemical parallels runs the risk of becoming more divorced than before from his setting in contemporary life.
See also
Archetype
Archetypal psychology
Polytheistic myth as psychology
Archive for Research in Archetypal Symbolism
Egregore
Evolutionary psychology
Joseph Campbell
Monomyth
Mythology
Mysticism
Comparative mythology
Metafiction
Narrativium
Self-actualization
Self-realization
References
Psychomotor learning

Psychomotor learning is the relationship between cognitive functions and physical movement. Psychomotor learning is demonstrated by physical skills such as movement, coordination, manipulation, dexterity, grace, strength, and speed: actions which demonstrate fine or gross motor skills, such as the use of precision instruments or tools, and walking. Sports and dance are the richest realms of gross psychomotor skills.
Behavioral examples include driving a car, throwing a ball, and playing a musical instrument. In psychomotor learning research, attention is given to the learning of coordinated activity involving the arms, hands, fingers, and feet, while verbal processes are not emphasized.
Stages of psychomotor development
According to Paul Fitts and Michael Posner's three-stage model, when learning psychomotor skills individuals progress through the cognitive stage, the associative stage, and the autonomous stage. The cognitive stage is marked by awkward, slow and choppy movements that the learner tries to control consciously; the learner has to think about each movement before attempting it. In the associative stage, the learner spends less time thinking about every detail, although the movements are not yet a permanent part of the brain. In the autonomous stage, the learner can refine the skill through practice but no longer needs to think about the movement.
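The progression through these stages lends itself to a small state-machine sketch. The example below is hypothetical: the stage names follow the Fitts and Posner model as summarized above, while the practice-hour cutoffs are invented for illustration, since the model itself specifies no numeric thresholds.

```python
# A minimal sketch of the Fitts-Posner three-stage model as a state
# machine. The stage names follow the model described above; the
# practice-hour thresholds are hypothetical, chosen only for illustration.

from enum import Enum


class Stage(Enum):
    COGNITIVE = "cognitive"      # slow, choppy movements; each step is thought through
    ASSOCIATIVE = "associative"  # smoother movements; less deliberation per movement
    AUTONOMOUS = "autonomous"    # skill runs without conscious thought; refined by practice


def stage_for(practice_hours: float) -> Stage:
    """Map accumulated practice to a learning stage (illustrative cutoffs)."""
    if practice_hours < 10:
        return Stage.COGNITIVE
    if practice_hours < 100:
        return Stage.ASSOCIATIVE
    return Stage.AUTONOMOUS


for hours in (2, 40, 500):
    print(f"{hours:>3} h of practice -> {stage_for(hours).value} stage")
```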
Factors affecting psychomotor skills
Psychological feedback
Amount of practice
Task complexity
Work distribution
Motive-incentive conditions
Environmental factors
How motor behaviors are recorded
The motor cortices are involved in the formation and retention of memories and skills. When an individual learns physical movements, this leads to changes in the motor cortex: the more practiced a movement is, the stronger the neural encoding becomes. One study has noted that cortical areas include neurons that process movements, and that these neurons change their behavior during and after exposure to new motor tasks. Psychomotor learning is not, however, limited to the motor cortex.
See also
Movement in learning
Psychomotor agitation
Psychomotor retardation
References
External link
Psychomotor learning at the Encyclopædia Britannica
Autonomy

In developmental psychology and moral, political, and bioethical philosophy, autonomy is the capacity to make an informed, uncoerced decision. Autonomous organizations or institutions are independent or self-governing. Autonomy can also be defined from a human resources perspective, where it denotes a (relatively high) level of discretion granted to an employee in his or her work. In such cases, autonomy is known to generally increase job satisfaction. Self-actualized individuals are thought to operate autonomously of external expectations. In a medical context, respect for a patient's personal autonomy is considered one of many fundamental ethical principles in medicine.
Sociology
In the sociology of knowledge, a controversy over the boundaries of autonomy inhibited analysis of any concept beyond relative autonomy, until a typology of autonomy was created and developed within science and technology studies. According to it, the institution of science's existing autonomy is "reflexive autonomy": actors and structures within the scientific field are able to translate or to reflect diverse themes presented by social and political fields, as well as influence them regarding the thematic choices on research projects.
Institutional autonomy
Institutional autonomy is the capacity of an institution, as a legislator, to implement and pursue official goals. Autonomous institutions are responsible for finding sufficient resources or modifying their plans, programs, courses, responsibilities, and services accordingly. In doing so, they must contend with any obstacles that can occur, such as social pressure against cut-backs or socioeconomic difficulties. From a legislator's point of view, increasing institutional autonomy requires putting conditions of self-management and institutional self-governance in place. An increase in leadership and a redistribution of decision-making responsibilities would be beneficial to the search for resources.
Institutional autonomy was often seen as a synonym for self-determination, and many governments feared that it would lead regions toward irredentism or secession. But autonomy should be seen as a solution to self-determination struggles: self-determination is a movement toward independence, whereas autonomy is a way to accommodate distinct regions or groups within a country. Institutional autonomy can defuse conflicts regarding minorities and ethnic groups in a society, and allowing more autonomy to groups and institutions helps create diplomatic relationships between them and the central government.
Politics
In governmental parlance, autonomy refers to self-governance. An example of an autonomous jurisdiction was the former United States governance of the Philippine Islands. The Philippine Autonomy Act of 1916 provided the framework for the creation of an autonomous government under which the Filipino people had broader domestic autonomy than previously, although it reserved certain privileges to the United States to protect its sovereign rights and interests. Other examples include Kosovo (as the Socialist Autonomous Province of Kosovo) under the former Yugoslav government of Marshal Tito and Puntland Autonomous Region within Federal Republic of Somalia.
Although autonomous jurisdictions are often territorially defined self-governments, autonomous self-governing institutions may take a non-territorial form. Such non-territorial solutions include, for example, cultural autonomy in Estonia and Hungary, national minority councils in Serbia, and the Sámi parliaments in the Nordic countries.
Philosophy
Autonomy is a key concept that has a broad impact on different fields of philosophy. In metaphysical philosophy, the concept of autonomy is referenced in discussions about free will, fatalism, determinism, and agency. In moral philosophy, autonomy refers to subjecting oneself to objective moral law.
According to Kant
Immanuel Kant (1724–1804) defined autonomy by three themes regarding contemporary ethics. Firstly, autonomy as the right for one to make their own decisions excluding any interference from others. Secondly, autonomy as the capacity to make such decisions through one's own independence of mind and after personal reflection. Thirdly, as an ideal way of living life autonomously. In summary, autonomy is the moral right one possesses, or the capacity we have in order to think and make decisions for oneself providing some degree of control or power over the events that unfold within one's everyday life.
The context in which Kant addresses autonomy is in regards to moral theory, asking both foundational and abstract questions. He believed that in order for there to be morality, there must be autonomy. "Autonomous" is derived from the Greek word autonomos where 'auto' means self and 'nomos' means to govern (nomos: as can be seen in its usage in nomárchēs which means chief of the province). Kantian autonomy also provides a sense of rational autonomy, simply meaning one rationally possesses the motivation to govern their own life. Rational autonomy entails making your own decisions but it cannot be done solely in isolation. Cooperative rational interactions are required to both develop and exercise our ability to live in a world with others.
Kant argued that morality presupposes this autonomy in moral agents, since moral requirements are expressed in categorical imperatives. An imperative is categorical if it issues a valid command independent of personal desires or interests that would provide a reason for obeying the command. It is hypothetical if the validity of its command, if the reason why one can be expected to obey it, is the fact that one desires or is interested in something further that obedience to the command would entail. "Don't speed on the freeway if you don't want to be stopped by the police" is a hypothetical imperative. "It is wrong to break the law, so don't speed on the freeway" is a categorical imperative. The hypothetical command not to speed on the freeway is not valid for you if you do not care whether you are stopped by the police. The categorical command is valid for you either way. Autonomous moral agents can be expected to obey the command of a categorical imperative even if they lack a personal desire or interest in doing so. It remains an open question whether they will, however.
The Kantian concept of autonomy is often misconstrued, leaving out the important point about the autonomous agent's self-subjection to the moral law. It is thought that autonomy is fully explained as the ability to obey a categorical command independently of a personal desire or interest in doing so—or worse, that autonomy is "obeying" a categorical command independently of a natural desire or interest; and that heteronomy, its opposite, is acting instead on personal motives of the kind referenced in hypothetical imperatives.
In his Groundwork of the Metaphysic of Morals, Kant applied the concept of autonomy also to define the concept of personhood and human dignity. Autonomy, along with rationality, are seen by Kant as the two criteria for a meaningful life; Kant would consider a life lived without these not worth living, a life of value equal to that of a plant or insect. According to Kant, autonomy is part of the reason that we hold others morally accountable for their actions: human actions are morally praiseworthy or blameworthy in virtue of our autonomy, while non-autonomous beings such as plants or animals are not blameworthy, their actions being non-autonomous. Kant's position on crime and punishment is also influenced by his views on autonomy. Brainwashing or drugging criminals into being law-abiding citizens would be immoral, as it would not respect their autonomy; rehabilitation must instead be sought in a way that respects their autonomy and dignity as human beings.
According to Nietzsche
Friedrich Nietzsche wrote about autonomy and the moral fight. Autonomy in this sense is referred to as the free self and entails several aspects of the self, including self-respect and even self-love. This can be interpreted as influenced by Kant (self-respect) and Aristotle (self-love). For Nietzsche, valuing ethical autonomy can dissolve the conflict between love (self-love) and law (self-respect) which can then translate into reality through experiences of being self-responsible. Because Nietzsche defines having a sense of freedom with being responsible for one's own life, freedom and self-responsibility can be very much linked to autonomy.
According to Piaget
The Swiss philosopher Jean Piaget (1896–1980) believed that autonomy comes from within and results from a "free decision". It is of intrinsic value and the morality of autonomy is not only accepted but obligatory. When an attempt at social interchange occurs, it is reciprocal, ideal and natural for there to be autonomy regardless of why the collaboration with others has taken place. For Piaget, the term autonomous can be used to explain the idea that rules are self-chosen. By choosing which rules to follow or not, we are in turn determining our own behaviour.
Piaget studied the cognitive development of children by analyzing them during their games and through interviews, establishing (among other principles) that the children's moral maturation process occurred in two phases, the first of heteronomy and the second of autonomy:
Heteronomous reasoning: Rules are objective and unchanging. They must be obeyed literally, because an authority commands them, and they admit no exceptions or discussion. The basis of the rule is the superior authority (parents, adults, the State), which need not give reasons for the rules it imposes. Duties are conceived as given from the outside, and one follows rules mechanically, because they are simply the rules, or as a way to avoid punishment.
Autonomous reasoning: Rules are the product of an agreement and are therefore modifiable. They can be subject to interpretation and admit exceptions and objections. The basis of the rule is one's own acceptance of it, and its meaning has to be explained. Sanctions must be proportionate to the offence, it being assumed that offences can sometimes go unpunished, so that collective punishment is unacceptable if it falls on those who are not guilty; circumstances may even leave a guilty party unpunished. Duties are conceived as given from oneself, and moral motivation and sentiment are possible through what one believes to be right.
According to Kohlberg
The American psychologist Lawrence Kohlberg (1927–1987) continued the studies of Piaget. His studies collected information from different regions of the world to eliminate cultural variability, and focused on moral reasoning rather than on behavior or its consequences. Through interviews with adolescent and teenage boys who were asked to try to solve "moral dilemmas", Kohlberg went on to further develop the stages of moral development. The answers the boys provided could be one of two things: either they chose to obey a given law, authority figure or rule of some sort, or they chose to take actions that would serve a human need but in turn break this given rule or command.
The most popular moral dilemma asked involved the wife of a man approaching death due to a special type of cancer. Because the drug was too expensive for the husband to obtain on his own, and because the pharmacist who discovered and sold the drug had no compassion for him and only wanted profits, the husband stole it. Kohlberg asked these adolescent and teenage boys (10-, 13- and 16-year-olds) whether they thought that was what the husband should have done. Depending on their decisions, they provided answers to Kohlberg about the deeper rationales and thoughts that determined what they valued as important, and this value in turn determined the "structure" of their moral reasoning.
Kohlberg established three levels of morality, each of which is subdivided into two stages. They are read in a progressive sense: higher levels indicate greater autonomy. (A minimal data-structure sketch of the scheme follows the list below.)
Level 1: Premoral/Preconventional Morality: Standards are met (or not met) depending on the hedonistic or physical consequences.
[Stage 0: Egocentric Judgment: There is no moral concept independent of individual wishes, including a lack of concept of rules or obligations.]
Stage 1: Punishment-Obedience Orientation: The rule is obeyed only to avoid punishment. Physical consequences determine goodness or badness and power is deferred to unquestioningly with no respect for the human or moral value, or the meaning of these consequences. Concern is for the self.
Stage 2: Instrumental-Relativist Orientation: Morals are individualistic and egocentric. There is an exchange of interests but always under the point of view of satisfying personal needs. Elements of fairness and reciprocity are present but these are interpreted in a pragmatic way, instead of an experience of gratitude or justice. Egocentric in nature but beginning to incorporate the ability to see things from the perspective of others.
Level 2: Conventional Morality/Role Conformity: Rules are obeyed according to the established conventions of a society.
Stage 3: Good Boy–Nice Girl Orientation: Morals are conceived in accordance with the stereotypical social role. Rules are obeyed to obtain the approval of the immediate group and the right actions are judged based on what would please others or give the impression that one is a good person. Actions are evaluated according to intentions.
Stage 4: Law and Order Orientation: Morals are judged in accordance with the authority of the system, or the needs of the social order. Laws and order are prioritized.
Level 3: Postconventional Morality/Self-Accepted Moral Principles: Standards of moral behavior are internalized. Morals are governed by rational judgment, derived from a conscious reflection on the recognition of the value of the individual inside a conventionally established society.
Stage 5: Social Contract Orientation: There are individual rights and standards that have been lawfully established as basic universal values. Rules are agreed upon through due procedure, and society comes to consensus through critical examination in order to benefit the greater good.
Stage 6: Universal Principle Orientation: Abstract ethical principles are obeyed on a personal level in addition to societal rules and conventions. Universal principles of justice, reciprocity, equality and human dignity are internalized and if one fails to live up to these ideals, guilt or self-condemnation results.
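For readers who find a compact overview helpful, Kohlberg's scheme can be encoded as a small data structure, as the sketch below shows. It is purely illustrative: the level and stage names follow the list above (the bracketed Stage 0 is omitted), and the helper function is an invented example, not part of any psychometric instrument.

```python
# A minimal, illustrative encoding of Kohlberg's levels and stages.
# Names follow the list above (the bracketed Stage 0 is omitted); the
# structure is a hypothetical reading aid, not a psychometric instrument.

from dataclasses import dataclass


@dataclass(frozen=True)
class MoralStage:
    level: int   # 1 = preconventional, 2 = conventional, 3 = postconventional
    stage: int   # higher stage numbers indicate greater autonomy
    name: str


KOHLBERG_STAGES = [
    MoralStage(1, 1, "Punishment-Obedience Orientation"),
    MoralStage(1, 2, "Instrumental-Relativist Orientation"),
    MoralStage(2, 3, "Good Boy-Nice Girl Orientation"),
    MoralStage(2, 4, "Law and Order Orientation"),
    MoralStage(3, 5, "Social Contract Orientation"),
    MoralStage(3, 6, "Universal Principle Orientation"),
]


def more_autonomous(a: MoralStage, b: MoralStage) -> MoralStage:
    """Return whichever stage reflects greater autonomy (the higher stage)."""
    return a if a.stage >= b.stage else b


for s in KOHLBERG_STAGES:
    print(f"Level {s.level}, Stage {s.stage}: {s.name}")
```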
According to Audi
Robert Audi characterizes autonomy as the self-governing power to bring reasons to bear in directing one's conduct and influencing one's propositional attitudes. Traditionally, autonomy is only concerned with practical matters. But, as Audi's definition suggests, autonomy may be applied to responding to reasons at large, not just to practical reasons. Autonomy is closely related to freedom but the two can come apart. An example would be a political prisoner who is forced to make a statement in favor of his opponents in order to ensure that his loved ones are not harmed. As Audi points out, the prisoner lacks freedom but still has autonomy since his statement, though not reflecting his political ideals, is still an expression of his commitment to his loved ones.
Autonomy is often equated with self-legislation in the Kantian tradition. Self-legislation may be interpreted as laying down laws or principles that are to be followed. Audi agrees with this school in the sense that we should bring reasons to bear in a principled way. Responding to reasons by mere whim may still be considered free but not autonomous. A commitment to principles and projects, on the other hand, provides autonomous agents with an identity over time and gives them a sense of the kind of persons they want to be. But autonomy is neutral as to which principles or projects the agent endorses. So different autonomous agents may follow very different principles. But, as Audi points out, self-legislation is not sufficient for autonomy since laws that do not have any practical impact do not constitute autonomy. Some form of motivational force or executive power is necessary in order to get from mere self-legislation to self-government. This motivation may be inherent in the corresponding practical judgment itself, a position known as motivational internalism, or may come to the practical judgment externally in the form of some desire independent of the judgment, as motivational externalism holds.
In the Humean tradition, intrinsic desires are the reasons the autonomous agent should respond to. This theory is called instrumentalism. Audi rejects instrumentalism and suggests that we should adopt a position known as axiological objectivism. The central idea of this outlook is that objective values, and not subjective desires, are the sources of normativity and therefore determine what autonomous agents should do.
Child development
Autonomy in childhood and adolescence is when one strives to gain a sense of oneself as a separate, self-governing individual. Between the ages of one and three, during the second stage of Erikson's and Freud's stages of development, the psychosocial crisis that occurs is autonomy versus shame and doubt. The significant event during this stage is that children must learn to be autonomous; failure to do so may lead children to doubt their own abilities and feel ashamed. When children become autonomous, they can explore and acquire new skills. Autonomy has two vital aspects: an emotional component, in which one relies more on oneself than on one's parents, and a behavioural component, in which one makes decisions independently by using one's judgement. Styles of child rearing affect the development of a child's autonomy. Autonomy in adolescence is closely related to the quest for identity. In adolescence, parents and peers act as agents of influence; peer influence in early adolescence may help the process by which an adolescent gradually becomes more autonomous, growing less susceptible to parental or peer influence with age. In adolescence, the most important developmental task is to develop a healthy sense of autonomy.
Religion
In Christianity, autonomy is manifested as a partial self-governance on various levels of church administration. During the history of Christianity, there were two basic types of autonomy. Some important parishes and monasteries have been given special autonomous rights and privileges, and the best known example of monastic autonomy is the famous Eastern Orthodox monastic community on Mount Athos in Greece. On the other hand, administrative autonomy of entire ecclesiastical provinces has throughout history included various degrees of internal self-governance.
In ecclesiology of Eastern Orthodox Churches, there is a clear distinction between autonomy and autocephaly, since autocephalous churches have full self-governance and independence, while every autonomous church is subject to some autocephalous church, having a certain degree of internal self-governance. Since every autonomous church had its own historical path to ecclesiastical autonomy, there are significant differences between various autonomous churches in respect of their particular degrees of self-governance. For example, churches that are autonomous can have their highest-ranking bishops, such as an archbishop or metropolitan, appointed or confirmed by the patriarch of the mother church from which it was granted its autonomy, but generally they remain self-governing in many other respects.
In the history of Western Christianity the question of ecclesiastical autonomy was also one of the most important questions, especially during the first centuries of Christianity, since various archbishops and metropolitans in Western Europe often opposed the centralizing tendencies of the Church of Rome. The Catholic Church comprises 24 autonomous (sui iuris) churches in communion with the Holy See. Various denominations of Protestant churches usually have more decentralized power, and churches may be autonomous, thus having their own rules or laws of government, at the national, local, or even individual level.
Sartre takes up the concept of the Cartesian God as totally free and autonomous. He states that existence precedes essence, with God being the creator of the essences, eternal truths and the divine will. This pure freedom of God relates to human freedom and autonomy, where a human is not subjected to pre-existing ideas and values.
In the United States, the First Amendment restricts the federal government from establishing a national church, recognizing people's freedom to worship according to their own beliefs. For example, the American government has removed the church from its "sphere of authority" due to the churches' historical impact on politics and their authority over the public. This was the beginning of the disestablishment process. The Protestant churches in the United States had a significant impact on American culture in the nineteenth century, when they organized the establishment of schools, hospitals, orphanages, colleges, magazines, and so forth. This has given rise to the famous, though often misinterpreted, term "separation of church and state". Through disestablishment, these churches lost the legislative and financial support of the state.
The disestablishment process
The first disestablishment began with the introduction of the Bill of Rights. In the twentieth century, owing to the Great Depression of the 1930s and the end of the Second World War, the American churches, specifically the Protestant churches, were revived. This was the beginning of the second disestablishment, when churches had become popular again but held no legislative power. One of the reasons why the churches gained attendance and popularity was the baby boom, when soldiers came back from the Second World War and started their families. The large influx of newborns gave the churches a new wave of followers; however, these followers did not hold the same beliefs as their parents, and they brought about the political and religious revolutions of the 1960s.
During the 1960s, the collapse of the religious and cultural middle brought about the third disestablishment. Religion became more important to the individual and less so to the community. The changes brought by these revolutions significantly increased the personal autonomy of individuals: the lack of structural restraints gave them added freedom of choice. This concept is known as "new voluntarism", whereby individuals have free choice in how to be religious, and free choice in whether to be religious at all.
Medicine
In a medical context, respect for a patient's personal autonomy is considered one of many fundamental ethical principles in medicine. Autonomy can be defined as the ability of the person to make his or her own decisions. This faith in autonomy is the central premise of the concept of informed consent and shared decision making. This idea, while considered essential to today's practice of medicine, was developed in the last 50 years. According to Tom Beauchamp and James Childress (in Principles of Biomedical Ethics), the Nuremberg trials detailed accounts of horrifyingly exploitative medical "experiments" which violated the subjects' physical integrity and personal autonomy. These incidences prompted calls for safeguards in medical research, such as the Nuremberg Code which stressed the importance of voluntary participation in medical research. It is believed that the Nuremberg Code served as the premise for many current documents regarding research ethics.
Respect for autonomy became incorporated in health care, and patients could be allowed to make personal decisions about the health care services that they receive. Notably, autonomy has several aspects, as well as challenges, that affect health care operations. The manner in which a patient is handled may undermine or support the patient's autonomy, and for this reason the way a patient is communicated with becomes very crucial. A good relationship between a patient and a health care practitioner needs to be well defined to ensure that the patient's autonomy is respected: just as in any other life situation, a patient would not like to be under the control of another person. The move to emphasize respect for patients' autonomy arose from the vulnerabilities that were pointed out in regard to autonomy.
However, autonomy does not only apply in a research context. Users of the health care system have the right to be treated with respect for their autonomy instead of being dominated by the physician; such domination is referred to as paternalism. While paternalism is meant to be good for the patient overall, it can very easily interfere with autonomy. Through the therapeutic relationship, a thoughtful dialogue between the client and the physician may lead to better outcomes for the client, as he or she is more of a participant in decision-making.
There are many different definitions of autonomy, many of which place the individual in a social context. Relational autonomy, which suggests that a person is defined through their relationships with others, is increasingly considered in medicine, particularly in critical and end-of-life care. Supported autonomy suggests instead that in specific circumstances it may be necessary to temporarily compromise a person's autonomy in the short term in order to preserve their autonomy in the long term. Other definitions of autonomy imagine the person as a contained and self-sufficient being whose rights should not be compromised under any circumstance.
There are also differing views on whether modern health care systems should be shifting toward greater patient autonomy or a more paternalistic approach. For example, some argue that patient autonomy as currently practiced is plagued by flaws such as misconceptions of treatment and cultural differences, and that health care systems should shift toward greater paternalism on the part of the physician, given the physician's expertise. Others argue that there simply needs to be an increase in relational understanding between patients and health practitioners to improve patient autonomy.
One argument in favor of greater patient autonomy and its benefits is made by Dave deBronkart, who believes that in an age of technological advancement, patients can do much of their own research on medical issues from home. According to deBronkart, this helps to promote better discussions between patients and physicians during hospital visits, ultimately easing the workload of physicians. deBronkart argues that this leads to greater patient empowerment and a more educative health care system. In opposition to this view, technological advancements can sometimes be seen as an unfavorable way of promoting patient autonomy. For example, Greaney et al. argue that increasingly common self-testing medical procedures increase patient autonomy but may not promote what is best for the patient. In this argument, contrary to deBronkart, current perceptions of patient autonomy excessively oversell the benefits of individual autonomy and are not the most suitable way to treat patients. Instead, a more inclusive form of autonomy should be implemented: relational autonomy, which takes into consideration those close to the patient as well as the physician. These different concepts of autonomy can be troublesome, as the acting physician is faced with deciding which concept to implement in clinical practice. Autonomy is often referenced as one of the four pillars of medicine, alongside beneficence, justice and nonmaleficence.
Autonomy varies, and some patients, especially minors, find it overwhelming when faced with emergency situations. Issues arise in emergency rooms, where there may not be time to consider the principle of patient autonomy. Various ethical challenges are faced in these situations, when time is critical and patient consciousness may be limited. However, in such settings, where informed consent may be compromised, the working physician evaluates each individual case to make the most professional and ethically sound decision. For example, it is believed that neurosurgeons in such situations should generally do everything they can to respect patient autonomy. In a situation in which a patient is unable to make an autonomous decision, the neurosurgeon should discuss with the surrogate decision maker in order to aid in the decision-making process. Performing surgery on a patient without informed consent is generally thought to be ethically justified only when the neurosurgeon and his or her team judge that the patient lacks the capacity to make autonomous decisions. If the patient is capable of making an autonomous decision, these situations are generally less ethically strenuous, as the decision is typically respected.
Not every patient is capable of making an autonomous decision. For example, a commonly posed question is at what age children should be taking part in treatment decisions. This question arises because children develop differently, making it difficult to establish a standard age at which children should become more autonomous. Patients who are unable to make decisions pose a challenge to medical practitioners, since it becomes difficult to determine a patient's ability to decide. To some extent, it has been said that the emphasis on autonomy in health care has undermined practitioners' ability to improve their patients' health as necessary. This has created tension in the relationship between patients and health care practitioners, because as much as a physician wants to prevent a patient from suffering, the physician still has to respect autonomy. Beneficence is a principle allowing physicians to act responsibly in their practice and in the best interests of their patients, which may involve overriding autonomy. On the other hand, the gap between patient and physician has led to problems because, in other cases, patients have complained of not being adequately informed.
The seven elements of informed consent (as defined by Beauchamp and Childress) include threshold elements (competence and voluntariness), information elements (disclosure, recommendation, and understanding) and consent elements (decision and authorization). Some philosophers, such as Harry Frankfurt, consider Beauchamp and Childress's criteria insufficient. They claim that an action can only be considered autonomous if it involves the exercise of the capacity to form higher-order values about desires when acting intentionally. In other words, patients may understand their situation and choices, but they are not acting autonomously unless they are able to form value judgements about their reasons for choosing among treatment options.
In certain unique circumstances, government may have the right to temporarily override the right to bodily integrity in order to preserve the life and well-being of the person. Such action can be described using the principle of "supported autonomy", a concept developed to describe unique situations in mental health (examples include the forced feeding of a person dying from the eating disorder anorexia nervosa, or the temporary treatment of a person living with a psychotic disorder with antipsychotic medication). While controversial, the principle of supported autonomy aligns with the role of government to protect the life and liberty of its citizens. Terrence F. Ackerman has highlighted problems with these situations; he claims that by undertaking this course of action, physicians or governments run the risk of misinterpreting a conflict of values as a constraining effect of illness on a patient's autonomy.
Since the 1960s, there have been attempts to increase patient autonomy, including the requirement that physicians take bioethics courses during medical school. Despite large-scale commitment to promoting patient autonomy, public mistrust of medicine in developed countries has remained. Onora O'Neill has ascribed this lack of trust to medical institutions and professionals introducing measures that benefit themselves, not the patient. O'Neill claims that this focus on autonomy promotion has been at the expense of issues like distribution of healthcare resources and public health.
One proposal to increase patient autonomy is through the use of support staff, including medical assistants, physician assistants, nurse practitioners, nurses, and other staff who can promote patient interests and better patient care. Nurses especially can learn about patient beliefs and values in order to increase informed consent and possibly persuade the patient, through logic and reason, to entertain a certain treatment plan. This would promote both autonomy and beneficence while keeping the physician's integrity intact. Furthermore, Humphreys asserts that nurses should have professional autonomy within their scope of practice (35–37). Humphreys argues that if nurses exercise their professional autonomy more, there will be an increase in patient autonomy (35–37).
International human rights law
After the Second World War, there was a push for international human rights that came in many waves. Autonomy, alongside liberty, was a basic human right that formed a building block at the beginning of these layers. The Universal Declaration of Human Rights of 1948 mentions autonomy, or the legally protected right to individual self-determination, in article 22.
Documents such as the United Nations Declaration on the Rights of Indigenous Peoples reconfirm international human rights law, since those laws already existed, but the Declaration is also responsible for ensuring that the rights it highlights concerning autonomy, culture and integrity, and land are framed within an indigenous context, paying special attention to indigenous peoples' historical and contemporary circumstances.
Article 3 of the United Nations Declaration on the Rights of Indigenous Peoples likewise provides, through international law, human rights for indigenous individuals by giving them a right to self-determination, meaning they have full liberty to choose their political status and are able to pursue and improve their economic, social, and cultural status in society by developing it. Another example is article 4 of the same document, which gives them autonomous rights over their internal or local affairs and over how they can fund themselves in order to be able to self-govern.
Minorities in countries are also protected by international law; article 27 of the United Nations International Covenant on Civil and Political Rights (ICCPR) does so by allowing these individuals to enjoy their own culture and use their own language. According to the document, minorities in this sense are people from ethnic, religious, or linguistic groups.
The European Court of Human Rights is an international court created under the European Convention on Human Rights. The Convention, however, does not explicitly mention autonomy among the rights it grants individuals. Article 8 remedied this in the 2002 case of Pretty v the United Kingdom, a case involving assisted suicide, in which autonomy was invoked as a legal right in law. It was there that autonomy was distinguished and its reach into law was marked, making it the foundation for legal precedent in case law originating from the European Court of Human Rights.
The Yogyakarta Principles, a document with no binding effect in international human rights law, contend that "self-determination", used in the sense of autonomy over one's own matters, including informed consent and sexual and reproductive rights, is integral to one's self-defined gender identity, and reject any medical procedures as a requirement for legal recognition of the gender identity of transgender people. If eventually accepted by the international community in a treaty, these ideas would become human rights in law. The Convention on the Rights of Persons with Disabilities also defines autonomy as a principle of the rights of a person with disability, including "the freedom to make one's own choices, and independence of persons".
Celebrity culture on teenage autonomy
A study conducted by David C. Giles and John Maltby found that, after age-related factors were removed, high emotional autonomy was a significant predictor of celebrity interest, as was high attachment to peers combined with low attachment to parents. Patterns of intense personal interest in celebrities were found to occur in conjunction with low levels of closeness and security. Furthermore, the results suggested that adults who, during the transition away from parental attachment, developed a secondary group of pseudo-friends usually focus solely on one particular celebrity, which could be due to difficulties in making this transition.
Various uses
In computing, an autonomous peripheral is one that can be used with the computer turned off.
Within self-determination theory in psychology, autonomy refers to 'autonomy support versus control', "hypothesizing that autonomy-supportive social contexts tend to facilitate self-determined motivation, healthy development, and optimal functioning."
In mathematical analysis, an ordinary differential equation is said to be autonomous if it is time-independent (see the example after this list).
In linguistics, an autonomous language is one which is independent of other languages, for example, has a standard variety, grammar books, dictionaries or literature, etc.
In robotics, "autonomy means independence of control. This characterization implies that autonomy is a property of the relation between two agents, in the case of robotics, of the relations between the designer and the autonomous robot. Self-sufficiency, situatedness, learning or development, and evolution increase an agent's degree of autonomy.", according to Rolf Pfeifer.
In spaceflight, autonomy can also refer to crewed missions that are operating without control by ground controllers.
In economics, autonomous consumption is consumption expenditure when income levels are zero, making such spending autonomous of income (see the formula after this list).
In politics, autonomous territories are granted by states that wish to retain territorial integrity while responding to ethnic or indigenous demands for self-determination or independence (sovereignty).
In anti-establishment activism, an autonomous space is another name for a non-governmental social center or free space (for community interaction).
In social psychology, autonomy is a personality trait characterized by a focus on personal achievement, independence, and a preference for solitude, often labeled as an opposite of sociotropy.
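As a concrete illustration of the mathematical and economic usages above: an ordinary differential equation is autonomous when the independent variable (usually time) does not appear explicitly on the right-hand side,

\frac{dy}{dt} = f(y) \quad \text{(autonomous)}, \qquad \frac{dy}{dt} = f(t, y) \quad \text{(non-autonomous)}.

For instance, the logistic equation \frac{dy}{dt} = y(1 - y) is autonomous, while \frac{dy}{dt} = t\,y is not, because its rate of change depends explicitly on t. Similarly, the standard Keynesian consumption function is commonly written

C = a + b\,Y_d, \quad a > 0, \quad 0 < b < 1,

where the intercept a is autonomous consumption, the spending that occurs even when disposable income Y_d is zero, and b is the marginal propensity to consume, so that b\,Y_d is the induced component of consumption.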
Limits to autonomy
Autonomy can be limited, for instance by disabilities. Civil society organizations may achieve a degree of autonomy, albeit one nested within, and relative to, formal bureaucratic and administrative regimes. Community partners can therefore assume a rather nuanced hybridity of capture and autonomy, or a mutuality.
Semi-autonomy
The term semi-autonomy (coined with prefix semi- / "half") designates partial or limited autonomy. As a relative term, it is usually applied to various semi-autonomous entities or processes that are substantially or functionally limited, in comparison to other fully autonomous entities or processes.
Quasi-autonomy
The term quasi-autonomy (coined with prefix quasi- / "resembling" or "appearing") designates formally acquired or proclaimed, but functionally limited or constrained autonomy. As a descriptive term, it is usually applied to various quasi-autonomous entities or processes that are formally designated or labeled as autonomous, but in reality remain functionally dependent or influenced by some other entity or process. An example for such use of the term can be seen in common designation for quasi-autonomous non-governmental organizations.
See also
Autonomism
List of autonomous areas by country
Autonomy Day
Cornelius Castoriadis
Counterdependency
Direct democracy
Equality of autonomy
Essential facilities doctrine
Flat organization
Takis Fotopoulos
Home rule
Job autonomy
Personal boundaries
Self-governing colony
Self-sufficiency
Teaching for social justice
Viable system model
Workplace democracy
Notes
References
Citations
Sources
External links
Kastner, Jens. "Autonomy" (2015). University Bielefeld – Center for InterAmerican Studies.
"Self-sustainability strategies for Development Initiatives: What is self-sustainability and why is it so important?"
Ethical principles
Individualism
Organizational cybernetics
SPSS

SPSS Statistics is a statistical software suite developed by IBM for data management, advanced analytics, multivariate analysis, business intelligence, and criminal investigation. Long produced by SPSS Inc., it was acquired by IBM in 2009. Versions of the software released since 2015 have the brand name IBM SPSS Statistics.
The software name originally stood for Statistical Package for the Social Sciences (SPSS), reflecting the original market, then later changed to Statistical Product and Service Solutions.
Overview
SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, industries, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping and creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
The many features of SPSS Statistics are accessible via pull-down menus or can be programmed with a proprietary 4GL command syntax language. Command syntax programming has the benefits of reproducible output, simplifying repetitive tasks, and handling complex data manipulations and analyses. Additionally, some complex applications can only be programmed in syntax and are not accessible through the menu structure. The pull-down menu interface also generates command syntax: this can be displayed in the output, although the default settings have to be changed to make the syntax visible to the user. They can also be pasted into a syntax file using the "paste" button present in each menu. Programs can be run interactively or unattended, using the supplied Production Job Facility.
A "macro" language can be used to write command language subroutines. A Python programmability extension can access the information in the data dictionary and data and dynamically build command syntax programs. This extension, introduced in SPSS 14, replaced the less functional SAX Basic "scripts" for most purposes, although SaxBasic remains available. In addition, the Python extension allows SPSS to run any of the statistics in the free software package R. From version 14 onwards, SPSS can be driven externally by a Python or a VB.NET program using supplied "plug-ins". (From version 20 onwards, these two scripting facilities, as well as many scripts, are included on the installation media and are normally installed by default.)
SPSS Statistics places constraints on internal file structure, data types, data processing, and matching files, which together considerably simplify programming. SPSS datasets have a two-dimensional table structure, where the rows typically represent cases (such as individuals or households) and the columns represent measurements (such as age, sex, or household income). Only two data types are defined: numeric and text (or "string"). All data processing occurs sequentially case-by-case through the file (dataset). Files can be matched one-to-one and one-to-many, but not many-to-many. In addition to that cases-by-variables structure and processing, there is a separate Matrix session where one can process data as matrices using matrix and linear algebra operations.
The graphical user interface has two views which can be toggled. The 'Data View' shows a spreadsheet view of the cases (rows) and variables (columns). Unlike spreadsheets, the data cells can only contain numbers or text, and formulas cannot be stored in these cells. The 'Variable View' displays the metadata dictionary, where each row represents a variable and shows the variable name, variable label, value label(s), print width, measurement type, and a variety of other characteristics. Cells in both views can be manually edited, defining the file structure and allowing data entry without using command syntax. This may be sufficient for small datasets. Larger datasets such as statistical surveys are more often created in data entry software, or entered during computer-assisted personal interviewing, by scanning and using optical character recognition and optical mark recognition software, or by direct capture from online questionnaires. These datasets are then read into SPSS.
SPSS Statistics can read and write data from ASCII text files (including hierarchical files), other statistics packages, spreadsheets and databases. It can also read and write to external relational database tables via ODBC and SQL.
Statistical output is to a proprietary file format (*.spv file, supporting pivot tables) for which, in addition to the in-package viewer, a stand-alone reader can be downloaded. The proprietary output can be exported to text or Microsoft Word, PDF, Excel, and other formats. Alternatively, output can be captured as data (using the OMS command), as text, tab-delimited text, PDF, XLS, HTML, XML, SPSS dataset or a variety of graphic image formats (JPEG, PNG, BMP and EMF).
Several variants of SPSS Statistics exist. SPSS Statistics Gradpacks are highly discounted versions sold only to students. SPSS Statistics Server is a version of the software with a client/server architecture. Add-on packages can enhance the base software with additional features (examples include complex samples, which can adjust for clustered and stratified samples, and custom tables, which can create publication-ready tables). SPSS Statistics is available under either an annual or a monthly subscription license.
Version 25 of SPSS Statistics launched on August 8, 2017. It added new and advanced statistics, such as random effects solution results (GENLINMIXED), robust standard errors (GLM/UNIANOVA), and profile plots with error bars within the Advanced Statistics and Custom Tables add-ons. V25 also includes new Bayesian statistics capabilities, a method of statistical inference, and publication-ready charts, including new default templates and the ability to share charts with Microsoft Office applications.
Versions and ownership history
SPSS 1 - 1968
SPSS 2 - 1983
SPSS 5 - 1993
SPSS 6.1 - 1995
SPSS 7.5 - 1997
SPSS 8 - 1998
SPSS 9 - 1999
SPSS 10 - 1999
SPSS 11 - 2002
SPSS 12 - 2004
SPSS 13 - 2005
SPSS 14 - 2006
SPSS 15 - 2006
SPSS 16 - 2007
SPSS 17 - 2008
PASW 17 - 2009
PASW 18 - 2009
SPSS 19 - 2010
SPSS 20 - 2011
SPSS 21 - 2012
SPSS 22 - 2013
SPSS 23 - 2015
SPSS 24 - 2016, March
SPSS 25 - 2017, July
SPSS 26 - 2018
SPSS 27 - 2019, June (and 27.0.1 in November, 2020)
SPSS 28 - 2021, May
SPSS 29 - 2022, Sept
SPSS was released in its first version in 1968 as the Statistical Package for the Social Sciences (SPSS) after being developed by Norman H. Nie, Dale H. Bent, and C. Hadlai Hull. Those principals incorporated as SPSS Inc. in 1975. Early versions of SPSS Statistics were written in Fortran and designed for batch processing on mainframes, including for example IBM and ICL versions, originally using punched cards for data and program input. A processing run read a command file of SPSS commands and either a raw input file of fixed-format data with a single record type, or a 'getfile' of data saved by a previous run. To save precious computer time an 'edit' run could be done to check command syntax without analysing the data. From version 10 (SPSS-X) in 1983, data files could contain multiple record types.
Prior to SPSS 16.0, different versions of SPSS were available for Windows, Mac OS X and Unix.
SPSS Statistics version 13.0 for Mac OS X was not compatible with Intel-based Macintosh computers, due to the Rosetta emulation software causing errors in calculations. SPSS Statistics 15.0 for Windows needed a downloadable hotfix to be installed in order to be compatible with Windows Vista.
From version 16.0, the same version runs under Windows, Mac, and Linux. The graphical user interface is written in Java. The Mac OS version is provided as a Universal binary, making it fully compatible with both PowerPC and Intel-based Mac hardware.
SPSS Inc announced on July 28, 2009, that it was being acquired by IBM for US$1.2 billion. Because of a dispute about ownership of the name "SPSS", between 2009 and 2010, the product was referred to as PASW (Predictive Analytics SoftWare). As of January 2010, it became "SPSS: An IBM Company". Complete transfer of business to IBM was done by October 1, 2010. By that date, SPSS: An IBM Company ceased to exist. IBM SPSS is now fully integrated into the IBM Corporation, and is one of the brands under IBM Software Group's Business Analytics Portfolio, together with IBM Algorithmics, IBM Cognos and IBM OpenPages.
Companion software in the "IBM SPSS" family are used for data mining and text analytics (IBM SPSS Modeler), realtime credit scoring services (IBM SPSS Collaboration and Deployment Services), and structural equation modeling (IBM SPSS Amos).
SPSS Data Collection and SPSS Dimensions were sold in 2015 to UNICOM Systems, Inc., a division of UNICOM Global, and merged into the integrated software suite UNICOM Intelligence (survey design, survey deployment, data collection, data management and reporting).
IDA (Interactive Data Analysis)
IDA (Interactive Data Analysis) was a software package that originated at what was formerly the National Opinion Research Center (NORC), at the University of Chicago. Initially offered on the HP-2000, somewhat later, under the ownership of SPSS, it was also available on MUSIC/SP. Regression analysis was one of IDA's strong points.
SCSS - Conversational / Columnar SPSS
SCSS was a software product intended for online use on IBM mainframes. Although the "C" stood for "conversational", it also marked a distinction in how the data was stored: SCSS used a column-oriented rather than a row-oriented (internal) database. This gave good interactive response time for the SPSS Conversational Statistical System (SCSS), whose strong point, as with SPSS, was cross-tabulation.
Project NX
In October, 2020 IBM announced the start of an Early Access Program for the "New SPSS Statistics", codenamed Project NX. It contains "many of your favorite SPSS capabilities presented in a new easy to use interface, with integrated guidance, multiple tabs, improved graphs and much more".
In December, 2021, IBM opened up the Early Access Program for the next generation of SPSS Statistics for more users and shared more visuals about it.
See also
Comparison of statistical packages
JASP and jamovi, both open-source and free of charge alternatives, offering frequentist and Bayesian models
PSPP, a free SPSS replacement from the GNU Project
SPSS Modeler
References
Further reading
External links
Official SPSS User Community
50 years of SPSS history
Raynald Levesque's SPSS Tools – library of worked solutions for SPSS programmers (FAQ, command syntax; macros; scripts; Python)
Archives of SPSSX-L Discussion – SPSS Listserv active since 1996. Discusses programming, statistics and analysis
UCLA ATS Resources to help you learn SPSS – Resources for learning SPSS
UCLA ATS Technical Reports – Report 1 compares Stata, SAS, and SPSS against R (R is a language and environment for statistical computing and graphics).
SPSS Community – Support for developers of applications using SPSS products, including materials and examples of the Python and R programmability features
Biomedical Statistics - An educational website dedicated to statistical evaluation of biomedical data using SPSS software
IBM software
Business intelligence software
Java platform software
Science software for Linux
Proprietary commercial software for Linux
Data mining and machine learning software
Statistical software
Statistical programming languages
Econometrics software
Time series software
Data warehousing
Proprietary cross-platform software
Extract, transform, load tools
Mathematical optimization software
Numerical software
High- and low-level

High-level and low-level, as technical terms, are used to classify, describe and point to specific goals of a systematic operation, and are applied in a wide range of contexts, in domains as varied as computer science and business administration.
High-level describes operations that are more abstract and general in nature, where the overall goals and systemic features are typically concerned with the wider, macro system as a whole.
Low-level describes more specific individual components of a systematic operation, focusing on the details of rudimentary micro functions rather than macro, complex processes. Low-level classification is typically more concerned with individual components within the system and how they operate.
Differences
Due to the nature of complex systems, the high-level description will often be completely different from the low-level one, since the description each delivers is a consequence of the level at which the study is directed. For example,
there are features of an ant colony that are not features of any individual ant;
there are features of the human mind that are not known to be descriptive of individual neurons in the brain;
there are features of oceans which are not features of any individual water molecule; and
there are features of a human personality that are not features of any cell in a body.
Uses
In computer science, software is typically divided into two types: high-level end-user applications software (such as word processors, databases, video games, etc.), and low-level systems software (such as operating systems, hardware drivers, etc.). As such, high-level applications typically rely on low-level applications to function. In terms of programming, a high-level programming language is one which has a relatively high level of abstraction and manipulates conceptual functions in a structured manner. A low-level programming language is one, like assembly language, that contains commands closer to processor instructions (see the sketch after this list).
In formal methods, a high-level formal specification can be related to a low-level executable implementation (e.g., formally by mathematical proof using formal verification techniques).
In sociology and social anthropology, high-level descriptions would be terms like economy and political structure, and low-level descriptions would be individual people's motivations and work.
In neuroscience, low-level would relate to the functioning of a cell (or part of a cell, or molecule) and high level to the overall function or activity of a neural system.
In documentation, a high-level document contains the executive summary, the low-level documents the technical specifications.
In business, corporate strategy is a high-level description, a list of who does what jobs is a low-level description.
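To make the programming distinction in the computer-science item above concrete, the short sketch below uses Python's standard-library dis module to disassemble a high-level function into the low-level bytecode instructions the interpreter actually executes (the exact instruction names vary between Python versions):

import dis

def add(a, b):
    # One abstract, high-level statement...
    return a + b

# ...corresponds to several rudimentary low-level instructions,
# e.g. LOAD_FAST a, LOAD_FAST b, BINARY_ADD (BINARY_OP on newer
# versions), RETURN_VALUE.
dis.dis(add)

Bytecode is not processor assembly, but the contrast is the same in kind: each low-level instruction performs one small step, while the single high-level line expresses the overall intent.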
Examples
Climate is a high-level description of the actions of the atmosphere and oceans. Physics of water and gas molecules is a low-level description of the same system.
The instruction "write a creative poem on love" is a high-level instruction. The instruction "tighten the tendons in the dominant wrist to grip the pen" is a low-level description of an activity within that.
"Wikipedia is an encyclopedia" is a high-level description compared to "Wikipedia is a collection of textual articles on many topics". The former reflects a higher level view of organization, purpose, concept and structure, but does not explain what Wikipedia physically is. The latter is more detailed as to what exactly Wikipedia contains and how it's made up, but doesn't explain what its overall purpose and goals are. These are typical features of high-level and low-level descriptions, respectively.
As a more general matter, encyclopedias, such as Wikipedia, can be considered a more high-level source of information on a particular topic than one might find in, for example, a trade magazine or a scientific journal.
See also
Granularity
Dennett's three stances
Level of analysis
Meta-system
Systems theory
References
Systems theory
Complex systems theory
Formal methods
Abstraction
Training and development

Training and development involves improving the effectiveness of organizations and the individuals and teams within them. Training may be viewed as being related to immediate changes in effectiveness via organized instruction, while development is related to the progress of longer-term organizational and employee goals. While training and development technically have differing definitions, the terms are often used interchangeably. Training and development have historically been topics within adult education and applied psychology, but have within the last two decades become closely associated with human resources management, talent management, human resources development, instructional design, human factors, and knowledge management.
Skills training has taken on varying organizational forms across industrialized economies. Germany has an elaborate vocational training system, whereas the United States and the United Kingdom are considered to generally have weak ones.
History
Aspects of training and development have been linked to ancient civilizations around the world. Early training-related articles appeared in journals marketed to enslavers in the Antebellum South and training approaches and philosophies were discussed extensively by Booker T. Washington. Early academic publishing related to training included a 1918 article in the Journal of Applied Psychology, which explored an undergraduate curriculum designed for applied psychologists.
By the 1960s and 70s, the field began developing theories and conducting theory-based research since it was historically rooted in trial-and-error intervention research, and new training methods were developed, such as the use of computers, television, case studies, and role playing. The scope of training and development also expanded to include cross-cultural training, a focus on the development of the individual employee, and the use of new organization development literature to frame training programs.
The 1980s focused on how employees received and implemented training programs, and encouraged the collection of data for evaluation purposes, particularly management training programs. The development piece of training and development became increasingly popular in the 90s, with employees more frequently being influenced by the concept of lifelong learning. It was in this decade that research revealing the impact and importance of fostering a training and development-positive culture was first conducted.
The 21st century brought more research in topics such as team-training, such as cross-training, which emphasizes training in coworkers' responsibilities.
Training practice and methods
Training and development encompass three main activities: training, education, and development. Differing levels and types of development may be used depending on the roles of employees in an organisation.
The "stakeholders" in training and development are categorized into several classes. The sponsors of training and development are senior managers, and line managers are responsible for coaching, resources, and performance. The clients of training and development are business planners, while the participants are those who undergo the processes. The facilitators are human resource management staff and the providers are specialists in the field. Each of these groups has its own agenda and motivations, which sometimes conflict with the others'.
Since the 2000s, training has become more trainee-focused, which allows those being trained more flexibility and active learning opportunities. These active learning techniques include exploratory/discovery learning, error management training, guided exploration, and mastery training. Typical projects in the field include executive and supervisory/management development, new employee orientation, professional skills training, technical/job training, customer-service training, sales-and-marketing training, and health-and-safety training. Training is particularly critical in high-reliability organizations, which rely on high safety standards to prevent catastrophic damage to employees, equipment, or the environment (e.g. nuclear power plants and operating rooms).
The instructional systems design approach (often referred to as the ADDIE model) is often used for designing learning programs and used for instructional design, or the process of designing, developing, and delivering learning content. There are 5 phases in the ADDIE model:
Needs assessment: problem identification, training needs analysis, determination of the audience, identification of stakeholders' needs and required resources
Program design: mapping of learning intervention/implementation outline and evaluation methods
Program development: delivery method, production of learning outcomes, quality evaluation of learning outcome, development of communication strategy, required technology, and assessment and evaluation tools
Training delivery and implementation: participation in side-programs, training delivery, learning participation, and evaluation of business
Evaluation of training: formal evaluation, including the evaluation of learning and potential points of improvement
Many different training methods exist today, including both on- and off-the-job methods. Other training methods may include:
Apprenticeship training: training in which a worker entering the skilled trades is given thorough instruction and experience both on and off the job in the practical and theoretical aspects of the work
Co-operative programs and internship programs: training programs that combine practical, on-the-job experience with formal education, and are usually offered at colleges and universities
Classroom instruction: information is presented in lectures, demonstrations, films, and videotapes or through computer instruction
Self-directed learning: individuals work at their own pace during programmed instruction, which may include books, manuals, or computers that break down subject-matter content into highly-organized logical sequences that demand a continuous response on the trainee's part. It often includes the use of computer and/or online resources.
Audiovisual: methods used to teach the skills and procedures required for a number of jobs through audiovisual means
Simulation: used when it is not practical or safe to train people on the actual equipment or within the actual work environment
There is significant importance in training as it prepares employees for higher job responsibilities, shows employees they are valued, improves IT and computer processes, and tests the efficiency of new performance management systems. However, some believe training wastes time and money because, in certain cases, real life experience may be better than education, and organizations want to spend less, not more.
Needs assessments
Needs assessments, especially when the training is being conducted on a large-scale, are frequently conducted in order to gauge what needs to be trained, how it should be trained, and how extensively. Needs assessments in the training and development context often reveal employee and management-specific skills to develop (e.g. for new employees), organizational-wide problems to address (e.g. performance issues), adaptations needed to suit changing environments (e.g. new technology), or employee development needs (e.g. career planning). The needs assessment can predict the degree of effectiveness of training and development programs and how closely the needs were met, the execution of the training (i.e. how effective the trainer was), and trainee characteristics (e.g. motivation, cognitive abilities). Training effectiveness is typically done on an individual or team-level, with few studies investigating the impacts on organizations.
Principles
Aik and Tway (2006) estimated that only 20–30% of the training given to employees is used within the following month. To mitigate the issue, they recommended some general principles to follow to increase employees' desire to take part in the program. These include:
improving self-efficacy, which increases the learner's personal belief that they can fully comprehend the teachings
maintaining a positive attitude, as an uncooperative attitude towards learning could hinder the individual's capability to grasp the knowledge being provided
increasing competence, which is the ability for an individual to make good decisions efficiently
providing external motivators, such as a reward for the completion of the training or an extrinsic goal to follow
Motivation
Motivation is an internal process that influences an employee's behavior and willingness to achieve organizational goals. Creating a motivational environment within an organization can help employees achieve their highest level of productivity, and can create an engaged workforce that enhances individual and organizational performance. The model for motivation is represented by motivators separated into two different categories:
Intrinsic factors, which represent the internal factors of an individual, such as achievement recognition, responsibility, opportunity for meaningful work, involvement in decision making, and importance within the organization
Extrinsic factors, which are factors external to the individual, such as job security, salary, benefits, work conditions, and vacations
Both intrinsic and extrinsic motivators associate with employee performance in the workplace. A company's techniques to motivate employees may change over time depending on the current dynamics of the workplace.
Feedback
Traditional constructive feedback, also known as weakness-based feedback, can often be viewed as malicious from the employees' perspective. When feedback is interpreted negatively, employees lose motivation on the job, which affects their production level.
Reinforcement is another principle of employee training and development. Studies have shown that reinforcement directly influences employee learning, which is highly correlated with performance after training. Reinforcement-based training emphasizes the importance of communication between managers and trainees in the workplace. The more the training environment can be a positive, nurturing experience, the faster attendees are apt to learn.
Benefits
The benefits of the training and development of employees include:
increased productivity and performance in the workplace
uniformity of work processes
skills and team development
reduced supervision and wastage
a decrease in safety-related accidents
improved organizational structure, designs and morale
better knowledge of policies and organization's goals
improved customer valuation
enhancements in public service motivation among public employees
However, training and development may lead to adverse outcomes if it is not strategic and goal-oriented. Additionally, there is a lack of consensus on the long-term outcomes of training investments; and in the public sector, managers often hold conservative views about the effectiveness of training.
Barriers and access to training
Training and development are crucial to organizational performance, employee career advancement and engagement.
Disparities in training can be caused by several factors, including societal norms and cultural biases that significantly impact the distribution of training opportunities. Stereotypes and implicit biases can undermine the confidence and performance of minority groups to seek out training, affecting their career development.
The impact of excluding or limiting a person’s access to training and development opportunities can affect both the individual and the organization.
Disparities in training opportunities can adversely affect individuals from underrepresented groups, leading to slower career progression, reduced employee engagement, and limited professional growth. Individuals may experience lower self-esteem and decreased motivation due to a perceived or actual lack of access to development opportunities. For example, if a leadership training program does not have minority representation, individuals may lack the confidence to "break the glass ceiling" and seek out the opportunity for themselves.
When training opportunities are not equitably distributed, organizations may have reduced diversity in leadership and decision-making, which may stifle innovation and hinder organizational performance. Failure to address these disparities can lead to higher turnover rates and lower employee morale.
Management teams that are not diverse can be self-replicating as senior leaders’ demographic characteristics significantly impact the types of programs, policies and practices implemented in the organisation – i.e. there are more likely to be diversity programs if the management team is also diverse.
To address these disparities, organizations can implement diversity policies, provide bias training, and establish mentorship programs to support underrepresented groups. These may include:
implementing inclusive policies for addressing disparities: organizations should establish diversity and inclusion programs that specifically target training and development opportunities for underrepresented groups. These should focus on opportunities for future managers at the bottom of the hierarchy, as advancement to lower-level and middle-level positions is crucial for promotion to upper-level management. Such policies can help ensure employees have equal access to career advancement resources and can strengthen mechanisms for reporting discrimination or advancement barriers. Some efforts to support diversity and inclusion commitments in workplaces may be enshrined in law, such as the New Zealand Public Service Act 2020.
Developing mentorship and sponsorship programs: these programs can support underrepresented groups by providing them with guidance, networking opportunities, and advocacy within the organisation. Creating supportive networks for minority and gender groups can provide safe spaces for people identifying as minorities to develop programs that are suited to them and to provide a united voice to report ongoing discrimination.
Using data to track and address disparities in training opportunities: this may include censuses or regular pulse surveys or records of learning that are linked to a person’s self-identified attributes.
Occupation
The Occupational Information Network cites training and development specialists as having a bright outlook, meaning that the occupation will grow rapidly or have several job openings in the next few years. Related professions include training and development managers, (chief) learning officers, industrial-organizational psychologists, and organization development consultants. Training and development specialists are equipped with the tools to conduct needs analyses, build training programs to suit the organization's needs by using various training techniques, create training materials, and execute and guide training programs.
See also
Adult education
Andragogy
Microtraining
References
Further reading
Thelen, Kathleen. 2004. How Institutions Evolve: The Political Economy of Skills in Germany, Britain, the United States, and Japan. Cambridge University Press.
Human resource management
Organizational performance management
Training
Learning
Applied psychology
Personal development
Self-regulation theory

Self-regulation theory (SRT) is a system of conscious, personal management that involves the process of guiding one's own thoughts, behaviors and feelings to reach goals. Self-regulation consists of several stages. In these stages, individuals must function as contributors to their own motivation, behavior, and development within a network of reciprocally interacting influences.
Roy Baumeister, one of the leading social psychologists who have studied self-regulation, claims it has four components: standards of desirable behavior, motivation to meet standards, monitoring of situations and thoughts that precede breaking said standards and, lastly, willpower. Baumeister, along with other colleagues, developed three models of self-regulation designed to explain its cognitive accessibility: self-regulation as a knowledge structure, a strength, or a skill. Studies have generally supported the strength model, in which self-regulation draws on a limited resource in the brain, so that only a given amount of self-regulation can occur before that resource is depleted.
SRT can be applied to:
Impulse control, the management of short-term desires. People with low impulse control are prone to acting on immediate desires. This is one route for such people to find their way to jail as many criminal acts occur in the heat of the moment. For non-violent people it can lead to losing friends through careless outbursts, or financial problems caused by making too many impulsive purchases.
The cognitive bias known as illusion of control. To the extent that people are driven by internal goals concerned with the exercise of control over their environment, they will seek to reassert control in conditions of chaos, uncertainty or stress. Failing genuine control, one coping strategy will be to fall back on defensive attributions of control—leading to illusions of control (Fenton-O'Creevy et al., 2003).
Goal attainment and motivation
Sickness behavior
SRT consists of several stages. First, the patient deliberately monitors one's own behavior and evaluates how this behavior affects one's health. If the desired effect is not realized, the patient changes personal behavior. If the desired effect is realized, the patient reinforces the effect by continuing the behavior. (Kanfer 1970;1971;1980)
Another approach is for the patient to realize a personal health issue and understand the factors involved in that issue. The patient must decide upon an action plan for resolving the health issue. The patient will need to deliberately monitor the results in order to appraise the effects, checking for any necessary changes in the action plan. (Leventhal & Nerenz 1984)
Other factors that can help the patient reach their own goal of personal health include helping them understand personal and community views of the illness, appraising the risks involved, and giving them potential problem-solving and coping skills. Four components of self-regulation described by Baumeister et al. (2007) are:
Standards: Of desirable behavior.
Motivation: To meet standards.
Monitoring: Of situations and thoughts that precede breaking standards.
Willpower: Internal strength to control urges
History and contributors
Albert Bandura
There have been numerous researchers, psychologists and scientists who have studied self-regulatory processes. Albert Bandura, a cognitive psychologist, made significant contributions focusing on the acquisition of behaviors that led to the social cognitive theory and social learning theory. His work brought together behavioral and cognitive components, and he concluded that "humans are able to control their behavior through a process known as self-regulation." This led to his well-known process comprising self-observation, judgment and self-response. Self-observation (also known as introspection) involves assessing one's own thoughts and feelings in order to inform and motivate the individual to work towards goal setting and become influenced by behavioral changes. Judgment involves an individual comparing his or her performance to personal or created standards. Lastly, self-response is applied: an individual may reward or punish themselves for success or failure in meeting standards. An example of self-response would be rewarding oneself with an extra slice of pie for doing well on an exam.
Dale Schunk
According to Schunk (2012), Lev Vygotsky, a Russian psychologist and a major influence on the rise of constructivism, believed that self-regulation involves the coordination of cognitive processes such as planning, synthesizing and formulating concepts (Henderson & Cunningham, 1994); however, such coordination does not proceed independently of the individual's social environment and culture. In fact, self-regulation includes the gradual internalization of language and concepts. Schunk's Learning Theories: An Educational Perspective is said to give a contemporary and historical overview of learning theories for undergraduate and graduate learners.
Roy Baumeister
As a widely studied theory, SRT was also greatly influenced by the well-known social psychologist Roy Baumeister. He described the ability to self-regulate as limited in capacity, and through this he coined the term ego depletion. The four components of self-regulation theory described by Roy Baumeister are standards of desirable behavior, motivation to meet standards, monitoring of situations and thoughts that precede breaking standards, and willpower, or the internal strength to control urges. In his paper titled Self-Regulation Failure: An Overview, Baumeister expresses that self-regulation is complex and multifaceted, and lays out his "three ingredients" of self-regulation as a framework for understanding self-regulation failure.
Research
Many studies have been done to test different variables regarding self-regulation. Albert Bandura studied self-regulation before, after and during the response. He created the triangle of reciprocal determinism that includes behavior, environment and the person (cognitive, emotional and physical factors) that all influence one another. Bandura concluded that the processes of goal attainment and motivation stem from an equal interaction of self-observation, self-reaction, self-evaluation and self-efficacy.
In addition to Bandura's work, psychologists Muraven, Tice and Baumeister conducted a study of self-control as a limited resource. They suggested there were three competing models of self-regulation: self-regulation as a strength, as a knowledge structure, and as a skill. In the strength model, they indicated that self-regulation could be considered a strength because it requires willpower and is thus a limited resource; failure to self-regulate could then be explained by depletion of this resource. In the knowledge-structure model, they theorized that exerting self-control involves a certain amount of knowledge, so, as with any learned technique, failure to self-regulate could be explained by insufficient knowledge. Lastly, the model of self-regulation as a skill held that self-regulation is built up over time and cannot be diminished; failure to exert self-control would therefore be explained by a lack of skill. They found that self-regulation as a strength is the most feasible model, due to studies suggesting that self-regulation is a limited resource.
Dewall, Baumeister, Gailliot and Maner performed a series of experiments instructing participants to perform ego depletion tasks to diminish the self-regulatory resource in the brain, that they theorized to be glucose. This included tasks that required participants to break a familiar habit, where they read an essay and circled words containing the letter 'e' for the first task, then were asked to break that habit by performing a second task where they circled words containing 'e' and/or 'a'. Following this trial, participants were randomly assigned to either the glucose category, where they drank a glass of lemonade made with sugar, or the control group, with lemonade made from Splenda. They were then asked their individual likelihoods of helping certain people in hypothetical situations, for both kin and non-kin and found that excluding kin, people were much less likely to help a person in need if they were in the control group (with Splenda) than if they had replenished their brain glucose supply with the lemonade containing real sugar. This study also supports the model for self-regulation as a strength because it confirms it is a limited resource.
Baumeister and colleagues expanded on this and determined the four components to self-regulation. Those include standards of desirable behavior, motivation to meet these standards, monitoring of situations and thoughts that precede breaking standards and willpower.
Applications and examples
Impulse control in self-regulation involves the separation of our immediate impulses and long-term desires. We can plan, evaluate our actions, and refrain from doing things we will regret. Research shows that self-regulation is a strength necessary for emotional well-being. Violation of one's deepest values results in feelings of guilt, which will undermine well-being. The illusion of control involves people overestimating their own ability to control events: when an event occurs, an individual may feel a sense of control over an outcome that they demonstrably do not influence. This emphasizes the importance of the perception of control over life events.
Self-regulated learning is the process of taking control of and evaluating one's own learning and behavior. It emphasizes control by the individual, who monitors, directs and regulates actions toward goals of information acquisition. In goal attainment, self-regulation is generally described in terms of the four components above: standards, the desirable behavior; motivation, to meet the standards; monitoring, of situations and thoughts that precede breaking standards; and willpower, the internal strength to control urges.
Illness behavior in self-regulation deals with the tension that arises between holding on to and letting go of important values and goals as those are threatened by disease processes. People who have poor self-regulatory skills may struggle to sustain relationships or hold jobs. Sayette (2004) describes failures in self-regulation in two categories: under-regulation and misregulation. Under-regulation occurs when people fail to control themselves, whereas misregulation occurs when people exert control but it does not bring about the desired goal (Sayette, 2004).
Criticisms/challenges
One challenge of self-regulation is that researchers often struggle with conceptualizing and operationalizing it (Carver & Scheier, 1990). The system of self-regulation comprises a complex set of functions, including cognition, problem solving, decision making and metacognition.
Ego depletion refers to self-control or willpower drawing from a limited pool of mental resources. When these resources run low, self-control is typically impaired, a state described as ego depletion. Self-control plays a valuable role in the functioning of the self. The illusion of control involves the overestimation of an individual's ability to control certain events: it occurs when someone feels a sense of control over outcomes that they do not actually possess. Psychologists have consistently emphasized the importance of perceptions of control over life events. Heider proposed that humans have a strong motive to control their environment.
Reciprocal determinism is a theory proposed by Albert Bandura, stating that a person's behavior is influenced both by personal factors and by the social environment. Bandura also acknowledges that an individual's behavior and personal factors may in turn impact the environment. These interactions can involve skills that either under- or overcompensate for the ego and thus do not benefit the outcome of the situation.
Recently, Baumeister's strength model of ego depletion has been criticized in multiple ways. Meta-analyses found little evidence for the strength model of self-regulation or for glucose as the limited resource that is depleted. A pre-registered trial did not find any evidence for ego depletion, though several commentaries have raised criticism of that particular study.
In summary, many central assumptions of the strength model of self-regulation appear to be in need of revision; in particular, the view of self-regulation as a limited resource that can be depleted, with glucose as the fuel that is consumed, seems hardly defensible without major revisions.
Conclusion
Self-regulation can be applied to many aspects of everyday life, including social situations, personal health management, impulse control and more. To the extent that the strength model holds, ego depletion tasks can temporarily tax a person's self-regulatory capabilities, and such depletion has been theorized to reduce willingness to help people in need, excluding members of an individual's kin. Many researchers have contributed to these findings, including Albert Bandura, Roy Baumeister and Robert Wood.
See also
Rubicon model
Emotional self-regulation
References
Self-control
Type A and Type B personality theory

The Type A and Type B personality concept describes two contrasting personality types. In this hypothesis, personalities that are more competitive, highly organized, ambitious, impatient, highly aware of time management, or aggressive are labeled Type A, while more relaxed, "receptive", less "neurotic" and "frantic" personalities are labeled Type B.
The two cardiologists who developed this theory, Meyer Friedman and Ray Rosenman, came to believe that Type A personalities had a greater chance of developing coronary heart disease. Following the results of further studies, and considerable controversy about the role of tobacco industry funding of early research in this area, some reject, either partially or completely, the link between Type A personality and coronary disease. Nevertheless, this research had a significant effect on the development of the health psychology field, in which psychologists look at how an individual's mental state affects physical health.
History
Type A personality behavior was first described as a potential risk factor for heart disease in the 1950s by cardiologists Meyer Friedman and Ray Rosenman. They credit their insight to an upholsterer who called to their attention the peculiar fact that the chairs in their waiting rooms were worn out only on the arms and on the front edge of the seat. This suggested to Friedman and Rosenman that their patients were getting up from the chairs frequently and were otherwise waiting anxiously. After an eight-and-a-half-year-long study of healthy men between the ages of 35 and 59, Friedman and Rosenman estimated that Type A behavior more than doubled the risk of coronary heart disease in otherwise healthy individuals. The individuals enrolled in this study were followed well beyond the original time frame of the study. Participants were asked to fill out a questionnaire that asked questions like "Do you feel guilty if you use spare time to relax?" and "Do you generally move, walk, and eat rapidly?" Subsequent analysis indicated that although Type A personality is associated with the incidence of coronary heart disease, it does not seem to be a risk factor for mortality. Although originally called "Type A personality" by Friedman and Rosenman, it has since been conceptualized as the Type A behavior pattern.
The types
Type A
The hypothesis describes Type A individuals as outgoing, ambitious, rigidly organized, highly status-conscious, impatient, anxious, proactive, and concerned with time management. People with Type A personalities are often high-achieving workaholics. They push themselves with deadlines, and hate both delays and ambivalence. People with Type A personalities experience more job-related stress and less job satisfaction. They tend to set high expectations for themselves, and may believe others have these same high expectations of them as well. Those with Type A personalities do not always outperform those with Type B personalities: depending on the task and the individual's sense of time urgency and control, Type A behavior can lead to poor results when complex decisions must be made. However, research has shown that Type A individuals are in general associated with higher performance and productivity. Moreover, Type A students tend to earn higher grades than Type B students, and Type A faculty members were shown to be more productive than their Type B counterparts (Taylor, Locke, Lee, & Gist, 1984).
In his 1996 book, Type A Behavior: Its Diagnosis and Treatment, Friedman suggests that dangerous Type A behavior is expressed through three major symptoms: (1) free-floating hostility, which can be triggered by even minor incidents; (2) time urgency and impatience, which causes irritation and exasperation usually described as being "short-fused"; and (3) a competitive drive, which causes stress and an achievement-driven mentality. The first of these symptoms is believed to be covert and therefore less observable, while the other two are more overt.
Type A people were said to be hasty, impatient, impulsive, hyperalert, potentially hostile, and angry. Research has also shown that those with Type A personalities may use certain defenses or ways of dealing with reality to avoid difficult realizations. For example, one study found that those with Type A personality are more likely than Type B to show higher levels of denial in stressful situations.
There are two main methods of assessing Type A behavior: the structured interview (SI) developed by Friedman and Rosenman, and the Jenkins Activity Survey (JAS). The SI assessment involves an interviewer measuring a person's emotional, nonverbal, and verbal responses (expressive style). The JAS is a self-report questionnaire with three main categories: Speed and Impatience, Job Involvement, and Hard-Driving Competitiveness.
Individuals with Type A personalities have often been linked to higher rates of coronary heart disease, higher morbidity rates, and other undesirable physical outcomes due to their higher levels of stress, impatience, and competitiveness.
Type B
Type B is a behavior pattern lacking in Type A behaviors. A-B personality is a continuum along which one leans toward being more Type A or non-Type A (Type B).
The hypothesis is that Type B individuals live at lower stress levels. They typically work steadily and may enjoy achievement, although they have a greater tendency to disregard physical or mental stress when they do not achieve. When faced with competition, they may focus less on winning or losing than their Type A counterparts, and more on enjoying the game. Type B individuals are also more likely to have a poorer sense of time.
Type B personality types are more tolerant than individuals in the Type A category. This can be evident in their relationship style, one that members of upper management tend to prefer. Type B individuals can "...see things from a global perspective, encourage teamwork, and exercise patience in decision making..."
Interactions between Type A and Type B
Type A individuals' proclivity for competition and aggression is illustrated in their interactions with other Type As and Type Bs. When playing a modified Prisoner's Dilemma game, Type A individuals elicited more competitiveness and angry feelings from both Type A and Type B opponents than did the Type B individuals. Type A individuals punished their Type A counterparts more than their Type B counterparts, and more than Type Bs punished other Type Bs. The rivalry between Type A individuals was shown by more aggressive behavior in their interactions, including initial antisocial responses, refusal to cooperate, verbal threats, and behavioral challenges.
A common misconception is that having a Type A personality is better than having a Type B personality. This largely comes into play in the workforce because people with Type A personalities are often viewed as very hardworking, highly motivated, and competitive, while Type B personalities often do not feel a sense of urgency to get projects completed and are more relaxed and easy-going. In reality, both personality types are required and bring their own set of strengths to the workplace.
Criticism
Friedman et al. (1986) conducted a randomized controlled trial on 862 male and female post-myocardial infarction patients, ruling out (by probabilistic equivalence) diet and other confounds. Subjects in the control group received group cardiac counseling; subjects in the treatment group received cardiac counseling plus Type A counseling; and a comparison group received no group counseling of any kind. The recurrence rate was 21% in the control group and 13% in the treatment group, a strong and statistically significant (p < .005) difference, whereas the comparison group experienced a 28% recurrence rate. The investigative studies following Friedman and Rosenman's discovery compared Type A behavior to independent coronary risk factors such as hypertension and smoking; in contrast, the results here suggest that the negative effects on cardiovascular health associated with Type A personality can be mitigated by modifying Type A behavior patterns.
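As a rough illustration of why a 21% versus 13% recurrence difference reaches significance, the sketch below runs a standard two-proportion z-test in Python. The arm sizes are hypothetical, since the text reports only the 862 total and the percentages, so the output illustrates the method rather than reproducing the study's actual analysis.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))           # two-sided p-value
    return z, p_value

# Hypothetical split of the 862 patients between the two counseled arms:
n_treat, n_ctrl = 592, 270
x_treat = round(0.13 * n_treat)  # 13% recurrence in the treatment arm
x_ctrl = round(0.21 * n_ctrl)    # 21% recurrence in the control arm

z, p = two_proportion_z_test(x_treat, n_treat, x_ctrl, n_ctrl)
print(f"z = {z:.2f}, p = {p:.4f}")  # with these assumed sizes, p falls below .005
print(f"relative risk reduction ~ {(0.21 - 0.13) / 0.21:.0%}")  # about 38%
```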
Funding by tobacco companies
Further discrediting the so-called Type A behavior pattern (TABP), a 2012 study based on a search of the Truth Tobacco Industry Documents suggests that the pattern of initially promising results followed by negative findings may be partly explained by the tobacco industry's involvement in TABP research, undertaken to undermine the scientific evidence on smoking and health. Documents indicate that the tobacco industry first became interested in the TABP around 1959, when the Tobacco Institute Research Committee received a funding application from New York University to investigate the relationship between smoking and personality. The industry's interest in TABP lasted at least four decades, until the late 1990s, and involved substantial funding to key researchers who were encouraged to show that smoking merely correlated with a personality type prone to coronary heart disease (CHD) and cancer. Until the early 1980s, the industry's strategy thus consisted of suggesting that the risks of smoking were caused by psychological characteristics of individual smokers rather than by tobacco products, framing the causes of cancer as multifactorial with stress as a key contributing factor. Philip Morris (today Altria) and RJ Reynolds helped generate substantial evidence to support these claims by funding workshops and research aiming to educate about and alter TABP in order to reduce risks of CHD and cancer. Moreover, Philip Morris was the primary funder of the Meyer Friedman Institute, for example supporting its "crown-jewel" trial on the effectiveness of reducing TABP, whose expected findings could discredit studies that associated smoking with CHD and cancer but failed to control for Type A behavior.
In 1994, Friedman wrote to the US Occupational Safety and Health Administration criticizing restrictions on indoor smoking intended to reduce CHD, claiming the evidence remained unreliable because it did not account for the significant confounder of Type A behavior, although by then TABP had proven significant in only three of twelve studies. Though apparently unpaid for, this letter was approved by, and blind-copied to, Philip Morris, and Friedman falsely claimed to receive funding largely from the National Heart, Lung and Blood Institute.
When TABP finally became untenable, Philip Morris supported research on its hostility component, allowing Vice President Jetson Lincoln to explain the lethality of passive smoking as the result of stress exerted on the non-smoking spouse by media reports claiming that the smoking spouse was slowly killing themselves. Examined in this light, the close relationship between the most recent review on TABP and CHD and the tobacco industry becomes evident: of thirteen etiologic studies in the review, only four reported positive findings, three of which had a direct or indirect link to the industry. On the whole, most TABP studies had no relationship to the tobacco lobby, but the majority of those with positive findings did. Furthermore, TABP was used as a litigation defence, similar to psychosocial stress. Petticrew et al. thus showed that the tobacco industry substantially helped generate the scientific controversy on TABP, contributing to the enduring lay popularity of, and prejudice in favor of, the Type A personality concept even though it has been scientifically discredited.
Other issues
Some scholars argue that Type A behavior is not a good predictor of coronary heart disease. According to research by Redford Williams of Duke University, the hostility component of Type A personality is the only significant risk factor: it is a high level of expressed anger and hostility, not the other elements of Type A behavior, that constitutes the problem. Research by Hecker et al. (1988) likewise showed that the hostility component of the Type A description was predictive of cardiac disease. Subsequent research focused on whether different components of Type A behavior, such as hostility, depression, and anxiety, predict cardiac disease.
The initial study that pointed to the association of Type A personality and heart attacks had a massive number of questions under consideration, and when many questions are tested, the probability of a false positive is high. A study undertaken by the U.S. National Institute on Aging, Sardinian and Italian researchers, and biostatisticians from the University of Michigan specifically tested for a direct relationship between coronary heart disease and Type A personality, and the results indicated that no such relation exists. A simple explanation is that the initial finding was a chance result of the many questions under consideration.
Other studies
A study (that was later questioned for implausible results and considered an unsafe publication) tested the effect of psychosocial variables, in particular personality and stress, as risk factors for cancer and coronary heart disease (CHD). In this study, four personality types were recorded: Type 1 is cancer-prone, Type 2 is CHD-prone, Type 3 alternates between behaviors characteristic of Types 1 and 2, and Type 4 is a healthy, autonomous type hypothesized to survive best. The data suggest that Type 1 probands die mainly from cancer and Type 2 from CHD, whereas Type 3 and especially Type 4 probands show a much lower death rate. Two additional personality types were measured: Type 5, a rational, anti-emotional type showing characteristics common to Types 1 and 2, and Type 6, which shows psychopathic tendencies and is prone to drug addiction and AIDS.
While most studies attempt to show the correlation between personality types and coronary heart disease, studies (also later questioned for implausible results and considered unsafe) suggested that mental attitudes constitute an important prognostic factor for cancer, and that behavior therapy should be used as a method of treatment for cancer-prone patients. In such therapy, the patient is taught to express their emotions more freely in a socially acceptable manner, to become autonomous, and to stand up for their rights, as well as how to cope with stress-producing situations more successfully. These studies claimed that therapy was markedly effective in preventing death from cancer and CHD. Other measures of therapy, such as group therapy, were also attempted; their effects were not as dramatic as behavior therapy's, but still showed improvement in preventing death among cancer and CHD patients.
From the study above, several conclusions were drawn: that a relationship exists between personality and cancer, and between personality and coronary heart disease; that personality type acts as a risk factor for disease and interacts synergistically with other risk factors, such as smoking and heredity; and that behavior therapy can significantly reduce the likelihood of cancer or coronary heart disease mortality. The studies also suggest that bodily and mental disease arise from each other: mental disorders can arise from physical causes, and likewise physical disorders from mental causes. While Type A personality did not show a strong direct relationship between its attributes and the cause of coronary heart disease, these other personality types showed strong influences on both cancer-prone patients and those prone to coronary heart disease.
A study published in the International Journal of Behavioral Medicine re-examined the association of the Type A concept with cardiovascular (CVD) and non-cardiovascular (non-CVD) mortality by using a long follow-up (on average 20.6 years) of a large population-based sample of elderly males (N = 2,682), applying multiple Type A measures at baseline, and looking separately at early and later follow-up years. The study sample comprised participants of the Kuopio Ischemic Heart Disease Risk Factor Study (KIHD), a randomly selected representative sample of Eastern Finnish men aged 42–60 years at baseline in the 1980s. They were followed up until the end of 2011 through linkage with the National Death Registry. Four self-administered scales, the Bortner Short Rating Scale, the Framingham Type A Behavior Pattern Scale, the Jenkins Activity Survey, and the Finnish Type A Scale, were used for Type A assessment at the start of follow-up. Type A measures were inconsistently associated with cardiovascular mortality, and most associations were non-significant; some scales suggested a slightly decreased, rather than increased, risk of CVD death during the follow-up. Associations with non-cardiovascular deaths were even weaker. The study's findings further suggest that there is no evidence to support Type A as a risk factor for CVD or non-CVD mortality.
Substance use disorder
A 1998 study by Ball et al. examined differences between Type A and Type B personalities in substance use. Their results showed that Type B personalities had more severe issues with substance use disorders than Type A personalities. They also found that more users with Type B personalities had been diagnosed with a personality disorder than users with Type A personalities. Type B personalities were rated higher than Type A personalities on symptoms of all DSM-IV personality disorders, with the exception of schizoid personality disorder.
The study tested 370 outpatients and inpatients who used alcohol, cocaine, and opiates, and the personality types and distinctions were replicated. Within the personality dimensions, Type A and Type B exhibited different results: Type A personality showed higher levels of agreeableness, conscientiousness, cooperativeness, and self-directedness, whereas Type B personality showed higher levels of neuroticism, novelty seeking, and harm avoidance. These dimensions can correlate strongly with mental illness or substance use disorders. Furthermore, these effects remained even after controlling for antisocial personality and psychiatric symptoms.
See also
Alpha (ethology)
Extraversion and introversion
Humorism
Type D personality
References
Further reading
Ischemic heart diseases
Personality typologies | 0.761751 | 0.999295 | 0.761213 |
Health belief model

The health belief model (HBM) is a social psychological health behavior change model developed to explain and predict health-related behaviors, particularly in regard to the uptake of health services. The health belief model also refers to an individual's beliefs about preventing diseases, maintaining health, and striving for well-being. The HBM was developed in the 1950s by social psychologists at the U.S. Public Health Service and remains one of the best known and most widely used theories in health behavior research. The HBM suggests that people's beliefs about health problems, perceived benefits of action and barriers to action, and self-efficacy explain engagement (or lack of engagement) in health-promoting behavior. A stimulus, or cue to action, must also be present in order to trigger the health-promoting behavior.
History
One of the first theories of health behavior, the HBM was developed in the 1950s by social psychologists Irwin M. Rosenstock, Godfrey M. Hochbaum, S. Stephen Kegeles, and Howard Leventhal at the U.S. Public Health Service. At that time, researchers and health practitioners were worried because few people were getting screened for tuberculosis (TB), even when mobile X-ray units came to their neighborhoods. The HBM has been applied to predict a wide variety of health-related behaviors, such as being screened for the early detection of asymptomatic diseases and receiving immunizations. More recently, the model has been applied to understand intentions to vaccinate (e.g. against COVID-19), responses to symptoms of disease, compliance with medical regimens, lifestyle behaviors (e.g. sexual risk behaviors), and behaviors related to chronic illnesses, which may require long-term behavior maintenance in addition to initial behavior change. Amendments to the model were made as late as 1988 to incorporate emerging evidence within the field of psychology about the role of self-efficacy in decision-making and behavior.
Theoretical constructs
The HBM's theoretical constructs originate from theories in cognitive psychology. In the early twentieth century, cognitive theorists believed that reinforcements operated by affecting expectations rather than by affecting behavior directly. Cognitive theories that emphasize mental processes are seen as expectancy-value models, because they propose that behavior is a function of the degree to which people value an outcome and their expectation that a certain action will lead to that outcome. In terms of health-related behaviors, the value is avoiding sickness, and the expectation is that a certain health action could prevent the condition for which people consider themselves to be at risk.
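Schematically, an expectancy-value account can be summarized as below; this formalization is an illustrative shorthand rather than an equation taken from the HBM literature:

\[
\text{Motivation to act} \;=\; \text{Value of the outcome} \times \text{Expectancy that the action produces the outcome}
\]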
The following constructs of the HBM are proposed to vary between individuals and predict engagement in health-related behaviors.
Perceived susceptibility
Perceived susceptibility refers to subjective assessment of risk of developing a health problem. The HBM predicts that individuals who perceive that they are susceptible to a particular health problem will engage in behaviors to reduce their risk of developing the health problem. Individuals with low perceived susceptibility may deny that they are at risk for contracting a particular illness. Others may acknowledge the possibility that they could develop the illness, but believe it is unlikely. Individuals who believe they are at low risk of developing an illness are more likely to engage in unhealthy, or risky, behaviors. Individuals who perceive a high risk that they will be personally affected by a particular health problem are more likely to engage in behaviors to decrease their risk of developing the condition.
The combination of perceived severity and perceived susceptibility is referred to as perceived threat. Perceived severity and perceived susceptibility to a given health condition depend on knowledge about the condition. The HBM predicts that higher perceived threat leads to a higher likelihood of engagement in health-promoting behaviors.
Perceived severity
Perceived severity refers to the subjective assessment of the severity of a health problem and its potential consequences. The HBM proposes that individuals who perceive a given health problem as serious are more likely to engage in behaviors to prevent the health problem from occurring (or reduce its severity). Perceived seriousness encompasses beliefs about the disease itself (e.g., whether it is life-threatening or may cause disability or pain) as well as broader impacts of the disease on functioning in work and social roles. For instance, an individual may perceive that influenza is not medically serious, but if he or she perceives that there would be serious financial consequences as a result of being absent from work for several days, then he or she may perceive influenza to be a particularly serious condition.
Through studying Australians and their self-reporting in 2019 of receiving the influenza vaccine, researchers found that by studying perceived severity they could determine the likelihood that Australians would receive the shot. They asked, "On a scale from 0 to 10, how severe do you think the flu would be if you got it?" to measure the perceived severity and they found that 31% perceived the severity of getting the flu as low, 44% as moderate, and 25% as high. Additionally, the researchers found those with a high perceived severity were significantly more likely to have received the vaccine than those with a moderate perceived severity. Furthermore, self-reported vaccination was similar for individuals with low and moderate perceived severity of influenza.
Perceived benefits
Health-related behaviors are also influenced by the perceived benefits of taking action. Perceived benefits refer to an individual's assessment of the value or efficacy of engaging in a health-promoting behavior to decrease risk of disease. If an individual believes that a particular action will reduce susceptibility to a health problem or decrease its seriousness, then he or she is likely to engage in that behavior regardless of objective facts regarding the effectiveness of the action. For example, individuals who believe that wearing sunscreen prevents skin cancer are more likely to wear sunscreen than individuals who believe that wearing sunscreen will not prevent the occurrence of skin cancer.
Perceived barriers
Health-related behaviors are also a function of perceived barriers to taking action. Perceived barriers refer to an individual's assessment of the obstacles to behavior change. Even if an individual perceives a health condition as threatening and believes that a particular action will effectively reduce the threat, barriers may prevent engagement in the health-promoting behavior. In other words, the perceived benefits must outweigh the perceived barriers in order for behavior change to occur. Perceived barriers to taking action include the perceived inconvenience, expense, danger (e.g., side effects of a medical procedure) and discomfort (e.g., pain, emotional upset) involved in engaging in the behavior. For instance, lack of access to affordable health care and the perception that a flu vaccine shot will cause significant pain may act as barriers to receiving the flu vaccine. In a study of breast and cervical cancer screening among Hispanic women, perceived barriers such as fear of cancer, embarrassment, fatalistic views of cancer, and language were shown to impede screening.
Modifying variables
Individual characteristics, including demographic, psychosocial, and structural variables, can affect perceptions (i.e., perceived seriousness, susceptibility, benefits, and barriers) of health-related behaviors. Demographic variables include age, sex, race, ethnicity, and education, among others. Psychosocial variables include personality, social class, and peer and reference group pressure, among others. Structural variables include knowledge about a given disease and prior contact with the disease, among other factors. The HBM suggests that modifying variables affect health-related behaviors indirectly by affecting perceived seriousness, susceptibility, benefits, and barriers.
Cues to action
The HBM posits that a cue, or trigger, is necessary for prompting engagement in health-promoting behaviors. Cues to action can be internal or external. Physiological cues (e.g., pain, symptoms) are an example of internal cues to action. External cues include events or information from close others, the media, or health care providers promoting engagement in health-related behaviors. Examples of cues to action include a reminder postcard from a dentist, the illness of a friend or family member, mass media campaigns on health issues, and product health warning labels. The intensity of cues needed to prompt action varies between individuals by perceived susceptibility, seriousness, benefits, and barriers. For example, individuals who believe they are at high risk for a serious illness and who have an established relationship with a primary care doctor may be easily persuaded to get screened for the illness after seeing a public service announcement, whereas individuals who believe they are at low risk for the same illness and also do not have reliable access to health care may require more intense external cues in order to get screened.
Self-efficacy
Self-efficacy was added to the four components of the HBM (i.e., perceived susceptibility, severity, benefits, and barriers) in 1988. Self-efficacy refers to an individual's perception of his or her competence to successfully perform a behavior. Self-efficacy was added to the HBM in an attempt to better explain individual differences in health behaviors. The model was originally developed in order to explain engagement in one-time health-related behaviors such as being screened for cancer or receiving an immunization. Eventually, the HBM was applied to more substantial, long-term behavior change such as diet modification, exercise, and smoking. Developers of the model recognized that confidence in one's ability to effect change in outcomes (i.e., self-efficacy) was a key component of health behavior change. For example, Schmiege et al. found that when dealing with calcium consumption and weight-bearing exercises, self-efficacy was a more powerful predictor than beliefs about future negative health outcomes.
Rosenstock et al. argued that self-efficacy could be added to the other HBM constructs without elaboration of the model's theoretical structure. However, this was considered short-sighted because related studies indicated that key HBM constructs have indirect effects on behavior as a result of their effect on perceived control and intention, which might be regarded as more proximal factors of action.
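The HBM does not define a formal rule for how its six constructs combine (a point raised under Limitations below), but a minimal expectancy-value-style scoring might look like the following Python sketch. The additive form, the equal weights, and the threshold are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class HBMAppraisal:
    """Hypothetical 0-10 ratings for each HBM construct."""
    susceptibility: float  # perceived susceptibility
    severity: float        # perceived severity
    benefits: float        # perceived benefits of acting
    barriers: float        # perceived barriers to acting
    cue_strength: float    # intensity of the cue to action
    self_efficacy: float   # confidence in performing the behavior

def likely_to_act(a: HBMAppraisal, threshold: float = 10.0) -> bool:
    """Illustrative decision rule: perceived threat (susceptibility + severity)
    plus a net benefits-minus-barriers term, boosted by the cue and by
    self-efficacy. The HBM itself does not specify this combination."""
    perceived_threat = a.susceptibility + a.severity
    net_benefit = a.benefits - a.barriers
    score = perceived_threat + net_benefit + a.cue_strength + a.self_efficacy
    return score >= threshold

# Example: high threat and benefits, moderate barriers, a strong cue.
print(likely_to_act(HBMAppraisal(7, 8, 6, 4, 5, 6)))  # True (score 28)
```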
Empirical support
The HBM has gained substantial empirical support since its development in the 1950s. It remains one of the most widely used and well-tested models for explaining and predicting health-related behavior. A 1984 review of 18 prospective and 28 retrospective studies suggests that the evidence for each component of the HBM is strong. The review reports that empirical support for the HBM is particularly notable given the diverse populations, health conditions, and health-related behaviors examined and the various study designs and assessment strategies used to evaluate the model. A more recent meta-analysis found strong support for perceived benefits and perceived barriers predicting health-related behaviors, but weak evidence for the predictive power of perceived seriousness and perceived susceptibility. The authors of the meta-analysis suggest that examination of potential moderated and mediated relationships between components of the model is warranted.
Several studies have provided empirical support from the chronic illness perspective. Becker et al. used the model to predict and explain mothers' adherence to diets prescribed for their obese children. Cerkoney et al. interviewed insulin-treated diabetic individuals after diabetes classes at a community hospital, empirically testing the HBM's association with the compliance levels of persons chronically ill with diabetes mellitus.
Applications
The HBM has been used to develop effective interventions to change health-related behaviors by targeting various aspects of the model's key constructs. Interventions based on the HBM may aim to increase perceived susceptibility to and perceived seriousness of a health condition by providing education about prevalence and incidence of disease, individualized estimates of risk, and information about the consequences of disease (e.g., medical, financial, and social consequences). Interventions may also aim to alter the cost-benefit analysis of engaging in a health-promoting behavior (i.e., increasing perceived benefits and decreasing perceived barriers) by providing information about the efficacy of various behaviors to reduce risk of disease, identifying common perceived barriers, providing incentives to engage in health-promoting behaviors, and engaging social support or other resources to encourage health-promoting behaviors. Furthermore, interventions based on the HBM may provide cues to action to remind and encourage individuals to engage in health-promoting behaviors. Interventions may also aim to boost self-efficacy by providing training in specific health-promoting behaviors, particularly for complex lifestyle changes (e.g., changing diet or physical activity, adhering to a complicated medication regimen). Interventions can be aimed at the individual level (i.e., working one-on-one with individuals to increase engagement in health-related behaviors) or the societal level (e.g., through legislation, changes to the physical environment, mass media campaigns).
Multiple studies have used the Health Belief Model to understand an individual's intention to change a particular behavior and the factors that influence their ability to do so. Researchers analyzed the correlation between young adult women's intention to stop smoking and their perceived factors in the constructs of the HBM. The 58 participants were active adult women smokers between 16 and 30 years of age. The study reported the participants' background characteristics and the correlations between their perceived variables and the intention to stop smoking.
All variables besides perceived barriers showed a weak positive correlation with the intention to stop smoking. With regard to perceived susceptibility, the respondents agreed that they were vulnerable to the health and social consequences associated with smoking; however, they did not fully believe that smoking would trigger such severe health or social consequences, and therefore had a low desire to stop smoking. Similarly, for perceived severity, the respondents did not view their habit as having severe consequences and therefore had a low desire to quit. The weak positive correlation for perceived benefits means that the individuals saw that adopting healthy behaviors would have a beneficial impact on their overall lifestyle. Perceived barriers showed a weak negative correlation, meaning that the more barriers an individual associated with stopping smoking, the less likely she was to quit. Lastly, the respondents' perceived self-efficacy was low, and this led to a low desire to quit smoking.
The intention to stop smoking among young adult women had a significant correlation with the perceived factors of the Health Belief Model.
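As an illustration of the kind of correlational analysis such studies report, the sketch below computes a Pearson correlation between construct ratings and intention-to-quit scores. The data are invented for demonstration and are not the study's.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented 1-5 ratings of perceived benefits and intention to quit:
benefits = [2, 3, 3, 4, 2, 5, 3, 4]
intention = [1, 3, 2, 4, 2, 4, 3, 3]
print(round(pearson_r(benefits, intention), 2))  # a positive correlation (0.87 here)
```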
Another application of the HBM was a 2016 study examining factors associated with physical activity among people with mental illness (PMI) in Hong Kong (Mo et al., 2016). The study used the HBM as a framework to understand PMI physical activity levels, as it is one of the most frequently used models for explaining health behaviors. In the study, 443 PMI completed the survey, with a mean age of 45 years. The survey found that among the HBM variables, perceived barriers were significant in predicting physical activity; additionally, self-efficacy had a positive correlation with physical activity among PMI. These findings support previous literature showing that self-efficacy and perceived barriers play a significant role in physical activity and should be included in interventions. The study also noted that participants acknowledged focusing most of their attention on their psychiatric conditions, with little focus on their physical health needs.
This study is important to note in regards to the HBM because it illustrates how culture can play a role in the model. Chinese culture holds different health beliefs than that of the United States, placing a greater emphasis on fate and the balance of spiritual harmony than on physical fitness. Since the HBM does not consider such outside variables, this highlights a limitation of the model: many factors beyond those it names can impact health decisions.
Applying the health belief model to women's safety movements
Movements such as the #MeToo movement and current political tensions surrounding abortion laws have moved women's rights and violence against women to the forefront of topical conversation. Additionally, many organizations, such as Women On Guard, have begun to place emphasis on educating women about what measures to take in order to increase their safety when walking alone at night. The murder of Sarah Everard on March 3, 2021, drew further attention to the need for women to protect themselves and stay vigilant when walking alone at night; Everard was kidnapped and murdered while walking home from work in South London, England. The health belief model can provide insight into the steps that need to be taken in order to reach more women and convince them to take the necessary steps to increase their safety when walking alone.
Perceived susceptibility
As stated, perceived susceptibility refers to how susceptible an individual perceives themselves to be to a given risk. In the case of violence encountered while walking alone, research shows that many women perceive themselves to be highly susceptible to the risk of being attacked: studies show that around 50% of women feel unsafe when walking alone at night. Since women may already have increased perceived susceptibility to night-time violence, according to the health belief model, they may be more apt to engage in behavior changes that help them increase their safety and defend themselves.
Perceived severity
As the statistics on perceived susceptibility demonstrate, many women feel they are at risk of encountering night-time violence. Women also tend to have a high perception of the severity of that violence, as stories such as the tragic death of Sarah Everard demonstrate that night-time attacks can be not only severe but fatal.
Perceived benefits and barriers
As the health belief model states, individuals must consider the potential benefits of adopting the change in behavior that is being suggested to them. In the case of night-time violence against women, organizations that seek to prevent it use advertising to demonstrate to women that the benefits of tools such as pocket knives, pepper spray, self-defense classes, alarm systems, and traveling with a "buddy" can outweigh barriers such as the cost, time, and other inconveniences these changes in behavior may require. The benefit of implementing these behaviors would be that women could feel safer when walking alone at night.
Modifying variable
It is no surprise that the modifying variable of sex plays a large role in applying the health belief model to women's safety movements. While studies show that around 50% of women feel unsafe walking at night, they also show that fewer than one fifth of men feel the same fear and discomfort. Thus, it is evident that this modifying variable plays a large role in how night-time violence is perceived. According to the model, women may be more likely than men to change their behavior toward preventing night-time violence.
Cues to action
Cues to action are perhaps the most powerful part of the health belief model and of getting individuals to change their behavior. With regard to preventing night-time violence against women, stories of the horrific violent acts committed against women walking at night serve as external cues to action that can spur individuals to take the necessary precautions and make the necessary changes to their behavior to reduce the likelihood of encountering night-time violence. Cues to action also feed into increased perceived susceptibility and severity of the given risk.
Self-efficacy
Self-efficacy is another important factor, both in the health belief model and in behavior change in general. When people believe that they actually have the power to prevent a given risk, they are more likely to take the appropriate measures to do so; when they believe that they cannot change their behavior or prevent the risk no matter what they do, they are less likely to act. This concept factors greatly into initiatives to help women defend themselves against night-time violence because, based on the statistics, many women do feel that if they carry items such as tasers, pepper spray, or alarms they will be able to defend themselves against attackers. Organizations such as community centers also offer self-defense classes, which teach individuals that they have the power to learn to defend themselves and acquire the proper skills to do so, and which can thereby increase self-efficacy.
The issue of night-time violence against women is one of safety and wellness, which makes it applicable to a health belief model approach. Defense against and preparation for night-time violence can require behavioral changes on the part of women if they feel that doing so will help them protect themselves should they ever be attacked.
Limitations
The HBM attempts to predict health-related behaviors by accounting for individual differences in beliefs and attitudes. However, it does not account for other factors that influence health behaviors. For instance, habitual health-related behaviors (e.g., smoking, seatbelt buckling) may become relatively independent of conscious health-related decision-making processes. Additionally, individuals engage in some health-related behaviors for reasons unrelated to health (e.g., exercising for aesthetic reasons). Environmental factors outside an individual's control may prevent engagement in desired behaviors. For example, an individual living in a dangerous neighborhood may be unable to go for a jog outdoors due to safety concerns. Furthermore, the HBM does not consider the impact of emotions on health-related behavior. Evidence suggests that fear may be a key factor in predicting health-related behavior.
Alternative factors may predict health behavior, such as outcome expectancy (i.e., whether the person feels they will be healthier as a result of their behavior) and self-efficacy (i.e., the person's belief in their ability to carry out preventive behavior).
The theoretical constructs that constitute the HBM are broadly defined. Furthermore, the HBM does not specify how constructs of the model interact with one another. Therefore, different operationalizations of the theoretical constructs may not be strictly comparable across studies.
Research assessing the contribution of cues to action in predicting health-related behaviors is limited. Cues to action are often difficult to assess, limiting research in this area. For instance, individuals may not accurately report cues that prompted behavior change. Cues such as a public service announcement on television or on a billboard may be fleeting and individuals may not be aware of their significance in prompting them to engage in a health-related behavior. Interpersonal influences are also particularly difficult to measure as cues.
Another reason why research does not always support the HBM is that factors other than health beliefs also heavily influence health behavior practices. These factors may include social influences, cultural factors, socioeconomic status, and previous experiences.
Scholars have extended the HBM by adding four more variables (self-identity, perceived importance, consideration of future consequences, and concern for appearance) as possible determinants of healthy behavior. They show that consideration of future consequences, self-identity, concern for appearance, perceived importance, self-efficacy, and perceived susceptibility are significant determinants of healthy eating behavior that can be manipulated by healthy eating intervention design.
References
Belief
Health psychology
Public health education
Psychological theories | 0.764279 | 0.995974 | 0.761202 |
Psychological egoism

Psychological egoism is the view that humans are always motivated by self-interest and selfishness, even in what seem to be acts of altruism. It claims that, when people choose to help others, they do so ultimately because of the personal benefits that they themselves expect to obtain, directly or indirectly, from doing so.
This is a descriptive rather than normative view, since it only makes claims about how things are, not how they ought to be. It is, however, related to several normative forms of egoism, such as ethical egoism and rational egoism.
Subtypes of psychological egoism
Psychological hedonism
A specific form of psychological egoism is psychological hedonism, the view that the ultimate motive for all voluntary human action is the desire to experience pleasure or to avoid pain.
Immediate gratification can be sacrificed for a chance of greater, future pleasure. Further, humans are not motivated to strictly avoid pain and only pursue pleasure, but, instead, humans will endure pain to achieve the greatest net pleasure. Accordingly, all actions are tools for increasing pleasure or decreasing pain, even those defined as altruistic and those that do not cause an immediate change in satisfaction levels.
The most famous psychological egoists are Sextus Empiricus, Pierre Bayle, and Bernard Mandeville.
Final cause
Some theorists explain behavior motivated by self-interest without using pleasure and pain as the final causes of behavior.
Foundations
Beginning with ancient philosophy, Epicureanism claims humans live to maximize pleasure. Epicurus argued the theory of human behavior being motivated by pleasure alone is evidenced from infancy to adulthood. Humanity performs altruistic, honorable, and virtuous acts not for the sake of another or because of a moral code but rather to increase the well-being of the self.
In modern philosophy, Jeremy Bentham asserted, like Epicurus, that human behavior is governed by a need to increase pleasure and decrease pain. Bentham explicitly described the types and qualities of pain and pleasure that exist and how human motives can be singularly explained by psychological hedonism, and he attempted to quantify it: he endeavored to find the ideal human behavior based on hedonic calculus, the measurement of relative gains and losses in pain and pleasure, to determine the most pleasurable action a human could choose in a situation.
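Bentham's hedonic (felicific) calculus weighed circumstances of a pleasure or pain such as its intensity, duration, certainty, propinquity, fecundity, purity, and extent. The Python sketch below illustrates this style of scoring; the equal-weight sum is a simplification, since Bentham gave no single numeric formula, and the example actions and ratings are hypothetical.

```python
BENTHAM_DIMENSIONS = ["intensity", "duration", "certainty",
                      "propinquity", "fecundity", "purity", "extent"]

def hedonic_score(pleasures: dict, pains: dict) -> float:
    """Net pleasure of an action: summed pleasure ratings minus summed
    pain ratings over Bentham's dimensions (equal weights assumed)."""
    return sum(pleasures.get(dim, 0.0) - pains.get(dim, 0.0)
               for dim in BENTHAM_DIMENSIONS)

# Compare two hypothetical actions and pick the more pleasurable one.
rest = hedonic_score({"intensity": 3, "duration": 5}, {"fecundity": 1})
study = hedonic_score({"fecundity": 6, "duration": 2}, {"intensity": 4})
print("rest" if rest > study else "study")  # rest: scores 7 vs 4
```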
From an evolutionary perspective, Herbert Spencer, a psychological egoist, argued that all animals primarily seek to survive and protect their lineage. Essentially, the need for the individual and for the individual's immediate family to live supersedes the others' need to live. All species attempt to maximize their own chances of survival and, therefore, well-being. Spencer asserted the best adapted creatures will have their pleasure levels outweigh their pain levels in their environments. Thus, pleasure meant an animal was fulfilling its egoist goal of self survival, and pleasure would always be pursued because species constantly strive for survival.
Contributions to modern psychology
Psychoanalysis
Whether or not Sigmund Freud was a psychological egoist, his concept of the pleasure principle borrowed much from psychological egoism and psychological hedonism in particular. The pleasure principle rules the behavior of the id, an unconscious force driving humans to release tension from unfulfilled desires. When Freud introduced Thanatos and its opposing force, Eros, the pleasure principle emanating from psychological hedonism became aligned with Eros, which drives a person to satiate sexual and reproductive desires. Thanatos, by contrast, seeks the cessation of pain through death and the end of the pursuit of pleasure: thus hedonism also rules Thanatos, but it centers on the complete avoidance of pain rather than on the psychological hedonist pursuit of pleasure and avoidance of pain. Freud therefore believed in qualitatively different hedonisms, in which the total avoidance of pain and the achievement of the greatest net pleasure are separate and associated with distinct functions and drives of the human psyche. Although Eros and Thanatos are ruled by qualitatively different types of hedonism, Eros remains under the rule of Jeremy Bentham's quantitative psychological hedonism because Eros seeks the greatest net pleasure.
Behaviorism
Traditional behaviorism dictates all human behavior is explained by classical conditioning and operant conditioning. Operant conditioning works through reinforcement and punishment which adds or removes pleasure and pain to manipulate behavior. Using pleasure and pain to control behavior means behaviorists assumed the principles of psychological hedonism could be applied to predicting human behavior. For example, Thorndike's law of effect states that behaviors associated with pleasantness will be learned and those associated with pain will be extinguished. Often, behaviorist experiments using humans and animals are built around the assumption that subjects will pursue pleasure and avoid pain. Although psychological hedonism is incorporated into the fundamental principles and experimental designs of behaviorism, behaviorism itself explains and interprets only observable behavior and therefore does not theorize about the ultimate cause of human behavior. Thus, behaviorism uses but does not strictly support psychological hedonism over other understandings of the ultimate drive of human behavior.
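As a toy illustration of the law of effect described above, the sketch below treats response strength as a number nudged upward by pleasant outcomes (reinforcement) and downward by painful ones (punishment). The numeric update rule is an assumption for illustration, not a formula from Thorndike or the behaviorist literature.

```python
def update_response_strength(strength: float, outcome: float,
                             rate: float = 0.1) -> float:
    """Law-of-effect style update: outcome > 0 is pleasant (reinforcement),
    outcome < 0 is painful (punishment). Strength is clamped to [0, 1]."""
    return max(0.0, min(1.0, strength + rate * outcome))

s = 0.5
for outcome in [+1, +1, -1, +1]:  # reward, reward, punishment, reward
    s = update_response_strength(s, outcome)
print(round(s, 2))  # 0.7 -- the rewarded response has been strengthened
```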
Debate
Psychological egoism is controversial. Proponents cite evidence from introspection: reflection on one's own actions may reveal their motives and intended results to be based on self-interest. Psychological hedonists have found through numerous observations of natural human behavior that behavior can be manipulated through reward and punishment, both of which have direct effects of pain and pleasure. Also, the work of some social scientists has empirically supported this theory. Further, they claim psychological egoism posits a theory that is a more parsimonious explanation than competing theories.
Opponents have argued that psychological egoism is not more parsimonious than other theories. For example, a theory that claims altruism occurs for the sake of altruism explains altruism with less complexity than the egoistic approach, which must assert that humans act altruistically for selfish reasons even when the cost of the altruistic action far outweighs the reward of acting selfishly, because altruism is performed to fulfill the person's desire to act altruistically. Other critics argue that psychological egoism is false, either because it is an over-simplified interpretation of behavior or because there exists empirical evidence of altruistic behavior. Recently, some have argued that evolutionary theory provides evidence against it.
Critics have stated that proponents of psychological egoism often confuse the satisfaction of their own desires with the satisfaction of their own self-regarding desires. Even though it is true that every human being seeks their own satisfaction, this sometimes may only be achieved via the well-being of their neighbor. An example of this situation could be phoning for an ambulance when a car accident has happened. In this case, the caller desires the well-being of the victim, even though the desire itself is the caller's own.
To counter this critique, psychological egoism asserts that all such desires for the well-being of others are ultimately derived from self-interest. For example, the German philosopher Friedrich Nietzsche was a psychological egoist for some of his career, though he is said to have repudiated that later in his campaign against morality. He argues in §133 of The Dawn that in such cases compassionate impulses arise out of the projection of our identity onto the object of our feeling. He gives some hypothetical examples as illustrations of his thesis: that of a person who feels horrified while witnessing a personal feud or someone coughing blood, and that of the impulse felt to save a person who is drowning. In such cases, according to Nietzsche, unconscious fears regarding our own safety come into play: the suffering of another person is felt as a threat to our own happiness and sense of safety, because it reveals our own vulnerability to misfortunes, and thus, by relieving it, one can also ameliorate those personal sentiments. Essentially, proponents argue that altruism is rooted in self-interest, whereas opponents claim altruism occurs for altruism's sake or is caused by a non-selfish reason.
Problem of apparent altruism
David Hume once wrote, "What interest can a fond mother have in view, who loses her health by assiduous attendance on her sick child, and afterwards languishes and dies of grief, when freed, by its death [the child's], from the slavery of that attendance?". It seems incorrect to describe such a mother's goal as self-interested.
Psychological egoists, however, respond that helping others in such ways is ultimately motivated by some form of self-interest, such as non-sensory satisfaction, the expectation of reciprocation, the desire to gain respect or reputation, or by the expectation of a reward in a putative afterlife. The helpful action is merely instrumental to these ultimately selfish goals.
A similar explanation was offered in the ninth century by Mohammed Ibn Al-Jahm Al-Barmaki, and this sort of explanation appears to be close to the view of La Rochefoucauld (and perhaps Hobbes).
According to psychological hedonism, the ultimate egoistic motive is to gain good feelings of pleasure and avoid bad feelings of pain. Other, less restricted forms of psychological egoism may allow the ultimate goal of a person to include such things as avoiding punishments from oneself or others (such as guilt or shame) and attaining rewards (such as pride, self-worth, power or reciprocal beneficial action).
Some psychologists explain empathy in terms of psychological hedonism. According to the "merge with others hypothesis", empathy increases the more an individual feels like they are one with another person, and decreases as that sense of oneness decreases. Therefore, altruistic actions emanating from empathy, and empathy itself, are caused by making others' interests our own, and the satisfaction of their desires becomes our own, not just theirs. Both cognitive studies and neuropsychological experiments have provided evidence for this theory: as people's sense of oneness with others increases, their empathy increases, and as empathy increases, so too does their inclination to act altruistically. Neuropsychological studies have linked mirror neurons to the experience of empathy. Mirror neurons are activated both when a human (or animal) performs an action and when they observe another human (or animal) perform the same action. Researchers have found that the more these mirror neurons fire, the more empathy human subjects report. From a neurological perspective, scientists argue that when a human empathizes with another, the brain operates as if the human is actually participating in the actions of the other person. Thus, when performing altruistic actions motivated by empathy, humans experience someone else's pleasure of being helped. Therefore, in performing acts of altruism, people act in their own self-interest even at a neurological level.
Criticism
Circularity
Psychological egoism has been accused of being circular: "If a person willingly performs an act, that means he derives personal enjoyment from it; therefore, people only perform acts that give them personal enjoyment." In particular, seemingly altruistic acts must be performed because people derive enjoyment from them and are therefore, in reality, egoistic. This statement is circular because its conclusion is identical to its hypothesis: it assumes that people only perform acts that give them personal enjoyment, and concludes that people only perform acts that give them personal enjoyment. This objection was tendered by William Hazlitt and Thomas Macaulay in the 19th century, and has been restated many times since. An earlier version of the same objection was made by Joseph Butler in the Fifteen Sermons.
Joel Feinberg, in his 1958 paper "Psychological Egoism", develops a similar critique by drawing attention to the infinite regress of psychological egoism. He expounds it in the following cross-examination:
"All men desire only satisfaction."
"Satisfaction of what?"
"Satisfaction of their desires."
"Their desires for what?"
"Their desires for satisfaction."
"Satisfaction of what?"
"Their desires."
"For what?"
"For satisfaction"—etc., ad infinitum.
Evolutionary argument
In their 1998 book, Unto Others, Sober and Wilson detailed an evolutionary argument based on the likelihood that egoism would evolve under the pressures of natural selection. Specifically, they focus on the human behavior of parental care. To set up their argument, they propose two potential psychological mechanisms for this behavior. The hedonistic mechanism is based on a parent's ultimate desire for pleasure or the avoidance of pain, together with a belief that caring for their offspring will be instrumental to that. The altruistic mechanism is based on an ultimate, altruistic desire to care for their offspring.
Sober and Wilson argue that when evaluating the likelihood of a given trait evolving, three factors must be considered: availability, reliability and energetic efficiency. The genes for a given trait must first be available in the gene pool for selection. The trait must then reliably produce an increase in fitness for the organism. The trait must also operate with energetic efficiency so as not to limit the fitness of the organism. Sober and Wilson argue that there is neither reason to suppose that an altruistic mechanism should be any less available than a hedonistic one, nor reason to suppose that the content of thoughts and desires (hedonistic vs. altruistic) should impact energetic efficiency. As availability and energetic efficiency are taken to be equivalent for both mechanisms, it follows that the more reliable mechanism will be the more likely one to evolve.
For the hedonistic mechanism to produce the behavior of caring for offspring, the parent must believe that the caring behavior will produce pleasure or avoidance of pain for the parent. Sober and Wilson argue that the belief must also be true and constantly reinforced, or it would be unlikely to persist. If the belief fails, then the behavior is not produced. The altruistic mechanism does not rely on belief; therefore, they argue, it would be less likely to fail than the alternative, i.e. more reliable.
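This reliability comparison lends itself to a toy illustration. The sketch below is not from Sober and Wilson; it simply assumes a hypothetical per-occasion probability that the hedonistic mechanism's instrumental belief fails, and compares how often each mechanism would produce the caring behavior (Python):

import random

def hedonistic_mechanism(belief_failure_rate):
    # Care is produced only if the parent's instrumental belief
    # ("caring will bring me pleasure / spare me pain") holds up.
    return random.random() > belief_failure_rate

def altruistic_mechanism():
    # Care flows directly from an ultimate desire for the offspring's
    # welfare; no mediating belief is needed, so it never fails here.
    return True

def reliability(mechanism, trials=100_000, **kwargs):
    return sum(mechanism(**kwargs) for _ in range(trials)) / trials

# Hypothetical 10% chance that the instrumental belief fails.
print(reliability(hedonistic_mechanism, belief_failure_rate=0.10))  # ~0.90
print(reliability(altruistic_mechanism))                            # 1.0

Under any nonzero failure rate the belief-mediated route produces the behavior less often, which is the sense in which Sober and Wilson judge the altruistic mechanism the more reliable of the two.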
Equivocation
In his 2011 book On What Matters, Volume 1, philosopher Derek Parfit presents an argument against psychological egoism that centers on an apparent equivocation between different senses of the word "want":
The word desire often refers to our sensual desires or appetites, or to our being attracted to something, by finding the thought of it appealing. I shall use ‘desire’ in a wider sense, which refers to any state of being motivated, or of wanting something to happen and being to some degree disposed to make it happen, if we can. The word want already has both these senses.
This argument for Psychological Egoism fails, because it uses the word want first in the wide sense and then in the narrow sense. If I voluntarily gave up my life to save the lives of several strangers, my act would not be selfish, though I would be doing what in the wide sense I wanted to do.
See also
Academic careerism
Acedia
Enlightened self-interest
Experience machine
Inclusive fitness
Reward system, for a proposed anatomic basis of psychological egoism.
Simulated reality
References
Baier, Kurt (1990). "Egoism" in A Companion to Ethics, Peter Singer (ed.), Blackwell: Oxford.
Batson, C.D. & L. Shaw (1991). "Evidence for Altruism: Toward a Pluralism of Prosocial Motives," Psychological Inquiry 2: 107–122.
Bentham, Jeremy (1789). Introduction to the Principles of Morals and Legislation. Oxford: Clarendon Press, 1907.
Broad, C. D. (1971). "Egoism as a Theory of Human Motives," in his Broad's Critical Essays in Moral Philosophy, London: George Allen and Unwin.
Cialdini, Robert B., S. L. Brown, B. P. Lewis, C. Luce, & S. L. Neuberg (1997). "Reinterpreting the Empathy-Altruism Relationship: When One Into One Equals Oneness". Journal of Personality and Social Psychology, 73 (3): 481–494.
Feinberg, Joel. "Psychological Egoism." In Reason & Responsibility: Readings in Some Basic Problems of Philosophy, edited by Joel Feinberg and Russ Shafer-Landau, 520–532. California: Thomson Wadsworth, 2008.
Gallese, V. (2001). "The 'shared manifold' hypothesis". Journal of Consciousness Studies, 8(5-7), 33–50.
Gert, Bernard (1967). "Hobbes and Psychological Egoism", Journal of the History of Ideas, Vol. 28, No. 4, pp. 503–520.
Hazlitt, William (1991). Self-Love and Benevolence Selected Writings, edited and with Introduction by Jon Cook, Oxford University Press.
Hobbes, Thomas (1651). Leviathan, C. B. Macpherson (ed.), Harmondsworth: Penguin.
Hobbes, Thomas (1654). Of Liberty and Necessity, public domain.
Kaplan, J. T., & Iacoboni, M. (2006). Getting a grip on other minds: Mirror neurons, intention understanding, and cognitive empathy. Social Neuroscience, 1(3/4), 175–183. doi:10.1080/17470910600985605
Krebs, Dennis (1982). "Psychological Approaches to Altruism: An Evaluation". Ethics, 92, pp. 447–58.
Lloyd, Sharon A. & Sreedhar, Susanne (2008). "Hobbes's Moral and Political Philosophy", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.).
May, Joshua (2011). "Psychological Egoism", The Internet Encyclopedia of Philosophy, J. Fieser & B. Dowden (eds.).
Mees, U., & Schmitt, A. (2008). Goals of action and emotional reasons for action. A modern version of the theory of ultimate psychological hedonism. Journal for the Theory of Social Behaviour, 38(2), 157–178. doi:10.1111/j.1468-5914.2008.00364.x
Mehiel, R. (1997). The consummatory rat: The psychological hedonism of Robert C. Bolles. In M. E. Bouton & M. S. Fanselow (Eds.), Learning, motivation, and cognition: The functional behaviorism of Robert C. Bolles (Vol. xiii, pp. 271–280). Washington, DC, US: American Psychological Association.
Moseley, Alexander (2006). "Egoism", The Internet Encyclopedia of Philosophy, J. Fieser & B. Dowden (eds.).
O'Keefe, T. (2005). Epicurus. Internet Encyclopedia of Philosophy. Retrieved from http://www.iep.utm.edu/epicur/#SH5a
Shaver, Robert (2002). "Egoism", The Stanford Encyclopedia of Philosophy (Winter Edition), Edward N. Zalta (ed.).
Sober, E., & Wilson, D. S. (1999). Unto others: the evolution and psychology of unselfish behavior. Cambridge, Mass.: Harvard University Press.
Sweet, W. (2004). Spencer, Herbert. Internet Encyclopedia of Philosophy. Retrieved from http://www.iep.utm.edu/spencer/
Wallwork, E. (1991). Psychoanalysis and Ethics. Yale University Press.
Young, P. T. (1936). Motivation of behavior: The fundamental determinants of human and animal activity. (Vol. xviii). Hoboken, NJ, US: John Wiley & Sons Inc.
Psychiatric assessment
A psychiatric assessment, or psychological screening, is the process of gathering information about a person within a psychiatric service, with the purpose of making a diagnosis. The assessment is usually the first stage of a treatment process, but psychiatric assessments may also be used for various legal purposes. The assessment includes social and biographical information, direct observations, and data from specific psychological tests. It is typically carried out by a psychiatrist, but it can be a multi-disciplinary process involving nurses, psychologists, occupational therapists, social workers, and licensed professional counselors.
Purpose
Clinical assessment
A psychiatric assessment is most commonly carried out for clinical and therapeutic purposes, to establish a diagnosis and formulation of the individual's problems, and to plan their care and treatment. This may be done in a hospital, in an out-patient setting, or as a home-based assessment.
Forensic assessment
A forensic psychiatric assessment may have a number of purposes. A forensic assessment may be required of an individual who has been charged with a crime, to establish whether the person has the legal competence to stand trial. If a person with a mental illness is convicted of an offense, a forensic report may be required to inform the Court's sentencing decision, as a mental illness at the time of the offense may be a mitigating factor. A forensic assessment may also take the form of a risk assessment, to comment on the relationship between the person's mental illness and the risk of further violent offenses.
Medico-legal assessment
A medico-legal psychiatric assessment is required when a psychiatric report is used as evidence in civil litigation, for example in relation to compensation for work-related stress or after a traumatic event such as an accident. The psychiatric assessment may be requested in order to establish a link between the trauma and the victim's psychological condition, or to determine the extent of psychological harm and the amount of compensation to be awarded to the victim.
Medico-legal psychiatric assessments are also utilized in the context of child safety and child protection services. A child psychiatrist's assessment can provide information on the psychological impact of abuse or neglect on a child. A child psychiatrist can carry out an assessment of parenting capacity, taking into consideration the mental state of both the child and the parents, and this may be used by child protective services to decide whether a child should be placed in an alternative care arrangement such as foster care.
History
A standard part of any psychiatric assessment is the obtaining of a body of social, demographic and biographical data known as the history. The standard psychiatric history consists of biographical data (name, age, marital and family contact details, occupation, and first language), the presenting complaint (an account of the onset, nature and development of the individual's current difficulties) and personal history (including birth complications, childhood development, parental care in childhood, educational and employment history, relationship and marital history, and criminal background). The history also includes an enquiry about the individual's current social circumstances, family relationships, current and past use of alcohol and illicit drugs, and the individual's past treatment history (current and past diagnoses, and use of prescribed medication).
The psychiatric history includes an exploration of the individual's culture and ethnicity, as cultural values can influence the way a person and their family communicates psychological distress and responds to a diagnosis of mental illness. Certain behaviors and beliefs may be misinterpreted as features of mental illness by a clinician who is from a different cultural background than the individual being assessed.
The assessment also draws on collateral information obtained from people close to the individual, such as family members or carers.
Mental status examination
The mental status examination (MSE) is another core part of any psychiatric assessment. The MSE is a structured way of describing a patient's current state of mind, under the domains of appearance, attitude, behavior, speech, mood and affect, thought process, thought content, perception, cognition (including for example orientation, memory and concentration), insight and judgement. The purpose of the MSE is to obtain a comprehensive cross-sectional description of the patient's mental state. The data are collected through a combination of direct and indirect means: unstructured observation while obtaining the biographical and social information, focused questions about current symptoms, and formalized psychological tests. As with the psychiatric history, the MSE is prone to errors if cultural differences between the examiner and the patient are not taken into account, as different cultural backgrounds may be associated with different norms of interpersonal behavior and emotional expression. The MSE differs from a mini-mental state examination (MMSE) which is a brief neuro-psychological screening test for dementia.
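Because the MSE is a structured record over a fixed set of domains, its shape can be sketched as a simple template. The domain names below are taken directly from the paragraph above; the example entries are invented for illustration and are not clinical guidance (Python):

# Template for recording a mental status examination, using the
# domains listed above; the findings shown are purely illustrative.
mse_record = {
    "appearance": "casually dressed, well groomed",
    "attitude": "cooperative",
    "behavior": "no psychomotor agitation or retardation",
    "speech": "normal rate, rhythm and volume",
    "mood and affect": "reports low mood; affect restricted",
    "thought process": "linear and goal-directed",
    "thought content": "no delusions; denies suicidal ideation",
    "perception": "no hallucinations reported",
    "cognition": "oriented to time, place and person; concentration intact",
    "insight and judgement": "good insight; judgement intact",
}

for domain, finding in mse_record.items():
    print(f"{domain}: {finding}")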
Physical examination
A thorough physical examination is regarded as an integral part of a comprehensive psychiatric assessment. This is because physical illnesses are more common in people with mental disorders, because neurological and other medical conditions may be associated with psychiatric symptoms, and to identify side effects of psychiatric medication. The physical examination would include measurement of body mass index, vital signs such as pulse, blood pressure, temperature and respiratory rate, observation for pallor and nutritional deficiencies, palpation for lymph nodes, palpation of the abdomen for organ enlargement, and examination of the cardiovascular, respiratory and neurological systems.
Physical investigations
Although there are no physiological tests that confirm any mental illness, medical tests may be employed to exclude any co-occurring medical conditions that may present with psychiatric symptoms. These include blood tests measuring TSH to exclude hypo- or hyperthyroidism, basic electrolytes, serum calcium and liver enzymes to rule out a metabolic disturbance, and a full blood count to rule out a systemic infection or chronic disease. The investigation of dementia could include measurement of serum vitamin B-12 levels, serology to exclude syphilis or HIV infection, EEG, and a CT scan or MRI scan. People receiving antipsychotic medication require measurement of plasma glucose and lipid levels to detect a medication-induced metabolic syndrome, and an electrocardiogram to detect iatrogenic cardiac arrhythmias.
Assessment tools
Clinical assessment can be supplemented by the use of symptom scales for specific disorders, such as the Beck Depression Inventory for depression, or the Brief Psychiatric Rating Scale (BPRS) or Positive and Negative Syndrome Scale (PANSS) for psychotic disorders. Scales such as HoNOS or the Global Assessment of Functioning are used to measure global level of functioning and to monitor response to treatment.
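As a rough illustration of how such symptom scales quantify severity and allow monitoring of treatment response, the sketch below sums item ratings and maps the total to a severity band. It assumes the commonly cited Beck Depression Inventory-II convention of 21 items each rated 0-3; the function name and example ratings are invented, and real scoring should follow the published manual (Python):

def score_depression_scale(item_ratings):
    # Sum 21 item ratings (each 0-3) and classify severity using
    # commonly cited BDI-II cutoff bands; verify against the manual.
    if len(item_ratings) != 21 or not all(0 <= r <= 3 for r in item_ratings):
        raise ValueError("expected 21 item ratings, each between 0 and 3")
    total = sum(item_ratings)
    if total <= 13:
        band = "minimal"
    elif total <= 19:
        band = "mild"
    elif total <= 28:
        band = "moderate"
    else:
        band = "severe"
    return total, band

# Hypothetical ratings at intake and after treatment, to monitor response.
print(score_depression_scale([2] * 21))  # (42, 'severe')
print(score_depression_scale([1] * 21))  # (21, 'moderate')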
Multidisciplinary assessment
Psychiatric assessment in hospital settings is typically a multidisciplinary process, with contributions from psychiatric nurses, occupational therapists, psychologists and social workers. A psychiatrist takes a history and carries out a mental state examination and physical examination as described above. A nursing assessment includes risk assessment (risk of suicide, aggression, absconding from hospital, self-harm, sexual safety in hospital and medication compliance), physical health screening, and obtaining background personal and health information from the person being admitted and their carers. The immediate purpose of the nursing assessment is to determine the required level of care and supervision, and to have a plan to manage disturbed behavior. Assessment could include a visit to the person's home, for direct observation of the social and living environment. The role of a psychologist includes the use of psychological tests: structured diagnostic instruments such as the Millon Clinical Multiaxial Inventory or psychometric tests such as the WISC or WAIS, to assist with diagnosis and formulation of the person's problems. A psychologist might contribute to the team's assessment by providing a psychological formulation or behavioral analysis, which is an analysis, through systematic observation, of the factors which trigger or perpetuate the presenting problems.
Other perspectives
This article describes the assessment process within a medical model (the collection of supposedly objective data, identification of problems, and formulation of a diagnosis leading to a specific treatment), but there are other approaches to the assessment of people with social and emotional difficulties. A family therapy or systemic therapy approach is not concerned with diagnoses but seeks to understand the problem in terms of relationships and communication patterns. The systemic tradition is suspicious of the objectivity of medical assessment, sees the individual's account as a subjective narrative, and sees diagnosis as a socially constructed phenomenon. From a solution focused perspective, the assessment deliberately avoids identification of problems, and seeks to elicit strengths and solutions.
Criticism
Psychiatric assessments have recently come under heavy criticism from some experts, some of whom go as far as saying that they are "just subjective opinions with no scientific basis and can change over time".
See also
List of diagnostic classification and rating scales used in psychiatry
Medical history
Mental disorder
Psychiatry
Seasonal Pattern Assessment Questionnaire
Psychiatrist
External links
American Psychiatric Association Practice Guidelines: Psychiatric Evaluation of Adults
Exposure therapy
Exposure therapy is a technique in behavior therapy to treat anxiety disorders. Exposure therapy involves exposing the patient to the anxiety source or its context (without the intention to cause any danger). Doing so is thought to help them overcome their anxiety or distress. Numerous studies have demonstrated its effectiveness in the treatment of disorders such as generalized anxiety disorder (GAD), social anxiety disorder (SAD), obsessive-compulsive disorder (OCD), post-traumatic stress disorder (PTSD), and specific phobias.
As of 2024, particular focus has been on exposure and response prevention (ERP or ExRP) therapy, in which exposure is continued and the resolution to refrain from the escape response is maintained at all times (not just during specific therapy sessions).
Techniques
Exposure therapy is based on the principle of respondent conditioning often termed Pavlovian extinction. The exposure therapist identifies the cognitions, emotions and physiological arousal that accompany a fear-inducing stimulus and then tries to break the pattern of escape that maintains the fear. This is done by exposing the patient to fear-inducing stimuli.
This may be done:
using progressively stronger stimuli. Fear is minimized at each of a series of steadily escalating steps or challenges (a hierarchy), which can be explicit ("static") or implicit ("dynamic" — see Method of Factors), until the fear is finally gone; a minimal sketch of such a hierarchy appears after this list. The patient is able to terminate the procedure at any time.
using flooding therapy, which exposes the patient to feared stimuli starting at the most feared item in a fear hierarchy.
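As a minimal sketch of the explicit hierarchy mentioned above, the following represents each step with a subjective distress rating (a 0-100 scale is a common convention) and selects the least distressing step not yet tolerated. The steps and ratings are invented for illustration (Python):

# A graded-exposure hierarchy as (step, distress-rating) pairs,
# using an illustrative 0-100 subjective-distress scale.
hierarchy = [
    ("look at a photo of a spider", 20),
    ("stand across the room from a caged spider", 45),
    ("stand next to the cage", 65),
    ("touch the outside of the cage", 85),
]

def next_step(hierarchy, completed):
    # Return the least distressing step not yet tolerated, or None.
    for step, rating in sorted(hierarchy, key=lambda pair: pair[1]):
        if step not in completed:
            return step, rating
    return None

completed = {"look at a photo of a spider"}
print(next_step(hierarchy, completed))
# -> ('stand across the room from a caged spider', 45)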
There are several types of exposure procedures.
in vivo or "real life." This type exposes the patient to actual fear-inducing situations. For example, if someone fears public speaking, the person may be asked to give a speech to a small group of people.
virtual reality, in which technology is used to simulate in vivo exposure.
imaginal, where patients are asked to imagine a situation that they are afraid of. This procedure is helpful for people who need to confront feared thoughts and memories.
written exposure therapy, where patients write down their account of the traumatic event.
interoceptive, in which patients confront feared bodily symptoms such as increased heart rate and shortness of breath. This may be used for more specific disorders such as panic or post-traumatic stress disorder.
All types of exposure may be used together or separately. Discussion continues on how best to carry out exposure therapy, including whether safety behaviours should be discontinued.
Exposure and response prevention (ERP)
In the exposure and response prevention (ERP or EX/RP) form of exposure therapy, the resolution to refrain from the escape response is to be maintained at all times (not just during specific practice sessions). Thus, not only does the subject experience habituation to the feared stimulus, but they also practice a fear-incompatible behavioral response to the stimulus. The distinctive feature is that individuals confront their fears and discontinue their escape response.
While this type of therapy typically causes some short-term anxiety, this facilitates long-term reduction in obsessive and compulsive symptoms.
The American Psychiatric Association recommends ERP for the treatment of OCD, citing that ERP has the richest empirical support. As of 2019, ERP is considered a first-line psychotherapy for OCD.
Effectiveness is heterogeneous. Higher efficacy correlates with lower avoidance behaviours and greater adherence to homework. Concurrent use of SSRI medication during ERP does not appear to correlate with better outcomes. Discussion continues on how best to conduct ERP.
Generally, ERP incorporates a relapse prevention plan toward the end of the course of therapy. This can include being ready to re-apply ERP if anxiety recurs.
Mechanism
Mechanism research has been limited in the field.
Habituation was seen as a mechanism in the past, but is seen more recently as a model of therapeutic process.
Inhibitory learning
As of 2022, the inhibitory learning model is the most common conjecture about the mechanism underlying exposure therapy's efficacy. This model posits that in exposure therapy the unpleasant reactions such as anxiety (that were previously learned during fear conditioning) remain intact; they are not expected to be eliminated, but are now inhibited, balanced, or overcome by new learning about the situation (for instance, that the feared result will not necessarily happen). More research is needed.
Inhibitory retrieval
This model posits that additional associative learning processes, such as counterconditioning and novelty-enhanced extinction, may contribute to exposure therapy.
Under-use and barriers to use
Exposure therapy is seen as under-used relative to its efficacy. Barriers to its use by psychologists include the perception that it is antithetical to mainstream psychology, a lack of confidence in administering it, and negative beliefs about exposure therapy.
Uses
Phobia
Exposure therapy is the most successful known treatment for phobias. Several published meta-analyses included studies of one-to-three-hour single-session treatments of phobias, using imaginal exposure. At a post-treatment follow-up four years later, 90% of people retained a considerable reduction in fear, avoidance, and overall level of impairment, while 65% no longer experienced any symptoms of a specific phobia.
Agoraphobia and social anxiety disorder are examples of phobias that have been successfully treated by exposure therapy.
Post-traumatic stress disorder
Exposure therapy in PTSD involves exposing the patient to PTSD-anxiety-triggering stimuli, with the aim of weakening the neural connections between triggers and trauma memories (desensitisation). Exposure may involve:
a real-life trigger ("in vivo")
an imagined trigger ("imaginal")
virtual reality exposure
a triggered feeling generated in a physical way ("interoceptive")
Forms include:
Flooding – exposing the patient directly to a triggering stimulus, while simultaneously helping them not to feel afraid.
Systematic desensitisation ("graduated exposure") – gradually exposing the patient to increasingly vivid experiences that are related to the trauma, but do not trigger post-traumatic stress.
Narrative exposure therapy – creates a written account of the traumatic experiences of a patient or group of patients, in a way that serves to recapture their self-respect and acknowledges their value. Under this name it is used mainly with refugees, in groups. It also forms an important part of cognitive processing therapy and is conditionally recommended for treatment of PTSD by the American Psychological Association.
Prolonged exposure therapy (PE) – a form of behavior therapy and cognitive behavioral therapy designed to treat post-traumatic stress disorder, characterized by two main treatment procedures – imaginal and in vivo exposures. Imaginal exposure is a repeated 'on-purpose' retelling of the trauma memory. In vivo exposure is gradually confronting situations, places, and things that are reminders of the trauma or feel dangerous (despite being objectively safe). Additional procedures include processing of the trauma memory and breathing retraining. The American Psychological Association strongly recommends PE as a first-line psychotherapy treatment for PTSD.
Researchers began experimenting with virtual reality exposure (VRE) therapy in PTSD treatment in 1997 with the advent of the "Virtual Vietnam" scenario. Virtual Vietnam was used as a graduated exposure therapy treatment for Vietnam veterans meeting the qualification criteria for PTSD. A 50-year-old Caucasian male was the first veteran studied. The preliminary results concluded improvement post-treatment across all measures of PTSD and maintenance of the gains at the six-month follow-up. A subsequent open clinical trial of Virtual Vietnam with 16 veterans showed a reduction in PTSD symptoms.
This method was also tested on several active duty Army soldiers, using an immersive computer simulation of military settings over six sessions. Self-reported PTSD symptoms of these soldiers were greatly diminished following the treatment. Exposure therapy has shown promise in the treatment of co-morbid PTSD and substance abuse.
In the area of PTSD, historic barriers to the use of exposure therapy include that clinicians may not understand it, are not confident in their own ability to use it, or more commonly, see significant contraindications for their client.
Obsessive compulsive disorder
Exposure and response prevention (also known as exposure and ritual prevention; ERP or EX/RP) is a variant of exposure therapy that is recommended by the American Academy of Child and Adolescent Psychiatry (AACAP), the American Psychiatric Association (APA), and the Mayo Clinic as first-line treatment of OCD, on the grounds that it has the richest empirical support for both youth and adolescent outcomes.
ERP is predicated on the idea that a therapeutic effect is achieved as subjects confront their fears, but refrain from engaging in the escape response or ritual that delays or eliminates distress. In the case of individuals with OCD or an anxiety disorder, there is a thought or situation that causes distress. Individuals usually combat this distress through specific behaviors that include avoidance or rituals. However, ERP involves purposefully evoking fear, anxiety, and or distress in the individual by exposing him/her to the feared stimulus. The response prevention then involves having the individual refrain from the ritualistic or otherwise compulsive behavior that functions to decrease distress. The patient is then taught to tolerate distress until it fades away on its own, thereby learning that rituals are not always necessary to decrease distress or anxiety. Over repeated practice of ERP, patients with OCD expect to find that they can have obsessive thoughts and images but not have the need to engage in compulsive rituals to decrease distress.
The AACAP's practice parameters for OCD recommend cognitive behavioral therapy, and more specifically ERP, as first-line treatment for youth with mild to moderate severity OCD, and combination psychotherapy and pharmacotherapy for severe OCD. The Cochrane Review's examinations of different randomized controlled trials echo repeated findings of the superiority of ERP over waitlist control or pill-placebos, the superiority of combination ERP and pharmacotherapy, but similar effect sizes of efficacy between ERP or pharmacotherapy alone.
Generalized anxiety disorder
There is empirical evidence that exposure therapy can be an effective treatment for people with generalized anxiety disorder; in vivo exposure therapy (exposure through a real-life situation) specifically has greater effectiveness than imaginal exposure for generalized anxiety disorder. The aim of in vivo exposure treatment is to promote emotional regulation using systematic and controlled therapeutic exposure to traumatic stimuli. Exposure is used to promote fear tolerance.
Exposure therapy is also a preferred method for children who struggle with anxiety.
Other possible uses of exposure therapy
Exposure therapy has been posited as potentially helpful for other uses, including substance abuse disorders, overeating, binge eating, and obesity, and depression.
History
The 9th-century Persian polymath Abu Zayd al-Balkhi wrote about 'tranquilizing fear' by 'forcing oneself to repeatedly expose one's hearing and sight to noxious things' and by being 'moved again and again near the thing it is scared of until it becomes used to it and loses its fear.'
The use of exposure as a mode of therapy began in the 1950s, at a time when psychodynamic views dominated Western clinical practice and behavioral therapy was first emerging. South African psychologists and psychiatrists first used exposure as a way to reduce pathological fears, such as phobias and anxiety-related problems, and they brought their methods to England in the Maudsley Hospital training program.
Joseph Wolpe (1915–1997) was one of the first psychiatrists to spark interest in treating psychiatric problems as behavioral issues. He sought consultation with other behavioral psychologists, among them James G. Taylor (1897–1973), who worked in the psychology department of the University of Cape Town in South Africa. Although most of his work went unpublished, Taylor was the first psychologist known to use exposure therapy treatment for anxiety, including methods of situational exposure with response prevention—a common exposure therapy technique still being used.
Since the 1950s, several sorts of exposure therapy have been developed, including systematic desensitization, flooding, implosive therapy, prolonged exposure therapy, in vivo exposure therapy, and imaginal exposure therapy.
Exposure and response prevention (ERP) traces its roots back to the work of psychologist Vic Meyer in the 1960s. Meyer devised this treatment from his analysis of fear extinguishment in animals via flooding and applied it to human cases in the psychiatric setting that, at the time, were considered intractable. The success of ERP clinically and scientifically has been summarized as "spectacular" by prominent OCD researcher Stanley Rachman decades following Meyer's creation of the method.
Possibly related psychological techniques
Mindfulness
A 2015 review pointed out parallels between exposure therapy and mindfulness, stating that mindful meditation "resembles an exposure situation because [mindfulness] practitioners 'turn towards their emotional experience', bring acceptance to bodily and affective responses, and refrain from engaging in internal reactivity towards it." Imaging studies have shown that the ventromedial prefrontal cortex, hippocampus, and the amygdala are all affected by exposure therapy; imaging studies have shown similar activity in these regions with mindfulness training.
EMDR
Eye movement desensitization and reprocessing (EMDR) includes an element of exposure therapy (desensitization), though whether this is an effective method is controversial.
Other
Desensitization and extinction also involve exposure to a cause of disturbance.
Research
Exposure therapy can be investigated in the laboratory using Pavlovian extinction paradigms. Using rodents such as rats or mice to study extinction allows for the investigation of underlying neurobiological mechanisms involved, as well as testing of pharmacological adjuncts to improve extinction learning.
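One standard formalism for such extinction paradigms is the Rescorla-Wagner rule, which updates the associative strength V on each trial by ΔV = αβ(λ - V); on extinction trials λ = 0 because the expected aversive outcome is omitted, so V decays toward zero. The sketch below is a minimal illustration of that textbook rule with arbitrary parameter values (Python):

def rescorla_wagner_extinction(v0=1.0, alpha_beta=0.3, trials=10):
    # Simulate extinction: associative strength V decays toward
    # lambda = 0 when the conditioned stimulus is presented without
    # the feared outcome.
    v = v0
    history = [round(v, 3)]
    lam = 0.0  # no aversive outcome on extinction trials
    for _ in range(trials):
        v += alpha_beta * (lam - v)
        history.append(round(v, 3))
    return history

print(rescorla_wagner_extinction())
# [1.0, 0.7, 0.49, 0.343, 0.24, 0.168, ...] -- the learned fear
# association weakens across unreinforced trials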
See also
Catharsis
EMDR
Desensitization (psychology)
Extinction (psychology)
Self-acceptance
Self-acceptance is acceptance of self.
Definition
Self-acceptance can be defined as:
the awareness of one's strengths and weaknesses,
the realistic (yet subjective) appraisal of one's talents, capabilities, and general worth, and,
feelings of satisfaction with one's self despite deficiencies and regardless of past behaviors and choices.
According to Shepard, self-acceptance is an individual's satisfaction or happiness with oneself, and is thought to be necessary for good mental health. Self-acceptance involves self-understanding, a realistic, albeit subjective, awareness of one's strengths and weaknesses. It results in the individual's feeling that they are of "unique worth".
Albert Ellis advocated the importance of accepting yourself just because you are alive, human and unique—and not giving yourself a global rating, or being influenced by what others think of you.
In clinical psychology and positive psychology, self-acceptance is considered the prerequisite for change to occur. It can be achieved by ceasing to criticize oneself and trying to fix one's defects, and instead accepting them as existing within oneself; that is, tolerating oneself as imperfect in some respects.
Some distinguish between conditional and unconditional self-acceptance.
Self-acceptance is one of the six factors in Carol D. Ryff's structure for eudaimonic well-being.
Qualities
A person who scores high on self-acceptance:
has a positive self-attitude,
acknowledges and accepts all aspects of themselves (including the good and bad),
is not self-critical or confused about their identity, and,
does not wish they were any different from who they already are.
Past and current views in psychology
In the past, the practice of self-acceptance was reproved by the Greeks. However, the need to know about and understand "the self" eventually became an important, underlying point in several psychological theories, such as:
Jahoda's work on mental health,
Carl Rogers' Theory of Personality,
Gordon Allport's Eight Stages of Self (Proprium) Development,
Maslow's Hierarchy of Needs under the "self-actualization" category,
Albert Ellis' Rational emotive behavioral therapy
In addition to that, the life-span theories of Erikson and Neugarten mention the importance of self-acceptance including one's past life, and Carl Jung's process of individuation also emphasizes coming to terms with the dark side of one's self, or "the shadow".
Relation to positive psychology
With respect to positive psychology, self-acceptance, as a component of eudaimonic well-being (EWB), is an indicator and a measure of psychological well-being. For instance, Alfred Adler, founder of individual psychology, observed that people who thought of themselves as inferior also tended to depreciate others.
Psychological benefits
Some psychological benefits of self-acceptance include mood regulation, a decrease in depressive symptoms, and an increase in positive emotions.
An example of this can be seen in a 2014 study that looked at affective profiles. The results suggest that individuals categorized as self-fulfilling (as compared to the other profiles) tended to score higher on all of Ryff's eudaimonic well-being dimensions (self-acceptance included).
In addition to that, self-acceptance (and environmental mastery) specifically and significantly predicted harmony in life across all affective profiles.
Other psychological benefits include:
a heightened sense of freedom,
a decrease in fear of failure,
an increase in self-worth,
an increase in independence (autonomy),
an increase in self-esteem,
less desire to win the approval of others,
less self-critique and more self-kindness when mistakes occur,
more desire to live life for one's self (and not others), and,
the ability to take more risks without worrying about the consequences.
Self-acceptance is also thought to be necessary for good mental health.
Physical benefits
In addition to psychological benefits, self-acceptance may have physical benefits as well. For example, the results of a 2008 study propose that older women with higher levels of environmental mastery, positive relations with others, and self-acceptance showed lower levels of glycosylated hemoglobin, which is a marker for glucose levels/insulin resistance.
See also
Self-compassion
Self-esteem
Self-love
Unconditional positive regard
Emotionally focused therapy
Emotionally focused therapy and emotion-focused therapy (EFT) are related humanistic approaches to psychotherapy that aim to resolve emotional and relationship issues with individuals, couples, and families. These therapies combine experiential therapy techniques, including person-centered and Gestalt therapies, with systemic therapy and attachment theory. The central premise is that emotions influence cognition, motivate behavior, and are strongly linked to needs. The goals of treatment include transforming maladaptive behaviors, such as emotional avoidance, and developing awareness, acceptance, expression, and regulation of emotion and understanding of relationships. EFT is usually a short-term treatment (eight to 20 sessions).
Emotion-focused therapy for individuals was originally known as process-experiential therapy, and continues to be referred to by this name in some contexts. EFT should not be confused with emotion-focused coping, a separate concept involving coping strategies for managing emotions. EFT has been used to improve clients' emotion-focused coping abilities.
History
EFT began in the mid-1980s as an approach to helping couples. EFT was originally formulated and tested by Sue Johnson and Les Greenberg in 1985, and the first manual for emotionally focused couples therapy was published in 1988.
To develop the approach, Johnson and Greenberg began reviewing videos of sessions of couples therapy to identify, through observation and task analysis, the elements that lead to positive change. They were influenced in their observations by the humanistic experiential psychotherapies of Carl Rogers and Fritz Perls, both of whom valued (in different ways) present-moment emotional experience for its power to create meaning and guide behavior. Johnson and Greenberg saw the need to combine experiential therapy with the systems theoretical view that meaning-making and behavior cannot be considered outside of the whole situation in which they occur. In this "experiential–systemic" approach to couples therapy, as in other approaches to systemic therapy, the problem is viewed as belonging not to one partner, but rather to the cyclical reinforcing patterns of interactions between partners. Emotion is viewed not only as a within-individual phenomenon, but also as part of the whole system that organizes the interactions between partners.
In 1986, Greenberg chose "to refocus his efforts on developing and studying an experiential approach to individual therapy". Greenberg and colleagues shifted their attention away from couples therapy toward individual psychotherapy. They attended to emotional experiencing and its role in individual self-organization. Building on the experiential theories of Rogers and Perls and others such as Eugene Gendlin, as well as on their own extensive work on information processing and the adaptive role of emotion in human functioning, they created a treatment manual with numerous clearly outlined principles for what they called a process-experiential approach to psychological change. They and their colleagues further expanded the process-experiential approach, providing detailed manuals of specific principles and methods of therapeutic intervention, and presented case formulation maps for this approach.
Johnson continued to develop EFT for couples, integrating attachment theory with systemic and humanistic approaches, and explicitly expanding attachment theory's understanding of love relationships. Johnson's model retained the original three stages and nine steps and two sets of interventions that aim to reshape the attachment bond: one set of interventions to track and restructure patterns of interaction and one to access and reprocess emotion (see below). Johnson's goal is the creation of positive cycles of interpersonal interaction wherein individuals are able to ask for and offer comfort and support to safe others, facilitating interpersonal emotion regulation.
Greenberg and Goldman developed a variation of EFT for couples that contains some elements from Greenberg and Johnson's original formulation but adds several steps and stages. Greenberg and Goldman posit three motivational dimensions—(1) attachment, (2) identity or power, and (3) attraction or liking—that impact emotion regulation in intimate relationships.
Similar terminology, different meanings
The terms emotion-focused therapy and emotionally focused therapy have different meanings for different therapists.
In Les Greenberg's approach the term emotion-focused is sometimes used to refer to psychotherapy approaches in general that emphasize emotion. Greenberg "decided that on the basis of the development in emotion theory that treatments such as the process experiential approach, as well as some other approaches that emphasized emotion as the target of change, were sufficiently similar to each other and different from existing approaches to merit being grouped under the general title of emotion-focused approaches." He and colleague Rhonda Goldman noted their choice to "use the more American phrasing of emotion-focused to refer to therapeutic approaches that focused on emotion, rather than the original, possibly more English term (reflecting both Greenberg's and Johnson's backgrounds) emotionally focused." Greenberg uses the term emotion-focused to suggest assimilative integration of an emotional focus into any approach to psychotherapy. He considers the focus on emotions to be a common factor among various systems of psychotherapy: "The term emotion-focused therapy will, I believe, be used in the future, in its integrative sense, to characterize all therapies that are emotion-focused, be they psychodynamic, cognitive-behavioral, systemic, or humanistic." Greenberg co-authored a chapter on the importance of research by clinicians and integration of psychotherapy approaches that stated:
In addition to these empirical findings, leaders of major orientations have voiced serious criticisms of their preferred theoretical approaches, while encouraging an open-minded attitude toward other orientations.... Furthermore, clinicians of different orientations recognized that their approaches did not provide them with the clinical repertoire sufficient to address the diversity of clients and their presenting problems.
Sue Johnson's use of the term emotionally focused therapy refers to a specific model of relationship therapy that explicitly integrates systems and experiential approaches and places prominence upon attachment theory as a theory of emotion regulation. Johnson views attachment needs as a primary motivational system for mammalian survival; her approach to EFT focuses on attachment theory as a theory of adult love wherein attachment, care-giving, and sex are intertwined. Attachment theory is seen to subsume the search for personal autonomy, dependability of the other and a sense of personal and interpersonal attractiveness, love-ability and desire. Johnson's approach to EFT aims to reshape attachment strategies towards optimal inter-dependency and emotion regulation, for resilience and physical, emotional, and relational health.
Features
Experiential focus
All EFT approaches have retained emphasis on the importance of Rogerian empathic attunement and communicated understanding. They all focus upon the value of engaging clients in emotional experiencing moment-to-moment in session. Thus, an experiential focus is prominent in all EFT approaches. All EFT theorists have expressed the view that individuals engage with others on the basis of their emotions, and construct a sense of self from the drama of repeated emotionally laden interactions.
The information-processing theory of emotion and emotional appraisal (in accordance with emotion theorists such as Magda B. Arnold, Paul Ekman, Nico Frijda, and James Gross) and the humanistic, experiential emphasis on moment-to-moment emotional expression (developing the earlier psychotherapy approaches of Carl Rogers, Fritz Perls, and Eugene Gendlin) have been strong components of all EFT approaches since their inception. EFT approaches value emotion as the target and agent of change, honoring the intersection of emotion, cognition, and behavior. EFT approaches posit that emotion is the first, often subconscious response to experience. All EFT approaches also use the framework of primary and secondary (reactive) emotion responses.
Maladaptive emotion responses and negative patterns of interaction
Greenberg and some other EFT theorists have categorized emotion responses into four types (see below) to help therapists decide how to respond to a client at a particular time: primary adaptive, primary maladaptive, secondary reactive, and instrumental. Greenberg has posited six principles of emotion processing: (1) awareness of emotion or naming what one feels, (2) emotional expression, (3) regulation of emotion, (4) reflection on experience, (5) transformation of emotion by emotion, and (6) corrective experience of emotion through new lived experiences in therapy and in the world. While primary adaptive emotion responses are seen as a reliable guide for behavior in the present situation, primary maladaptive emotion responses are seen as an unreliable guide for behavior in the present situation (alongside other possible emotional difficulties such as lack of emotional awareness, emotion dysregulation, and problems in meaning-making).
Johnson rarely distinguishes between adaptive and maladaptive primary emotion responses, and rarely distinguishes emotion responses as dysfunctional or functional. Instead, primary emotional responses are usually construed as normal survival reactions in the face of what John Bowlby called "separation distress". EFT for couples, like other systemic therapies that emphasize interpersonal relationships, presumes that the patterns of interpersonal interaction are the problematic or dysfunctional element. The patterns of interaction are amenable to change after accessing the underlying primary emotion responses that are subconsciously driving the ineffective, negative reinforcing cycles of interaction. Validating reactive emotion responses and reprocessing newly accessed primary emotion responses is part of the change process.
Individual therapy
A 14-step case formulation process has been proposed that regards emotion-related problems as stemming from at least four possible causes: lack of awareness or avoidance of emotion, dysregulation of emotion, maladaptive emotion response, or a problem with making meaning of experiences. The theory features four types of emotion response (see below), categorizes needs under "attachment" and "identity", specifies four types of emotional processing difficulties, delineates different types of empathy, has at least a dozen different task markers (see below), relies on two interactive tracks of emotion and narrative processes as sources of information about a client, and presumes a dialectical-constructivist model of psychological development and an emotion schematic system.
The emotion schematic system is seen as the central catalyst of self-organization, often at the base of dysfunction and ultimately the road to cure. For simplicity, we use the term emotion schematic process to refer to the complex synthesis process in which a number of co-activated emotion schemes co-apply, to produce a unified sense of self in relation to the world.
Techniques used in "coaching clients to work through their feelings" may include the Gestalt therapy empty chair technique, frequently used for resolving "unfinished business", and the two-chair technique, frequently used for self-critical splits.
Emotion response types
Emotion-focused theorists have posited that each person's emotions are organized into idiosyncratic emotion schemes that are highly variable both between people and within the same person over time, but for practical purposes emotional responses can be classified into four broad types: primary adaptive, primary maladaptive, secondary reactive, and instrumental.
Primary adaptive emotion responses are initial emotional responses to a given stimulus that have a clear beneficial value in the present situation—for example, sadness at loss, anger at violation, and fear at threat. Sadness is an adaptive response when it motivates people to reconnect with someone or something important that is missing. Anger is an adaptive response when it motivates people to take assertive action to end the violation. Fear is an adaptive response when it motivates people to avoid or escape an overwhelming threat. In addition to emotions that indicate action tendencies (such as the three just mentioned), primary adaptive emotion responses include the feeling of being certain and in control or uncertain and out of control, and/or a general felt sense of emotional pain—these feelings and emotional pain do not provide immediate action tendencies but do provide adaptive information that can be symbolized and worked through in therapy. Primary adaptive emotion responses "are attended to and expressed in therapy in order to access the adaptive information and action tendency to guide problem solving."
Primary maladaptive emotion responses are also initial emotional responses to a given stimulus; however, they are based on emotion schemes that are no longer useful (and that may or may not have been useful in the person's past) and that were often formed through previous traumatic experiences. Examples include sadness at the joy of others, anger at the genuine caring or concern of others, fear at harmless situations, and chronic feelings of insecurity/fear or worthlessness/shame. For example, a person may respond with anger at the genuine caring or concern of others because as a child he or she was offered caring or concern that was usually followed by a violation; as a result, he or she learned to respond to caring or concern with anger even when there is no violation. The person's angry response is understandable, and needs to be met with empathy and compassion even though his or her angry response is not helpful. Primary maladaptive emotion responses are accessed in therapy with the aim of transforming the emotion scheme through new experiences.
Secondary reactive emotion responses are complex chain reactions where a person reacts to his or her primary adaptive or maladaptive emotional response and then replaces it with another, secondary emotional response. In other words, they are emotional responses to prior emotional responses. ("Secondary" means that a different emotion response occurred first.) They can include secondary reactions of hopelessness, helplessness, rage, or despair that occur in response to primary emotion responses that are experienced (secondarily) as painful, uncontrollable, or violating. They may be escalations of a primary emotion response, as when people are angry about being angry, afraid of their fear, or sad about their sadness. They may be defenses against a primary emotion response, such as feeling anger to avoid sadness or fear to avoid anger; this can include gender role-stereotypical responses such as expressing anger when feeling primarily afraid (stereotypical of men's gender role), or expressing sadness when primarily angry (stereotypical of women's gender role). "These are all complex, self-reflexive processes of reacting to one's emotions and transforming one emotion into another. Crying, for example, is not always true grieving that leads to relief, but rather can be the crying of secondary helplessness or frustration that results in feeling worse." Secondary reactive emotion responses are accessed and explored in therapy in order to increase awareness of them and to arrive at more primary and adaptive emotion responses.
Instrumental emotion responses are experienced and expressed by a person because the person has learned that the response has an effect on others, "such as getting them to pay attention to us, to go along with something we want them to do for us, to approve of us, or perhaps most often just not to disapprove of us." Instrumental emotion responses can be consciously intended or unconsciously learned (i.e., through operant conditioning). Examples include crocodile tears (instrumental sadness), bullying (instrumental anger), crying wolf (instrumental fear), and feigned embarrassment (instrumental shame). When a client responds in therapy with instrumental emotion responses, it may feel manipulative or superficial to the therapist. Instrumental emotion responses are explored in therapy in order to increase awareness of their interpersonal function and/or the associated primary and secondary gain.
The therapeutic process with different emotion responses
Emotion-focused theorists have proposed that each type of emotion response calls for a different intervention process by the therapist. Primary adaptive emotion responses need be more fully allowed and accessed for their adaptive information. Primary maladaptive emotion responses need to be accessed and explored to help the client identify core unmet needs (e.g., for validation, safety, or connection), and then regulated and transformed with new experiences and new adaptive emotions. Secondary reactive emotion responses need empathic exploration in order to discover the sequence of emotions that preceded them. Instrumental emotion responses need to be explored interpersonally in the therapeutic relationship to increase awareness of them and address how they are functioning in the client's situation.
Primary emotion responses are not called "primary" because they are somehow more real than the other responses; all of the responses feel real to a person, but therapists can classify them into these four types in order to help clarify the functions of the response in the client's situation and how to intervene appropriately.
Therapeutic tasks
A therapeutic task is an immediate problem that a client needs to resolve in a psychotherapy session. In the 1970s and 1980s, researchers such as Laura North Rice (a former colleague of Carl Rogers) applied task analysis to transcripts of psychotherapy sessions in an attempt to describe in more detail the process of clients' cognitive and emotional change, so that therapists might more reliably provide optimal conditions for change. This kind of psychotherapy process research eventually led to a standardized (and evolving) set of therapeutic tasks in emotion-focused therapy for individuals.
The following table summarizes the standard set of these therapeutic tasks as of 2012. The tasks are classified into five broad groups: empathy-based, relational, experiencing, reprocessing, and action. The task marker is an observable sign that a client may be ready to work on the associated task. The intervention process is a sequence of actions carried out by therapist and client in working on the task. The end state is the desired resolution of the immediate problem.
In addition to the task markers listed below, other markers and intervention processes for working with emotion and narrative have been specified: same old stories, empty stories, unstoried emotions, and broken stories.
Experienced therapists can create new tasks; EFT therapist Robert Elliott, in a 2010 interview, noted that "the highest level of mastery of the therapy—EFT included—is to be able to create new structures, new tasks. You haven't really mastered EFT or some other therapy until you actually can begin to create new tasks."
Emotion-focused therapy for trauma
The interventions and the structure of emotion-focused therapy have been adapted for the specific needs of psychological trauma survivors. A manual of emotion-focused therapy for individuals with complex trauma (EFTT) has been published. For example, modifications of the traditional Gestalt empty chair technique have been developed.
Other versions of EFT for individuals
One proposed approach to emotionally focused individual therapy focuses on attachment, while integrating the experiential focus of empathic attunement for engaging and reprocessing emotional experience and tracking and restructuring the systemic aspects and patterns of emotion regulation. The therapist follows the attachment model by addressing deactivating and hyperactivating strategies. Individual therapy is seen as a process of developing secure connections between therapist and client, between client and past and present relationships, and within the client. Attachment principles guide therapy in the following ways: forming the collaborative therapeutic relationship, shaping the overall goal for therapy to be that of "effective dependency" (following John Bowlby) upon one or two safe others, depathologizing emotion by normalizing separation distress responses, and shaping change processes. The change processes are: identifying and strengthening patterns of emotion regulation, and creating corrective emotional experiences to transform negative patterns into secure bonds.
Another version integrates EFT principles and methods with mindfulness-based cognitive therapy and mindfulness-based stress reduction.
Couples therapy
A systemic perspective is important in all approaches to EFT for couples. Tracking conflictual patterns of interaction, often referred to as a "dance" in Johnson's popular literature, has been a hallmark of the first stage of Johnson and Greenberg's approach since its inception in 1985. In Goldman and Greenberg's newer approach, therapists help clients "also work toward self-change and the resolution of pain stemming from unmet childhood needs that affect the couple interaction, in addition to working on interactional change." Goldman and Greenberg justify their added emphasis on self-change by noting that not all problems in a relationship can be solved only by tracking and changing patterns of interaction:
In addition, in our observations of psychotherapeutic work with couples, we have found that problems or difficulties that can be traced to core identity concerns such as needs for validation or a sense of worth are often best healed through therapeutic methods directed toward the self rather than to the interactions. For example, if a person's core emotion is one of shame and they feel "rotten at the core" or "simply fundamentally flawed," soothing or reassuring from one's partner, while helpful, will not ultimately solve the problem, lead to structural emotional change, or alter the view of oneself.
In Greenberg and Goldman's approach to EFT for couples, although they "fully endorse" the importance of attachment, attachment is not considered to be the only interpersonal motivation of couples; instead, attachment is considered to be one of three aspects of relational functioning, along with issues of identity/power and attraction/liking. In Johnson's approach, attachment theory is considered to be the defining theory of adult love, subsuming other motivations, and it guides the therapist in processing and reprocessing emotion.
In Greenberg and Goldman's approach, the emphasis is on working with core issues related to identity (working models of self and other) and promoting both self-soothing and other-soothing for a better relationship, in addition to interactional change. In Johnson's approach, the primary goal is to reshape attachment bonds and create "effective dependency" (including secure attachment).
Stages and steps
EFT for couples features a nine-step model of restructuring the attachment bond between partners. In this approach, the aim is to reshape the attachment bond and create more effective co-regulation and "effective dependency", increasing individuals' self-regulation and resilience. In good-outcome cases, the couple is helped to respond and thereby meet each other's unmet needs and injuries from childhood. The newly shaped secure attachment bond may become the best antidote to a traumatic experience from within and outside of the relationship.
Adding to the original three-stage, nine-step EFT framework developed by Johnson and Greenberg, Greenberg and Goldman's emotion-focused therapy for couples has five stages and 14 steps. It is structured to work on identity issues and self-regulation prior to changing negative interactions. It is considered necessary, in this approach, to help partners experience and reveal their own underlying vulnerable feelings first, so they are better equipped to do the intense work of attuning to the other partner and to be open to restructuring interactions and the attachment bond.
One summary of the nine treatment steps in Johnson's model of EFT for couples describes the process as follows: "The therapist leads the couple through these steps in a spiral fashion, as one step incorporates and leads into the other. In mildly distressed couples, partners usually work quickly through the steps at a parallel rate. In more distressed couples, the more passive or withdrawn partner is usually invited to go through the steps slightly ahead of the other."
Stage 1. Stabilization (assessment and de-escalation phase)
Step 1: Identify the relational conflict issues between the partners
Step 2: Identify the negative interaction cycle where these issues are expressed
Step 3: Access attachment emotions underlying the position each partner takes in this cycle
Step 4: Reframe the problem in terms of the cycle, unacknowledged emotions, and attachment needs
During this stage, the therapist creates a comfortable and stable environment for the couple to have an open discussion about any hesitations they may have about the therapy, including the trustworthiness of the therapist. The therapist also gets a sense of the couple's positive and negative interactions from past and present and is able to summarize and present the negative patterns for them. Partners soon no longer view themselves as victims of their negative interaction cycle; they are now allies against it.
Stage 2. Restructuring the bond (changing interactional positions phase)
Step 5: Access disowned or implicit needs (e.g., need for reassurance), emotions (e.g., shame), and models of self
Step 6: Promote each partner's acceptance of the other's experience
Step 7: Facilitate each partner's expression of needs and wants to restructure the interaction based on new understandings and create bonding events
This stage involves restructuring and widening the emotional experiences of the couple. This is done through couples recognizing their attachment needs and then changing their interactions based on those needs. At first, their new way of interacting may be strange and hard to accept, but as they become more aware and in control of their interactions they are able to stop old patterns of behavior from reemerging.
Stage 3. Integration and consolidation
Step 8: Facilitate the formulation of new stories and new solutions to old problems
Step 9: Consolidate new cycles of behavior
This stage focuses on the reflection of new emotional experiences and self-concepts. It integrates the couple's new ways of dealing with problems within themselves and in the relationship.
Styles of attachment
Four attachment styles have been described that affect the therapy process:
Secure attachment: People who are secure and trusting perceive themselves as lovable, able to trust others and themselves within a relationship. They give clear emotional signals, and are engaged, resourceful and flexible in unclear relationships. Secure partners express feelings, articulate needs, and allow their own vulnerability to show.
Avoidant attachment: People with a diminished ability to articulate feelings who tend not to acknowledge their need for attachment and struggle to name their needs in a relationship. They tend to adopt a safe position and solve problems dispassionately without understanding the effect that their safe distance has on their partners.
Anxious attachment: People who are psychologically reactive and anxiously attached. They tend to demand reassurance in an aggressive way, demand their partner's attachment, and tend to use blame strategies (including emotional blackmail) in order to engage their partner.
Fearful–avoidant attachment: People who have been traumatized and have experienced little to no recovery from it vacillate between attachment and hostility. This is sometimes referred to as disorganized attachment.
Family therapy
The emotionally focused family therapy (EFFT) of Johnson and her colleagues aims to promote secure bonds among distressed family members. It is a therapy approach consistent with the attachment-oriented experiential–systemic emotionally focused model in three stages: (1) de-escalating negative cycles of interaction that amplify conflict and insecure connections between parents and children; (2) restructuring interactions to shape positive cycles of parental accessibility and responsiveness to offer the child or adolescent a safe haven and a secure base; (3) consolidation of the new responsive cycles and secure bonds. Its primary focus is on strengthening parental responsiveness and care-giving, to meet children and adolescents' attachment needs. It aims to "build stronger families through (1) recruiting and strengthening parental emotional responsiveness to children, (2) accessing and clarifying children's attachment needs, and (3) facilitating and shaping care-giving interactions from parent to child". Some clinicians have integrated EFFT with play therapy.
One group of clinicians, inspired in part by Greenberg's approach to EFT, developed a treatment protocol specifically for families of individuals struggling with an eating disorder. The treatment is based on the principles and techniques of four different approaches: emotion-focused therapy, behavioral family therapy, motivational enhancement therapy, and the New Maudsley family skills-based approach. It aims to help parents "support their child in the processing of emotions, increasing their emotional self-efficacy, deepening the parent–child relationships and thereby making ED [eating disorder] symptoms unnecessary to cope with painful emotional experiences". The treatment has three main domains of intervention, four core principles, and five steps derived from Greenberg's emotion-focused approach and influenced by John Gottman: (1) attending to the child's emotional experience, (2) naming the emotions, (3) validating the emotional experience, (4) meeting the emotional need, and (5) helping the child to move through the emotional experience, problem solving if necessary.
Efficacy
Johnson, Greenberg, and many of their colleagues have spent their long careers as academic researchers publishing the results of empirical studies of various forms of EFT.
The American Psychological Association considers emotion-focused therapy for individuals to be an empirically supported treatment for depression. Studies have suggested that it is effective in the treatment of depression, interpersonal problems, trauma, and avoidant personality disorder.
Practitioners of EFT have claimed that studies have consistently shown clinically significant improvement post-therapy. Studies, again mostly by EFT practitioners, have suggested that emotionally focused therapy for couples is an effective way to restructure distressed couple relationships into safe and secure bonds with long-lasting results. A meta-analysis of the four most rigorous outcome studies before 2000 concluded that the original nine-step, three-stage emotionally focused approach to couples therapy had a larger effect size than any other couple intervention had achieved to date, but this meta-analysis was later harshly criticized by psychologist James C. Coyne, who called it "a poor quality meta-analysis of what should have been left as pilot studies conducted by promoters of a therapy in their own lab". A study with an fMRI component, conducted in collaboration with American neuroscientist Jim Coan, suggested that emotionally focused couples therapy reduces the brain's response to threat in the presence of a romantic partner; this study was also criticized by Coyne.
A 2019 meta-analysis on EFT effectiveness for couples therapy concluded that the approach significantly improves relationship satisfaction, with these improvements being sustained for up to two years at follow-up.
Strengths
Some of the strengths of EFT approaches can be summarized as follows:
EFT aims to be collaborative and respectful of clients, combining experiential person-centered therapy techniques with systemic therapy interventions.
Change strategies and interventions are specified through intensive analysis of psychotherapy process.
EFT has been validated by 30 years of empirical research. There is also research on the change processes and predictors of success.
EFT has been applied to different kinds of problems and populations, although more research on different populations and cultural adaptations is needed.
EFT for couples is based on conceptualizations of marital distress and adult love that are supported by empirical research on the nature of adult interpersonal attachment.
Criticism
Psychotherapist Campbell Purton, in his 2014 book The Trouble with Psychotherapy, criticized a variety of approaches to psychotherapy, including behavior therapy, person-centered therapy, psychodynamic therapy, cognitive behavioral therapy, emotion-focused therapy, and existential therapy; he argued that these psychotherapies have accumulated excessive and/or flawed theoretical baggage that deviates too much from an everyday common-sense understanding of personal troubles. With regard to emotion-focused therapy, Purton argued that "the effectiveness of each of the 'therapeutic tasks' can be understood without the theory" and that what clients say "is not well explained in terms of the interaction of emotion schemes; it is better explained in terms of the person's situation, their response to it, and their having learned the particular language in which they articulate their response."
In 2014, psychologist James C. Coyne criticized some EFT research for lack of rigor (for example, being underpowered and having high risk of bias), but he also noted that such problems are common in the field of psychotherapy research.
In a 2015 article in Behavioral and Brain Sciences on "memory reconsolidation, emotional arousal and the process of change in psychotherapy", Richard D. Lane and colleagues summarized a common claim in the literature on emotion-focused therapy that "emotional arousal is a key ingredient in therapeutic change" and that "emotional arousal is critical to psychotherapeutic success". In a response accompanying the article, Bruce Ecker and colleagues (creators of coherence therapy) disagreed with this claim and argued that the key ingredient in therapeutic change involving memory reconsolidation is not emotional arousal but instead a perceived mismatch between an expected pattern and an experienced pattern; they wrote:
The brain clearly does not require emotional arousal per se for inducing deconsolidation. That is a fundamental point. If the target learning happens to be emotional, then its reactivation (the first of the two required elements) of course entails an experience of that emotion, but the emotion itself does not inherently play a role in the mismatch that then deconsolidates the target learning, or in the new learning that then rewrites and erases the target learning. [...] The same considerations imply that "changing emotion with emotion" (stated three times by Lane et al.) inaccurately characterizes how learned responses change through reconsolidation. Mismatch consists most fundamentally of a direct, unmistakable perception that the world functions differently from one's learned model. "Changing model with mismatch" is the core phenomenology.
Other responses to the article argued that the emotion-focused approach "would be strengthened by the inclusion of predictions regarding additional factors that might influence treatment response, predictions for improving outcomes for non-responsive patients, and a discussion of how the proposed model might explain individual differences in vulnerability for mental health problems", and that the model needed further development to account for the diversity of states called "psychopathology" and the relevant maintaining and worsening processes.
See also
Accelerated experiential dynamic psychotherapy
Affectional bond
Attachment in adults
Attachment in children
Attachment-based psychotherapy
Compassion focused therapy
Emotional reasoning
Emotions in decision-making
Human bonding
Inner Relationship Focusing
Interpersonal attraction
Interpersonal communication
Intimate relationship
Motivated reasoning
Object relations theory
Schema therapy
Systems science

Systems science, also referred to as systems research or simply systems, is a transdisciplinary field concerned with understanding simple and complex systems in nature and society, with applications across the formal, natural, social, and applied sciences as well as engineering and technology.
To systems scientists, the world can be understood as a system of systems. The field aims to develop transdisciplinary foundations that are applicable in a variety of areas, such as psychology, biology, medicine, communication, business, technology, computer science, engineering, and social sciences.
Themes commonly stressed in systems science are (a) a holistic view, (b) the interaction between a system and its embedding environment, and (c) the complex (often subtle) trajectories of dynamic behavior, which are sometimes stable (and thus reinforcing) but at various 'boundary conditions' can become wildly unstable (and thus destructive). Concerns about Earth-scale biosphere/geosphere dynamics are an example of the kind of problem to which systems science seeks to contribute meaningful insights.
Associated fields
The systems sciences are a broad array of fields. One way of conceiving of these is in three groups: fields that have developed systems ideas primarily through theory; those that have done so primarily through practical engagements with problem situations; and those that have applied systems ideas in other disciplines.
Theoretical fields
Chaos and dynamical systems
Complexity
Control theory
Affect control theory
Control engineering
Control systems
Cybernetics
Autopoiesis
Conversation theory
Engineering cybernetics
Perceptual control theory
Management cybernetics
Second-order cybernetics
Cyber-physical systems
Artificial intelligence
Synthetic intelligence
Information theory
General systems theory
Systems theory in anthropology
Biochemical systems theory
Ecological systems theory
Developmental systems theory
Living systems theory
LTI system theory
Social systems
Sociotechnical systems theory
Mathematical system theory
World-systems theory
Hierarchy theory
Practical fields
Critical systems thinking
Operations research and management science
Soft systems methodology
The soft systems methodology was developed in England by academics at the University of Lancaster Systems Department through a ten-year action research programme. The main contributor is Peter Checkland (born 18 December 1930, in Birmingham, UK), a British management scientist and emeritus professor of systems at Lancaster University.
Systems analysis
Systems analysis is the branch of systems science that analyzes systems, the interactions within those systems, or their interaction with the environment, often prior to their automation as computer models. Systems analysis is closely associated with the RAND Corporation.
Systemic design
Systemic design integrates methodologies from systems thinking with advanced design practices to address complex, multi-stakeholder situations.
Systems dynamics
System dynamics is an approach to understanding the behavior of complex systems over time. It offers a "simulation technique for modeling business and social systems" that deals with the internal feedback loops and time delays affecting the behavior of the entire system. What distinguishes system dynamics from other approaches to studying complex systems is its use of feedback loops and of stocks and flows, as illustrated in the sketch below.
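The following is a minimal sketch of that idea, not a reference implementation; the model, parameter values, and function names are illustrative assumptions. It integrates a single stock whose inflow and outflow both depend on the stock itself, which is the basic feedback structure of a system dynamics model.

def simulate(initial_stock=100.0, inflow_rate=0.05,
             outflow_rate=0.03, dt=0.25, steps=40):
    """Euler integration of d(stock)/dt = inflow - outflow,
    where both flows depend on the stock itself (feedback)."""
    stock = initial_stock
    history = [stock]
    for _ in range(steps):
        inflow = inflow_rate * stock      # reinforcing loop
        outflow = outflow_rate * stock    # balancing loop
        stock += (inflow - outflow) * dt  # flows accumulate in the stock
        history.append(stock)
    return history

if __name__ == "__main__":
    trajectory = simulate()
    print(f"final stock after 10 time units: {trajectory[-1]:.1f}")

With the assumed rates the net feedback is positive, so the stock grows; swapping the two rates would give a balancing (declining) trajectory. Real system dynamics tools layer time delays, nonlinear relationships, and multiple coupled stocks onto this same core.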
Systems engineering
Systems engineering (SE) is an interdisciplinary field of engineering that focuses on the development and organization of complex systems. It is the "art and science of creating whole solutions to complex problems", for example: signal processing systems, control systems and communication systems, or other forms of high-level modelling and design in specific fields of engineering. Systems science is also foundational to embedded software development, which must satisfy system-level requirements defined through systems engineering.
Aerospace systems
Biological systems engineering
Earth systems engineering and management
Electronic systems
Enterprise systems engineering
Software systems
Systems analysis
Applications in other disciplines
Earth system science
Climate systems
Systems geology
Systems biology
Computational systems biology
Synthetic biology
Systems immunology
Systems neuroscience
Systems chemistry
Systems ecology
Ecosystem ecology
Agroecology
Systems psychology
Ergonomics
Family systems theory
Systemic therapy
See also
Antireductionism
Evolutionary prototyping
Holism
Cybernetics
Systems engineering
System dynamics
Systemics
System equivalence
Systems theory
Tektology
World-systems theory
Complex systems
References
Further reading
B. A. Bayraktar, Education in Systems Science, 1979, 369 pp.
Kenneth D. Bailey, "Fifty Years of Systems Science: Further Reflections", Systems Research and Behavioral Science, 22, 2005, pp. 355–361.
Robert L. Flood, Ewart R Carson, Dealing with Complexity: An Introduction to the Theory and Application of Systems Science (2nd Edition), 1993.
George J. Klir, Facets of Systems Science (2nd Edition), Kluwer Academic/Plenum Publishers, 2001.
Ervin László, Systems Science and World Order: Selected Studies, 1983.
G. E. Mobus & M. C. Kalton, Principles of Systems Science, 2015, New York: Springer.
Anatol Rapoport (ed.), General Systems: Yearbook of the Society for the Advancement of General Systems Theory, Society for General Systems Research, Vol 1., 1956.
Li D. Xu, "The contributions of Systems Science to Information Systems Research", Systems Research and Behavioral Science, 17, 2000, pp. 105–116.
Graeme Donald Snooks, "A general theory of complex living systems: Exploring the demand side of dynamics", Complexity, vol. 13, no. 6, July/August 2008.
John N. Warfield, "A proposal for Systems Science", Systems Research and Behavioral Science, 20, 2003, pp. 507–520.
Michael C. Jackson, Critical Systems Thinking and the Management of Complexity, 2019, Wiley.
External links
Principia Cybernetica Web
Institute of System Science Knowledge (ISSK.org)
International Society for the System Sciences
American Society for Cybernetics
UK Systems Society
Cybernetics Society
Postgraduate education

Postgraduate education, graduate education, or graduate school consists of academic or professional degrees, certificates, diplomas, or other qualifications usually pursued by post-secondary students who have earned an undergraduate (bachelor's) degree.
The organization and structure of postgraduate education varies in different countries, as well as in different institutions within countries. The term "graduate school" or "grad school" is typically used in North America, while "postgraduate" is more common in the rest of the English-speaking world.
Graduate degrees can include master's and doctoral degrees, and other qualifications such as graduate diplomas, certificates and professional degrees. A distinction is typically made between graduate schools (where courses of study vary in the degree to which they provide training for a particular profession) and professional schools, which can include medical school, law school, business school, and other institutions of specialized fields such as nursing, speech–language pathology, engineering, or architecture. The distinction between graduate schools and professional schools is not absolute since various professional schools offer graduate degrees and vice versa.
Producing original research is a significant component of graduate studies in the humanities, natural sciences and social sciences. This research typically leads to the writing and defense of a thesis or dissertation. In graduate programs that are oriented toward professional training (e.g., MPA, MBA, JD, MD), the degrees may consist solely of coursework, without an original research or thesis component. Graduate students in the humanities, sciences and social sciences often receive funding from their university (e.g., fellowships or scholarships) or a teaching assistant position or other job; in the profession-oriented grad programs, students are less likely to get funding, and the fees are typically much higher.
Although graduate school programs are distinct from undergraduate degree programs, graduate instruction (in the US, Australia, and other countries) is often offered by some of the same senior academic staff and departments who teach undergraduate courses. Unlike in undergraduate programs, however, it is less common for graduate students to take coursework outside their specific field of study at graduate or graduate-entry level. In doctoral programs, though, it is quite common for students to take courses from a wider range of study; a fixed portion of coursework, sometimes known as a residency, is typically required to be taken outside the candidate's department and university to broaden the student's research abilities.
Types of postgraduate qualification
There are two main types of degrees studied for at the postgraduate level: academic and vocational degrees.
Degrees
The term degree in this context means moving from one stage or level to another (from French degré, from Latin dē- + gradus), and first appeared in the 13th century.
History
Although systems of higher education date back to ancient India, ancient Greece, ancient Rome and ancient China, the concept of postgraduate education depends upon the system of awarding degrees at different levels of study, and can be traced to the workings of European medieval universities, mostly Italian. University studies took six years for a bachelor's degree and up to twelve additional years for a master's degree or doctorate. The first six years taught the faculty of the arts, which was the study of the seven liberal arts: arithmetic, geometry, astronomy, music theory, grammar, logic, and rhetoric. The main emphasis was on logic. Once a Bachelor of Arts degree had been obtained, the student could choose one of three faculties—law, medicine, or theology—in which to pursue master's or doctor's degrees.
The degrees of master (from Latin magister) and doctor (from Latin doctor) were for some time equivalent, "the former being more in favour at Paris and the universities modeled after it, and the latter at Bologna and its derivative universities. At Oxford and Cambridge a distinction came to be drawn between the Faculties of Law, Medicine, and Theology and the Faculty of Arts in this respect, the title of Doctor being used for the former, and that of Master for the latter." Because theology was thought to be the highest of the subjects, the doctorate came to be thought of as higher than the master's.
The main significance of the higher, postgraduate degrees was that they licensed the holder to teach ("doctor" comes from Latin docere, "to teach").
Definition
In most countries, the hierarchy of postgraduate degrees is as follows:
Master's degrees. These are sometimes placed in a further hierarchy, starting with degrees such as the Master of Arts (from Latin Magister artium; M.A.) and Master of Science (from Latin Magister scientiae; M.Sc.) degrees, then the Master of Philosophy degree (from Latin Magister philosophiae; M.Phil.), and finally the Master of Letters degree (from Latin Magister litterarum; M.Litt.) (all formerly known in France as DEA or DESS before 2005, and nowadays Masters too). In the UK, master's degrees may be taught or by research: taught master's degrees include the Master of Science and Master of Arts degrees, which last one year and are worth 180 CATS credits (equivalent to 90 ECTS European credits), whereas the master's degrees by research include the Master of Research degree (M.Res.), which also lasts one year and is worth 180 CATS or 90 ECTS credits (the difference compared to the Master of Science and Master of Arts degrees being that the research is much more extensive), and the Master of Philosophy degree, which lasts two years. In Scottish universities, the Master of Philosophy degree tends to be by research or a higher master's degree, and the Master of Letters degree tends to be the taught or lower master's degree. In many fields such as clinical social work or library science in North America, a master's is the terminal degree. Professional degrees such as the Master of Architecture degree (M.Arch.) can take up to three and a half years to satisfy professional requirements to be an architect. Professional degrees such as the Master of Business Administration degree (M.B.A.) can last up to two years to satisfy the requirement to become a knowledgeable business leader.
Doctorates. These are often further divided into academic and professional doctorates. An academic doctorate can be awarded as a Doctor of Philosophy degree (from Latin Doctor philosophiae; Ph.D. or D.Phil.), a Doctor of Psychology degree (from Latin Doctor psychologiae; Psy.D.), or as a Doctor of Science degree (from Latin Doctor scientiae; D.Sc.). The Doctor of Science degree can also be awarded in specific fields, such as a Doctor of Science in Mathematics degree (from Latin Doctor scientiarum mathematicarum; D.Sc.Math.), a Doctor of Agricultural Science degree (from Latin Doctor scientiarum agrariarum; D.Sc.Agr.), a Doctor of Business Administration degree (D.B.A.), etc. In some parts of Europe, doctorates are divided into the Doctor of Philosophy degree or "junior doctorate", and the "higher doctorates" such as the Doctor of Science degree, which is generally awarded to highly distinguished professors. A doctorate is the terminal degree in most fields. In the United States, there is little distinction between a Doctor of Philosophy degree and a Doctor of Science degree. In the UK, Doctor of Philosophy degrees are often equivalent to 540 CATS credits or 270 ECTS European credits, but this is not always the case, as the credit structure of doctoral degrees is not officially defined (see the conversion sketch after this list).
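As a worked example of the credit arithmetic quoted above, the UK figures imply a fixed 2:1 ratio between CATS and ECTS credits (180 CATS = 90 ECTS). The sketch below is purely illustrative; the function names are made up, and individual institutions may define credit equivalences differently.

def cats_to_ects(cats: float) -> float:
    # 2 CATS credits correspond to 1 ECTS credit
    return cats / 2

def ects_to_cats(ects: float) -> float:
    return ects * 2

print(cats_to_ects(180))  # 90.0 (a one-year taught master's)
print(cats_to_ects(540))  # 270.0 (a UK doctorate, where credits are defined)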
In some countries, such as Finland and Sweden, there is the degree of Licentiate, which is more advanced than a master's degree but less so than a doctorate. The credits required are about half of those required for a doctoral degree. Coursework requirements are the same as for a doctorate, but the extent of original research required is not as high as for a doctorate. Medical doctors, for example, are typically licentiates rather than doctors.
In the UK and countries whose education systems were founded on the British model, such as the US, the master's degree was for a long time the only postgraduate degree normally awarded, while in most European countries apart from the UK, the master's degree almost disappeared. In the second half of the 19th century, however, US universities began to follow the European model by awarding doctorates, and this practice spread to the UK. Conversely, most European universities now offer master's degrees paralleling or replacing their regular system, so as to offer their students better chances to compete in an international market dominated by the American model.
In the UK, a qualification at an equivalent level to the doctorate is the NVQ level 5 or QCF level 8.
Honorary degrees
Most universities award honorary degrees, usually at the postgraduate level. These are awarded to a wide variety of people, such as artists, musicians, writers, politicians, businesspeople, etc., in recognition of their achievements in their various fields. (Recipients of such degrees do not normally use the associated titles or letters, such as "Dr.")
Non-degree qualifications
Postgraduate education can involve studying for qualifications such as postgraduate certificates and postgraduate diplomas. They are sometimes used as steps on the route to a degree, as part of the training for a specific career, or as a qualification in an area of study too narrow to warrant a full degree course.
Argentina
Admission
In Argentina, admission to a postgraduate program at an Argentine university requires the full completion of an undergraduate course, called in Argentina a "carrera de grado" (e.g., a Licenciado, Ingeniero, or law degree). The qualifications of Licenciado, Ingeniero, or the equivalent qualification in law degrees (a graduate from a "carrera de grado") are similar in content, length, and skill set to a joint first and second cycle in the qualification framework of the Bologna Process (that is, bachelor's and master's qualifications).
Funding
While a significant portion of postgraduate students finance their tuition and living costs with teaching or research work at private and state-run institutions, international institutions such as the Fulbright Program and the Organization of American States (OAS) have been known to grant full scholarships covering tuition, with allowances for housing.
Degree requirements
Upon completion of at least two years' research and coursework as a postgraduate student, a candidate must demonstrate genuine and original contributions to his or her specific field of knowledge within a frame of academic excellence. The master's or doctoral candidate's work should be presented in a dissertation or thesis prepared under the supervision of a tutor or director and reviewed by a postgraduate committee. This committee should be composed of examiners external to the program, and at least one of them should also be external to the institution.
Australia
Types of postgraduate degrees
Programmes are divided into coursework-based and research-based degrees. Coursework programs typically include qualifications such as:
Graduate certificate, six-month full-time coursework.
Graduate diploma, twelve-month full-time coursework.
Master (of Arts, Science or another discipline): one to two years of full-time study for coursework and research master's degrees, and three to four years of full-time study for extended master's degrees (which can allow the use of the word doctor in their title, such as Doctor of Medicine and Juris Doctor). A research master's ends with the submission of a thesis.
Doctor of Philosophy, 3 to 4 years full-time study. Also ends in the submission of a thesis.
Higher doctorate, usually awarded ten or more years after completion of a PhD (which is a prerequisite), following submission of a research portfolio of a higher standard than that required for the awarding of a PhD.
Admission
Generally, the Australian higher education system follows that of its British counterpart (with some notable exceptions). Entrance is decided by merit; entrance to coursework-based programmes is usually less strict, with most universities requiring a "Credit" average as entry to their taught programmes in a field related to the applicant's previous undergraduate study. On average, however, a strong "Credit" or "Distinction" average is the norm for accepted students. Not all coursework programs require the student to already possess the relevant undergraduate degree; some are intended as "conversion" or professional qualification programs, for which any relevant undergraduate degree with good grades is sufficient.
Ph.D. entrance requirements in the higher-ranked schools typically require a student to have postgraduate research honours or a master's degree by research, or a master's degree with a significant research component. Entry requirements depend on the subject studied and the individual university. The minimum duration of a Ph.D. programme is two years, but completing within this time span is unusual, with Ph.D.s usually taking an average of three to four years to complete.
Most of the confusion with Australian postgraduate programmes occurs with the research-based programmes, particularly scientific programmes. Research degrees generally require candidates to have a minimum of a second-class four-year honours undergraduate degree to be considered for admission to a Ph.D. programme (the M.Phil. is an uncommon route). In science, a British first-class honours degree (3 years) is not equivalent to an Australian first-class honours degree (a 1-year research postgraduate programme that requires a completed undergraduate (pass) degree with a high grade-point average). In scientific research, it is commonly accepted that an Australian postgraduate honours degree is equivalent to a British master's degree (in research). There has been some debate over the acceptance of a three-year honours degree (as in the case of graduates from British universities) as the equivalent entry requirement to graduate research programmes (M.Phil., Ph.D.) in Australian universities. The letters of honours programmes have also added to the confusion. For example, B.Sc. (Hons) are the letters gained for postgraduate research honours at the University of Queensland, but the letters alone do not indicate that the honours is a postgraduate qualification. The difficulty also arises between different universities in Australia; some universities have followed the UK system.
Professional programs
Many professional programs, such as medical and dental school, require a previous bachelor's degree for admission and are considered graduate or graduate-entry programs even though they culminate in a bachelor's degree, for example the Bachelor of Medicine (MBBS) or Bachelor of Dentistry (BDent).
There has also been some confusion over the conversion of the different marking schemes between the British, US, and Australian systems for the purpose of assessment for entry to graduate programmes. The Australian grades are divided into four categories: High Distinction, Distinction, Credit, and Pass (though many institutions have idiosyncratic grading systems). Assessment and evaluation based on the Australian system is not equivalent to British or US schemes because of the "low-marking" scheme used by Australian universities. For example, a British student who achieves 70+ will receive an A grade, whereas an Australian student with 70+ will receive a Distinction, which is not the highest grade in the marking scheme; a rough sketch of this mismatch follows.
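To make the example above concrete, here is a sketch of the two marking schemes side by side. The band thresholds are assumed common conventions (they vary by institution), and the function names are illustrative, not part of any official conversion.

def australian_grade(mark: int) -> str:
    # Assumed bands: HD 80+, D 70-79, C 60-69, P 50-59
    if mark >= 80: return "High Distinction"
    if mark >= 70: return "Distinction"
    if mark >= 60: return "Credit"
    if mark >= 50: return "Pass"
    return "Fail"

def british_class(mark: int) -> str:
    # Assumed bands: First 70+, 2:1 60-69, 2:2 50-59, Third 40-49
    if mark >= 70: return "First"
    if mark >= 60: return "Upper second (2:1)"
    if mark >= 50: return "Lower second (2:2)"
    if mark >= 40: return "Third"
    return "Fail"

print(australian_grade(72), "|", british_class(72))
# prints: Distinction | First

The same raw mark of 72 earns the top classification in the British scheme but only the second band in the Australian one, which is why raw percentages cannot be compared directly across the systems.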
Funding
The Australian government usually offers full funding (fees and a monthly stipend) to its citizens and permanent residents who are pursuing research-based higher degrees. There are also highly competitive scholarships for international candidates who intend to pursue research-based programmes. Taught-degree scholarships (for certain master's degrees, Grad. Dip., Grad. Cert., D.Eng., and D.B.A. programmes) are almost non-existent for international students. Domestic students have access to tuition subsidy through the Australian Government's FEE-HELP loan scheme. Some students may be eligible for a Commonwealth Supported Place (CSP), via the HECS-HELP scheme, at a substantially lower cost.
Degree requirements
Requirements for the successful completion of a taught master's programme are that the student pass all the required modules. Some universities require eight taught modules for a one-year programme, twelve modules for a one-and-a-half-year programme, and twelve taught modules plus a thesis or dissertation for a two-year programme. The academic year for an Australian postgraduate programme is typically two semesters (eight months of study).
Requirements for research-based programmes vary among universities. Generally, however, a student is not required to take taught modules as part of their candidacy. It is now common for first-year Ph.D. candidates not to be regarded as permanent Ph.D. students, for fear that they may not be sufficiently prepared to undertake independent research. In such cases, an alternative degree will be awarded for their previous work, usually an M.Phil. or M.Sc. by research.
Brazil
Admission
In Brazil, a bachelor's, licenciate or technologist degree is required in order to enter a graduate program, called pós-graduação. Generally, in order to be accepted, the candidate must have above-average grades, and it is highly recommended that he or she have been introduced to scientific research through government undergraduate research programs, as a complement to usual coursework.
Funding
Competition for public universities is very strong, as they are the most prestigious and respected universities in Brazil. Public universities do not charge fees at the undergraduate level. Funding, comparable to a wage, is available but is usually granted by public agencies linked to the university in question (e.g., FAPESP, CAPES, CNPq), awarded to students previously ranked based on internal criteria.
Degree requirements
There are two types of postgraduate study: lato sensu (Latin for "in broad sense"), which generally means a specialization course in one area of study, mostly addressed to professional practice, and stricto sensu (Latin for "in narrow sense"), which means a master's degree or doctorate, encompassing broader and more profound activities of scientific research.
Lato sensu graduate degrees: degrees that represent a specialization in a certain area and take 1 to 2 years to complete. The term can sometimes describe a specialization level between a master's degree and an MBA. In that sense, the main difference is that lato sensu courses tend to go deeper into the scientific aspects of the study field, while MBA programs tend to be more focused on practical and professional aspects, being used more frequently in business, management, and administration areas. However, since there are no norms to regulate this, both names are often used interchangeably.
Stricto sensu graduate degrees: degrees for those who wish to pursue an academic career.
Masters: 2 years for completion. Usually serves as additional qualification for those seeking a differential on the job market (and maybe later a doctorate), or for those who want to pursue a doctorate. Most doctoral programs in Brazil require a master's degree (stricto sensu), meaning that a lato sensu degree is usually insufficient to start a doctoral program.
Doctorate: 3–4 years for completion. Usually used as a stepping stone for academic life.
Canada
In Canada, the schools and faculties of graduate studies are represented by the Canadian Association of Graduate Studies (CAGS) or Association canadienne pour les études supérieures (ACES). The Association brings together 58 Canadian universities with graduate programs, two national graduate student associations, and the three federal research-granting agencies and organizations having an interest in graduate studies. Its mandate is to promote, advance, and foster excellence in graduate education and university research in Canada. In addition to an annual conference, the association prepares briefs on issues related to graduate studies including supervision, funding, and professional development.
Types of programs
Graduate certificates (sometimes called "postgraduate certificates")
Master's degree (course-based, thesis-based and available in part-time and full-time formats)
Doctoral degree (available in part-time and full-time formats)
Admission
Admission to a graduate certificate program requires a university degree (or, in some cases, a diploma with years of related experience). English-speaking colleges require proof of English-language proficiency, such as IELTS. Some colleges may provide English-language upgrading to students prior to the start of their graduate certificate program.
Admission to a master's (course-based, also called "non-thesis") program generally requires a bachelor's degree in a related field, with sufficiently high grades usually ranging from B+ and higher (different schools have different letter-grade conventions, and this requirement may be significantly higher in some faculties), and recommendations from professors. Admission to a high-quality thesis-type master's program generally requires an honours bachelor's degree or Canadian bachelor's degree with honours, samples of the student's writing, and a research thesis proposal. Some programs require Graduate Record Examinations (GRE) scores in both the general examination and the examination for the specific discipline, with minimum scores for admittance. At English-speaking universities, applicants from countries where English is not the primary language are required to submit scores from the Test of English as a Foreign Language (TOEFL). Nevertheless, some French-speaking universities, like HEC Montreal, also require candidates to submit a TOEFL score or to pass their own English test.
Admission to a doctoral program typically requires a master's degree in a related field, sufficiently high grades, recommendations, samples of writing, a research proposal, and an interview with a prospective supervisor. Requirements are often set higher than those for a master's program. In exceptional cases, a student holding an honours BA with sufficiently high grades and proven writing and research abilities may be admitted directly to a Ph.D. program without the requirement to first complete a master's. Many Canadian graduate programs allow students who start in a master's to "reclassify" into a Ph.D. program after satisfactory performance in the first year, bypassing the master's degree.
Students must usually declare their research goal or submit a research proposal upon entering graduate school; in the case of master's degrees, there will be some flexibility (that is, one is not held to one's research proposal, although major changes, for example from premodern to modern history, are discouraged). In the case of Ph.D.s, the research direction is usually known as it will typically follow the direction of the master's research.
Master's degrees can be completed in one year but normally take at least two; they typically may not exceed five years. Doctoral degrees require a minimum of two years but frequently take much longer, although not usually exceeding six years.
Funding
Graduate students may take out student loans, but many instead work as teaching or research assistants. Students often agree, as a condition of acceptance to a programme, not to devote more than twelve hours per week to work or outside interests.
Funding is available to first-year masters students whose transcripts reflect exceptionally high grades; this funding is normally given in the second year.
Funding for Ph.D. students comes from a variety of sources, and many universities waive tuition fees for doctoral candidates.
Funding is available in the form of scholarships, bursaries and other awards, both private and public.
Degree requirements
Graduate certificates require between eight and sixteen months of study. The length of study depends on the program. Graduate certificates primarily involve coursework. However, some may require a research project or a work placement.
Both master's and doctoral programs may be done by coursework or research or a combination of the two, depending on the subject and faculty. Most faculties require both, with the emphasis on research, and with coursework being directly related to the field of research.
Master's and doctoral programs may also be completed on a part-time basis. Part-time graduate programs will usually require that students take one to two courses per semester, and the part-time graduate programs may be offered in online formats, evening formats, or a combination of both.
Master's candidates undertaking research are typically required to complete a thesis comprising some original research and ranging from 70 to 200 pages. Some fields may require candidates to study at least one foreign language if they have not already earned sufficient foreign-language credits. Some faculties require candidates to defend their thesis, but many do not. Those that do not, often have a requirement of taking two additional courses, at minimum, in lieu of preparing a thesis.
Ph.D. candidates undertaking research must typically complete a thesis, or dissertation, consisting of original research representing a significant contribution to their field, and ranging from 200 to 500 pages. Most Ph.D. candidates will be required to sit comprehensive examinations—examinations testing general knowledge in their field of specialization—in their second or third year as a prerequisite to continuing their studies, and must defend their thesis as a final requirement. Some faculties require candidates to earn sufficient credits in a third or fourth foreign language; for example, most candidates in modern Japanese topics must demonstrate ability in English, Japanese, and Mandarin, while candidates in pre-modern Japanese topics must demonstrate ability in English, Japanese, Classical Chinese, and Classical Japanese.
At English-speaking Canadian universities, both master's and Ph.D. theses may be presented in English or in the language of the subject (German for German literature, for example), but if this is the case an extensive abstract must also be presented in English. In some cases, a thesis may be presented in French. One exception to this rule is McGill University, where all work can be submitted in either English or French, unless the purpose of the course of study is acquisition of a language.
French-speaking universities have varying sets of rules; some (e.g. HEC Montreal) will accept students with little knowledge of French if they can communicate with their supervisors (usually in English).
The Royal Military College of Canada is a bilingual University, and allows a thesis to be in either English or French, but requires the abstract to be in both official languages.
France
The écoles doctorales ("doctoral schools") are educational structures similar in focus to graduate schools but restricted to the PhD level. These schools have the responsibility of providing students with structured doctoral training in a disciplinary field. The fields covered by a university's schools reflect its strengths: while some universities have two or three schools (typically "Arts and Humanities" and "Natural and Technological Sciences"), others have more specialized schools (History, Aeronautics, etc.).
A large share of the funding offered to junior researchers is channeled through the écoles doctorales, mainly in the shape of three-year "doctoral fellowships" (contrats doctoraux). These fellowships are awarded after the submission of biographical information, undergraduate and graduate transcripts where applicable, letters of recommendation, and a research proposal, followed by an oral examination before an academic committee.
Specific context
Prior to 2004, when the European LMD system of the Bologna Process was introduced, the French equivalent of a postgraduate degree was called a "maîtrise".
For historical reasons dating back to the French Revolution of 1789, France has a dual higher education system, with Grandes Écoles on one side and universities on the other. Some Grandes Écoles deliver the French diplôme d'ingénieur, which is ranked as a master's degree.
France ranks a professional doctorate in the health sciences (i.e., physician, surgeon, pharmacist, dentist, or veterinarian diplomas) as equivalent to a master's degree in any other discipline, to account for the difference in difficulty between obtaining a medical degree and obtaining a non-health-related doctoral degree, the latter requiring original research.
Admission
There are 87 public universities in France, as well as some private universities, and they are based upon the European degree ladder of bachelor's, master's, and Ph.D. degrees. Students gain each degree through the successful completion of a predetermined number of years in education, earning credits via the European Credit Transfer System (ECTS).
There are over 300 doctoral programs that collaborate with 1200 research laboratories and centers. Each degree has a certain set of national diplomas that are all of equal value, irrespective of where they were issued. There are also other diplomas that are exclusive to France and are very hard to attain.
Admission to a doctoral program requires a master's degree that is both research-oriented and discipline-focused. High marks are required (typically a très bien honour, equating to cum laude), but acceptance also depends on a decision of the school's academic board.
Germany
The traditional and most common way of obtaining a doctorate in Germany is by doing so individually under supervision of a single professor (Doktorvater or Doktormutter) without any formal curriculum. During their studies, doctoral students are enrolled at university while often being employed simultaneously either at the university itself, at a research institute or at a company as a researcher.
Working in research during doctoral studies is, however, not a formal requirement.
With the establishment of Graduiertenkollegs funded by the Deutsche Forschungsgemeinschaft (DFG), the German Research Foundation, in the early 1990s, the concept of a graduate school was introduced to the German higher education system. Unlike the American model of graduate schools, only doctoral students participate in a Graduiertenkolleg. In contrast to the traditional German model of doctoral studies, a Graduiertenkolleg aims to provide young researchers with a structured doctoral training under supervision of a team of professors within an excellent research environment. A Graduiertenkolleg typically consists of 20-30 doctoral students, about half of whom are supported by stipends from the DFG or another sponsor. The research programme is usually narrowly defined around a specific topic and has an interdisciplinary aspect. The programme is set up for a specific period of time (up to nine years if funded by the DFG). The official English translation of the term Graduiertenkolleg is Research Training Group.
In 2006, a different type of graduate school, termed Graduiertenschule ("graduate school"), was established by the DFG as part of the German Universities Excellence Initiative. These are thematically much broader than the focused Graduiertenkollegs and often consist of 100–200 doctoral students.
Germany and the Netherlands introduced the Bologna process with a separation between bachelor's and master's programmes in many fields, except for education studies, law, and other specially regulated subjects.
Ireland
In the Republic of Ireland, higher education is overseen by the Higher Education Authority.
Nigeria
Admission to a postgraduate degree programme in Nigeria requires a bachelor's degree with at least a Second Class Lower Division (not less than 2.75/5). Admission to doctoral programmes requires an academic master's degree with a minimum weighted average of 60% (a B average, or 4.00/5). In addition, applicants may be subjected to written and oral examinations, depending on the school. Most universities with high numbers of applicants have more stringent admission processes.
Postgraduate degrees in Nigeria include the M.A., M.Sc., M.Ed., M.Eng., LL.M., M.Arch., M.Agric., M.Phil., and Ph.D. The master's degree typically takes 18–36 months, with students undertaking coursework and presenting seminars and a dissertation. The doctoral degree requires a minimum of 36 months and may involve coursework alongside the presentation of seminars and a research thesis. Award of postgraduate degrees requires a defence of the completed research before a panel of examiners comprising external and internal examiners, the Head of Department, the Departmental Postgraduate Coordinator, representative(s) of the Faculty and Postgraduate School, and any other member of staff with a Ph.D. in the department or faculty.
United Kingdom
The term "graduate school" is used more widely by North American universities than by those in the UK. However, in addition to universities set up solely for postgraduate studies such as Cranfield University, numerous universities in the UK have formally launched 'Graduate Schools', including the University of Birmingham, Durham University, Keele University, the University of Nottingham, Bournemouth University, Queen's University Belfast and the University of London, which includes graduate schools at King's College London, Royal Holloway and University College London. They often coordinate the supervision and training of candidates for research master's programmes and for doctorates.
Admission
Admission to undertake a research degree in the UK typically requires a strong bachelor's degree or Scottish M.A. (at least lower second class, but usually an upper second or first). In some institutions, doctoral candidates are initially admitted to a Master of Philosophy (M.Phil.) or Master of Research (M.Res.) programme, then later transfer to a Ph.D./D.Phil. if they can show satisfactory progress in their first 8–12 months of study. Candidates for the degree of Doctor of Education (Ed.D.) are typically required to hold a good bachelor's degree as well as an appropriate master's degree before being admitted.
Funding
Funding for postgraduate study in the UK is awarded competitively, and is usually disseminated by institution (in the form of an allocation of studentships for a given year) rather than directly to individuals. There are a number of scholarships for master's courses, but these are relatively rare and depend on the course and the class of undergraduate degree obtained (usually requiring at least a lower second). Most master's students are self-funded.
Funding is available for some Ph.D./D.Phil. courses. As at the master's level, there is more funding available to those in the sciences than in other disciplines. Such funding generally comes from Research Councils such as the Engineering and Physical Sciences Research Council (EPSRC), the Arts and Humanities Research Council (AHRC), the Medical Research Council (MRC), and others. Master's students may also have the option of a postgraduate loan, introduced by the UK Government in 2016.
For overseas students, most major funding applications are due as early as twelve months or more before the intended graduate course begins. This funding is also often highly competitive. The most widely available, and thus most important, award for overseas students is the Overseas Research Student (ORS) Award, which pays the difference in university fees between an overseas student and a British or EU resident. However, a student can apply to only one university for the ORS Award, often before knowing whether they have been accepted. As of the 2009/2010 academic year, HEFCE cancelled the Overseas Research Student Award scheme for English and Welsh universities; the state of the scheme for Scottish and Northern Irish universities is currently unclear.
Full-time students (of any type) are not normally eligible for state benefits, including during vacation time.
United States
Admission
While most graduate programs will have a similar list of general admission requirements, the importance placed on each type of requirement can vary drastically between graduate schools, departments within schools, and even programs within departments. The best way to determine how a graduate program will weigh admission materials is to ask the person in charge of graduate admissions at the particular program being applied to.
Admission to graduate school requires a bachelor's degree. High grades in one's field of study are important; grades outside the field are less so. Traditionally, the Graduate Record Examination (GRE) standardized test was required by almost all graduate schools; however, programs in many disciplines have been removing the GRE requirement from their admission process. Some programs require additional standardized tests (such as the Graduate Management Admission Test (GMAT) or GRE Subject Tests). During the COVID-19 pandemic, the GRE exam moved to an online format, which led some programs to waive GRE requirements temporarily or permanently, arguing that the new format was unfair or too difficult for test-takers. In addition, good letters of recommendation from undergraduate instructors are often essential, as strong letters from mentors or supervisors of undergraduate research provide evidence that the applicant can perform research and handle the rigors of a graduate education.
Within the sciences and some social sciences, previous research experience may be important. By contrast, within most humanities disciplines, an example of academic writing normally suffices. Many universities require a personal statement (sometimes called a statement of purpose or letter of intent), which may include indications of the intended area(s) of research. The amount of detail expected in this statement, and whether it is possible to change one's focus of research later, depend strongly on the discipline and department to which the student is applying.
Some schools set minimum GPAs and test scores below which they will not accept any applicants; this reduces the time spent reviewing applications. On the other hand, many other institutions often explicitly state that they do not use any sort of cut-offs in terms of GPA or the GRE scores. Instead, they claim to consider many factors, including past research achievements, the compatibility between the applicant's research interest and that of the faculty, the statement of purpose and the letters of reference, as stated above. Some programs also require professors to act as sponsors. Finally, applicants from non-English speaking countries often must take the Test of English as a Foreign Language (TOEFL).
At most institutions, decisions regarding admission are not made by the institution itself but the department to which the student is applying. Some departments may require interviews before making a decision to accept an applicant. Most universities adhere to the Council of Graduate Schools' Resolution Regarding Graduate Scholars, Fellows, Trainees, and Assistants, which gives applicants until April 15 to accept or reject offers that contain financial support.
Non-degree seeking
In addition to traditional "degree-seeking" applications for admission, many schools allow students to apply as "non-degree-seeking". Admission to the non-degree-seeking category is usually restricted primarily to those who may benefit professionally from additional study at the graduate level. For example, current elementary, middle, and high school teachers wishing to gain re-certification credit most commonly apply as non-degree-seeking students.
Degree requirements
Graduate students often declare their intended degree (master's or doctorate) in their applications. In some cases, master's programs allow successful students to continue toward the doctoral degree. Additionally, doctoral students who have advanced to candidacy but not filed a dissertation ("ABD", for "all but dissertation") often receive a master's degree en route; some institutions also award an intermediate credential such as the Master of Philosophy (M.Phil.) or Candidate of Philosophy (C.Phil.). The master's component of a doctoral program often requires one or two years.
Many graduate programs require students to pass one or several examinations to demonstrate their competence as scholars. In some departments, a comprehensive examination is required in the first year of doctoral study and is designed to test a student's undergraduate-level background knowledge. Examinations of this type are more common in the sciences and some social sciences but relatively unknown in most humanities disciplines.
Most graduate students perform teaching duties, often serving as graders and tutors. In some departments, they can be promoted to lecturer status, a position that comes with more responsibility.
Doctoral students generally spend roughly their first two to three years taking coursework and begin research by their second year, if not before. Many master's and all specialist students will perform research culminating in a paper, presentation, and defense of their research. This is called the master's thesis (or, for Educational Specialist students, the specialist paper). However, many US master's degree programs do not require a master's thesis, focusing instead primarily on coursework or on "practicals" or "workshops". Some students complete a final culminating project or "capstone" rather than a thesis. Such "real-world" experience typically requires a candidate to work on a project, alone or in a team, as a consultant for an outside entity approved or selected by the academic institution, under faculty supervision.
In the second and third years of study, doctoral programs often require students to pass more examinations. Programs often require a Qualifying Examination ("Quals"), a Ph.D. Candidacy Examination ("Candidacy"), or a General Examination ("Generals") designed to test the students' grasp of a broad sample of their discipline, or one or several Special Field Examinations ("Specials") which test students in their narrower selected areas of specialty within the discipline. If these examinations are held orally, they may be known colloquially as "orals." For some social science and many humanities disciplines, where graduate students may or may not have studied the discipline at the undergraduate level, these exams will be the first set, and be based either on graduate coursework or specific preparatory reading (sometimes up to a year's work in reading).
In all cases, comprehensive exams normally must be passed to be allowed to proceed on to the dissertation. Passing such examinations allows the student to begin doctoral research, and rise to the status of a doctoral candidate, while failing usually results in the student leaving the program or re-taking the test after some time has passed (usually a semester or a year). Some schools have an intermediate category, passing at the master's level, which allows the student to leave with a master's without having completed a master's thesis.
The doctoral candidate primarily performs his or her research over the course of three to eight years. In total, the typical doctoral degree takes between four and eight years from entering the program to completion, though this time varies depending upon the department, dissertation topic, and many other factors. For example, astronomy degrees take five to six years on average, but observational astronomy degrees take six to seven due to limiting factors of weather, while theoretical astronomy degrees take five. In some disciplines, doctoral programs can average seven to ten years. Archaeology, which requires long periods of research, tends towards the longer end of this spectrum. The increase in length of the degree is a matter of great concern for both students and universities, though there is much disagreement on potential solutions to this problem.
Traditionally, doctoral programs were intended to last only three to four years and, in some disciplines (primarily the natural sciences), with a helpful advisor and a light teaching load, it is possible to complete the degree in that amount of time. However, many disciplines, including most humanities, increasingly set their requirements for coursework, languages, and the expected extent of thesis research on the assumption that students will take five years minimum, or six to seven years on average; competition for jobs within these fields also raises expectations on the length and quality of theses considerably.
Though there is substantial variation among universities, departments, and individuals, humanities and social science doctorates on average take somewhat longer to complete than natural science doctorates. These differences are due to the differing nature of research between the humanities and some social sciences and the natural sciences and to the differing expectations of the discipline in coursework, languages, and length of dissertation. However, time required to complete a doctorate also varies according to the candidate's abilities and choice of research. Some students may also choose to remain in a program if they fail to win an academic position, particularly in disciplines with a tight job market; by remaining a student, they can retain access to libraries and university facilities, while also retaining an academic affiliation, which can be essential for conferences and job-searches.
After the doctorate degree, a second training period is available for students in fields such as life sciences, called a postdoctoral fellowship.
Funding
In general, there is less funding available to students admitted to master's degrees than to students admitted to Ph.D. or other doctoral degrees. Many departments, especially those in which students have research or teaching responsibilities, offer tuition remission and a stipend that pays for most expenses. At some elite universities, there may be a minimum stipend established for all Ph.D. students, as well as a tuition waiver. The terms of these stipends vary greatly, and may consist of a scholarship or fellowship, followed by teaching responsibilities. At many elite universities, these stipends have been increasing, in response both to student pressure and, especially, to competition among the elite universities for graduate students.
In some fields, research positions are more coveted than teaching positions because student researchers are typically paid to work on the dissertation they are required to complete anyway, while teaching is generally considered a distraction from that work. Research positions are more typical of science disciplines; they are relatively uncommon in humanities disciplines, and where they exist, they rarely allow the student to work on their own research. Science Ph.D. students can apply for individual NRSA fellowships from the NIH or fellowships from private foundations. US universities often also offer competitive support from NIH-funded training programs; one example is the Biotechnology Training Program at the University of Virginia. Departments often have limited discretionary funds to supplement minor expenses such as research trips and travel to conferences.
A few students can attain funding through dissertation improvement grants funded by the National Science Foundation (NSF), or through similar programs in other agencies. Many students are also funded as lab researchers by faculty who have been funded by private foundations or by the NSF, National Institutes of Health (NIH), or federal "mission agencies" such as the Department of Defense or the Environmental Protection Agency. The natural sciences are typically well funded, so that most students can attain either outside or institutional funding, but in the humanities, not all do. Some humanities students borrow money during their coursework, then take full-time jobs while completing their dissertations. Students in the social sciences are less well funded than are students in the natural and physical sciences, but often have more funding opportunities than students in the humanities, particularly as science funders begin to see the value of social science research.
Funding differs greatly by departments and universities; some universities give five years of full funding to all Ph.D. students, though often with a teaching requirement attached; other universities do not. However, because of the teaching requirements, which can be in the research years of the Ph.D., even the best funded universities often do not have funding for humanities or social science students who need to do research elsewhere, whether in the United States or overseas. Such students may find funding through outside funders such as private foundations, such as the German Marshall Fund or the Social Science Research Council (SSRC).
Foreign students are typically funded the same way as domestic (US) students, although federally subsidized student and parent loans and work-study assistance are generally limited to U.S. citizens and nationals, permanent residents, and approved refugees. Moreover, some funding sources (such as many NSF fellowships) may only be awarded to domestic students. International students often have unique financial difficulties such as high costs to visit their families back home, support of a family not allowed to work due to immigration laws, tuition that is expensive by world standards, and large fees: visa fees by U.S. Citizenship and Immigration Services, and surveillance fees under the Student and Exchange Visitor Program of the United States Department of Homeland Security.
Graduate employee unions
At many universities, graduate students are employed by their university to teach classes or do research. While all graduate employees are graduate students, many graduate students are not employees. MBA students, for example, usually pay tuition and do not have paid teaching or research positions. In many countries graduate employees have collectively organized labor unions in order to bargain a contract with their university.
In the United States there are many graduate employee unions at public universities. The Coalition of Graduate Employee Unions lists 25 recognized unions at public universities on its website. Private universities, however, are covered under the National Labor Relations Act rather than state labor laws and until 2001 there were no recognized unions at private universities.
Many graduate students see themselves as akin to junior faculty, but with significantly lower pay. Many graduate students feel that teaching takes time that would be better spent on research, and many point out that there is a vicious circle in the academic labor economy. Institutions that rely on cheap graduate student labor have no need to create expensive professorships, so graduate students who have taught extensively in graduate school can find it immensely difficult to get a teaching job once they have obtained their degree. Many institutions depend heavily on graduate student teaching: a 2003 report by advocates of a graduate student union at Yale, for instance, claims that "70% of undergraduate teaching contact hours at Yale are performed by transient teachers: graduate teachers, adjunct instructors, and other teachers not on the tenure track." The state of Michigan leads in terms of progressive policy regarding graduate student unions, with five universities recognizing graduate employee unions: Central Michigan University, Michigan State University, the University of Michigan, Wayne State University, and Western Michigan University.
The United Auto Workers (under the slogan "Uniting Academic Workers") and the American Federation of Teachers are two international unions that represent graduate employees. Private universities' administrations often oppose their graduate students when they try to form unions, arguing that students should be exempt from labor laws intended for "employees". In some cases, unionization movements have met with enough student opposition to fail. At the schools where graduate employees are unionized, the positions that are unionized vary: sometimes only one set of employees will unionize (e.g. teaching assistants, residential directors); at other times, most or all will. Typically, fellowship recipients, who are usually not employed by their university, do not participate.
When negotiations fail, graduate employee unions sometimes go on strike. While graduate student unions can use the same types of strikes that other unions do, they have also made use of teach-ins, work-ins, marches, rallies, and grade strikes. In a grade strike, graduate students refuse to grade exams and papers and, if the strike lasts until the end of the academic term, also refuse to turn in final grades. Another form of job action is known as "work-to-rule", in which graduate student instructors work exactly as many hours as they are paid for and no more.
See also
Bologna Process Qualifications framework
EURODOC (European Council of Doctoral Candidates and junior researchers)
History of higher education in the United States#Graduate schools
List of fields of doctoral studies
List of postgraduate-only institutions
Postdoctoral researcher
Professional association
Professional certification
Educational stages
Higher education
Postgraduate education
Perfectionism (psychology)
Perfectionism, in psychology, is a broad personality trait characterized by a person's concern with striving for flawlessness and perfection, accompanied by critical self-evaluations and concerns regarding others' evaluations. It is best conceptualized as a multidimensional and multilayered personality characteristic, and psychologists initially thought that it had both many positive and many negative aspects.
Maladaptive perfectionism drives people to be concerned with achieving unattainable ideals or unrealistic goals, often leading to many forms of adjustment problems such as depression, anxiety, ADHD, OCD, OCPD, and low self-esteem. These adjustment problems can in turn lead to suicidal thoughts and tendencies and invite other psychological, physical, social, and achievement problems in children, adolescents, and adults.
Although perfectionistic strivings can sometimes reduce stress, anxiety, and panic, recent data compiled by British psychologists Thomas Curran and Andrew Hill show that perfectionistic tendencies are on the rise among recent generations of young people.
Definition
Perfectionists strain compulsively and unceasingly toward unattainable goals, and measure their self-worth by productivity and accomplishment, to the point that these tendencies can distract them from other areas of life. Perfectionists pressure themselves to achieve unrealistic goals, which inevitably leads to disappointment. Perfectionists tend to be harsh critics of themselves, their work, and their failure to meet their own expectations.
Normal vs. neurotic
In 1978, D. E. Hamachek argued for two contrasting types of perfectionism, classifying people as tending towards normal perfectionism or neurotic perfectionism. Normal perfectionists are more inclined to pursue perfection without compromising their self-esteem, and derive pleasure from their efforts. Neurotic perfectionists are prone to strive for unrealistic goals and feel dissatisfied when they cannot reach them. Hamachek offered several strategies shown to be useful in helping people change from maladaptive towards healthier behavior. Contemporary research supports the idea that these two basic aspects of perfectionistic behavior, as well as other dimensions such as "nonperfectionism", can be differentiated. They have been labeled differently, and are sometimes referred to as positive striving and maladaptive evaluation concerns, active and passive perfectionism, positive and negative perfectionism, or adaptive and maladaptive perfectionism. Although there is a general perfectionism that affects all realms of life, some researchers contend that levels of perfectionism differ significantly across domains (e.g. work, academics, sport, interpersonal relationships, home life).
However, it is debated whether perfectionism can be adaptive and have positive aspects. Recent research suggests that what is termed "adaptive perfectionism" is associated with suicidal thinking, depression, eating disorders, poor health, and early mortality. Some researchers argue that a construct that causes people to think more about suicide and places them at risk for depression, eating disorders, poor health, and early mortality is far from adaptive. In fact, there is no empirical support for the assertion that a healthy form of perfectionism exists. Instead, what has been termed adaptive perfectionism has little relation to perfectionism and more to do with striving for excellence. A relentless striving toward unreasonably high expectations that are rarely achieved, and an avoidance of imperfection at all costs, is what distinguishes perfectionism from excellencism. Perfectionism therefore extends beyond adaptive strivings and is not a synonym for excellence or conscientiousness. Numerous researchers advise against using the term "adaptive perfectionism", regarding it as inappropriate for this personality trait.
There is some literature that supports the usage of adaptive perfectionism when used in comparison with maladaptive perfectionism. Differences were found when these two dimensions of perfectionism were paired with the Big Five personality traits. For example, adaptive perfectionism was found to predict openness, conscientiousness, and extraversion, while maladaptive perfectionism was found to predict neuroticism.
Strivings vs. concerns
J. Stoeber and K. Otto suggested in a narrative review that perfectionism consists of two main dimensions: perfectionistic strivings and perfectionistic concerns. Perfectionistic strivings are associated with positive aspects of perfectionism; perfectionistic concerns are associated with negative aspects (see below). Crossing the two dimensions yields three groups, illustrated in the sketch after the list:
Healthy perfectionists score high in perfectionistic strivings and low in perfectionistic concerns.
Unhealthy perfectionists score high in both strivings and concerns.
Non-perfectionists show low levels of perfectionistic strivings.
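A minimal sketch of this three-group typology, assuming hypothetical mean subscale scores on a 1–5 Likert scale and an arbitrary cutoff of 3.0 for "high"; Stoeber and Otto's review does not prescribe numeric thresholds, so the cutoff, function name, and example scores here are illustrative only.

```python
def classify_perfectionist(strivings, concerns, cutoff=3.0):
    """Classify a respondent under Stoeber and Otto's two-dimension typology.

    Scores are assumed to be mean subscale values on a 1-5 Likert scale;
    the 3.0 cutoff is an arbitrary illustrative choice, not a published
    threshold.
    """
    if strivings < cutoff:
        return "non-perfectionist"        # low strivings, regardless of concerns
    if concerns >= cutoff:
        return "unhealthy perfectionist"  # high strivings, high concerns
    return "healthy perfectionist"        # high strivings, low concerns

# Example usage with made-up scores:
print(classify_perfectionist(4.2, 1.8))  # healthy perfectionist
print(classify_perfectionist(4.5, 4.1))  # unhealthy perfectionist
print(classify_perfectionist(2.0, 3.5))  # non-perfectionist
```

The key point the sketch encodes is that concerns matter only once strivings are high: a respondent low in strivings counts as a non-perfectionist regardless of their concerns score.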
Their review, though non-empirical, was prompted by earlier research providing empirical evidence that perfectionism could be associated with positive aspects (specifically perfectionistic strivings), and it challenged the widespread belief that perfectionism is only detrimental. They claimed that people with high levels of perfectionistic strivings and low levels of perfectionistic concerns demonstrated more self-esteem, agreeableness, academic success, and social interaction. This type of perfectionist also showed fewer psychological and somatic issues typically associated with perfectionism, namely depression, anxiety, and maladaptive coping styles. However, empirical meta-analytic reviews have failed to replicate these claims.
The Comprehensive Model of Perfectionistic Behaviour
The Comprehensive Model of Perfectionistic Behaviour (CMPB) operationalizes perfectionism as a multilevel and multidimensional personality style comprising a trait level, a self-presentational level, and a cognitive level.
The stable, dispositional, trait-like level of this model includes self-oriented perfectionism and socially prescribed perfectionism, as well as other-oriented perfectionism. Self-oriented perfectionism is characterized by requiring perfection from oneself, while socially prescribed perfectionism refers to the need to obtain acceptance by fulfilling actual or perceived expectations imposed by others. In contrast, other-oriented perfectionists direct their perfectionism towards external sources and are preoccupied with expecting perfection from others.
The second component of the Comprehensive Model of Perfectionism contains the interpersonal expression of perfection through impression management and self-monitoring. This relational component reflects the need to appear, rather than be, perfect via the promotion of perfection and the concealment of imperfection. Like the perfectionism traits, these components are also multifaceted. One of its facets, perfectionistic self-promotion, refers to the expression of perfectionism by actively presenting a flawless, though often false, image of oneself. Another interpersonal facet, nondisplay of imperfection, is the expression of perfectionism through concealment of attributes or behaviours that may be deemed as imperfect, such as making mistakes in front of others. Similarly, nondisclosure of imperfection is also associated with concealment of self-aspects, but focuses on avoiding verbal disclosure of imperfections, such as not revealing personal information that may be judged negatively or admitting failures. All three facets are used as an (alleged) protection from feelings of low self-worth and possible rejection.
The self-relational/intrapersonal component of the CMPB refers to ruminative, perfectionistic thinking and is characterized by cognitive processes concerning the need for perfection, as well as self-recriminations and a focus on the discrepancy between one's actual and ideal self. This component therefore entails the information processing related to perfectionism. These three components of the Comprehensive Model of Perfectionism are independent but interrelated, and can be present in individuals in heterogeneous, idiosyncratic patterns with different combinations.
The Perfectionism Social Disconnection Model
The Perfectionism Social Disconnection Model (PSDM) is a dynamic-relational model describing perfectionism and its consequences in an interpersonal context. This model asserts that perfectionism, via an interpersonal style characterized by aloofness and inauthenticity, leads to the very social disconnection and rejection perfectionists aim to avoid. According to the PSDM, perfectionism develops in an early interpersonal context through asynchrony between child and caregiver, when there is a lack of attunement ("fit") between the temperament of the child and caregiver responses, leading to unfulfilled needs for belonging, acceptance, and self-esteem. This creates a relational schema of others as critical and rejecting, and an internal model of oneself as defective, which makes perfectionists highly sensitive to the potential for judgment and rejection in interpersonal encounters. Consequently, according to the PSDM, perfectionism serves an interpersonal purpose, and the person relies on it as a means of fulfilling the needs for belonging and self-esteem. In an attempt to gain a sense of acceptance and connection while avoiding possible judgment and rejection, these individuals aim to be, or to appear, flawless. Paradoxically, this often rigid, aloof, and self-concealing relational style increases the potential for alienation and rejection, and can lead to social disconnection. In this way, the very behaviours by which perfectionists attempt to fulfil unmet relational needs exert a detrimental influence on interpersonal encounters, so the supposed solution to social disconnection actually generates it. The PSDM also provides a link between perfectionism and its maladaptive consequences, since the estrangement from oneself and others generated by perfectionism is associated with a number of adverse outcomes, such as interpersonal difficulties, depression, and suicide risk.
Measurement
Multidimensional perfectionism scale (MPS)
Randy O. Frost et al. (1990) developed a multidimensional perfectionism scale (now known as the "Frost Multidimensional Perfectionism Scale", FMPS) with six dimensions (a generic scoring sketch follows the list):
Concern over making mistakes
High personal standards (striving for excellence)
The perception of high parental expectations
The perception of high parental criticism
The doubting of the quality of one's actions, and
A preference for order and organization.
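Multidimensional instruments such as the FMPS are commonly scored by averaging the Likert-type items assigned to each subscale. The Python sketch below shows this generic pattern; the item-to-subscale mapping and the responses are invented for illustration and do not reproduce the actual FMPS item assignments.

```python
from statistics import mean

# Hypothetical mapping of questionnaire item numbers to the six FMPS
# dimensions; the real assignments belong to the published instrument.
SUBSCALES = {
    "concern_over_mistakes": [1, 7, 13],
    "personal_standards": [2, 8, 14],
    "parental_expectations": [3, 9, 15],
    "parental_criticism": [4, 10, 16],
    "doubts_about_actions": [5, 11, 17],
    "organization": [6, 12, 18],
}

def score_fmps(responses):
    """Return the mean Likert response (e.g., on a 1-5 scale) per subscale."""
    return {
        name: mean(responses[item] for item in items)
        for name, items in SUBSCALES.items()
    }

# Example with made-up responses to 18 items:
answers = {item: (item % 5) + 1 for item in range(1, 19)}
print(score_fmps(answers))
```

Summing rather than averaging the items would work equally well; averaging simply keeps each subscale score on the original response scale.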
Hewitt & Flett (1991) devised another "multidimensional perfectionism scale", a 45-item measure that rates three trait dimensions of perfectionism:
Self-oriented perfectionism
Other-oriented perfectionism, and
Socially prescribed perfectionism.
Self-oriented perfectionism refers to having unrealistic expectations and standards for oneself that lead to perfectionistic motivation. Other-oriented perfectionism is having unrealistic expectations and standards for others that in turn pressure them to have perfectionistic motivations of their own. Socially prescribed perfectionism is characterized by developing perfectionistic motivations due to actual or perceived high expectations of significant others. Parents who push their children to be successful in certain endeavors (such as athletics or academics) provide an example of what often causes this type of perfectionism, as the children feel that they must meet their parents' lofty expectations.
A similarity has been pointed out between Frost's distinction between setting high standards for oneself and the level of concern over making mistakes in performance (the two most important dimensions of the FMPS), and Hewitt & Flett's distinction between self-oriented and socially prescribed perfectionism.
Perfectionistic Self-Presentation Scale (PSPS)
Hewitt et al. (2003) developed the Perfectionistic Self-Presentation Scale (PSPS), a 27-item self-report measure assessing the three interpersonal, expressive components of the Comprehensive Model of Perfectionism. It includes three subscales pertaining to perfectionistic self-presentation, i.e., to the need to appear flawless:
1. Perfectionistic self-promotion
2. Nondisplay of imperfection
3. Nondisclosure of imperfection
The PSPS measures the expression (the process) of the trait of perfectionism and is directly linked to the perfectionism traits, particularly self-oriented and socially prescribed perfectionism. Additionally, the dimensions of the PSPS correlate with measures of psychological distress, such as anxiety symptoms, indicating that perfectionistic self-presentation is a maladaptive, defensive tendency.
Perfectionism Cognitions Inventory (PCI)
The Perfectionism Cognitions Inventory (PCI) developed by Flett, Hewitt, Blankstein, and Gray (1998) is a 25-item inventory measuring the self-relational, cognitive component of perfectionism in the form of automatic thoughts about attaining perfection. It includes statements about perfectionism-themed cognitions, such as references to social comparison and awareness of being imperfect and failing to attain high expectations. Rather than emphasizing trait-like statements, the PCI is characterized by state-like statements, focusing on the varying situational and temporal contexts that can lead to different perfectionistic thoughts.
The PCI is associated with the presence of negative automatic thoughts and scoring high on this measure has been linked to a high degree of self-criticism, self-blame and failure perseveration.
Almost perfect scale-revised (APS-R)
Slaney and his colleagues (1996) developed the Almost Perfect Scale-Revised (APS-R). People are classified based on their scores on three subscales:
High Standards
Order, and
Discrepancy
Discrepancy refers to the belief that personal high standards are not being met, which is the defining negative aspect of perfectionism. Those with high scores in what the APS-R considers maladaptive perfectionism typically yield the highest social stress and anxiety scores, reflecting their feelings of inadequacy and low self-esteem. However, whether high standards as measured by APS-R actually assess perfectionism is debatable.
In general, the APS-R is a relatively easy instrument to administer, and can be used to identify perfectionist adolescents as well as adults, though it has yet to be proven useful for children. Two other forms of the APS-R measure perfectionism directed towards intimate partners (Dyadic Almost Perfect Scale) and perceived perfectionism from one's family (Family Almost Perfect Scale).
The validity of the APS-R has been challenged. Namely, some researchers maintain that high standards are not necessarily perfectionistic standards. For instance, it has been shown that when the APS-R is re-worded to reflect more perfectionistic terms, outcomes differ in comparison to the original wording of this scale. Specifically, only the reworded, more perfectionistic scale is associated with maladjustment, such as depression and anxiety, while only the original scale is related to adaptive outcomes. This suggests that what is labelled as "adaptive perfectionism" in the original APS-R may simply reflect high standards. Moreover, a number of researchers view the relevance of discrepancy to the perfectionism literature as suspect given the number of negative mood terms included. Including negative mood terms in items, such as the discrepancy subscale, greatly increases the likelihood for discovering a relation between perfectionism and neuroticism which may be simply due to wording rather than a perfectionism-neuroticism link.
Physical appearance perfectionism scale (PAPS)
The Physical Appearance Perfectionism Scale (PAPS) assesses a particular type of perfectionism: the desire for a perfect physical appearance. It is a multidimensional measure of physical appearance perfectionism that provides the most insight when its sub-scales are evaluated separately.
In general, the PAPS allows researchers to determine participants' body image and self-conceptions of their looks, which is critical in present times when so much attention is paid to attractiveness and obtaining the ideal appearance. The two sub-scales it uses to assess appearance concerns are:
Worry About Imperfection, and
Hope For Perfection.
Those who obtain high "Worry About Imperfection" scores are usually greatly concerned with attaining perfection, physical appearance, and body control behavior, and demonstrate low positive self-perceptions of their appearance, whereas those scoring highly on "Hope For Perfection" demonstrate high positive self-perceptions. Hope For Perfection also corresponds with impression management behaviors and striving for ambitious goals.
In summary, Worry About Imperfection relates to negative aspects of appearance perfectionism, while Hope For Perfection relates to positive aspects. One limitation of using the PAPS is the lack of psychological literature evaluating its validity.
Psychological implications
Perfectionists tend to dissociate themselves from their flaws or what they believe are flaws (such as negative emotions) and can become hypocritical and hypercritical of others, seeking the illusion of virtue to hide their own vices.
Researchers have begun to investigate the role of perfectionism in various mental disorders such as depression, anxiety, eating disorders, autism and personality disorders, as well as suicide. Each disorder is associated with varying levels of the three subscales on the Multidimensional Perfectionism Scale. For instance, socially prescribed perfectionism in young women has been associated with greater body-image dissatisfaction and avoidance of social situations that focus on weight and physical appearance.
The relationship that exists between perfectionistic tendencies and methods of coping with stress has also been examined in some detail. Those who displayed tendencies associated with perfectionism, such as rumination over past events or fixation on mistakes, tended to utilize more passive or avoidance coping. They also tended to utilize self-criticism as a coping method. This is consistent with theories that conceptualize self-criticism as a central element of perfectionism.
Consequences
Perfectionism can be damaging. It can take the form of procrastination when used to postpone tasks and self-deprecation when used to excuse poor performance or to seek sympathy and affirmation from other people. These, together or separate, are self-handicapping strategies perfectionists may use to protect their sense of self-competence. In general, perfectionists feel constant pressure to meet their high expectations, which creates cognitive dissonance when expectations cannot be met. Perfectionism has been associated with numerous other psychological and physiological complications. Moreover, perfectionism may result in alienation and social disconnection via certain rigid interpersonal patterns common to perfectionistic individuals.
Suicide
Perfectionism is increasingly considered to be a risk factor for suicide. The tendency of perfectionists to have excessively high expectations of themselves, and to be self-critical when their efforts do not meet those expectations, combined with their tendency to present a public image of flawlessness, increases their risk of suicidal ideation while decreasing the likelihood that they will seek help when it is needed. Perfectionism is one of many suicide predictors that affect individuals negatively via pressure to fulfill other- or self-generated high expectations, feelings of incapacity to live up to them, and social disconnection.
Importantly, the relation between suicidality and perfectionism depends on the particular perfectionism dimensions. Perfectionistic strivings are associated with suicidal ideation while perfectionistic concerns are predictive of both suicidal ideation and attempting suicide. Additionally, socially prescribed perfectionism, a type of perfectionistic concern, was found to be associated with both baseline and long-term suicidal ideation. This implies that perfectionistic concerns, such as socially prescribed perfectionism, are related to more pernicious outcomes in the context of suicide.
Anorexia nervosa
Perfectionism has been linked with anorexia nervosa in research for decades. Researchers in 1949 described the behavior of the average anorexic person as being "rigid" and "hyperconscious", observing also a tendency to "neatness, meticulosity, and a mulish stubbornness not amenable to reason [which] make her a rank perfectionist". Perfectionism is an enduring characteristic in the biographies of anorexics: it is present before the onset of the eating disorder, generally in childhood, during the illness, and also after remission. The incessant striving for thinness among anorexics is itself a manifestation of this personality style, of an insistence upon meeting unattainably high standards of performance.
Because of its chronicity, those with eating disorders also display perfectionistic tendencies in domains of life other than dieting and weight control. Over-achievement at school, for example, has been observed among anorexics as a result of their overly industrious behavior.
The level of perfectionism has been found to influence an individual's long-term recovery from anorexia. Those who scored lower in perfectionism recovered faster than patients who scored high in perfectionism.
General applications
Perfectionism often shows up in performance at work or school, neatness and aesthetics, organization, writing, speaking, physical appearance, and health and personal cleanliness. In the workplace, perfectionism is often marked by low productivity and missed deadlines as people lose time and energy by paying attention to irrelevant details of their tasks, ranging from major projects to mundane daily activities. This can lead to depression, social alienation, and a greater risk of workplace "accidents". Adderholdt-Elliot (1989) describes five characteristics of perfectionist students and teachers which contribute to underachievement: procrastination, fear of failure, an "all-or-nothing" mindset, paralyzed perfectionism, and workaholism.
According to C. Allen, in intimate relationships unrealistic expectations can cause significant dissatisfaction for both partners. Greenspon lists behaviors, thoughts, and feelings that typically characterize perfectionism. Perfectionists are often not content with their work until it meets their standards, which can make them less efficient in finishing projects and cause them to struggle to meet deadlines.
In a different occupational context, athletes may develop perfectionist tendencies. Optimal physical and mental performance is critical for professional athletes, and these are aspects that closely relate to perfectionism. Although perfectionist athletes strive to succeed, they can be limited by their intense fear of failure, and may therefore not exert themselves fully or may feel overly personally responsible for a loss. Because their success is frequently measured by a score or statistics, perfectionist athletes may feel excessive pressure to succeed.
Medical complications
Perfectionism is a risk factor for obsessive–compulsive disorder, obsessive–compulsive personality disorder, eating disorders, social anxiety, body dysmorphic disorder, workaholism, self-harm and suicide, substance abuse, and clinical depression, as well as physical problems like heart disease. In addition, studies have found that people with perfectionism have a higher mortality rate than those without. A possible reason for this is the additional stress and worry that accompany the irrational belief that everything should be perfect.
Therapists attempt to tackle the negative thinking that surrounds perfectionism, in particular the "all-or-nothing" thinking in which the client believes that an achievement is either perfect or useless. They encourage clients to set realistic goals and to face their fear of failure.
Since perfectionism is a self-esteem issue based on emotional convictions about what one must do to be acceptable as a person, negative thinking is most successfully addressed in the context of a recovery process which directly addresses these convictions.
Impact on psychological treatment
A number of studies suggest that perfectionism can limit the effectiveness of psychotherapy. Namely, perfectionism impedes treatment success across seeking, maintaining, and ultimately benefiting from help. Unfavourable attitudes and negative beliefs towards seeking help present a barrier to treatment among perfectionists. When they do attend treatment, perfectionists, especially those high in perfectionistic self-presentation, are more likely to experience initial clinical interviews as anxiety-provoking and appraise their performance as inadequate. Perfectionism can also affect treatment adherence. For example, a study demonstrated that other-oriented perfectionism is associated with treatment attrition. Further, treatment effectiveness may be compromised by perfectionists' tendency to present an image of flawlessness and avoid self-disclosures because of an excessive sensitivity to judgment and rejection. Most importantly, treatment success may be negatively impacted due to the interpersonal disconnection prevalent among perfectionists which is associated with a failure to develop or strengthen a positive therapeutic alliance.
Narcissism
According to Arnold Cooper, narcissism can be considered a self-perceived form of perfectionism: "an insistence on perfection in the idealized self-object and the limitless power of the grandiose self. These are rooted in traumatic injuries to the grandiose self." In support, research suggests that some forms of perfectionism are associated with grandiose narcissism while others are associated with vulnerable narcissism. Similar to perfectionism, narcissism, particularly in its vulnerable form, is associated with contingent self-worth and a need for validation. Narcissists are often pseudo-perfectionists who require being the center of attention and create situations in which they will receive it. This attempt at being perfect is cohesive with the narcissist's grandiose self-image. Behind such perfectionism, self psychology would see earlier traumatic injuries to the grandiose self.
Vulnerable narcissism is mostly covert and is characterized by a need for other people's recognition (e.g., validation or admiration) and a sense of self-worth that is contingent upon this recognition. If a perceived state of perfection is not attained and recognition is not forthcoming or is in doubt, this can result in lowered self-worth, social withdrawal, and avoidance behaviours, as the individual fears losing validation and admiration.
Personality traits
Perfectionism is one of Raymond Cattell's 16 Personality Factors. According to this construct, people who are organized, compulsive, self-disciplined, socially precise, controlled, self-sentimental, and who exhibit exacting will power are perfectionists. In the Big Five personality traits, perfectionism is an extreme manifestation of conscientiousness and can provoke increasing neuroticism as the perfectionist's expectations go unmet.
Perfectionistic concerns are more similar to neuroticism while perfectionistic strivings are more similar to conscientiousness.
Children and adolescents
The prevalence of perfectionism is high in children and adolescents, with estimates ranging from 25% to 30%. As in adults, perfectionism in young people is a core vulnerability factor for a variety of negative outcomes, such as depression, anxiety, suicidal ideation, and obsessive-compulsive disorder. The two trait components of self-oriented and socially prescribed perfectionism can be measured in this age group with the widely used Child-Adolescent Perfectionism Scale (CAPS).
Treatments
Cognitive-behavioral therapy (CBT)
Cognitive-behavioral therapy (CBT) has been shown to successfully help perfectionists in reducing social anxiety, public self-consciousness, obsessive-compulsive disorder (OCD) behaviors, and perfectionism. By using this approach, a person can begin to recognize their irrational thinking and find an alternative way to approach situations.
Psychodynamic/interpersonal therapy (PI)
Consistent with the development and expression of perfectionism within an interpersonal context, this treatment focuses on the dynamic-relational basis of perfectionism. Rather than targeting perfectionistic behaviour directly and aiming merely for symptom reduction, dynamic-relational therapy is characterized by a focus on the maladaptive relational patterns and interpersonal dynamics underlying and maintaining perfectionism. According to research by Hewitt et al. (2015), this form of treatment is associated with long-lasting reductions in both perfectionism and associated distress.
Exposure and response prevention (ERP)
Exposure and response prevention (ERP) is also employed by psychologists in the treatment of obsessive-compulsive symptoms, including perfectionism. This form of therapy is premised on encouraging individuals to stop their perfectionistic behavior in tasks that they would normally pursue toward perfection. Over time, anxiety may decrease as the person finds that there are no major consequences of completing particular tasks imperfectly.
Acceptance-based behavior therapy (ABBT)
Acceptance-based behavior therapy (ABBT) has been shown to make a major contribution to the treatment of perfectionism by increasing awareness, increasing acceptance, and promoting engagement in a meaningful life. These practices were shown to help reduce anxiety, depression, and social phobia, and the approach remained effective six months after therapy.
See also
Cognitive-behavioral therapy
Psychodynamic psychotherapy
Obsessive-compulsive personality disorder
Pedant
Perfect is the enemy of good
Overachiever
Satisficing
Self-acceptance
Self-compassion
Further reading
Hewitt, P. L., Flett, G. L., & Mikail, S. F. (2017). Perfectionism: A relational approach to conceptualization, assessment, and treatment. New York: Guilford Publications.
Shaw, Daniel (2013). Traumatic Narcissism: Relational Systems of Subjugation. Routledge.
Curran, T., & Hill, A. P. (2017). Perfectionism Is Increasing, and That’s Not Good News. Harvard Business Review. Retrieved 2 March 2022.
Petersen, Sigrid Z. (2020). Perfectionism’s Relationship with Higher Education Students’ Help-Seeking: A Literature Review. Master's thesis, University of Oslo.
External links
Our dangerous obsession with perfectionism is getting worse at TED
Power of Shame at TED
Seeking Perfection – BBC Science and Nature
Neuropsychology
Depression (mood)
Anxiety
Personality traits
Suicide
Narcissism
Complex post-traumatic stress disorder
Complex post-traumatic stress disorder (CPTSD, sometimes hyphenated C-PTSD) is a stress-related mental disorder generally occurring in response to complex traumas, i.e., commonly prolonged or repetitive exposures to a series of traumatic events, within which individuals perceive little or no chance to escape.
In the ICD-11 classification, C-PTSD is a category of post-traumatic stress disorder (PTSD) with three additional clusters of significant symptoms: emotional dysregulation, negative self-beliefs (e.g., feelings of shame, guilt, or failure for the wrong reasons), and interpersonal difficulties. Examples of C-PTSD symptoms include prolonged feelings of terror, worthlessness, and helplessness, distortions in identity or sense of self, and hypervigilance. C-PTSD's symptoms share some similarities with symptoms observed in borderline personality disorder, dissociative identity disorder, and somatization disorder.
History
Judith Lewis Herman of Harvard University was the first psychiatrist and scholar to conceptualise complex post-traumatic stress disorder (CPTSD) as a new mental health condition, which she set out in 1992 in her book Trauma and Recovery and an accompanying article.
As early as 1988, Herman had suggested that a new diagnosis of CPTSD was needed to describe the symptoms and the psychological and emotional effects of long-term trauma.
Classifications
The World Health Organization (WHO)'s International Statistical Classification of Diseases has included CPTSD since its eleventh revision (ICD-11), published in 2018 and in effect since 2022. The previous edition (ICD-10) proposed a diagnosis of enduring personality change after catastrophic experience (EPCACE), which was a precursor of CPTSD. Healthdirect Australia (HDA) and the British National Health Service (NHS) have also acknowledged CPTSD as a mental disorder. However, the American Psychiatric Association (APA) has not included CPTSD in the Diagnostic and Statistical Manual of Mental Disorders; since the DSM-IV it has nonetheless proposed the closely related construct of disorders of extreme stress not otherwise specified (DESNOS).
Symptoms
Children and adolescents
The diagnosis of PTSD was originally developed for adults who had suffered from a single-event trauma, such as rape or exposure to war. However, the situation for many children is quite different. Children can suffer chronic trauma such as maltreatment, family violence, dysfunction, or disruption in attachment to their primary caregiver. In many cases, it is the child's caregiver who causes the trauma. The diagnosis of PTSD does not take into account how the developmental stages of children may affect their symptoms, or how trauma can affect a child's development.
The term developmental trauma disorder (DTD) has been proposed as the childhood equivalent of CPTSD. This developmental form of trauma places children at risk for developing psychiatric and medical disorders. Bessel van der Kolk explains DTD as arising from numerous encounters with interpersonal trauma, such as physical assault, sexual assault, violence, or death. It can also be brought on by subjective events such as abandonment, betrayal, defeat, or shame.
Repeated traumatization during childhood leads to symptoms that differ from those described for PTSD. Cook and others describe symptoms and behavioral characteristics in seven domains:
Attachment – problems with relationship boundaries, lack of trust, social isolation, difficulty perceiving and responding to others' emotional states
Biomedical symptoms – sensory-motor developmental dysfunction, sensory-integration difficulties; increased medical problems or even somatization
Affect or emotional regulation – poor affect regulation, difficulty identifying and expressing emotions and internal states, and difficulties communicating needs, wants, and wishes
Elements of dissociation – amnesia, depersonalization, discrete states of consciousness with discrete memories, affect, and functioning, and impaired memory for state-based events
Behavioral control – problems with impulse control, aggression, pathological self-soothing, and sleep problems
Cognition – difficulty regulating attention; problems with a variety of executive functions such as planning, judgment, initiation, use of materials, and self-monitoring; difficulty processing new information; difficulty focusing and completing tasks; poor object constancy; problems with cause-effect thinking; and language developmental problems such as a gap between receptive and expressive communication abilities.
Self-concept – fragmented and/or disconnected autobiographical narrative, disturbed body image, low self-esteem, excessive shame, and negative internal working models of self.
Adults
Adults with CPTSD have sometimes experienced prolonged interpersonal traumatization beginning in childhood, rather than, or as well as, in adulthood. These early injuries interrupt the development of a robust sense of self and of others. Because physical and emotional pain or neglect was often inflicted by attachment figures such as caregivers or siblings, these individuals may develop a sense that they are fundamentally flawed and that others cannot be relied upon. This can become a pervasive way of relating to others in adult life, described as insecure attachment. This symptom is included neither in the diagnosis of dissociative disorder nor in that of PTSD in the current DSM-5 (2013). Individuals with Complex PTSD also demonstrate lasting personality disturbances with a significant risk of revictimization.
Six clusters of symptoms have been suggested for diagnosis of CPTSD:
Alterations in regulation of affect and impulses
Alterations in attention or consciousness
Alterations in self-perception
Alterations in relations with others
Somatization
Alterations in systems of meaning
Experiences in these areas may include:
Changes in emotional regulation, including experiences such as persistent dysphoria, chronic suicidal preoccupation, self-injury, explosive or extremely inhibited anger (may alternate), and compulsive or extremely inhibited sexuality (may alternate).
Variations in consciousness, such as amnesia or improved recall for traumatic events, episodes of dissociation, depersonalization/derealization, and reliving experiences (either in the form of intrusive PTSD symptoms or in ruminative preoccupation).
Changes in self-perception, such as a sense of helplessness or paralysis of initiative, shame, guilt and self-blame, a sense of defilement or stigma, and a sense of being completely different from other human beings (may include a sense of specialness, utter aloneness, a belief that no other person can understand, or a feeling of nonhuman identity).
Varied changes in perception of the perpetrators, such as a preoccupation with the relationship with a perpetrator (including a preoccupation with revenge), an unrealistic attribution of total power to a perpetrator (though the individual's assessment may be more realistic than the clinician's), idealization or paradoxical gratitude, a sense of a special or supernatural relationship with a perpetrator, and acceptance of a perpetrator's belief system or rationalizations.
Alterations in relations with others, such as isolation and withdrawal, disruption in intimate relationships, a repeated search for a rescuer (may alternate with isolation and withdrawal), persistent distrust, and repeated failures of self-protection.
Changes in systems of meaning, such as a loss of sustaining faith and a sense of hopelessness and despair.
Diagnosis
CPTSD was considered for inclusion in the DSM-IV but was excluded from the 1994 publication, and it was also excluded from the DSM-5, which lists post-traumatic stress disorder instead. The ICD-11 has included CPTSD since its initial publication in 2018, and an official psychometric instrument exists for assessing ICD-11 CPTSD: the International Trauma Questionnaire (ITQ).
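The ICD-11 distinction between PTSD and CPTSD, as operationalized by the ITQ, can be made concrete with a short sketch. The snippet below follows the commonly described scoring convention (each symptom item rated 0–4, a rating of 2 or more counting as endorsement; PTSD requires an endorsed item in each of three symptom clusters plus functional impairment, and CPTSD additionally requires the same for the three disturbances-in-self-organization clusters). The item labels and thresholds shown are a simplified summary rather than the official scoring manual, so this is an illustration, not a clinical tool.

```python
# Minimal sketch of the ICD-11 PTSD/CPTSD diagnostic algorithm as operationalized
# by the International Trauma Questionnaire (ITQ). Cluster groupings and the
# endorsement threshold follow the commonly described scoring rules; verify
# against the official ITQ manual before any real use.

PTSD_CLUSTERS = {
    "re_experiencing": ["P1", "P2"],
    "avoidance": ["P3", "P4"],
    "sense_of_threat": ["P5", "P6"],
}
DSO_CLUSTERS = {
    "affective_dysregulation": ["C1", "C2"],
    "negative_self_concept": ["C3", "C4"],
    "disturbed_relationships": ["C5", "C6"],
}
ENDORSED = 2  # a rating of 2 ("moderately") or higher counts as endorsement


def cluster_met(ratings, items):
    """A cluster is met if at least one of its items is endorsed."""
    return any(ratings.get(item, 0) >= ENDORSED for item in items)


def classify(ratings, ptsd_impairment, dso_impairment):
    """Return 'CPTSD', 'PTSD', or None for a dict of item ratings (0-4)."""
    ptsd = (all(cluster_met(ratings, items) for items in PTSD_CLUSTERS.values())
            and ptsd_impairment)
    dso = (all(cluster_met(ratings, items) for items in DSO_CLUSTERS.values())
           and dso_impairment)
    if ptsd and dso:
        return "CPTSD"  # ICD-11: CPTSD subsumes the PTSD criteria
    if ptsd:
        return "PTSD"
    return None


# Example: all three PTSD clusters met, but only two of three DSO clusters,
# so the algorithm returns 'PTSD' rather than 'CPTSD'.
ratings = {"P1": 3, "P3": 2, "P5": 4, "C1": 3, "C3": 0, "C4": 1, "C5": 2}
print(classify(ratings, ptsd_impairment=True, dso_impairment=True))  # PTSD
```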
Differential diagnosis
Post-traumatic stress disorder
Post-traumatic stress disorder (PTSD) was included in the DSM-III (1980), mainly due to the relatively large numbers of American combat veterans of the Vietnam War who were seeking treatment for the lingering effects of combat stress. In the 1980s, various researchers and clinicians suggested that PTSD might also accurately describe the sequelae of such traumas as child sexual abuse and domestic abuse. However, it was soon suggested that PTSD failed to account for the cluster of symptoms that were often observed in cases of prolonged abuse, particularly that which was perpetrated against children by caregivers during multiple childhood and adolescent developmental stages. Such patients were often extremely difficult to treat with established methods.
PTSD descriptions fail to capture some of the core characteristics of CPTSD. These elements include captivity, psychological fragmentation, the loss of a sense of safety, trust, and self-worth, as well as the tendency to be revictimized. Most importantly, there is a loss of a coherent sense of self: this loss, and the ensuing symptom profile, most pointedly differentiates CPTSD from PTSD.
CPTSD is also characterized by attachment disorder, particularly pervasive insecure or disorganized-type attachment. DSM-IV (1994) dissociative disorders and PTSD do not include insecure attachment in their criteria. As a consequence of this aspect of CPTSD, when some adults with CPTSD become parents and confront their own children's attachment needs, they may have particular difficulty responding sensitively to their infants' and young children's routine distress, such as during everyday separations, despite their best intentions and efforts. Although the great majority of survivors do not abuse others, this difficulty in parenting may have adverse repercussions for their children's social and emotional development if parents with this condition and their children do not receive appropriate treatment.
Thus, a differentiation between the diagnostic category of CPTSD and that of PTSD has been suggested. PTSD can exist alongside CPTSD; however, a sole diagnosis of PTSD often does not sufficiently encapsulate the breadth of symptoms experienced by those who have undergone prolonged traumatic experience, and therefore CPTSD extends beyond the PTSD parameters.
Continuous traumatic stress disorder (CTSD), which was introduced into the trauma literature by Gill Straker in 1987, differs from CPTSD. It was originally used by South African clinicians to describe the effects of exposure to frequent, high levels of violence usually associated with civil conflict and political repression. The term is applicable to the effects of exposure to contexts in which gang violence and crime are endemic as well as to the effects of ongoing exposure to life threats in high-risk occupations such as police, fire and emergency services. It has also been used to describe ongoing relationship trauma frequently experienced by people leaving relationships which involved intimate partner violence.
Traumatic grief
Traumatic grief or complicated mourning are conditions where trauma and grief coincide. There are conceptual links between trauma and bereavement since loss of a loved one is inherently traumatic. If a traumatic event was life-threatening, but did not result in a death, then it is more likely that the survivor will experience post-traumatic stress symptoms. If a person dies, and the survivor was close to the person who died, then it is more likely that symptoms of grief will also develop. When the death is of a loved one, and was sudden or violent, then both symptoms often coincide. This is likely in children exposed to community violence.
For CPTSD to manifest traumatic grief, the violence would occur under conditions of captivity, loss of control and disempowerment, coinciding with the death of a friend or loved one in life-threatening circumstances. This again is most likely for children and stepchildren who experience prolonged domestic or chronic community violence that ultimately results in the death of friends and loved ones. The phenomenon of the increased risk of violence and death of stepchildren is referred to as the Cinderella effect.
Borderline personality disorder
CPTSD may share some symptoms with both PTSD and borderline personality disorder (BPD). However, there is enough evidence to also differentiate CPTSD from borderline personality disorder.
It may help, in understanding the intersection of attachment theory with CPTSD and BPD, to read the following opinion of Bessel A. van der Kolk together with an understanding drawn from a description of BPD:
25% of those diagnosed with BPD have no known history of childhood neglect or abuse, and individuals are six times as likely to develop BPD if they have a relative with the diagnosis as those who do not. One conclusion is that there is a genetic predisposition to BPD unrelated to trauma. Researchers conducting a longitudinal investigation of identical twins found that "genetic factors play a major role in individual differences of borderline personality disorder features in Western society." A 2014 study published in the European Journal of Psychotraumatology compared CPTSD, PTSD, and borderline personality disorder and found that individual cases of each could be distinguished from one another, as well as from comorbid cases, arguing for separate diagnoses for each condition. BPD may be confused with CPTSD by those without proper knowledge of the two conditions, because those with BPD also tend to have PTSD or some history of trauma.
In Trauma and Recovery, Herman expresses the additional concern that patients with CPTSD frequently risk being misunderstood as inherently 'dependent', 'masochistic', or 'self-defeating', comparing this attitude to the historical misdiagnosis of female hysteria. However, those who develop CPTSD do so as a result of the intensity of the traumatic bond, in which a person becomes tightly biochemically bound to an abuser; the responses learned in order to survive, navigate and deal with the abuse then become automatic, embedded in the personality over the years of trauma: a normal reaction to an abnormal situation.
Treatment
While standard evidence-based treatments may be effective for treating post-traumatic stress disorder, treating complex PTSD often involves addressing interpersonal relational difficulties and a different set of symptoms which make it more challenging to treat.
Children
The utility of PTSD-derived psychotherapies for assisting children with CPTSD is uncertain, and this area of diagnosis and treatment calls for caution in use of the category CPTSD. Julian Ford and Bessel van der Kolk have suggested that CPTSD may not be as useful a category for the diagnosis and treatment of children as a proposed category of developmental trauma disorder (DTD), for which Courtois and Ford describe specific diagnostic requirements.
Since CPTSD or DTD in children is often caused by chronic maltreatment, neglect or abuse in a care-giving relationship, the first element of the biopsychosocial system to address is that relationship. This invariably involves some sort of child protection agency, which widens both the range of support that can be given to the child and the complexity of the situation, since the agency's statutory legal obligations may then need to be enforced.
A number of practical, therapeutic and ethical principles for assessment and intervention have been developed and explored in the field:
Identifying and addressing threats to the child's or family's safety and stability are the first priority.
A relational bridge must be developed to engage, retain and maximize the benefit for the child and caregiver.
Diagnosis, treatment planning and outcome monitoring are always relational and strengths-based.
All phases of treatment should aim to enhance self-regulation competencies.
Determining with whom, when and how to address traumatic memories.
Preventing and managing relational discontinuities and psychosocial crises.
Adults
Trauma recovery model
Judith Lewis Herman, in her book, Trauma and Recovery, proposed a complex trauma recovery model that occurs in three stages:
Establishing safety
Remembrance and mourning for what was lost
Reconnecting with community and more broadly, society
Herman believes recovery can only occur within a healing relationship and only if the survivor is empowered by that relationship. This healing relationship need not be romantic or sexual in the colloquial sense of "relationship", however, and can also include relationships with friends, co-workers, one's relatives or children, and the therapeutic relationship. However, the first stage of establishing safety must always include a thorough evaluation of the surroundings, which might include abusive relationships. This stage might involve the need for major life changes for some patients.
Complex trauma entails complex reactions, which in turn call for complex treatments; hence, treatment for CPTSD requires a multi-modal approach.
It has been suggested that treatment for complex PTSD should differ from treatment for PTSD by focusing on problems that cause more functional impairment than the PTSD symptoms. These problems include emotional dysregulation, dissociation, and interpersonal problems. Six suggested core components of complex trauma treatment include:
Safety
Self-regulation
Self-reflective information processing
Traumatic experiences integration
Relational engagement
Positive affect enhancement
The above components can be conceptualized as a model with three phases. Not every case will be the same, but the first phase will emphasize the acquisition and strengthening of adequate coping strategies as well as addressing safety issues and concerns. The next phase focuses on decreasing avoidance of traumatic stimuli and applying the coping skills learned in phase one. The care provider may also begin challenging assumptions about the trauma and introducing alternative narratives about it. The final phase consists of solidifying what has previously been learned and transferring these strategies to future stressful events.
Neuroscientific and trauma informed interventions
In practice, the forms of treatment and intervention vary from individual to individual, since there is a wide spectrum of childhood experiences of developmental trauma and symptomatology, and not all survivors respond positively or uniformly to the same treatment. Therefore, treatment is generally tailored to the individual. Recent neuroscientific research has shed some light on the impact that severe childhood abuse and neglect (trauma) has on a child's developing brain, specifically as it relates to the development of brain structures, function and connectivity from infancy to adulthood. This understanding of the neurophysiological underpinnings of complex trauma phenomena is what the field of traumatology currently calls 'trauma informed', and it has become the rationale behind the development of new treatments specifically targeting those with childhood developmental trauma. Martin Teicher, a Harvard psychiatrist and researcher, has suggested that the development of specific complex trauma related symptomatology (and indeed of many adult-onset psychopathologies) may be connected to gender differences and to the stage of childhood development at which trauma, abuse or neglect occurred. For example, it is well established that the development of dissociative identity disorder among women is often associated with early childhood sexual abuse.
Use of evidence-based treatment and its limitations
One of the current challenges faced by many survivors of complex trauma (or developmental trauma disorder) is support for treatment, since many current therapies are relatively expensive and not all forms of therapy or intervention are reimbursed by insurance companies that use evidence-based practice as a criterion for reimbursement. Cognitive behavioral therapy, prolonged exposure therapy and dialectical behavioral therapy are well-established forms of evidence-based intervention. These treatments are approved and endorsed by the American Psychiatric Association, the American Psychological Association and the Veterans Administration.
For example, "Limited evidence suggests that predominantly cognitive behavioral therapy treatments are effective, but do not suffice to achieve satisfactory end states, especially in Complex PTSD populations."
Treatment challenges
It is widely acknowledged by those who work in the trauma field that there is no single, standard, 'one size fits all' treatment for complex PTSD. There is also no clear consensus regarding the best treatment within the wider mental health professional community, which includes clinical psychologists, social workers, licensed therapists (MFTs) and psychiatrists. However, most neuroscientifically informed trauma practitioners understand the importance of utilizing a combination of both 'top down' and 'bottom up' interventions, as well as somatic interventions (sensorimotor psychotherapy, somatic experiencing or yoga), for the purposes of processing and integrating trauma memories.
Survivors of complex trauma often struggle to find a mental health professional who is properly trained in trauma-informed practices. It can also be challenging for them to receive adequate treatment and services for a mental health condition which is not universally recognized or well understood by general practitioners.
Allistair and Hull echo the sentiment of many other trauma neuroscience researchers (including Bessel van der Kolk and Bruce D. Perry) who argue:
Complex post-traumatic stress disorder is a long-term mental health condition which is often difficult and relatively expensive to treat. It often requires several years of psychotherapy and multiple modes of intervention by highly skilled mental health professionals who specialize in trauma-informed modalities designed to process and integrate childhood trauma memories, with the aims of mitigating symptoms and improving the survivor's quality of life. Delaying therapy for people with complex PTSD, whether intentionally or not, can exacerbate the condition.
Recommended treatment modalities and interventions
While no treatment has been designed specifically for use with the adult complex PTSD population (with the exception of component-based psychotherapy), many therapeutic interventions used by mental health professionals to treat PTSD are applied. As of 2017, the American Psychological Association PTSD Guideline Development Panel (GDP) strongly recommends the following for the treatment of PTSD:
Cognitive behavioral therapy (CBT) and trauma focused CBT
Cognitive processing therapy (CPT)
Cognitive therapy (CT)
Prolonged exposure therapy (PE)
The American Psychological Association also conditionally recommends:
Brief eclectic psychotherapy (BEP)
Eye movement desensitization and reprocessing (EMDR)
Narrative exposure therapy (NET)
While these treatments have been recommended, there is still a lack of research on the best and most efficacious treatments for complex PTSD. Psychological therapies such as cognitive behavioural therapy and eye movement desensitisation and reprocessing (EMDR) therapy are effective in treating CPTSD symptoms such as PTSD, depression and anxiety. For example, in a 2016 meta-analysis, four out of eight EMDR studies reached statistical significance, indicating the potential effectiveness of EMDR in treating certain conditions, and subjects in two of the studies continued to benefit from the treatment months later. Seven of the studies that employed psychometric tests showed that EMDR led to a reduction in depression symptoms compared with placebo groups. Like EMDR, the other therapies are especially effective for complex trauma related to domestic violence, and less effective when the condition is related to experiences of war or childhood sexual abuse. Mindfulness and relaxation are effective for PTSD symptoms, emotion regulation and interpersonal problems in people whose complex trauma is related to sexual abuse.
Many commonly used treatments are considered complementary or alternative since there still is a lack of research to classify these approaches as evidence based. Some of these additional interventions and modalities include:
biofeedback
dyadic resourcing (used with EMDR)
emotionally focused therapy
emotional freedom technique (EFT) or tapping
equine-assisted therapy
expressive arts therapy
internal family systems therapy
dialectical behavior therapy (DBT)
family systems therapy
group therapy
neurofeedback
psychodynamic therapy
sensorimotor psychotherapy
somatic experiencing
yoga, specifically trauma-sensitive yoga
Criticism of disorder and diagnosis
Though acceptance of the idea of complex PTSD has increased among mental health professionals, the fundamental research required for the proper validation of a new disorder was, as of 2013, insufficient. The disorder was proposed under the name DESNOS (Disorders of Extreme Stress, Not Otherwise Specified) for inclusion in the DSM-IV but was rejected by members of the Diagnostic and Statistical Manual of Mental Disorders (DSM) committee of the American Psychiatric Association for lack of sufficient diagnostic validity research. Chief among the stated limitations was a study which showed that 95% of individuals who could be diagnosed with the proposed DESNOS were also diagnosable with PTSD, raising questions about the added usefulness of an additional disorder.
Following the failure of DESNOS to gain formal recognition in the DSM-IV, the concept was re-packaged for children and adolescents and given a new name, developmental trauma disorder. Supporters of DTD appealed to the developers of the DSM-5 to recognize DTD as a new disorder. Just as the developers of the DSM-IV refused to include DESNOS, the developers of the DSM-5 refused to include DTD, citing a perceived lack of sufficient research.
One of the main justifications offered for this proposed disorder has been that the current system of diagnosing PTSD plus comorbid disorders does not capture the wide array of symptoms in one diagnosis. Because individuals who suffered repeated and prolonged traumas often show PTSD plus other concurrent psychiatric disorders, some researchers have argued that a single broad disorder such as CPTSD provides a better and more parsimonious diagnosis than the current system of PTSD plus concurrent disorders. Conversely, an article published in BioMed Central has posited there is no evidence that being labeled with a single disorder leads to better treatment than being labeled with PTSD plus concurrent disorders.
Complex PTSD embraces a wider range of symptoms relative to PTSD, specifically emphasizing problems of emotional regulation, negative self-concept, and interpersonal problems. Diagnosing complex PTSD can imply that this wider range of symptoms is caused by traumatic experiences, rather than acknowledging pre-existing experiences of trauma which could lead to a higher risk of experiencing future traumas; critics counter that the wider range of symptoms and the higher risk of traumatization may instead be related through hidden confounding variables, with no causal relationship between symptoms and trauma experiences. In the diagnosis of PTSD, the definition of the stressor event is narrowly limited to life-threatening events, with the implication that these are typically sudden and unexpected. Complex PTSD vastly widened the definition of potential stressor events by calling them adverse events and deliberately dropping the reference to life-threatening, so that experiences such as neglect, emotional abuse, or living in a war zone can be included without the person having specifically experienced life-threatening events. An article published in the Child and Youth Care Forum claims that this broadening of the stressor criterion has led to confusing differences between competing definitions of complex PTSD, undercutting the clear operationalization of symptoms seen as one of the successes of the DSM.
External links
APA practice parameters for assessment and treatment for PTSD (Updated 2017)
Political psychology | Political psychology is an interdisciplinary academic field, dedicated to understanding politics, politicians and political behavior from a psychological perspective, and psychological processes using socio-political perspectives. The relationship between politics and psychology is considered bidirectional, with psychology being used as a lens for understanding politics and politics being used as a lens for understanding psychology. As an interdisciplinary field, political psychology borrows from a wide range of disciplines, including: anthropology, economics, history, international relations, journalism, media, philosophy, political science, psychology, and sociology.
Political psychology aims to understand interdependent relationships between individuals and contexts that are influenced by beliefs, motivation, perception, cognition, information processing, learning strategies, socialization and attitude formation. Political psychological theory and approaches have been applied in many contexts, such as leadership; domestic and foreign policy making; behavior in ethnic violence, war and genocide; group dynamics and conflict; racist behavior; voting attitudes and motivation; voting and the role of the media; nationalism; and political extremism. In essence, political psychologists study the foundations, dynamics, and outcomes of political behavior using cognitive and social explanations.
History and early influences
France
Political psychology originated in Western Europe, particularly in France, where it was closely tied to the emergence of new disciplines and paradigms as well as to the precise social and political context in various countries. The discipline of political psychology was formally introduced during the Franco-Prussian War and the socialist revolution stirred by the rise of the Paris Commune (1871). The term political psychology was first introduced by the ethnologist Adolf Bastian in his book Man in History (1860). The philosopher Hippolyte Taine (1828–1893), a founder of the Ecole Libre de Sciences Politiques, applied Bastian's theories in his work The Origins of Contemporary France (1875–1893) to ideas on the founding and development of the Third Republic. The head of the Ecole Libre de Sciences Politiques, Émile Boutmy (1835–1906), was a famous explorer of social, political and geographical concepts of national interactions. He contributed various works on political psychology, such as The English People: A Study of their Political Psychology (1901) and The American People: Elements of Their Political Psychology (1902). Gustave Le Bon (1841–1931), a contributor to crowd theory, suggested that crowd activity subdued the will and polluted rational thought, resulting in uncontrollable impulses and emotions. He suggested in his works Psychology of Socialism (1896) and Political Psychology and Social Defense (1910) that in the uncontrollable state of a crowd people were more vulnerable to submission and leadership, and that embracing nationalism would remedy this.
Italy
Meanwhile, in Italy, the completion of the Risorgimento (1870) instigated various social reforms and extensions of voting rights. The large divisions between social classes during this period led the lawyer Gaetano Mosca (1858–1941) to publish The Ruling Class: Elements of Political Science (1896), which theorized that all societies contain a ruling class and a ruled class. Vilfredo Pareto (1848–1923), inspired by Mosca's concepts, contributed The Rise and Fall of the Elites (1901) and The Socialist System (1902–1903) to the discipline of political psychology, theorizing on the role of class and social systems. His work The Mind and Society (1916) offers a treatise on sociology. Mosca's and Pareto's texts on the Italian elite contributed to the theories of Robert Michels (1876–1936), a German socialist fascinated by the distinction between the largely lower-class-run parliament in Germany and the upper-class-run parliament in Italy. He wrote Political Parties: A Sociological Study of the Oligarchic Tendencies of Modern Democracy (1911).
Austria
A large psychoanalytical influence was contributed to the discipline of political psychology by Sigmund Freud (1856–1939). His texts Totem and Taboo (1913) and Group Psychology and the Analysis of the Ego (1921) linked psychoanalysis with politics. Freud and Bullitt (1967) developed the first psychobiographical explanation of how the personality characteristics of U.S. President Woodrow Wilson affected his decision-making during World War I. Wilhelm Reich (1897–1957), writing amid the rise of European fascism, was interested in whether personality types varied according to epoch, culture and class, and described a bidirectional relationship between personality and the surrounding group, society and environment. He combined Freudian and Marxist theories in his book The Mass Psychology of Fascism (1933). He also edited The Journal for Political Psychology and Sexual Economy (1934–1938), the first journal to present political psychology in a major Western language.
Germany
In Germany, political upheaval and fascist control during World War II spurred research into authoritarianism by the Frankfurt School. The philosopher Herbert Marcuse (1898–1979) opened up issues concerning freedom and authority in his book Reason and Revolution: Hegel and the Rise of Social Theory (1941), in which he suggested that groups compromise on individual rights. Theodor W. Adorno (1903–1969) also investigated authoritarian individuals and anti-Semitism; his report The Authoritarian Personality (1950) attempts to determine the personality type susceptible to following fascism and anti-democratic propaganda. The Nazi movement during World War II also spurred controversial psychologists such as Walther Poppelreuter (1932) to lecture and write about a political psychology that identified with Hitler. The psychologist Erich Jaensch (1883–1940) contributed the racist book The Anti-type (1933).
United Kingdom
At the turn of the century, Oxford University and Cambridge University introduced disciplinary political psychology courses such as "The Sciences of Man", alongside the foundation of the Psychological Society (1901) and the Sociological Society (1904). The Oxford historian G. B. Grundy (1861–1948) identified political psychology (1917) as a sub-discipline of history. Motivated by social and political behavior during World War I, he proposed a new branch of historical science, "The Psychology of Men Acting in Masses", looking to science as an instrument for clarifying mistaken beliefs about intention. The intellectual Graham Wallas (1859–1932) highlighted the significance of studying psychology in politics in Human Nature in Politics (1908). Wallas emphasized the importance of enlightening politicians and the public about psychological processes, in order to raise awareness of exploitation while developing control over one's own psychological intellect. He suggested in Great Society (1917) that recognition of such processes could help to build a more functional humanity.
United States
Across the Atlantic, the first American to be considered a political psychologist was Harold Lasswell (1902–1978), whose research was likewise spurred by a sociological fascination with World War I. His work Propaganda Technique in the World War (1927) discussed the use of psychological theories to enhance propaganda technique. Lasswell moved to Europe shortly afterwards, where he began tying Freudian and Adlerian personality theories to politics, publishing Psychopathology and Politics (1930). His major theories involved the motives of the politically active and the relation between propaganda and personality.
Another contributing factor to the development of political psychology was the introduction of psychometrics and "The Measurement of Attitude" by Thurstone and Chave (1929). The methodological revolution in social science gave quantitative grounds, and therefore more credibility, to political psychology. Research into political preference during campaigns was spurred by George Gallup (1901–1984), who founded the American Institute of Public Opinion. The 1940 election in America drew a lot of attention in connection with the start of World War II: Gallup, Roper and Crossley instigated research into the chances of Roosevelt being re-elected, and Lazarsfeld, Berelson and Gaudet (1944) conducted a famous panel study, "The People's Choice", on the 1940 election campaign. These studies drew attention to the possibility of measuring political techniques using psychological theories. The entry of the US into World War II spurred vast research into fields such as warfare technique, propaganda, group morale, psychobiography and culture conflict, among others, with the U.S. Army and Navy recruiting young psychologists. Thus the discipline quickly developed and gained international accreditation.
Hadley Cantril and L. A. Free established the Institute for International Social Research to focus "attention primarily on psychological changes which influence political behavior in ways that have significant effect on international relations." They studied "governments and why, in terms of psychological variables, they behave as they do in regard to international issues."
McGuire identifies three broad phases in the development of political psychology, these three phases are: (1) The era of personality studies in the 1940s and 1950s dominated by psychoanalysis. (2) The era of political attitudes and voting behavior studies in the 1960s and 1970s characterized by the popularity of "rational man" assumptions. (3) An era since the 1980s and 1990s, which has focused on political beliefs, information processing and decision making, and has dealt in particular with international politics.
Personality and politics
The study of personality in political psychology focuses on the effects of leadership personality on decision-making, and the consequences of mass personality on leadership boundaries. Key personality approaches utilized in political psychology are psychoanalytic theories, trait-based theories and motive-based theories.
A psychoanalytical approach
Sigmund Freud (1856–1939) made significant contributions to the study of personality in political psychology through his theories on the unconscious motives of behavior. Freud suggested that a leader's behavior and decision-making skill were largely determined by the interaction in their personality of the id, ego and superego, and their control of the pleasure principle and reality principle. The psychoanalytic approach has also been used extensively in psychobiographies of political leaders. Psychobiographies draw inferences from personal, social and political development, starting from childhood, to understand behavior patterns that can be implemented to predict decision-making motives and strategies.
A trait-based approach
Traits are personality characteristics that prove stable over time and across situations, creating predispositions to perceive and respond in particular ways. Gordon Allport (1897–1967) pioneered the study of traits, introducing central, secondary, cardinal and common traits. These four distinctions suggest that people demonstrate traits to varying degrees, and further that there is a difference between individual and common traits to be recognized within a society. Hans Eysenck (1916–1997) contributed a model of three major traits. Currently, however, Costa and McCrae's (1992) "Big Five" personality dimensions are the most widely recognized: neuroticism, extraversion, agreeableness, openness to experience and conscientiousness. Theories in political psychology hold that one's combination of these traits has implications for leadership style and capacity. For example, individuals who score highly on extraversion are found to have superior leadership skills. The Myers-Briggs Type Indicator (MBTI) is a personality assessment scale commonly used in the study of political personality and for job profiling.
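To illustrate how a trait-based framework such as the Big Five is typically operationalized, the sketch below scores a short Likert-style questionnaire: responses are rated 1–5, reverse-keyed items are flipped, and each dimension's score is the mean of its items. The items, keying and dimension assignments are hypothetical and are not drawn from any validated inventory such as the NEO-PI-R.

```python
# Illustrative scoring of a short Big Five style questionnaire.
# Items, keying, and dimension assignments are hypothetical, not drawn
# from any validated inventory.

# Each dimension maps to (item_id, reverse_keyed) pairs.
DIMENSIONS = {
    "extraversion": [("E1", False), ("E2", True)],
    "neuroticism": [("N1", False), ("N2", True)],
    "agreeableness": [("A1", False), ("A2", False)],
    "openness": [("O1", False), ("O2", True)],
    "conscientiousness": [("C1", False), ("C2", True)],
}
SCALE_MAX = 5  # 1-5 Likert scale


def score(responses):
    """Return the mean score per dimension, flipping reverse-keyed items."""
    scores = {}
    for dim, items in DIMENSIONS.items():
        values = []
        for item, reverse in items:
            raw = responses[item]
            values.append((SCALE_MAX + 1 - raw) if reverse else raw)
        scores[dim] = sum(values) / len(values)
    return scores


responses = {"E1": 5, "E2": 2, "N1": 1, "N2": 5,
             "A1": 4, "A2": 4, "O1": 3, "O2": 3, "C1": 5, "C2": 1}
print(score(responses))
# e.g. extraversion = (5 + (6 - 2)) / 2 = 4.5 -> relatively extraverted
```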
A motive-based approach
In terms of political psychology, motivation is viewed as goal-oriented behavior driven by a need for four things: power, affiliation, intimacy, and achievement. These categories were grouped by Winter (1996) from Murray's (1938) twenty suggested common human goals. Need for power affects the style in which a leader performs. Winter and Stewart (1977) suggested that leaders high in power motivation and low in need for affiliation-intimacy motivation make better presidents. Affiliation-motivated leaders, by contrast, tend toward collaborative joint efforts in the absence of threat. Lastly, achievement motivation has been shown not to correspond with political success, especially where it is higher than power motivation (Winter, 2002). For success, motivation needs to be consistent between a leader and those being governed. Motives have been shown to correlate more highly with situation and with time since last goal-fulfillment than with consistent traits. The Thematic Apperception Test (TAT) is commonly used for assessing motives. In the case of leadership assessment, however, this test is difficult to implement, so more applicable methods are often used instead, such as content analysis of speeches and interviews.
Frameworks for assessing personality
The authoritarian personality
The authoritarian personality is a syndrome theory developed by the researchers Adorno, Frenkel-Brunswik, Levinson and Sanford (1950) at the University of California. The American Jewish Committee subsidized research and publishing on the theory, since it revolved around ideas developed from the events of World War II. Adorno (1950) explained the authoritarian personality type from a psychoanalytic point of view, suggesting it to be the result of highly controlled and conventional parenting. Adorno (1950) explained that individuals with an authoritarian personality type had been stunted in developing an ability to control the sexual and aggressive impulses of the id. This resulted in a fear of those impulses and thus in the development of defense mechanisms to avoid confronting them. Persons with an authoritarian personality type are described as swinging between dependence on and resentment of authority. The syndrome was theorized to encompass nine characteristics: conventionalism, authoritarian submission, authoritarian aggression, anti-intraception (an opposition to subjective or imaginative tendencies), superstition and stereotypy, power and toughness, destructiveness and cynicism, sex obsession, and projectivity. The authoritarian personality type is suggested to be ethnocentric, ego-defensive, mentally rigid, conforming and conventional, averse to the out of the ordinary, and to hold conservative political views. The book The Authoritarian Personality (1950) introduces several scales based on different authoritarian personality types: the F-scale, which measures from where and to what degree fascist attitudes develop; the anti-Semitism scale; the ethnocentrism scale; and the politico-economic conservatism scale. The F-scale, however, is the only scale expected to measure implicit authoritarian personality tendencies.
Bob Altemeyer (1996) deconstructed the authoritarian personality using trait analysis. He developed a Right-wing Authoritarianism (RWA) scale based on the traits of authoritarian submission, authoritarian aggression, and conventionalism. Altemeyer (1996) suggested that those who score high on the F-scale have a low capacity for critical thinking and are therefore less able to contradict authority. Altemeyer's theories also incorporate the psychodynamic point of view, suggesting that authoritarian personality types were taught by their parents to believe that the world is a dangerous place, so that their impulses lead them to make impulsive, emotional and irrational decisions. The beliefs and behavior of an authoritarian are suggested to be easily manipulated by authority rather than based on internal values. Altemeyer also theorized that leaders with authoritarian personality types are more susceptible to the fundamental attribution error. There are many weaknesses associated with this syndrome and with the F-scale. It may have been more relevant during the period in which it was produced, shortly after World War II. The authoritarian personality is generally associated with a fascist image; however, it is suggested to explain the behavior of individuals across all political ideologies.
Trait-based frameworks
Trait-based frameworks, excluding the Freudian approach, were suggested by James Barber (1930–2004), who in The Presidential Character (1972) highlighted the importance of psychobiography in political personality analysis. Barber suggested that leadership personality comprises three dimensions: "character", "world view", and "style". Barber also proposed that leadership typology follows a pattern stemming from an individual's first political success, and that it includes two variables: the effort that a leader puts in and the personal satisfaction that the leader gains. This typology is fairly limited in its dimensions.
Etheredge (1978) proposed the importance of the traits "dominance", "interpersonal trust", "self-esteem" and "introversion-extroversion" in leadership views and policy shaping. Etheredge found, from studies of leaders during the Soviet Union era, that those who scored highly on dominance were more likely to support the use of force during debate settlement. He found that the trait of introversion can lead to a lack of co-operation, while extroversion usually leads to co-operation and negotiation. Further, he suggested that interpersonal trust and self-esteem were closely related to not advocating force.
Margaret Hermann (1976) introduced the Leader Trait Assessment (LTA) and advocated the development of the Profiler-Plus. The Profiler-Plus is a computer system used to code spontaneous interview answers for seven major characteristics: need for power, cognitive complexity, task-interpersonal emphasis, self-confidence, locus of control, distrust of others, and ethnocentrism. This method can profile large bodies of leadership-related text whilst removing any subjective bias from content analysis. It is efficient and has high reliability.
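The general mechanics of such automated trait coding can be sketched in a few lines. The toy example below counts trait-related words per 1,000 words of a speech transcript; the trait word lists and the normalization are invented for illustration and are far simpler than the actual Profiler-Plus scheme, which codes words in context rather than bare word frequencies.

```python
# Toy dictionary-based content analysis of a speech transcript.
# The trait word lists below are invented for illustration; the real
# Profiler-Plus/LTA scheme uses a much richer, context-sensitive coding.
import re
from collections import Counter

TRAIT_LEXICON = {
    "need_for_power": {"control", "force", "dominate", "command"},
    "self_confidence": {"certain", "confident", "assure", "guarantee"},
    "distrust_of_others": {"betray", "deceive", "threat", "enemy"},
}


def trait_scores(text):
    """Score each trait as matched-word count per 1,000 words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return {
        trait: 1000 * sum(counts[w] for w in lexicon) / total
        for trait, lexicon in TRAIT_LEXICON.items()
    }


speech = ("We must control our borders and command respect. "
          "I am certain, fully confident, that no enemy will deceive us.")
print(trait_scores(speech))
```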
Hermann and Preston (1994) suggested five distinct variables of leadership style: involvement in policy making, willingness to tolerate conflict, level of and reasons for motivation, information-managing strategies, and conflict-resolving strategies.
An alternative approach is the Operational Code method, introduced by Nathan Leites (1951) and restructured by Alexander George (1979). The code is based on five philosophical beliefs and five instrumental beliefs. A Verbs in Context (VIC) coding system employed through the Profiler-Plus computer program once again allows substantial bodies of written and spoken speech, interviews and writings to be analyzed objectively. The method attempts to predict behavior through applying knowledge of various beliefs.
Although political behavior is governed and represented by a leader, the leader's consequential influence largely depends upon the context in which they are placed and the type of political climate in which they are operating. For this reason, group behavior is also instrumental for understanding sociopolitical environments.
The political psychology of groups
Group behavior is key to the structure, stability, popularity and decision-making ability of political parties. Individual behavior deviates substantially in a group setting, so it is difficult to determine group behavior by looking solely at the individuals who comprise the group. Group form and stability are based upon several variables: size, structure, the purpose that the group serves, group development, and influences upon a group.
Group size
Group size has various consequences. In smaller groups individuals are more committed (Patterson and Schaeffer, 1997) and there is a lower turnover rate (Widmeyer, Brawley and Carron, 1990). Large groups display greater levels of divergence (O'Dell, 1968) and less conformity (Olson and Caddell, 1994). Group performance also diminishes as size increases, due to decreased co-ordination and increased free-riding. The size of a political party or nation can therefore have consequential effects on its ability to co-ordinate and progress.
Group structure
The structure of a group is altered by member diversity, which largely affects its efficiency. Individual diversity within a group has been shown to reduce communication and therefore to increase conflict (Maznevski, 1994). This has implications for political parties based in strongly colonial or multiracial nations.
Member diversity has consequences for status, role allocation and role strain within a group, all of which can cause disagreement. Thus, maintenance of group cohesion is key. Cohesion is affected by several factors: the amount of time members spend in the group, how much members like one another, the reward that the group offers, the level of external threat to the group, and the warmth offered by leaders. These factors should be considered when attempting to form an efficient political group. Presidential decision-making efficiency, for example, is affected by the degree to which members of the advisory group hold hierarchical status and by the roles that each member is assigned.
Group function
Studying the purpose for which a group forms, whether it serves a "functional" purpose or an "interpersonal attraction" purpose (Mackie and Goethals, 1987), has implications for political popularity. People often join groups in order to fulfill certain survival, interpersonal, informational and collective needs. A political party that provides stability and clear information, offers power to individuals, and satisfies a sense of affiliation will gain popularity. Schutz's (1958) "fundamental interpersonal relations orientation" theory suggests that groups satisfy the need for control, intimacy and inclusion. Groups also form through natural attraction. Newcomb (1960) states that we are drawn to others close to us in socioeconomic status, beliefs, attitudes and physical appearance. Similarity in certain respects can thus be related to how much a person is attracted to joining one group over another.
Group development
Group development tends to happen in several stages; forming, storming, norming, performing, and adjourning (Tuckman, 1965). Group awareness of these stages is important in order for members to acknowledge that a process is taking place and that certain stages such as storming are part of progression and that they should not be discouraged or cause fear of instability. Awareness of group development also allows for models to be implemented in order to manipulate different stages. External influences upon a group will have different effects depending upon which stage the group is at in its course. This has implications for how open a group should be depending upon the stage of development it is at, and on its strength.
Consistency is also a key aspect in a group for success (Wood, 1994).
The influence of conformity in groups
The application of conformity is key to understanding group influence in political behaviour. Decision-making within a group is largely influenced by conformity, which is theorized to occur on the basis of two motives: normative social influence and informational social influence (Asch, 1955). The likelihood of conformity is influenced by several factors: group size (conformity increases with size, but only up to a point, after which it plateaus) and the degree of unanimity and commitment to the group. The popularity of a political group can therefore be influenced by its existing size and by the public's perception of the unanimity and commitment of its existing members. The degree to which the group conforms as a whole can also be influenced by the degree of individuation of its members.
Conformity within political groups can also be related to political coalitions. Humans represent groups as if they were a special category of individual: for cognitive simplicity, ancestral groups anthropomorphized one another on the basis of shared thoughts, values, and historical background. Even when a member of a group makes an irrational or mistaken argument about a political issue, there is a high probability that the other members will conform to it by the mere fact that they belong to the same coalition.
The influence of power in groups
Power is another influential factor within a group or between separate groups. The "critical bases of power" developed by French and Raven (1959) identify the following types of power as the most successful: reward power, coercive power, legitimate power, referent power and expert power.
The way in which power is exerted upon a group can have repercussions for popularity. Referent power results in greater popularity of a political group or leader than coercive power (Shaw and Condelli, 1986). This has implications for leaders: encouraging others to identify with them works better than enforcing consequential punishment. However, if coercive power is enforced, success and a trusted leader (Friedland, 1976) are necessary in order for group conflict not to escalate. Extrinsic punishment and reward are also suggested to detract from intrinsic motivation; a sense of freedom must be promoted within the group.
Decision-making in groups
Decision-making is an important political process which influences the course of a country's policy. Group decision-making is largely influenced by three rules: the "majority-wins rule", the "truth-wins rule", and the "first-shift rule" (the first two are contrasted in the simulation sketch below). Decision-making is also shaped by conformity. Irrational decisions are generally made during emotional periods; for example, an unpopular political party may receive more votes during a period of actual or perceived economic or political instability. Controversial studies by George Marcus (2003), however, imply that high levels of anxiety can actually cause an individual to analyze information more rationally and carefully, resulting in better-informed and more successful decisions. The psychology of decision-making must, however, be analyzed according to whether it occurs in a leadership context or a between-group context. Successful decision-making is often enhanced by group decision-making (Hill, 1982), especially if the decision is important to the group and the group has been working together for an extended period of time (Watson, Michaelson and Sharp, 1991). However, groups can also hinder decision-making if a correct answer is not clear. Janis (1972) introduced the notion of groupthink, which posits an increased chance of faulty group decisions under several conditions: strong group cohesion, isolation of the group's decision from public review, the presence of a directive leader, and high stress levels.
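To make the first two rules concrete, here is a minimal simulation sketch. The parameters (a five-member group, each member independently finding the correct answer 60% of the time) are invented for illustration, and "truth-wins" is modeled in its idealized form, in which a single member who finds the demonstrably correct answer carries the group.

```python
# Compare 'majority-wins' and 'truth-wins' group decision rules by simulation.
# Parameters (group size, individual accuracy) are illustrative assumptions.
import random

random.seed(0)
GROUP_SIZE = 5
P_CORRECT = 0.6   # probability each member independently finds the answer
TRIALS = 100_000


def simulate():
    majority_hits = truth_hits = 0
    for _ in range(TRIALS):
        correct = [random.random() < P_CORRECT for _ in range(GROUP_SIZE)]
        # Majority-wins: the group adopts whatever most members prefer.
        if sum(correct) > GROUP_SIZE / 2:
            majority_hits += 1
        # Truth-wins (idealized): one member who sees the demonstrably
        # correct solution can carry the whole group.
        if any(correct):
            truth_hits += 1
    return majority_hits / TRIALS, truth_hits / TRIALS


maj, truth = simulate()
print(f"majority-wins accuracy: {maj:.3f}")   # ~0.68 for these parameters
print(f"truth-wins accuracy:    {truth:.3f}")  # ~0.99
```

Under these assumptions the truth-wins rule outperforms majority voting, which illustrates why groups do better than individuals mainly on problems with a demonstrably correct answer, and why, as noted above, they can hinder decision-making when no answer is clearly correct.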
Group polarization (Janis, 1972) suggests that group decisions are often more extreme than individual ones, whether in a riskier or a more cautious direction. Groupthink refers to "a mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' striving for unanimity override their motivation to realistically appraise alternative courses of action."
Techniques to establish more effective decision-making in political settings have been suggested. Hirt and Markman (1995) claim that placing an individual in a group to find faults and critique will enable the members to establish alternative viewpoints. George (1980) suggested "multiple advocacy", in which a neutral person analyses the pros and cons of various advocates' suggestions and thus makes an informed decision.
Applied psychology theories to improve productivity of political groups include implementing "team development" techniques, "quality circles" and autonomous workgroups.
Using psychology in the understanding of certain political behaviors
Evolution
Evolutionary psychology plays a significant role in understanding how the current political regime came to be. It is an approach that focuses on the structure of human behavior, claiming its dependence on the social and ecological environment. Developed through natural selection, the human brain functions to react appropriately to environmental challenges of coalitional conflict using psychological mechanisms and modifications. An example of political conflict would be state aggression, such as war. Psychological mechanisms work to digest internal and external information regarding the current habitat and project it into the best-suited form of action, such as aggression, retreat, dominance or submission.
Political identity and voting behavior
In order to make inferences and predictions about voting decisions, certain key public influences must be considered. These influences include the role of emotions, political socialization, political sophistication, tolerance of diversity of political views, and the media. The effect of these influences on voting behavior is best understood through theories on the formation of attitudes, beliefs, schemas and knowledge structures, and through the practice of information processing. The degree to which voting decisions are affected by internal processing of political information and by external influences alters the quality of truly democratic decision-making. The perception of external events such as terrorist attacks, governmental warnings, and shifts in racial demography can lead to shifts in political opinion (Jost, 2017).
Some prominent academics in the field include Dr. Chadly Daniel Stern, who currently works at the Department of Psychology at the University of Illinois, Urbana Champaign. His research centers around answering social cognitive questions of how a person's political belief systems shape the way that they perceive the world and their everyday interactions.
Childhood influence
In 2006, scientists reported a relationship between personality and political views of Americans on a left–right spectrum as follows: "Preschool children who 20 years later were relatively liberal were characterized as: developing close relationships, self-reliant, energetic, somewhat dominating, relatively under-controlled, and resilient. Preschool children subsequently relatively conservative at age 23 were described as: feeling easily victimized, easily offended, indecisive, fearful, rigid, inhibited, and relatively over-controlled and vulnerable."
The amount of research done on children and the impact their childhoods have on their political views or identity is limited. However, an increasing amount of empirical work on children and their environment could be highly revealing of how their political awareness and attitudes develop very early on (Reifen‐Tagar & Cimpian, 2020).
Conflict
The application of psychology to understanding conflict and extreme acts of violence can be framed in both individual and group terms. Political conflict is often a consequence of ethnic disparity and "ethnocentrism" (Sumner, 1906).
On an individual level, participants in situations of conflict can be perpetrators, bystanders, or altruists. The behavior of perpetrators is often explained through the authoritarian personality type. Individual differences in levels of empathy have been used to explain whether an individual chooses to stand up to authority or to ignore a conflict. Rotter's (1954) locus of control theory in personality psychology has also been used to account for individual differences in reaction to situations of conflict.
Group behavior during conflict often affects the actions of an individual. The bystander effect introduced by Darley and Latane (1968) demonstrates that group behavior causes individuals to monitor whether others think it is necessary to react in a situation and to base their own behavior on this judgment. They also found that individuals are more likely to diffuse responsibility in group situations. These theories can be applied to situations of conflict and genocide in which individuals remove personal responsibility and therefore justify their behavior. Social identity theory explains that during the Holocaust of World War II, political leaders used the Jews as an out-group in order to increase in-group cohesion. This allowed the perpetrators to depersonalize from the situation and to diffuse their responsibility. The out-groups were held in separate confines and dehumanized in order to help the in-group disengage from relating to them.
Research by Dan Kahan has demonstrated that individuals are resistant to accepting new political views even when presented with evidence that challenges those views. The research also demonstrated that if individuals were required to write a few sentences about experiences they enjoyed, or to spend a few moments affirming their self-worth, they were more likely to accept the new political position.
Although a somewhat unusual application, evolutionary psychology can also explain conflicts in politics and international society. A journal article by Anthony C. Lopez, Rose McDermott, and Michael Bang Petersen uses this idea to propose hypotheses explaining political events. According to the authors, instincts and psychological characteristics developed through evolution are still present in modern people. They characterize human beings as "adaptation executers", people designed through natural selection, rather than "utility maximizers", people who strive for utility in every moment. Though a group of people, perhaps those in the same political coalition, may seem to pursue a common utility maximization, it is difficult to generalize the theory of "utility maximizers" to the level of a nation because people evolved in small groups. This approach helps scholars explain seemingly irrational behaviors like aggressiveness in politics and international society, because such "irrational behavior" would be the result of a mismatch between the modern world and evolved psychology.
For example, according to evolutionary psychology, coalitional aggression is more commonly found in males because of psychological mechanisms shaped in ancestral times. During those times, men had more to gain from winning wars than women did (they had a greater chance of finding a mate, or even many mates). Victorious men also had a greater chance of reproduction, which eventually led to the transmission of genes favoring aggression and eagerness for war. As a result, the authors hypothesize that countries with more men will tend to show more aggressive politics, and thereby have a greater possibility of triggering conflicts within and especially among states.
Some exceptions to this theory do exist, as it remains a hypothesis. However, it is a viable, testable hypothesis for explaining certain political events such as wars and crises.
Terrorism
On an individual level, terrorism has been explained in terms of psychopathology. Terrorists have been shown to display narcissistic personality traits (Lasch, 1979; Pearlstein, 1991). Jerrold Post (2004) argues that narcissistic and borderline personality disorders are found in terrorists and that mechanisms such as splitting and externalization are used by them. Others, such as Silke (2004) and Mastors and Deffenbaugh (2007), refute this view. Crenshaw (2004) showed that certain terrorist groups are actually careful not to enlist those demonstrating pathology. The authoritarian personality theory has also been used as an explanation for terrorist behavior in individuals.
In terms of explaining why individuals join terrorist groups, motivational theories such as the need for power and the need for affiliation and intimacy have been suggested. Festinger (1954) explained that people often join groups in order to compare their own beliefs and attitudes; joining a terrorist group could be a method of remedying individual uncertainty. Taylor and Louis (2004) explained that individuals strive for meaningful behavior, which can also be used to explain why terrorists seek out such radical beliefs and demonstrations. Studies on children in Northern Ireland by Field (1979) have shown that exposure to violence can lead to terrorist behavior later on, implicating the effect of developing acceptable norms in groups. However, this view has also been criticized (Taylor, 1998). Other theories suggest that goal frustration can result in aggression (Dollard, Doob, Miller, Mowrer, and Sears, 1939) and that aggression can lead to frustration (Borum, 2004).
Group settings can cause a social identity and terrorist behavior to manifest. Methods such as dehumanization allow individuals to detach more easily from moral responsibility, and group influence increases the chance that individuals will concede to conformity and compliance. Manipulation through social control and propaganda can also be instrumental in terrorist involvement.
In fact, a strategic model has been proposed to examine the political motivations of terrorists. The strategic model, the dominant paradigm in terrorism studies, considers terrorists to be rational actors who attack civilians for political ends. According to this view, terrorists are political utility maximizers. The strategic model rests on three core assumptions: (1) terrorists are motivated by relatively stable and consistent political preferences; (2) terrorists evaluate the expected political payoffs of their available options; and (3) terrorism is adopted when the expected political return is superior to that of alternative options. However, it turns out that terrorists' decision-making does not fully conform to the strategic model. According to Max Abrahms, the author of "What Terrorists Really Want: Terrorist Motives and Counterterrorism Strategy", there are seven common tendencies that represent important empirical puzzles for the strategic model, going against the conventional thought that terrorists are rational actors.
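To make the model's decision rule concrete, the following minimal Python sketch computes expected political returns and selects the best option. The option names and payoff numbers are purely hypothetical illustrations, not data from the terrorism literature; the point is only the comparison rule in assumptions (2) and (3).

# A toy illustration of the strategic model's decision rule: a political
# utility maximizer chooses the option with the highest expected return.
# All option names and numbers below are invented for illustration.
options = {
    "negotiation": [(0.6, 10), (0.4, 0)],   # (probability, political payoff)
    "protest":     [(0.5, 8), (0.5, 2)],
    "violence":    [(0.2, 12), (0.8, -5)],
}

def expected_return(outcomes):
    # Expected political payoff of one option: sum of probability * payoff.
    return sum(p * payoff for p, payoff in outcomes)

best = max(options, key=lambda name: expected_return(options[name]))
print({name: expected_return(o) for name, o in options.items()})
print("chosen option:", best)

Under assumption (3), the violent option would be adopted only if its expected return exceeded the alternatives; Abrahms's empirical puzzles are cases where observed behavior departs from this rule.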
See also
:Category:Political psychologists
References
Footnotes
Bibliography
Further reading
Idrees Kahloon, "Border Control: The economics of immigration vs. the politics of immigration", The New Yorker, 12 June 2023, pp. 65–69. "The limits of immigration are not set by economics but by political psychology – by backlash unconcerned with net benefits." (p. 65.)
External links
International Bulletin of Political Psychology
The Center for the Study of Political Psychology
The Center for Research in Political Psychology (Queen's University Belfast)
The International Society of Political Psychology
Political Psychology at The George Washington University
Psychology | 0.769518 | 0.988852 | 0.760939 |
Alogia | In psychology, alogia (from Greek ἀ-, "without", and λόγος, "speech", + New Latin -ia) is poor thinking inferred from speech and language usage.
There may be a general lack of additional, unprompted content seen in normal speech, so replies to questions may be brief and concrete, with less spontaneous speech. This is termed poverty of speech or laconic speech.
The amount of speech may be normal but conveys little information because it is vague, empty, stereotyped, overconcrete, overabstract, or repetitive. This is termed poverty of content or poverty of content of speech.
Under the Scale for the Assessment of Negative Symptoms, used in clinical research, thought blocking is considered a part of alogia, as is increased latency in response.
This condition is associated with schizophrenia, dementia, severe depression, and autism.
As a symptom, it is commonly seen in patients with schizophrenia and schizotypal personality disorder, and is traditionally considered a negative symptom. It can complicate psychotherapy severely because of the considerable difficulty in holding a fluent conversation.
The alternative meaning of alogia is an inability to speak because of dysfunction in the central nervous system, found in mental deficiency and dementia. In this sense, the word is synonymous with aphasia, and in less severe form, it is sometimes called dyslogia.
Characteristics
Alogia may be on a continuum with normal behaviors. People without mental illness may have it occasionally, including when fatigued or disinhibited, when writers use language creatively, or when people in certain disciplines, such as politicians, administrators, philosophers, ministers, and scientists, use language pedantically.
Hence, deciding if an individual has alogia depends on contextual clues. Is the person in control? Can the person moderate the effect if asked to be specific or concise? Is it better with another topic? Are there other significant symptoms?
Alogia is characterized by a lack of speech, often caused by a disruption in the thought process. An injury to the left side of the brain may cause alogia to appear in an individual. While in conversation, alogic patients will reply very sparsely, and their answers to questions will lack spontaneous content; sometimes they will even fail to answer at all. Their responses will be brief, generally appearing only in response to a question or prompt.
Apart from the lack of content in a reply, the manner in which the person delivers the reply is affected as well. Patients affected by alogia will often slur their responses, and not pronounce the consonants as clearly as usual. The few words spoken usually trail off into a whisper, or are just ended by the second syllable. Studies have shown a correlation between alogic ratings in individuals and the amount and duration of pauses in their speech when responding to a series of questions posed by the researcher.
The inability to speak stems from a deeper mental inability that causes alogic patients to have difficulty grasping the right words mentally, as well as formulating their thoughts. A study investigating alogiacs and their results on the category fluency task showed that people with schizophrenia who exhibit alogia display a more disorganized semantic memory than controls. While both groups produced the same number of words, the words produced by people with schizophrenia were much more disorderly and the results of cluster analysis revealed bizarre coherence in the alogiac group.
If the condition is assessed using a language other than the individual's primary language, the medical professional needs to make sure that the problem is not from language barriers.
This condition is associated with schizophrenia, dementia, and severe depression.
Example
The following table shows an example of "poverty of speech" which shows replies to questions that are brief and concrete, with a reduction in spontaneous speech:
The following example of "poverty of content of speech" is a response from a patient when asked why he was in a hospital. Speech is vague, conveys little information, but is not grossly incoherent and the amount of speech is not reduced. "I often contemplate—it is a general stance of the world—it is a tendency which varies from time to time—it defines things more than others—it is in the nature of habit—this is what I would like to say to explain everything."
Causes
Alogia can be brought on by frontostriatal dysfunction which causes degradation of the semantic store, the center located in the temporal lobe that processes meaning in language. A subgroup of chronic schizophrenia patients in a word generation experiment generated fewer words than the unaffected subjects and had limited lexicons, evidence of the weakening of the semantic store. Another study found that when given the task of naming items in a category, schizophrenia patients displayed a great struggle but improved significantly when experimenters employed a second stimulus to guide behavior unconsciously. This conclusion was similar to results produced from patients with Huntington's and Parkinson's disease, ailments which also involve frontostriatal dysfunction.
Treatment
Medical studies conclude that certain adjunctive drugs effectively palliate the negative symptoms of schizophrenia, mainly alogia. In one study, maprotiline produced the greatest reduction in alogia symptoms, with reduced severity in 50% of the 10 patients studied. Of the negative symptoms of schizophrenia, alogia had the second-best responsiveness to the drugs, surpassed only by attention deficiency. D-amphetamine is another drug that has been tested on people with schizophrenia and found success in alleviating negative symptoms. This treatment, however, has not been developed greatly, as it seems to have adverse effects on other aspects of schizophrenia, such as increasing the severity of positive symptoms.
Relation to schizophrenia
Although alogia is found as a symptom in a variety of health disorders, it is most commonly found as a negative symptom of schizophrenia.
Previous studies and analyses conclude that at least three factors are needed to cover both the positive and negative symptoms of schizophrenia; the three are: psychotic, disorganization, and negative symptom factors. Studies suggest that an inappropriate affect is strongly associated with bizarre behavior and positive formal thought disorder on a disorganization factor; attention impairment correlates significantly with psychotic, disorganization, and negative symptom factors. Alogia contains both positive and negative symptoms, with the poverty of content of speech as the disorganization factor, and poverty of speech, response latency, and thought blocking as the negative symptom factors.
Alogia is a major diagnostic sign of schizophrenia, when organic mental disorders have been excluded.
In schizophrenia, negative symptoms including flattening of affect, avolition, and alogia are responsible for the considerable morbidity of the disease compared with other psychotic disorders.
Negative symptoms are common in the prodromal and residual phases of the disease and can be severe.
During the first year, negative symptoms can progress, especially alogia, which may start off from a relatively low rate. Within 2 years, up to 25% of patients will have significant negative symptoms.
Psychotic symptoms tend to diminish as the individuals age, but negative symptoms tend to persist.
Prominent negative symptoms at disease onset, including alogia, are good predictors of worse outcomes.
Negative symptoms can arise in the presence of other psychiatric symptoms. Positive symptoms are a common cause of apathy, social withdrawal, and alogia. Secondary causes of negative symptoms, such as depression and demoralization, often remit within a year, which helps distinguish them from primary negative symptoms. Symptoms that do not diminish over a year with medications should be reconsidered as possible primary negative symptoms.
See also
Aphasia
Communication deviance
List of language disorders
Mutism
References
Other references
Medical signs
Schizophrenia | 0.766625 | 0.992558 | 0.760919 |
Discourse | Discourse is a generalization of the notion of a conversation to any form of communication. Discourse is a major topic in social theory, with work spanning fields such as sociology, anthropology, continental philosophy, and discourse analysis. Following pioneering work by Michel Foucault, these fields view discourse as a system of thought, knowledge, or communication that constructs our world experience. Since control of discourse amounts to control of how the world is perceived, social theory often studies discourse as a window into power. Within theoretical linguistics, discourse is understood more narrowly as linguistic information exchange and was one of the major motivations for the framework of dynamic semantics. In these approaches, expressions' denotations are equated with their ability to update a discourse context.
Social theory
In the humanities and social sciences, discourse describes a formal way of thinking that can be expressed through language. Discourse is a social boundary that defines what statements can be said about a topic. Many definitions of discourse are primarily derived from the work of French philosopher Michel Foucault. In sociology, discourse is defined as "any practice (found in a wide range of forms) by which individuals imbue reality with meaning".
Political science sees discourse as closely linked to politics and policy making. Likewise, different theories among various disciplines understand discourse as linked to power and state, insofar as the control of discourses is understood as a hold on reality itself (e.g. if a state controls the media, they control the "truth"). In essence, discourse is inescapable, since any use of language will have an effect on individual perspectives. In other words, the chosen discourse provides the vocabulary, expressions, or style needed to communicate. For example, two notably distinct discourses can be used about various guerrilla movements, describing them either as "freedom fighters" or "terrorists".
In psychology, discourses are embedded in different rhetorical genres and meta-genres that constrain and enable them: language talking about language. This is exemplified in the APA's Diagnostic and Statistical Manual of Mental Disorders, which prescribes the terms that have to be used in speaking about mental health, thereby mediating meanings and dictating practices of professionals in psychology and psychiatry.
Modernism
Modernist theorists focused on achieving progress and believed in natural and social laws that could be used universally to develop knowledge and, thus, a better understanding of society. Such theorists would be preoccupied with obtaining the "truth" and "reality", seeking to develop theories which contained certainty and predictability. Modernist theorists therefore understood discourse to be functional. Discourse and language transformations are ascribed to progress or the need to develop new or more "accurate" words to describe discoveries, understandings, or areas of interest. In modernist theory, language and discourse are dissociated from power and ideology and instead conceptualized as "natural" products of common sense usage or progress. Modernism further gave rise to the liberal discourses of rights, equality, freedom, and justice; however, this rhetoric masked substantive inequality and failed to account for differences, according to Regnier.
Structuralism (Saussure & Lacan)
Structuralist theorists, such as Ferdinand de Saussure and Jacques Lacan, argue that all human actions and social formations are related to language and can be understood as systems of related elements. This means that the "individual elements of a system only have significance when considered in relation to the structure as a whole, and that structures are to be understood as self-contained, self-regulated, and self-transforming entities". In other words, it is the structure itself that determines the significance, meaning, and function of the individual elements of a system. Structuralism has contributed to our understanding of language and social systems. Saussure's theory of language highlights the decisive role of meaning and signification in structuring human life more generally.
Poststructuralism (Foucault)
Following the perceived limitations of the modern era, emerged postmodern theory. Postmodern theorists rejected modernist claims that there was one theoretical approach that explained all aspects of society. Rather, postmodernist theorists were interested in examining the variety of experiences of individuals and groups and emphasized differences over similarities and shared experiences.
In contrast to modernist theory, postmodern theory is pessimistic regarding universal truths and realities. Hence, it has attempted to be fluid, allowing for individual differences as it rejects the notion of social laws. Postmodern theorists shifted away from truth-seeking and sought answers to how truths are produced and sustained. Postmodernists contended that truth and knowledge are plural, contextual, and historically produced through discourses. Postmodern researchers, therefore, embarked on analyzing discourses such as texts, language, policies, and practices.
Foucault
In the works of the philosopher Michel Foucault, a discourse is "an entity of sequences, of signs, in that they are enouncements (énoncés)." The enouncement (l’énoncé, "the statement") is a linguistic construct that allows the writer and the speaker to assign meaning to words and to communicate repeatable semantic relations to, between, and among the statements, objects, or subjects of the discourse. Internal ties exist between the signs (semiotic sequences). The term discursive formation identifies and describes written and spoken statements with semantic relations that produce discourses. As a researcher, Foucault applied the discursive formation to analyses of large bodies of knowledge, e.g. political economy and natural history.
In The Archaeology of Knowledge (1969), a treatise about the methodology and historiography of systems of thought ("epistemes") and knowledge ("discursive formations"), Michel Foucault developed the concepts of discourse. The sociologist Iara Lessa summarizes Foucault's definition of discourse as "systems of thoughts composed of ideas, attitudes, courses of action, beliefs, and practices that systematically construct the subjects and the worlds of which they speak." Foucault traces the role of discourse in the legitimation of society's power to construct contemporary truths, to maintain said truths, and to determine what relations of power exist among the constructed truths; therefore discourse is a communications medium through which power relations produce men and women who can speak.
The interrelation between power and knowledge renders every human relationship a power negotiation, because power is always present and so produces and constrains the truth. Power is exercised through rules of exclusion (discourses) that determine which subjects people can discuss; when, where, and how a person may speak; and which persons are allowed to speak. Because knowledge is both the creator of power and the creation of power, Foucault coined the term "power/knowledge" to show that it is "an abstract force which determines what will be known, rather than assuming that individual thinkers develop ideas and knowledge."
Interdiscourse studies the external semantic relations among discourses, as discourses exist in relation to other discourses.
Discourse analysis
There is more than one type of discourse analysis, and the definition of "discourse" shifts slightly between types. Generally speaking, discourse analyses can be divided into those concerned with "little d" discourse and "big D" Discourse. The former ("little d") refers to language-in-use, such as spoken communication; the latter ("big D") refers to sociopolitical discourses (language plus social and cultural contexts).
Common forms of discourse analysis include:
Critical discourse analysis
Conversation analysis
Foucauldian discourse analysis
Genre analysis
Narrative analysis
Formal semantics and pragmatics
In formal semantics and pragmatics, discourse is often viewed as the process of refining the information in a common ground. In some theories of semantics, such as discourse representation theory, sentences' denotations themselves are equated with functions that update a common ground.
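As a concrete illustration of this update view, here is a minimal Python sketch, assuming a toy model in which the common ground is a set of candidate possible worlds and each sentence's denotation is a function that discards worlds incompatible with it. The names (World, Context, asserts, and the sample facts) are invented for illustration and are not drawn from any formal-semantics library.

# Toy dynamic semantics: a world is the set of atomic facts true in it,
# the common ground is the set of worlds still considered possible, and
# a sentence denotes a function from context to updated context.
from typing import Callable, FrozenSet

World = FrozenSet[str]
Context = FrozenSet[World]
Denotation = Callable[[Context], Context]

def asserts(fact: str) -> Denotation:
    # Denotation of an assertion: keep only worlds where the fact holds.
    return lambda ctx: frozenset(w for w in ctx if fact in w)

w1: World = frozenset({"rain", "cold"})
w2: World = frozenset({"rain"})
w3: World = frozenset({"cold"})
common_ground: Context = frozenset({w1, w2, w3})

# Processing the discourse "It is raining. It is cold." narrows the
# common ground sentence by sentence.
for sentence in (asserts("rain"), asserts("cold")):
    common_ground = sentence(common_ground)

print(common_ground)  # only w1, where both facts hold, survives

Each assertion shrinks the common ground, which is the sense in which a sentence's meaning can be identified with its potential to change the context.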
See also
References
Further reading
Foucault, Michel (1980). "Two Lectures", in Power/Knowledge: Selected Interviews, edited by C. Gordon. New York: Pantheon Books.
Howard, Harry. (2017). "Discourse 2." Brain and Language, Tulane University. [PowerPoint slides].
External links
DiscourseNet, an international association for discourse studies.
Beyond open access: open discourse, the next great equalizer, Retrovirology 2006, 3:55
Discourse (Lun) in the Chinese tradition
Discourse analysis
Semantics
Sociolinguistics
Anthropology
Concepts in social philosophy
Debating | 0.764484 | 0.995315 | 0.760903 |
Holistic education | Holistic education is a movement in education that seeks to engage all aspects of the learner, including mind, body, and spirit. Its philosophy, which is also identified as holistic learning theory, is based on the premise that each person finds identity, meaning, and purpose in life through connections to their local community, to the natural world, and to humanitarian values such as compassion and peace.
Holistic education aims to call forth from people an intrinsic reverence for life and a passionate love of learning, gives attention to experiential learning, and places significance on "relationships and primary human values within the learning environment".
The term "holistic education" is often used to refer to a type of alternative education, as opposed to mainstream educational research and evidence-based education.
Background
Holistic education's origins have been associated with the emergence of the concept of instruction in ancient Greece and other indigenous cultures. This involved a method that focused on the whole person rather than on one or a few segments of an individual's experience. It formed part of the view that the world is a single whole and that learning cannot be separated from all of a person's experiences.
The term holistic education has been attributed to the South African military leader, statesman, scholar and philosopher, Field Marshal General Jan Christiaan Smuts (1870-1950), who is noted for his role in the foundation of the League of Nations, and the formation of the international peace organization, the United Nations. He drew from the ancient Greek conceptualization of holistic education to propose a modern philosophy of learning.
Smuts is considered the founder of "Holism", which he derived from the Greek word ολος, which means "whole". In his 1926 book Holism and Evolution, Smuts describes "holism" as the tendency in nature to form wholes that are greater than the sum of the parts through creative evolution. Today, this work is recognized as the foundation theory for systems thinking, complexity theory, neural networks, semantic holism, holistic education, and the general systems theory in ecology. Smuts' "holism" was also the inspiration for Emile Durkheim's concept of the "holistic society", as well as Alfred Adler's psychological approach, which views the individual as an "integrated whole".
There are also sources that credit Rudolf Steiner, John Dewey, and Maria Montessori as originators of the modern model of holistic education. Steiner, in particular, developed a holistic education framework based on the works of Johann Wolfgang von Goethe and H.P. Blavatsky. It introduced the concept of "imaginative teaching" and its role in the learner's self-actualization.
Development
It is difficult to map the history of holistic education, as in some respects its core ideas are not new but "timeless and found in the sense of wholeness in humanity's religious impetus".
The explicit application of holistic ideas to education has a clear tradition, however, whose originating theorists include:
Jean-Jacques Rousseau,
Ralph Waldo Emerson,
Henry Thoreau,
Bronson Alcott,
Johann Pestalozzi, and
Friedrich Fröbel.
More recent theorists are Rudolf Steiner,
Maria Montessori,
Francis Parker,
John Dewey,
Francisco Ferrer,
John Caldwell Holt,
George Dennison,
Kieran Egan,
Howard Gardner,
Jiddu Krishnamurti,
Carl Jung,
Abraham Maslow,
Carl Rogers,
Paul Goodman,
Ivan Illich, and
Paulo Freire.
Many scholars feel the modern "look and feel" of holistic education coalesced through two factors: the rise of humanist philosophies after World War II and the cultural paradigm shift beginning in the mid-1960s. In the 1970s, after the holism movement in psychology became much more mainstream, "an emerging body of literature in science, philosophy and cultural history provided an overarching concept to describe this way of understanding education – a perspective known as holism."
In July 1979, the first National Holistic Education Conference took place at the University of California at San Diego. The conference was presented by The Mandala Society and The National Center for the Exploration of Human Potential and was titled Mind: Evolution or Revolution? The Emergence of Holistic Education. For six years after, the Holistic Education Conference was combined with the Mandala Holistic Health Conferences at the University of California, San Diego. About three thousand professionals participated each year. Out of these conferences came the annual Journals of Holistic Health. Holistic education became an identifiable area of study and practice in the mid-1980s in North America. Since the early 2000s, some of the historically separate academic areas, Science, Technology, Engineering, and Mathematics (STEM) on the one hand, and the Humanities, Arts, and Social Sciences (HASS) on the other, have found new holistic common ground, as demonstrated in consensus reports on Integrating Social and Behavioral Sciences Within the Weather Enterprise (2018) and The Integration of the Humanities and Arts with Sciences, Engineering, and Medicine in Higher Education. Branches from the Same Tree (2018).
Philosophical framework for holistic education
Holistic education aims at helping students be the most that they can be. Abraham Maslow referred to this as "self-actualization". Education with a holistic perspective is concerned with the development of every person's intellectual, emotional, social, physical, artistic, creative and spiritual potentials. It seeks to engage students in the teaching/learning process and encourages personal and collective responsibility.
In describing the general philosophy of holistic education, Robin Ann Martin and Scott Forbes (2004) divided their discussion into two categories: the idea of "ultimacy" and Basil Bernstein's notion of sagacious competence.
Ultimacy
Religious; as in becoming "enlightened", seeing the light beyond difficulties and challenges. This can be done through increased spirituality. Spirituality is an important component in holistic education as it emphasizes the connectedness of all living things and stresses the "harmony between the inner life and outer life".
Psychological; as in Maslow's "self-actualization". Holistic education believes that each person should strive to be all that they can be in life. There are no deficits in learners, just differences.
Undefined; as in a person developing to the ultimate extent a human could reach and, thus, moving towards the highest aspirations of the human spirit.
Sagacious competence
Freedom (in a psychological sense).
Good judgment (self-governance).
Meta learning (each student learns in their "own way").
Social ability (more than just learning social skills).
Refining Values (development of character).
Self Knowledge (emotional development).
Curriculum
An application of holistic education to a curriculum has been described as transformational learning, in which instruction recognizes the wholeness of the learner, and the learner and the curriculum are seen not as separate but as connected. According to John Miller, this position is similar to the Quaker belief that there is "that of God in every one".
Various attempts to articulate the central themes of a holistic education, seeking to educate the whole person, have been made:
In holistic education the basic three R's have been said to be education for: Relationships, Responsibility and Reverence for all life.
First, children need to learn about themselves. This involves learning self-respect and self-esteem. Second, children need to learn about relationships. In learning about their relationships with others, there is a focus on social "literacy" (learning to see social influence) and emotional "literacy" (one's own self in relation to others). Third, children need to learn about resilience. This entails overcoming difficulties, facing challenges and learning how to ensure long-term success. Fourth, children need to learn about aesthetics. This encourages the student to see the beauty of what is around them and learn to have awe in life.
Curriculum is derived from the teacher listening to each child and helping the child bring out what lies within oneself.
Tools/teaching strategies of holistic education
With the goal of educating the whole child, holistic education promotes several strategies to address the question of how to teach and how people learn. First, the idea of holism advocates a transformative approach to learning. Rather than seeing education as a process of transmission and transaction, transformative learning involves a change in the frames of reference that a person might have. This change may include points of view, habits of mind, and worldviews. Holism understands knowledge as something that is constructed by the context in which a person lives. Therefore, teaching students to reflect critically on how we come to know or understand information is essential. As a result, if "we ask students to develop critical and reflective thinking skills and encourage them to care about the world around them they may decide that some degree of personal or social transformation is required."
Second, the idea of connections is emphasized as opposed to the fragmentation that is often seen in mainstream education. This fragmentation may include the dividing of individual subjects, dividing students into grades, etc. Holism sees the various aspects of life and living as integrated and connected, therefore, education should not isolate learning into several different components. Martin (2002) illustrates this point further by stating that, "Many alternative educators argue instead that who the learners are, what they know, how they know it, and how they act in the world are not separate elements, but reflect the interdependencies between our world and ourselves". Included in this idea of connections is the way that the classroom is structured. Holistic school classrooms are often small and consist of mixed-ability and mixed-age students. They are flexible in terms of how they are structured so that if it becomes appropriate for a student to change classes, (s)he is moved regardless of what time of year it is on the school calendar. Flexible pacing is key in allowing students to feel that they are not rushed in learning concepts studied, nor are they held back if they learn concepts quickly.
Third, along the same thread as the idea of connections in holistic education, is the concept of transdisciplinary inquiry. Transdisciplinary inquiry is based on the premise that division between disciplines is eliminated. One must understand the world in wholes as much as possible and not in fragmented parts. "Transdisciplinary approaches involve multiple disciplines and the space between the disciplines with the possibility of new perspectives 'beyond' those disciplines. Where multidisciplinary and interdisciplinary inquiry may focus on the contribution of disciplines to an inquiry, transdisciplinary inquiry tends to focus on the inquiry issue itself."
Fourth, holistic education proposes that meaningfulness is also an important factor in the learning process. People learn better when what is being learned is important to them. Holistic schools seek to respect and work with the meaning structures of each person. Therefore, the start of a topic would begin with what a student may know or understand from their worldview, what has meaning to them rather than what others feel should be meaningful to them. Meta-learning is another concept that connects to meaningfulness. In finding inherent meaning in the process of learning and coming to understand how they learn, students are expected to self-regulate their own learning. However, they are not completely expected to do this on their own. Because of the nature of community in holistic education, students learn to monitor their own learning through interdependence on others inside and outside of the classroom.
Finally, as mentioned above, community is an integral aspect in holistic education. As relationships and learning about relationships are keys to understanding ourselves, so the aspect of community is vital in this learning process. Scott Forbes stated, "In holistic education the classroom is often seen as a community, which is within the larger community of the school, which is within the larger community of the village, town, or city, and which is, by extension, within the larger community of humanity."
Teacher's role
In holistic education, the teacher is seen less as person of authority who leads and controls but rather is seen as "a friend, a mentor, a facilitator, or an experienced traveling companion". Schools should be seen as places where students and adults work toward a mutual goal. Open and honest communication is expected and differences between people are respected and appreciated. Cooperation is the norm, rather than competition. Thus, many schools incorporating holistic beliefs do not give grades or rewards. The reward of helping one another and growing together is emphasized rather than being placed above one another.
See also
Deschooling
Holism
Homeschooling
Unschooling
School movements that incorporate elements of holistic education
Camphill Schools
Democratic school and anarchistic free school
Forest School
Friends/Quaker Schools
Krishnamurti Schools
Montessori School
Reggio Emilia Inspired Schools
Waldorf Education (or Steiner Education)
Note on semantics
There is a debate on whether holistic education is connected to education in holistic health or spiritual practices such as massage and yoga. Some educators feel that holistic education is a part of holistic practices, while others feel that they are totally separate concepts.
Notes
Philosophy of education | 0.769224 | 0.989165 | 0.76089 |
The arts | The arts or creative arts are a vast range of human practices of creative expression, storytelling, and cultural participation. The arts encompass diverse and plural modes of thinking, doing, and being in an extensive range of media. Both dynamic and a characteristically constant feature of human life, they have developed into stylized and intricate forms. This is achieved through sustained and deliberate study, training, or theorizing within a particular tradition, across generations, and even between civilizations. The arts are a vehicle through which human beings cultivate distinct social, cultural, and individual identities while transmitting values, impressions, judgements, ideas, visions, spiritual meanings, patterns of life, and experiences across time and space.
Prominent examples of the arts include: visual arts (including architecture, ceramics, drawing, filmmaking, painting, photography, and sculpting), literary arts (including fiction, drama, poetry, and prose), and performing arts (including dance, music, and theatre). They can employ skill and imagination to produce objects and performances, convey insights and experiences, and construct new environments and spaces.
The arts can refer to common, popular, or everyday practices as well as more sophisticated, systematic, or institutionalized ones. They can be discrete and self-contained or combine and interweave with other art forms, such as combining artwork with the written word in comics. They can also develop or contribute to some particular aspect of a more complex art form, as in cinematography. By definition, the arts themselves are open to being continually redefined. The practice of modern art, for example, is a testament to the shifting boundaries, improvisation and experimentation, reflexive nature, and self-criticism or questioning that art and its conditions of production, reception, and possibility can undergo.
As both a means of developing capacities of attention and sensitivity and ends in themselves, the arts can simultaneously be a form of response to the world. It is a way to transform our responses and what we deem worthwhile goals or pursuits. From prehistoric cave paintings to ancient and contemporary forms of ritual to modern-day films, art has served to register, embody, and preserve our ever-shifting relationships with each other and the world.
Definition
The arts are considered various practices or objects done by people with skill, creativity, and imagination across cultures and history, viewed as a group. These activities include painting, sculpture, music, theatre, literature, and more. Art refers to the way of doing or applying human creative skills, typically in visual form.
History and classifications
In Ancient Greece, art and craft were referred to by the word techne. Ancient Greek art brought the veneration of the animal form and the development of equivalent skills to show musculature, poise, beauty, and anatomically correct proportions. Ancient Roman art depicted gods as idealized humans, shown with characteristic distinguishing features, e.g. Zeus' thunderbolt. In Byzantine and Gothic art of the Middle Ages, the dominant church insisted on the expression of Christian themes due to the overlap of church and state. Eastern art has generally worked in style akin to Western medieval art, namely a concentration on surface patterning and local colour (meaning the plain colour of an object, such as basic red for a red robe, rather than the modulations of that colour brought about by light, shade, and reflection). A characteristic of this style is that local colour is defined by an outline (a contemporary equivalent is the cartoon). This is evident, for example, in the art of India, Tibet, and Japan. Islamic art avoids the representation of living beings, particularly humans and other animals, in religious contexts. It instead expresses religious ideas through calligraphy and geometrical designs.
Classifications
In the Middle Ages, liberal arts were taught in European universities as part of the Trivium, an introductory curriculum involving grammar, rhetoric, and logic, and of the Quadrivium, a curriculum involving the "mathematical arts" of arithmetic, geometry, music, and astronomy. In modern academia, the arts can be grouped with, or as a subset of, the humanities.
The arts have been classified as seven: painting, architecture, sculpture, literature, music, performing, and cinema. Some view literature, painting, sculpture, and music as the central four arts, of which the others are derivative; drama is literature with acting, dance is music expressed through motion, and song is music with literature and voice. Film is sometimes called the "eighth" and comics the "ninth art" in Francophone scholarship, adding to the traditional "Seven Arts". Cultural fields like gastronomy are only sometimes considered as arts.
Visual arts
Architecture
Architecture is the art and science of designing buildings and structures. A wider definition would include the design of the built environment, from the macro level of urban planning, urban design, and landscape architecture, to the micro level of creating furniture. The word architecture comes from the Latin architectura, ultimately from a Greek term meaning "master builder, director of works." Architectural design usually must address feasibility and cost for the builder, as well as function and aesthetics for the user.
In modern usage, architecture is the art and discipline of creating or inferring an implied or apparent plan for a complex object or system. Some types of architecture manipulate space, volume, texture, light, shadow, or abstract elements, to achieve pleasing aesthetics. Architectural works may be seen as cultural and political symbols, or works of art. The role of the architect, though changing, has been central to the design and implementation of pleasingly built environments, in which people live.
Ceramics
Ceramic art is art made from ceramic materials (including clay), which may take forms such as pottery, tile, figurines, sculpture, and tableware. While some ceramic products are considered fine art, others are considered decorative, industrial, or applied art objects. Ceramics may also be considered artefacts in archaeology. Ceramic art can be made by one person or by a group of people. In a pottery or ceramic factory, a group of people design, manufacture, and decorate the pottery. Some pottery is regarded as art pottery. In a one-person pottery studio, ceramists or potters produce studio pottery. Ceramics excludes glass and mosaics made from glass tesserae.
Conceptual art
Conceptual art is art wherein the concept(s) or idea(s) involved in the work take precedence over traditional aesthetic and material concerns.
The inception of the term in the 1960s referred to a strict and focused practice of idea-based art that defied traditional visual criteria associated with the visual arts in its presentation as text. Through its association with the Young British Artists and the Turner Prize during the 1990s, its popular usage, particularly in the United Kingdom, developed as a synonym for all contemporary art that does not practice the traditional skills of painting and sculpture.
Drawing
Drawing is a means of making an image using any of a wide variety of tools and techniques. It generally involves making marks on a surface by applying pressure from a tool, or moving a tool across a surface. Common tools are graphite pencils, pen and ink, inked brushes, wax coloured pencils, crayons, charcoals, pastels, and markers. Digital tools with similar effects are also used. The main techniques used in drawing are line drawing, hatching, cross-hatching, random hatching, scribbling, stippling, and blending. An artist who excels in drawing is referred to as a drafter, draftswoman, or draughtsman. Drawing can be used to create art used in cultural industries such as illustrations, comics, and animation. Comics are often called the "ninth art" in Francophone scholarship, adding to the traditional "Seven Arts".
Painting
Painting is considered to be a form of self-expression. Drawing, gesture (as in gestural painting), composition, narration (as in narrative art), or abstraction (as in abstract art), among other aesthetic modes, may serve to manifest the expressive and conceptual intention of the practitioner. Paintings can be a wide variety of topics, such as photographic, abstract, narrative, symbolistic (Symbolist art), emotive (Expressionism), or political in nature (Artivism). Some modern painters incorporate different materials, such as sand, cement, straw, wood, or strands of hair, for their artwork texture. Examples of this are the works of Jean Dubuffet or Anselm Kiefer.
Photography
Photography as an art form refers to photographs that are created in accordance with the creative vision of the photographer. Art photography stands in contrast to photojournalism, which provides a visual account of news events, and commercial photography, the primary focus of which is to advertise products or services.
Sculpture
Sculpture is the branch of the visual arts that operates in three dimensions. It is one of the plastic arts. Durable sculptural processes originally used carving (the removal of material) and modelling (the addition of material, such as clay), in stone, metal, ceramics, wood, and other materials, but shifts in sculptural processes have led to almost complete freedom of materials and processes following modernism. A wide variety of materials may be worked by removal such as carving, assembled by welding or modelling, or moulded or cast.
Literary arts
Literature (also known as literary arts or language arts) is literally "acquaintance with letters", as in the first sense given in the Oxford English Dictionary. The noun "literature" comes from the Latin word littera, meaning "an individual written character (letter)." The term has generally come to identify a collection of writings, which in Western culture are mainly prose (both fiction and non-fiction), drama, and poetry. In much, if not all, of the world, artistic linguistic expression can be oral as well, and includes such genres as epic, legend, myth, ballad, other forms of oral poetry, and folktales. Comics, the combination of drawings or other visual arts with narrating literature, are called the "ninth art" in Francophone scholarship.
Performing arts
Performing arts comprise dance, music, theatre, opera, mime, and other art forms in which human performance is the principal product. Performing arts are distinguished by this performance element in contrast with disciplines such as visual and literary arts, where the product is an object that does not require a performance to be observed and experienced. Each discipline in the performing arts is temporal in nature, meaning the product is performed over a period of time. Products are broadly categorized as being either repeatable (for example, by script or score) or improvised for each performance. Artists who participate in these arts in front of an audience are called performers, including actors, magicians, comedians, dancers, musicians, and singers. Performing arts are also supported by the services of other artists or essential workers, such as songwriting and stagecraft. Performers adapt their appearance with tools such as costumes and stage makeup.
Dance
Dance generally refers to human movement, either used as a form of expression or presented in a social, spiritual, or performance setting. Choreography is the art of making dances, and the person who does this is called a choreographer. Definitions of what constitutes dance depend on social, cultural, aesthetic, artistic, and moral constraints and range from functional movement (such as folk dance) to codified virtuoso techniques such as ballet. In sports, gymnastics, figure skating, and synchronized swimming are dance disciplines. In the martial arts, "kata" are compared to dances.
Music
Music is defined as an art form whose medium is a combination of sounds. Though scholars agree that music generally consists of a few core elements, their exact definitions are debated. Commonly identified aspects include pitch (which governs melody and harmony), duration (including rhythm and tempo), intensity (including dynamics), and timbre. Though considered a cultural universal, definitions of music vary wildly throughout the world as they are based on diverse views of nature, the supernatural, and humanity. Music is differentiated into composition and performance, while musical improvisation may be regarded as an intermediary tradition. Music can be divided into genres and subgenres, although the dividing lines and relationships between genres are subtle, open to individual interpretation, and controversial.
Theatre
Theatre or theater (from Greek theatron; from theasthai, "behold") is the branch of the performing arts concerned with acting out stories in front of an audience using combinations of speech, gesture, music, dance, sound, and spectacle. In addition to the standard narrative dialogue style, theatre takes such forms as opera, ballet, mime, kabuki, classical Indian dance, and Chinese opera.
Multidisciplinary artistic works
Areas exist in which artistic works incorporate multiple artistic fields, such as film, opera, and performance art. While opera is often categorized among the performing arts of music, the word itself is Italian for "works", because opera combines artistic disciplines into a singular artistic experience. A traditional opera uses sets, costumes, acting, a libretto, singers, and an orchestra.
The composer Richard Wagner recognized the fusion of so many disciplines into a single work of opera, exemplified by his cycle Der Ring des Nibelungen ("The Ring of the Nibelung"). He did not use the term opera for his works, but instead Gesamtkunstwerk ("synthesis of the arts"), sometimes referred to as "music drama" in English, emphasizing the literary and theatrical components, which were as important as the music. Classical ballet is another form that emerged in the 17th century in which orchestral music is combined with dance.
Other works in the late 19th, 20th, and 21st centuries have fused other disciplines in creative ways, such as performance art. Performance art is a performance over time that combines any number of instruments, objects, and art within a predefined or less well-defined structure, some of which can be improvised. Performance art may be scripted, unscripted, random, or carefully organized—even audience participation may occur. John Cage is regarded by many as a performance artist rather than a composer, although he preferred the latter term. He did not compose for traditional ensembles. Cage's composition Living Room Music, composed in 1940, is a quartet for unspecified instruments, really non-melodic objects, that can be found in the living room of a typical house, hence the title.
Other arts
Applied arts
The applied arts are the application of design and decoration to everyday, functional objects to make them aesthetically pleasing. The applied arts include fields such as industrial design, illustration, and commercial art. The term "applied art" is used in distinction to the fine arts, where the latter is defined as arts that aim to produce objects that are beautiful or provide intellectual stimulation but have no primary everyday function. In practice, the two often overlap.
Video games
Video games are multidisciplinary works that include non-controversially artistic elements such as visuals and sound, as well as an emergent experience arising from the nature of their interactivity. Within the video game community, debates surround whether video games should be classified as an art form and whether game developers—AAA or indie—should be classified as artists. Hideo Kojima, a video game designer considered a gaming auteur, argued in 2006 that video games are a type of service rather than an art form. In the social sciences, cultural economists show how playing video games is conducive to involvement in more traditional art forms. In 2011, the National Endowment for the Arts included video games in its definition of a "work of art", and the Smithsonian American Art Museum presented an exhibition titled The Art of Video Games in 2012.
Arts critique
Art criticism is the discussion or evaluation of art. Art critics usually criticize art in the context of aesthetics or the theory of beauty. A goal of art criticism is the pursuit of a rational basis for art appreciation, but it is questionable whether such criticism can transcend prevailing sociopolitical circumstances.
The variety of artistic movements has resulted in a division of art criticism into different disciplines, which may each use different criteria for their judgements. The most common division in the field of criticism is between historical criticism and evaluation, a form of art history, and contemporary criticism of work by living artists.
Despite perceptions that criticism is a lower-risk activity than making art, opinions of current art are liable to corrections with the passage of time. Critics of the past can be ridiculed for dismissing artists now venerated (like the early work of the Impressionists). Some art movements themselves were named disparagingly by critics, with the name later adopted as a badge of honour by the artists of the style with the original negative meaning forgotten, e.g. Impressionism and Cubism. Artists have had an uneasy relationship with their critics. Artists usually need positive opinions from critics for their work to be viewed and purchased.
Many variables, such as aesthetics, cognition, and perception, determine judgements of art. Aesthetic, pragmatic, expressive, formalist, relativist, processional, imitation, ritual, cognition, mimetic, and postmodern theories are some of the many theories used to criticize and appreciate art. Art criticism and appreciation can be subjective, based on personal preference toward aesthetics and form, on the elements and principles of design, and on social and cultural acceptance.
Education
Arts in education is a field of educational research and practice informed by investigations into learning through arts experiences. In this context, the arts can include performing arts education (dance, drama, music), literature and poetry, storytelling, visual arts education in film, craft, design, digital art, media and photography.
Politics
A strong relationship between the arts and politics, particularly between various kinds of art and power, occurs across history and cultures. As they respond to events and politics, the arts take on political as well as social dimensions, becoming themselves a focus of controversy and a force of political and social change.
One observation is that the artist has a free spirit. For instance, Pushkin, a well-regarded writer, attracted the irritation of Russian officialdom, and particularly of the Tsar, since he "instead of being a good servant of the state in the rank and file of the administration and extolling conventional virtues in his vocational writings (if write he must), composed extremely arrogant and extremely independent and extremely wicked verse in which dangerous freedom of thought was evident in the novelty of his versification, in the audacity of his sensual fancy, and in his propensity for making fun of major and minor tyrants."
Artists use their work to express their political views and promote social change, ranging from negative influence in the form of hate speech to positive influence through artivism. Governments, in turn, use art as propaganda to promote their own agendas.
External links
Topic Dictionaries at Oxford Learner's Dictionaries.
Definition of Art by Lexico.
Aesthetics
Humanities
Military psychology
Military psychology is a specialization within psychology that applies psychological science to promote the readiness of military members, organizations, and operations. Military psychologists support the military in many ways, including through direct clinical care, consultation with military commanders, teaching and support of military training, and research relevant to military operations and personnel.
Military psychology has been growing as a field since the early 20th century, reflecting steadily growing demand for clinical and operational applications of psychology. There are many stressors associated with military service, including exposure to high-risk training and combat. As such, psychologists are a critical support component, assisting military leaders in designing appropriate training programs, providing oversight of those programs, and assisting military members as they navigate the challenges of military training and their new lifestyle.
Military psychology covers a wide range of fields throughout the military, including operational, tactical, and occupational psychology. Gender differences between military-trained personnel who seek mental health assistance have been extensively studied. Specific concerns include posttraumatic stress disorder (PTSD) associated with combat, and guilt and family or partner difficulties accompanying the separations of extended or frequent deployments. Clinical providers in military psychology often focus on the treatment of stress, fatigue, and other personal readiness issues.
Previous wars such as World War II, the Korean War, and the Vietnam War provide great insight into the workings and practices of military psychology and into how those practices have changed and assisted the military over the years.
Role
The military is a group of individuals trained and equipped to perform national security tasks in unique, often chaotic and trauma-filled situations. These situations can include the front lines of battle, national emergencies, counter-terrorism support, allied assistance, or disaster response scenarios in which service members provide relief aid to the host populations of friendly and enemy states alike. Though many psychologists have a general understanding of human responses to traumatic situations, military psychologists are uniquely trained and experienced specialists in applied science and practice among this special population. While service members provide direct aid to the victims of events, military psychologists provide specialized aid to members, their families, and the victims of military operations as they cope with what are often "normal" responses to uncommon and abnormal circumstances. Military psychologists can assess, diagnose, and treat service members, and can recommend the duty status most suitable for the optimal well-being of the individual, group, and organization. Through group therapy, individual therapy, and behavior modification, these psychologists actively treat psychological disorders, most commonly emotional trauma. When counseling members of a service member's family, they are most often tasked with providing grief counseling after the loss of a loved one in the line of duty. Events that affect the mental state, resilience, or psychological assets and vulnerabilities of the warrior and the command are where military psychologists are best equipped to meet the unique challenges and provide expert care and consultation to preserve the behavioral health of the fighting force.
In addition to the specialized roles previously mentioned, military psychologists often provide support to many non-healthcare-related activities. For example, military psychologists may lend their expertise and training to consultation on hostage negotiations. Military psychologists are not hostage negotiators; however, they often provide consultation to those directly communicating with hostage-takers, in a manner that seeks the safety and protection of all involved. Military psychologists may also apply their science to aviation selection and training, to the study and application of survival training, and to the selection of personnel for special military duties.
Another common practice domain for military psychologists is performing fitness-for-duty evaluations, especially in high-risk and high-reliability occupations. These evaluations include both basic entry examinations and career-progression examinations, such as those conducted when individuals seek promotion, higher-classification clearance status, or assignment to specialized, hazardous, and mission-critical working conditions. When operational commanders become concerned about the impact of continuous, critical, and traumatic operations on those in their command, they often consult with a military psychologist.
The fitness evaluations might lead to command-directed administrative actions or provide the information necessary for decisions by a medical board or other tribunal. They must therefore be thoroughly conducted by unbiased individuals with the experience and training necessary to render a professional opinion that is critical to key decision makers. Military psychologists must be well versed in the art and science of psychology as specialized applied practitioners. They must also be highly competent generalists in the military profession, able to understand both professions well enough to examine human behavior in the context of military operations. It takes a psychologist several years beyond the doctorate to develop the expertise necessary to integrate psychology with the complex needs of the military.
Another very select and infrequent use of military psychology is in the interview of subjects, the interrogation of prisoners, and the vetting of those who may provide information of operational or intelligence value that would enhance the outcomes of friendly military operations or reduce friendly and enemy casualties. Applied here, psychology's scientific principles allow the interviewer, agent, or interrogator to obtain as much information as possible through non-invasive means, without resorting to active measures or risking violations of the rules of engagement, host-nation agreements, international and military law, or the Geneva Conventions' guidelines to which the United States and its allies subscribe, regardless of where many modern belligerent countries stand on international law and United Nations agreements.
Area of study
The goals and missions of military psychologists have been retained over the years, varying mainly in the focus and intensity of the research devoted to each sector. Research work as a military psychologist entails personnel research, such as determining which traits are best suited to which positions, designing training procedures, and analyzing which variables affect the health and performance of military personnel. The need for mental health care is now an expected part of high-stress military environments. Posttraumatic stress disorder (PTSD) is treated with more seriousness and credibility than its sufferers received in the past, and it is highlighted in treatment programs. More extensive post-deployment screenings now take place to home in on problematic recoveries that previously went unnoticed and untreated.
Terrorism
Terrorism and counterterrorism, information management, and psychological warfare are developing, value-added roles for the applied aspects of military psychology. For instance, contrary to common myths and stereotypes that tend to portray modern terrorists as mentally disturbed individuals, most terrorists are far from that typology, according to studies by behavioral and social scientists who have either directly interviewed and observed terrorists or conducted meta-analytic studies of terrorism and terrorists.
Terrorists have tended to be among the better educated in their host countries. They often have developed a well-thought-out, though not often publicized or well-articulated, rigid ideology that provides the foundation for their strategy and tactics. Psychologically disturbed terrorists increase the risk of damage to a terror organization's strategic outcomes. As in any organization, mentally disturbed members are a liability, and the leaders of terrorist groups are well aware of the risks such persons present. Like any effective organizational leader, the effective terrorist will try to recruit the best person for the job. It is doubtful, however, that modern terrorist groups would adopt the affirmative-action and other hiring practices dictated under employment law in the United States or other Western countries.
It is important to understand when and how the label of terrorism is applied because of its psychological impact, as suggested above. The causes, goals, methodology, and strategy of the terrorist mindset are well suited to psychological inquiry and to the development of the strategy and tactics used to confront it. Terrorism is an ideology that uses behavioral, emotional, and group dynamics, along with social and psychological principles, to influence populations for political purposes. It is a form of psychological warfare. Terrorists are experts in the use of fear, violence, threats of violence, and trauma to advance their political agenda. They seek psychological control and use violent behavior to cause the population to behave in ways that disrupt and destroy the existing political processes and symbols of political power. They control people by using deep primal emotions to elicit a reaction and shape behavior.
The goal of a terrorist is to use violence to create a natural fear of death and dismemberment and to harness that fear to change or shape political behavior, control thought, and modify speech. Military and operational psychologists, as highly trained and experienced experts in the art and science of both the military and psychology professions, have a great deal of potential in this unique operational environment.
Operational psychology
Operational psychology is a specialty within the field of psychology that applies behavioral science principles through the use of consultation to enable key decision makers to more effectively understand, develop, target, and influence an individual, group or organization to accomplish tactical, operational, or strategic objectives within the domain of national security or national defense. This is a relatively new sub-discipline that has been employed largely by psychologists and behavioral scientists in military, intelligence, and law enforcement arenas (although other areas of public safety employ psychologists in this capacity as well). While psychology has been utilized in non-health related fields for many decades, recent years have seen an increased focus on its national security applications. Examples of such applications include the development of counterinsurgency strategy through human profiling, interrogation and detention support, information-psychological operations, and the selection of personnel for specialized military or other public safety activities.
Recently, operational psychology has been under increased scrutiny due to allegations of unethical conduct by some practitioners supporting military and law enforcement interrogations. As a result, a small group of psychologists have raised concerns about the ethics of such practice. Supporters of operational psychology have responded by providing an ethical defense of such activity. They argue that the American Psychological Association's ethical code is sufficient to support operational psychologists in a number of activities (to include legal interrogation by the military and other law enforcement agencies).
In response to this controversy, the American Psychological Association (APA) assembled a cross-divisional task force to draft professional practice guidelines built around the APA ethics code and related policies. These guidelines were adopted by the APA's Council of Representatives in August 2023 at their annual convention.
Tactical psychology
Tactical psychology is "a sharp focus on what soldiers do once they are in contact with the enemy...on what a front-line soldier can do to win a battle". It combines psychology and historical analysis (the application of statistics to military historical data) to find out how tactics make the enemy freeze, flee or fuss, instead of fight. Tactical psychology examines how techniques like suppressive fire, combined arms or flanking reduce the enemy's will to fight.
Health, organizational, and occupational psychology
Military psychologists perform work in a variety of areas, including operating mental health and family counseling clinics, performing research to help select recruits for the armed forces, determining which recruits will be best suited for various military occupational specialties, and performing analysis on humanitarian and peacekeeping missions to determine procedures that could save military and civilian lives. Some military psychologists also work to improve the lives of service personnel and their families. Other military psychologists work with large social policy programs within the military that are designed to increase diversity and equal opportunity. More modern programs employ the skills and knowledge of military psychologists to address issues such as integrating diverse ethnic and racial groups into the military and reducing sexual assault and discrimination. Others assist in the employment of women in combat positions and other positions traditionally held by men. Other responsibilities include helping to utilize low-capability recruits and rehabilitate drug-addicted and wounded service members.
Many military psychologists are in charge of drug testing and of psychological treatment for mental illnesses such as alcohol and substance abuse. In terms of the prevalence of psychological issues in the military, active-duty members and veterans most commonly struggle with PTSD, anxiety, depression, suicidal ideation, and substance abuse. Worsening psychological symptoms due to potentially traumatic events can cause decision-making impairments, which in high-stress situations can heavily impact the safety of the individual and their unit. Veteran men who served in the Army and Marine Corps have shown poorer mental health, and higher use of alcohol and drugs, than those who served in the Air Force. Research shows high rates of alcohol use in the military, with a higher prevalence in servicemen than in servicewomen. In modern times, the advice of military psychologists is heard and taken into consideration in national policy more than ever before, and there are now more psychologists employed by the U.S. Department of Defense than by any other organization in the world. Since the downsizing of the military in the 1990s, however, there has been a considerable reduction in psychological research and support in the armed forces.
Feminism
Women in military roles is an area of study receiving increasing attention. Currently, women make up 10–15% of the armed forces. Gender integration in the military has been an ongoing process: in 1948, the Women's Armed Services Integration Act allowed women's units to be part of the federal forces, and in 1976 women were officially permitted to enter the three main Department of Defense service academies, which only men had previously been allowed to attend. While this decision was highly debated, research has shown that gender integration has resulted in men having more positive attitudes toward working in combat positions with women. However, as women have moved away from nursing and helping roles, increasing attention has been given to how the brutal realities of combat affect women psychologically. Research shows that, when affected, women tend to ask for help more than men do, thus avoiding much of the long-term mental suffering that male soldiers face after their deployment has ended. Among the mental health issues researchers have recently examined are the links between PTSD, sexual harassment, and sexual trauma. Reports indicate that military personnel who experience sexual trauma have a higher likelihood of being diagnosed with a mental health condition during their lifetime (e.g., PTSD) than their civilian counterparts. There are gender differences with regard to sexual assault and harassment while on deployment: statistically, women experience more sexual assault than men. A large majority of military members turn away from seeking psychological help because they fear differential treatment from leaders.
History
Psychological stress and disorders have always been a part of military life, especially during and after wartime, but the mental health side of military psychology has not always received the awareness it does now. Even in the present day, much more research and awareness are needed in this area.
One of the first institutions created to care for military psychiatric patients was St. Elizabeths Hospital in Washington, D.C. Formerly known as the United States Government Hospital for the Insane, the hospital was founded by the United States Congress in 1855; though still operational, it fell into a state of disrepair, with revitalization plans scheduled to begin in 2010.
Early work
In 1890, James McKeen Cattell coined the term "mental tests". Cattell studied under Wilhelm Wundt at Leipzig in Germany and strongly advocated for psychology to be viewed as a science on par with the physical and life sciences. He promoted the need for standardized procedures and the use of norms, and advocated statistical analysis to study individual differences. He was unwavering in his opposition to America's involvement in World War I.
Lightner Witmer, who also spent some time working under Wundt, changed the scene for psychology from his position at the University of Pennsylvania when he coined the term "clinical psychology" and outlined a program of training and study; this model for clinical psychology is still followed in modern times. In 1896 he founded the first psychological clinic there, and eleven years later, in 1907, he founded the journal The Psychological Clinic.
Also in 1907, a routine psychological screening plan for hospitalized psychiatric patients was developed by Shepard Ivory Franz, a civilian research psychologist at St. Elizabeths Hospital. Two years later, under the leadership of William Alanson White, St. Elizabeths became known for the research and training of psychiatrists and military medical officers. In 1911 Hebert Butts, a Navy medical officer stationed at St. Elizabeths, published the first protocol for psychological screening of Navy recruits, based on Franz's work.
Intelligence testing in the U.S. military
Lewis M. Terman, a professor at Stanford University, revised the Binet-Simon Scale in 1916, renaming it the Stanford–Binet Revision. This test marked the beginning of the "intelligence testing movement", and such testing was administered to over 170,000 soldiers in the United States Army during World War I. Robert M. Yerkes published the results of these tests in 1921 in a document that became known as the Army Report.
Two tests initially made up the military's intelligence testing program: the Army Alpha and the Army Beta. They were developed to evaluate vast numbers of military recruits, both literate (Army Alpha) and illiterate (Army Beta). The Army Beta test was designed to "measure native intellectual capacity", and it also served to test non-English-speaking service members.
The standardized intelligence and entrance tests used by each military branch in the United States have transformed over the years. Finally, in 1974, "the Department of Defense decided that all Services should use Armed Services Vocational Aptitude Battery (ASVAB) for both screening enlistees and assigning them to military occupations. Combining selection and classification testing made the testing process more efficient. It also enabled the Services to improve the matching of applicants with available job positions and allowed job guarantees for those qualified". This went fully into effect in 1976.
Yerkes and war
Robert M. Yerkes, while president of the American Psychological Association (APA) in 1917, worked with Edward B. Titchener and a group of psychologists known as the "Experimentalists". Their work resulted in a plan for APA members to offer their professional services to the World War I effort, even though Yerkes himself was known to oppose American involvement in the war. It was decided that psychologists could provide support in developing methods for the selection of recruits and the treatment of war victims. This was spurred, in part, by America's growing interest in the work of Alfred Binet in France on mental measurement, as well as by the scientific management movement to enhance worker productivity.
In 1917, Yerkes was commissioned as a major in the U.S. Army Medical Service Corps. In a plan proposed to the Surgeon General, Yerkes wrote: "The Council of the American Psychological Association is convinced that in the present emergency American psychology can substantially serve the Government, under the medical corps of the Army and Navy, by examining recruits with respect to intellectual deficiency, psychopathic tendencies, nervous instability, and inadequate self-control". In 1918, the Army Division of Psychology in the Medical Department was established at the medical training camp at Fort Oglethorpe, Georgia, to train personnel to provide mental testing of large groups.
This was also the era when the condition referred to as "shell shock" was first seriously studied by psychologists, and when standardized screening tests for pilots were administered.
World War II
World War II ushered in an era of substantial growth for the psychological field, centered on four major areas: testing for individual abilities, applied social psychology, instruction and training, and clinical psychology. During World War II, the Army General Classification Test (AGCT) and the Navy General Classification Test (NGCT) were used in place of the Army Alpha and Army Beta tests for similar purposes.
The United States Army had no unified program for the use of clinical psychologists until 1944, toward the end of World War II. Before this time, no clinical psychologists served in Army hospitals under the supervision of psychiatrists, owing both to psychologists' opposition to this type of service and to the limited role the Army assigned to psychiatry. At the time, the only psychiatric screening given to the ever-increasing numbers of military recruits was an interview lasting only three minutes, which could weed out only the most severely disturbed recruits. Under these conditions, it was impossible to determine which seemingly normal recruits would crack under the strain of military duties, and the need for clinical psychologists grew. By 1945 there were over 450 clinical psychologists serving in the U.S. Army.
Military psychology matured well beyond the aforementioned areas that had concerned psychologists up to this time, branching into sectors that included military leadership, the effects of environmental factors on human performance, military intelligence, psychological operations and warfare (as conducted by special forces such as PSYOP units), selection for special duties, and the influences of personal background, attitudes, and the work group on soldier motivation and morale.
Korean War
The Korean War was the first war in which clinical psychologists served overseas, positioned in hospitals as well as combat zones. Their particular roles were vague, broad, and fairly undefined, except in the U.S. Air Force, which provided detailed job descriptions. The Air Force also outlined the standardized tests and procedures to be used for evaluating recruits.
Vietnam War
In the Vietnam War, significant challenges obstructed the regular use of psychologists to support combat troops. Mental health teams were very small, usually consisting of one psychiatrist, one psychologist, and three or four enlisted corpsmen. Quite often, medical officers, including psychologists, worked in severe conditions with little or no field experience. Despite these challenges, military psychiatry had improved compared with previous wars, focusing on maximizing function and minimizing disability through preventive and therapeutic measures.
Global War on Terror
A 2014 study of soldiers who had mental health problems after Overseas Contingency Operation service found that a majority of them had symptoms before they enlisted.
See also
Army Alpha
Army Beta
Center for Deployment Psychology
Human subject research
Military science
MKULTRA
Morale
Psychological warfare
Unit cohesion
External links
The Center for Deployment Psychology at the Uniformed Services University of the Health Sciences
Military life
Military medicine
Military supporting service occupations
Military medical personnel
Military veterans topics
Manipulation (psychology)
In psychology, manipulation is defined as an action designed to influence or control another person, usually in an underhanded or unfair manner that facilitates one's personal aims. Methods used to manipulate another person include seduction, suggestion, coercion, and blackmail to induce submission. Manipulation is generally considered a dishonest form of social influence, as it is used at the expense of others.
Differentiation
Manipulation differs from general influence and persuasion. Non-manipulative influence is generally perceived as harmless, and it is not seen as unduly coercive of the individual's right to accept or reject influence. Persuasion is the ability to move others to a desired action, usually within the context of a specific goal. Persuasion often attempts to influence a person's beliefs, religion, motivations, or behavior. Influence and persuasion are neither positive nor negative, unlike manipulation, which is strictly negative.
Elements of manipulation
While the motivations for manipulation are mostly self-serving, certain styles of social influence can be intended to benefit others. Manipulation has been referred to as the use of strategies "to advance personal agendas or self-serving motives at the expense of others", and is usually considered antisocial behavior. Pro-social behavior, by contrast, is a voluntary act intended to help or benefit another individual or group of individuals and is an important part of empathy.
Different measures of manipulativeness focus on different aspects or expressions of manipulation, and tend to paint slightly different pictures of its predictors. Features such as low empathy, high narcissism, use of self-serving rationalizations, and an interpersonal style marked by high agency (dominance) and low communion (i.e. cold-heartedness) are consistent across measures.
Manipulative behaviors typically exploit psychological vulnerabilities in the target. Several authors have catalogued both the control tactics manipulators use and the vulnerabilities they exploit.
Harriet B. Braiker
Harriet B. Braiker identified the following ways that manipulators control their victims:
Positive reinforcement: includes praise, superficial charm, superficial sympathy (crocodile tears), excessive apologizing, money, approval, gifts, attention, facial expressions such as a forced laugh or smile, and public recognition.
Negative reinforcement: involves removing one from a negative situation as a reward.
Gaslighting: making someone question their own reality.
Intermittent or partial reinforcement: Partial or intermittent negative reinforcement can create an effective climate of fear and doubt. Partial or intermittent positive reinforcement can encourage the victim to persist.
Punishment: includes nagging, yelling, the silent treatment, intimidation, threats, swearing, emotional blackmail, guilt trips, sulking, crying, and playing the victim.
Traumatic one-trial learning: using verbal abuse, explosive anger, or other intimidating behavior to establish dominance or superiority; even one incident of such behavior can condition or train victims to avoid upsetting, confronting or contradicting the manipulator.
According to Braiker, manipulators exploit the following vulnerabilities (buttons) that may exist in victims:
The desire to please
Addiction to earning the approval and acceptance of others
Emotophobia (fear of negative emotion; i.e. a fear of expressing anger, frustration or disapproval)
A lack of assertiveness and ability to say no
A blurry sense of identity (with soft personal boundaries)
Low self-reliance
Possessing an external locus of control
Manipulators can have various possible motivations, including but not limited to:
The need to advance their own purposes and personal gain at (virtually any) cost to others
A strong need to attain feelings of power and superiority in relationships with others (compare megalomania, associated with, for example, narcissistic personality disorder)
A want and need to feel in control
A desire to gain a feeling of power over others in order to raise their perception of self-esteem
Boredom, or growing tired of one's surroundings; seeing manipulation as a game more than hurting others
Covert agendas, criminal or otherwise, including financial manipulation (often seen when intentionally targeting the elderly or unsuspecting, unprotected wealthy for the sole purpose of obtaining victims' financial assets)
Not identifying with underlying emotions (including experiencing commitment phobia), and subsequent rationalization (offenders do not manipulate consciously, but rather try to convince themselves of the invalidity of their own emotions)
A lack of self-control over impulsive and anti-social behaviour, leading to pre-emptive or reactionary manipulation to maintain image
George K. Simon
According to psychology author George K. Simon, successful psychological manipulation primarily involves the manipulator:
Concealing aggressive intentions and behaviors and being affable.
Knowing the psychological vulnerabilities of the victim to determine which tactics are likely to be the most effective.
Having a sufficient level of ruthlessness to have no qualms about causing harm to the victim if necessary.
Techniques of manipulators may include lying (by commission or omission), denial, rationalization, minimization, selective inattention, diversion, evasion, covert intimidation, guilt-tripping, shaming, vilifying the victim, playing the victim role, playing the servant role, seduction, projecting blame, and feigning innocence or confusion.
Martin Kantor
Kantor advises in his 2006 book The Psychopathology of Everyday Life: How Antisocial Personality Disorder Affects All of Us that vulnerability to psychopathic manipulators involves being too:
Dependent – dependent people need to be loved and are therefore gullible and liable to say yes to something to which they should say no.
Immature – has impaired judgment and so tends to believe exaggerated advertising claims.
Naïve – cannot believe there are dishonest people in the world, or takes it for granted that if there are any, they will not be allowed to prey on others.
Impressionable – overly seduced by charmers.
Trusting – people who are honest often assume that everyone else is honest. They are more likely to commit themselves to people they hardly know without checking credentials, etc., and less likely to question so-called experts.
Careless – not giving a sufficient amount of thought or attention to potential harm or errors.
Lonely – lonely people may accept any offer of human contact. A psychopathic stranger may offer human companionship for a price.
Narcissistic – narcissists are prone to falling for unmerited flattery.
Impulsive – make snap decisions.
Altruistic – the opposite of psychopathic: too honest, too fair, too empathetic.
Frugal – cannot say no to a bargain even if they know the reason it is so cheap.
Materialistic – easy prey for loan sharks or get-rich-quick schemes.
Greedy – the greedy and dishonest may fall prey to a psychopath who can easily entice them to act in an immoral way.
Masochistic – lack self-respect and so unconsciously let psychopaths take advantage of them. They think they deserve it out of a sense of guilt.
The elderly – the elderly can become fatigued and less capable of multi-tasking. When hearing a sales pitch they are less likely to consider that it could be a con. They are more likely to give money to someone with a hard-luck story. See elder abuse.
Assessment tools
MACH-IV
Manipulativeness is a primary feature of the Machiavellianism construct. The MACH-IV, a 20-item scale conceptualized by Richard Christie and Florence Geis, is a popular and widely used psychological measure of manipulative and deceptive behavior.
Emotional manipulation scale
The emotional manipulation scale is a ten-item questionnaire developed in 2006 through factor analysis, primarily to measure one's tendency to use emotions to one's advantage in controlling others. At the time of its publication, emotional intelligence assessments did not specifically examine manipulative behavior and were instead predominantly focused on assessment of the Big Five personality traits.
Managing the emotions of others scale
The "Managing the Emotions of Others Scale" (MEOS) was developed in 2013 through factor analysis to measure the ability to change emotions of others. The survey questions measure six categories: mood (or emotional state) enhancement, mood worsening, concealing emotions, capacity for inauthenticity, poor emotion skills, and using diversion to enhance mood. The enhancement, worsening and diversion categories have been used to identify the ability and willingness of manipulative behavior. The MEOS has also been used for assessing emotional intelligence, and has been compared to the HEXACO model of personality structure, for which the capacity for inauthenticity category in the MEOS was found to correspond to low honesty-humility scores on the HEXACO.
Manipulation and personality disorders
Manipulative tendencies may derive from cluster B personality disorders such as narcissistic personality disorder, antisocial personality disorder, and borderline personality disorder. Manipulative behavior has also been related to one's level of emotional intelligence. Discussion of manipulation may vary depending on which behaviors are specifically included and on whether one is referring to the general population or to clinical populations.
Antisocial personality disorder features deceit and manipulation of others as an explicit criterion. This runs the gamut of deception, from lying and superficial displays of charisma to frequent use of aliases and disguises and criminal fraudulence. The Alternative Model of Personality Disorders (AMPD) in Section III of DSM-5 requires the presence of manipulative behaviour for a diagnosis of ASPD: two of the seven listed symptoms (deceitfulness and manipulativeness) reflect such tendencies, with six required for diagnosis (the others being impulsivity, irresponsibility, risk-taking, callousness, and hostility). The related syndrome of psychopathy also features pathological lying and manipulation for personal gain, as well as superficial charm, as cardinal features.
Borderline personality disorder is unique in the grouping in that "borderline" manipulation is characterized as unintentional and dysfunctional. Marsha M. Linehan has stated that people with borderline personality disorder often exhibit behaviors which are not truly manipulative but are erroneously interpreted as such. According to Linehan, these behaviors often appear as unthinking manifestations of intense pain and are often not deliberate enough to be considered truly manipulative. In the DSM-5, manipulation was removed as a defining characteristic of borderline personality disorder.
Conduct disorder is a pattern of antisocial behavior occurring in children and adolescents. Individuals with this disorder are characterized by a lack of empathy, a low sense of guilt, and shallow emotionality; aggression and violence are two further characteristic factors. For a child to be diagnosed with the disorder, the behavior must persist for at least 12 months.
Factitious disorder is a mental illness in which individuals purposely fake symptoms of a physical or psychological condition. Fabricating illness allows these individuals to feel a thrill and to receive attention and care through hospital admission and treatment. Persistent and excessive preoccupation with illness and histories of abuse in early childhood are common among these individuals, and the disorder has been linked to borderline personality disorder.
Histrionic personality disorder is a personality disorder characterized by dramatic and attention-seeking behavior. Individuals with the disorder exhibit inappropriately alluring tactics and irregular emotional patterns. Histrionic symptoms include seeking reassurance, rapidly shifting and shallow emotions, and discomfort when not the center of attention. Histrionic and narcissistic personality disorders overlap in that decisions in both tend to be sporadic and unreliable.
Narcissistic personality disorder is characterized by a belief in one's own superiority, exhibitionism, self-centeredness, and a lack of empathy. Individuals with NPD can be charming but also show exploitative behaviors in the interpersonal domain. They are motivated by success and beauty and may have feelings of entitlement. Those with the disorder often engage in assertive self-enhancement and antagonistic self-protection. All of these factors can lead an individual with narcissistic personality disorder to manipulate others.
Under the ICD-11's dimensional model of personality pathology, deceitful, manipulative, and exploitative behaviours are cardinal expressions of the lack-of-empathy facet of the Dissociality trait domain.
See also
Appeal to emotion
Brainwashing
Bullying
Culture of fear
Coercive persuasion
Confidence trick
Crowd manipulation
Covert hypnosis
Covert interrogation
Dark triad
Deception
Demagogy
Discrediting tactic
Dumbing down
Emotional blackmail
Fear mongering
Gaslighting
Half-truth
Internet manipulation
Isolation to facilitate abuse
List of confidence tricks
Lying
Master suppression techniques
Media manipulation
Mind control
Mobbing
Psychological abuse
Psychological warfare
Sheeple
Social engineering (political science)
Social influence
Whispering campaign
Social influence
Narcissism
Psychopathy
Anti-social behaviour
Borderline personality disorder
Nomothetic
Nomothetic literally means "proposition of the law" (Greek derivation) and is used in philosophy, psychology, and law with differing meanings.
Etymology
In general humanities usage, nomothetic may be used in the sense of "able to lay down the law" or "having the capacity to posit lasting sense" (from nomothetēs νομοθέτης "lawgiver", from νόμος "law" and the Proto-Indo-European etymon nem-, meaning "take, give, account, apportion"), e.g., 'the nomothetic capability of the early mythmakers' or 'the nomothetic skill of Adam, given the power to name things.'
In psychology
In psychology, nomothetic refers to research about general principles or generalizations across a population of individuals. For example, the Big Five model of personality and Piaget's developmental stages are nomothetic models of personality traits and cognitive development respectively. In contrast, idiographic refers to research about the unique and contingent aspects of individuals, as in psychological case studies.
In psychological testing, nomothetic measures are contrasted with ipsative or idiothetic measures, where nomothetic measures are those observed on a relatively large sample and with a more general outlook.
In other fields
In sociology, nomothetic explanation presents a generalized understanding of a given case, and is contrasted with idiographic explanation, which presents a full description of a given case. Nomothetic approaches are most appropriate to the deductive approach to social research inasmuch as they include the more highly structured research methodologies which can be replicated and controlled, and which focus on generating quantitative data with a view to explaining causal relationships.
In anthropology, nomothetic refers to the use of generalization rather than specific properties in the context of a group as an entity.
In history, nomothetic refers to a philosophical shift in emphasis away from the traditional presentation of historical texts, restricted to wars, laws, dates, and the like, toward a broader appreciation and deeper understanding.
See also
Nomothetic and idiographic
Nomological
Sociological terminology
Depression (mood)
Depression is a mental state of low mood and aversion to activity. It affects about 3.5% of the global population, or about 280 million people of all ages (as of 2020). Depression affects a person's thoughts, behavior, feelings, and sense of well-being. Experiences that would normally bring a person pleasure or joy give reduced pleasure or joy, and the afflicted person often experiences a loss of motivation or interest in those activities.
Depressed mood is a symptom of some mood disorders, also categorized and called depression, such as major depressive disorder, bipolar disorder and dysthymia; it is a normal temporary reaction to life events, such as the loss of a loved one; and it is also a symptom of some physical diseases and a side effect of some drugs and medical treatments. It may feature sadness, difficulty in thinking and concentration, or a significant increase or decrease in appetite and time spent sleeping. People experiencing depression may have feelings of dejection or hopelessness, and may experience suicidal thoughts. Depression can either be short term or long term.
Contributing factors
Life events
Adversity in childhood, such as bereavement, neglect, mental abuse, physical abuse, sexual abuse, or unequal parental treatment of siblings, can contribute to depression in adulthood. Childhood physical or sexual abuse in particular significantly correlates with the likelihood of experiencing depression over the survivor's lifetime. People who have experienced four or more adverse childhood experiences are 3.2 to 4.0 times more likely to suffer from depression. Poor housing quality, non-functionality, lack of green spaces, and exposure to noise and air pollution are linked to depressive moods, emphasizing the need to consider these factors in planning to prevent such outcomes. Locality has also been linked to depression and other negative moods: the rate of depression among those who reside in large urban areas is lower than among those who do not, and those from smaller towns and rural areas tend to have higher rates of depression, anxiety, and psychological unwellness.
Studies have consistently shown that physicians have had the highest depression and suicide rates compared to people in many other lines of work—for suicide, 40% higher for male physicians and 130% higher for female physicians.
Life events and changes that may cause depressed mood include (but are not limited to): childbirth, menopause, financial difficulties, unemployment, stress (such as from work, education, military service, family, living conditions, marriage, etc.), a medical diagnosis (cancer, HIV, diabetes, etc.), bullying, loss of a loved one, natural disasters, social isolation, rape, relationship troubles, jealousy, separation, or catastrophic injury. Similar depressive symptoms are associated with survivor's guilt. Adolescents may be especially prone to experiencing a depressed mood following social rejection, peer pressure, or bullying.
Childhood and adolescence
Depression in childhood and adolescence is similar to adult major depressive disorder, although young sufferers may exhibit increased irritability or behavioral dyscontrol instead of the sad, empty, or hopeless feelings more common in adults. Children who are under stress, experiencing loss, or who have other underlying disorders are at a higher risk for depression. Childhood depression is often comorbid with mental disorders outside of other mood disorders, most commonly anxiety disorder and conduct disorder. Depression also tends to run in families.
Personality
Depression is associated with low extraversion, and people who have high levels of neuroticism are more likely to experience depressive symptoms and are more likely to receive a diagnosis of a depressive disorder. Additionally, depression is associated with low conscientiousness. Some factors that may arise from low conscientiousness include disorganization and dissatisfaction with life. Individuals may be more exposed to stress and depression as a result of these factors.
Side effect of medical treatment
It is possible that some early-generation beta-blockers induce depression in some patients, though the evidence for this is weak and conflicting. There is strong evidence for a link between alpha interferon therapy and depression. One study found that a third of alpha interferon-treated patients had developed depression after three months of treatment. (Beta interferon therapy appears to have no effect on rates of depression.) There is moderately strong evidence that finasteride when used in the treatment of alopecia increases depressive symptoms in some patients. Evidence linking isotretinoin, an acne treatment, to depression is strong. Other medicines that seem to increase the risk of depression include anticonvulsants, antimigraine drugs, antipsychotics and hormonal agents such as gonadotropin-releasing hormone agonist.
Substance-induced
Several drugs of abuse can cause or exacerbate depression, whether during intoxication or withdrawal, or with chronic use. These include alcohol, sedatives (including prescription benzodiazepines), opioids (including prescription painkillers and illicit drugs such as heroin), stimulants (such as cocaine and amphetamines), hallucinogens, and inhalants.
Non-psychiatric illnesses
Depressed mood can be the result of a number of infectious diseases, nutritional deficiencies, neurological conditions, and physiological problems, including hypoandrogenism (in men), Addison's disease, Cushing's syndrome, pernicious anemia, hypothyroidism, hyperparathyroidism, Lyme disease, multiple sclerosis, Parkinson's disease, celiac disease, chronic pain, stroke, diabetes, cancer, and HIV.
Studies have found that anywhere from 30 to 85 percent of patients suffering from chronic pain are also clinically depressed. A 2014 study by Hooley et al. concluded that chronic pain increased the chance of death by suicide by two to three times. In 2017, the British Medical Association found that 49% of UK chronic pain patients also had depression.
Psychiatric syndromes
A number of psychiatric syndromes feature depressed mood as a main symptom. The mood disorders are a group of disorders considered to be primary disturbances of mood. These include major depressive disorder (commonly called major depression or clinical depression) where a person has at least two weeks of depressed mood or a loss of interest or pleasure in nearly all activities; and dysthymia, a state of chronic depressed mood, the symptoms of which do not meet the severity of a major depressive episode. Another mood disorder, bipolar disorder, features one or more episodes of abnormally elevated mood, cognition, and energy levels, but may also involve one or more episodes of depression. Individuals with bipolar depression are often misdiagnosed with unipolar depression. When the course of depressive episodes follows a seasonal pattern, the disorder (major depressive disorder, bipolar disorder, etc.) may be described as a seasonal affective disorder.
Outside the mood disorders: borderline personality disorder often features an extremely intense depressive mood; adjustment disorder with depressed mood is a psychological response to an identifiable event or stressor, in which the resulting emotional or behavioral symptoms are significant but do not meet the criteria for a major depressive episode; and posttraumatic stress disorder, a mental disorder that sometimes follows trauma, is commonly accompanied by depressed mood.
Historical legacy
Researchers have begun to conceptualize ways in which the historical legacies of racism and colonialism may create depressive conditions. Given the lived experiences of marginalized peoples, including conditions of migration, class stratification, cultural genocide, labor exploitation, and social immobility, depression can be seen as a "rational response to global conditions", according to Ann Cvetkovich.
Psychogeographical depression overlaps somewhat with the theory of "deprejudice", a portmanteau of depression and prejudice proposed by Cox, Abramson, Devine, and Hollon in 2012, who argue for an integrative approach to studying the often comorbid experiences. Cox, Abramson, Devine, and Hollon are concerned with the ways in which social stereotypes are often internalized, creating negative self-stereotypes that then produce depressive symptoms.
Unlike the theory of "deprejudice", a psychogeographical theory of depression attempts to broaden the study of the subject beyond individual experience to an experience produced on a societal scale, seeing particular manifestations of depression as rooted in dispossession: historical legacies of genocide, slavery, and colonialism produce segregation, material and psychic deprivation, and concomitant circumstances of violence, systemic exclusion, and lack of access to legal protections. The demands of navigating these circumstances compromise the resources available to a population to seek comfort, health, stability, and a sense of security. The historical memory of this trauma conditions the psychological health of future generations, making psychogeographical depression an intergenerational experience as well.
This work is supported by recent studies in genetic science which have demonstrated an epigenetic link between the trauma suffered by Holocaust survivors and its genetic reverberations in subsequent generations. Likewise, research by scientists at Emory University suggests that memories of trauma can be inherited, rendering offspring vulnerable to psychological predispositions for stress disorders, schizophrenia, and PTSD.
Measures
Measures of depression include, but are not limited to, the Beck Depression Inventory-II and the nine-item depression scale of the Patient Health Questionnaire (PHQ-9). Both of these measures are psychological tests that ask personal questions of the participant and have mostly been used to measure the severity of depression. The Beck Depression Inventory is a self-report scale that helps a therapist identify patterns of depression symptoms and monitor recovery. The responses on this scale can be discussed in therapy to devise interventions for the most distressing symptoms of depression.
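Of the two, the PHQ-9 has a particularly simple and widely published scoring rule: each of the nine items is rated 0–3, totals range from 0 to 27, and cut-points at 5, 10, 15, and 20 mark mild, moderate, moderately severe, and severe depression. A minimal Python sketch of that rule, assuming plain integer item responses:

    # PHQ-9 severity banding from the widely published cut-points
    # (totals 0-27 from nine items each scored 0-3).

    def phq9_severity(items: list[int]) -> tuple[int, str]:
        """Return the PHQ-9 total and its conventional severity band."""
        if len(items) != 9 or any(not 0 <= x <= 3 for x in items):
            raise ValueError("PHQ-9 requires nine items scored 0-3")
        total = sum(items)
        if total < 5:
            band = "minimal"
        elif total < 10:
            band = "mild"
        elif total < 15:
            band = "moderate"
        elif total < 20:
            band = "moderately severe"
        else:
            band = "severe"
        return total, band

    print(phq9_severity([1, 2, 1, 0, 2, 1, 1, 0, 1]))  # (9, 'mild')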
Theories
Schools of depression theories include:
Cognitive theory of depression
Tripartite Model of Anxiety and Depression
Behavioral theories of depression
Evolutionary approaches to depression
Biology of depression
Epigenetics of depression
Management
Depressed mood may not require professional treatment, and may be a normal temporary reaction to life events, a symptom of some medical condition, or a side effect of some drugs or medical treatments. A prolonged depressed mood, especially in combination with other symptoms, may lead to a diagnosis of a psychiatric or medical condition which may benefit from treatment.
The UK National Institute for Health and Care Excellence (NICE) 2009 guidelines indicate that antidepressants should not be routinely used for the initial treatment of mild depression, because the risk-benefit ratio is poor.
Physical activity has a protective effect against the emergence of depression in some people.
There is limited evidence suggesting yoga may help some people with depressive disorders or elevated levels of depression, but more research is needed.
Reminiscence of old and fond memories is another alternative form of treatment, especially for the elderly, who have accumulated more life experiences. It is a method that prompts a person to recollect memories of their own life, leading to a process of self-recognition and the identification of familiar stimuli. By maintaining one's personal past and identity, the technique stimulates people to view their lives in a more objective and balanced way, directing their attention to positive information in their life stories, which can reduce depressive mood levels.
There is limited evidence that continuing antidepressant medication for one year reduces the risk of depression recurrence with no additional harm. Recommendations for psychological treatments or combination treatments in preventing recurrence are not clear.
Epidemiology
Depression is the leading cause of disability worldwide, the United Nations (UN) health agency reported, estimating that it affects more than 300 million people worldwide – the majority of them women, young people and the elderly. An estimated 4.4 percent of the global population has depression, according to a report released by the UN World Health Organization (WHO), which shows an 18 percent increase in the number of people living with depression between 2005 and 2015.
Depression is a major mental-health cause of disease burden. Its consequences add significantly to the public-health burden, including a higher risk of dementia, premature mortality arising from physical disorders, and, in the case of maternal depression, impacts on child growth and development. Approximately 76% to 85% of depressed people in low- and middle-income countries do not receive treatment; barriers to treatment include inaccurate assessment, a lack of trained health-care providers, social stigma, and a lack of resources.
The stigma comes from misguided societal views that people with mental illness are different from everyone else and that they could choose to get better if only they wanted to. Because of this, more than half of people with depression do not receive help. The stigma leads to a strong preference for privacy. An analysis of 40,350 undergraduates from 70 institutions by Posselt and Lipson found that undergraduates who perceived their classroom environments as highly competitive had a 37% higher chance of developing depression and a 69% higher chance of developing anxiety. Several studies have suggested that unemployment roughly doubles the risk of developing depression.
The World Health Organization has constructed guidelines, known as the Mental Health Gap Action Programme (mhGAP), aiming to increase services for people with mental, neurological, and substance-use disorders. Depression is listed as one of the conditions prioritized by the programme. Trials have shown possibilities for implementing the programme in low-resource primary-care settings that depend on primary-care practitioners and lay health workers.
Examples of mhGAP-endorsed therapies targeting depression include Group Interpersonal Therapy, a group treatment for depression, and "Thinking Healthy", which utilizes cognitive behavioral therapy to tackle perinatal depression. Furthermore, effective screening in primary care is crucial for access to treatment. The mhGAP approach to improving detection rates of depression relies on training general practitioners, but the evidence supporting the effectiveness of this training remains weak.
According to a 2011 study, people who are high in hypercompetitive traits are also likely to measure higher for depression and anxiety.
History
The term depression was derived from the Latin verb deprimere, "to press down". From the 14th century, "to depress" meant to subjugate or to bring down in spirits. It was used in 1665 in English author Richard Baker's Chronicle to refer to someone having "a great depression of spirit", and by English author Samuel Johnson in a similar sense in 1753.
In Ancient Greece, disease was thought to be due to an imbalance in the four basic bodily fluids, or humors. Personality types were similarly thought to be determined by the dominant humor in a particular person. Derived from the Ancient Greek melas, "black", and kholé, "bile", melancholia was described as a distinct disease with particular mental and physical symptoms by Hippocrates in his Aphorisms, where he characterized all "fears and despondencies, if they last a long time" as being symptomatic of the ailment.
During the 18th century, the humoral theory of melancholia was increasingly being challenged by mechanical and electrical explanations; references to dark and gloomy states gave way to ideas of slowed circulation and depleted energy. German physician Johann Christian Heinroth, however, argued melancholia was a disturbance of the soul due to moral conflict within the patient.
In the 20th century, the German psychiatrist Emil Kraepelin distinguished manic depression as a separate condition. The influential system put forward by Kraepelin unified nearly all types of mood disorder into manic–depressive insanity. Kraepelin worked from an assumption of underlying brain pathology, but also promoted a distinction between endogenous (internally caused) and exogenous (externally caused) types.
Other psychodynamic theories were proposed. Existential and humanistic theories represented a forceful affirmation of individualism. Austrian existential psychiatrist Viktor Frankl connected depression to feelings of futility and meaninglessness. Frankl's logotherapy addressed the filling of an "existential vacuum" associated with such feelings, and may be particularly useful for depressed adolescents.
Researchers theorized that depression was caused by a chemical imbalance in neurotransmitters in the brain, a theory based on observations made in the 1950s of the effects of reserpine and isoniazid in altering monoamine neurotransmitter levels and affecting depressive symptoms. During the 1960s and 70s, manic-depression came to refer to just one type of mood disorder (now most commonly known as bipolar disorder) which was distinguished from (unipolar) depression. The terms unipolar and bipolar had been coined by German psychiatrist Karl Kleist.
In July 2022, British psychiatrist Joanna Moncrieff, psychiatrist Mark Horowitz and others proposed, in a study published in the academic journal Molecular Psychiatry, that depression is not caused by a serotonin imbalance in the human body, contrary to what much of the psychiatric community holds, and that antidepressants therefore do not work against the illness. The study was met with criticism from some psychiatrists, who argued that its methodology used an indirect trace of serotonin instead of direct measurements of the molecule. Moncrieff said that, despite her study's conclusions, no one should interrupt their treatment if they are taking an antidepressant.
See also
Alain Ehrenberg, French sociologist, author of Weariness of the Self: Diagnosing the History of Depression in the Contemporary Age
Dysthymia
Major depressive disorder
Unconscious mind

In psychoanalysis and other psychological theories, the unconscious mind (or the unconscious) is the part of the psyche that is not available to introspection. Although these processes exist beneath the surface of conscious awareness, they are thought to exert an effect on conscious thought processes and behavior. Empirical evidence suggests that unconscious phenomena include repressed feelings and desires, memories, automatic skills, subliminal perceptions, and automatic reactions. The term was coined by the 18th-century German Romantic philosopher Friedrich Schelling and later introduced into English by the poet and essayist Samuel Taylor Coleridge.
The emergence of the concept of the unconscious in psychology and general culture was mainly due to the work of Austrian neurologist and psychoanalyst Sigmund Freud. In psychoanalytic theory, the unconscious mind consists of ideas and drives that have been subject to the mechanism of repression: anxiety-producing impulses in childhood are barred from consciousness, but do not cease to exist, and exert a constant pressure in the direction of consciousness. However, the content of the unconscious is only knowable to consciousness through its representation in a disguised or distorted form, by way of dreams and neurotic symptoms, as well as in slips of the tongue and jokes. The psychoanalyst seeks to interpret these conscious manifestations in order to understand the nature of the repressed.
The unconscious mind can be seen as the source of dreams and automatic thoughts (those that appear without any apparent cause), the repository of forgotten memories (that may still be accessible to consciousness at some later time), and the locus of implicit knowledge (the things that we have learned so well that we do them without thinking). Phenomena related to semi-consciousness include awakening, implicit memory, subliminal messages, trances, hypnagogia and hypnosis. While sleep, sleepwalking, dreaming, delirium and comas may signal the presence of unconscious processes, these processes are seen as symptoms rather than the unconscious mind itself.
Some critics have doubted the existence of the unconscious altogether.
Historical overview
German
The term "unconscious" was coined by the 18th-century German Romantic philosopher Friedrich Schelling (in his System of Transcendental Idealism, ch. 6, § 3) and later introduced into English by the poet and essayist Samuel Taylor Coleridge (in his Biographia Literaria). Some rare earlier instances of the term "unconsciousness" can be found in the work of the 18th-century German physician and philosopher Ernst Platner.
Vedas
Influences on thinking that originate from outside an individual's consciousness were reflected in the ancient ideas of temptation, divine inspiration, and the predominant role of the gods in affecting motives and actions. The idea of internalised unconscious processes in the mind was present in antiquity, and has been explored across a wide variety of cultures. Unconscious aspects of mentality were referred to between 2,500 and 600 BC in the Hindu texts known as the Vedas, whose ideas persist today in Ayurvedic medicine.
Paracelsus
Paracelsus is credited as the first to make mention of an unconscious aspect of cognition in his work Von den Krankheiten (translates as "About illnesses", 1567), and his clinical methodology created a cogent system that is regarded by some as the beginning of modern scientific psychology.
Shakespeare
William Shakespeare explored the role of the unconscious in many of his plays, without naming it as such.
Philosophy
Western philosophers such as Arthur Schopenhauer, Baruch Spinoza, Gottfried Wilhelm Leibniz, Johann Gottlieb Fichte, Georg Wilhelm Friedrich Hegel, Karl Robert Eduard von Hartmann, Carl Gustav Carus, Søren Aabye Kierkegaard, Friedrich Wilhelm Nietzsche and Thomas Carlyle used the word unconscious.
In 1880 at the Sorbonne, Edmond Colsenet defended a philosophy thesis (PhD) on the unconscious. Élie Rabier and Alfred Fouillée developed syntheses of work on the unconscious "at a time when Freud was not interested in the concept".
Psychology
Nineteenth century
According to historian of psychology Mark Altschule, "It is difficult—or perhaps impossible—to find a nineteenth-century psychologist or psychiatrist who did not recognize unconscious cerebration as not only real but of the highest importance." In 1890, when psychoanalysis was still unheard of, William James, in his monumental treatise on psychology (The Principles of Psychology), examined the way Schopenhauer, von Hartmann, Janet, Binet and others had used the terms "unconscious" and "subconscious". German psychologists Gustav Fechner and Wilhelm Wundt had begun to use the term in their experimental psychology, in the context of manifold, jumbled sense data that the mind organizes at an unconscious level before revealing it as a cogent totality in conscious form. Eduard von Hartmann had published a book dedicated to the topic, Philosophy of the Unconscious, in 1869.
Freud
Sigmund Freud and his followers developed an account of the unconscious mind, which Freud used to develop an explanation for mental illness; it plays an important role in psychoanalysis.
Freud divided the mind into the conscious mind (or the ego) and the unconscious mind. The latter was then further divided into the id (or instincts and drives) and the superego (or conscience). In this theory, the unconscious refers to the mental processes of which individuals are unaware. Freud proposed a vertical and hierarchical architecture of human consciousness: the conscious mind, the preconscious, and the unconscious mind—each lying beneath the other. He believed that significant psychic events take place "below the surface" in the unconscious mind. Contents of the unconscious mind pass through the preconscious mind before coming to conscious awareness. He interpreted such events as having both symbolic and actual significance.
In psychoanalytic terms, the unconscious does not include all that is not conscious, but rather that which is actively repressed from conscious thought. In the psychoanalytic view, unconscious mental processes can only be recognized through analysis of their effects in consciousness. Unconscious thoughts are not directly accessible to ordinary introspection, but they are capable of partially evading the censorship mechanism of repression in a disguised form, manifesting, for example, as dream elements or neurotic symptoms. Such symptoms are supposed to be capable of being "interpreted" during psychoanalysis, with the help of methods such as free association, dream analysis, and analysis of verbal slips and other unintentional manifestations in conscious life.
Jung
Carl Gustav Jung agreed with Freud that the unconscious is a determinant of personality, but he proposed that the unconscious be divided into two layers: the personal unconscious and the collective unconscious. The personal unconscious is a reservoir of material that was once conscious but has been forgotten or suppressed, much like Freud's notion. The collective unconscious, however, is the deepest level of the psyche, containing the accumulation of inherited psychic structures and archetypal experiences. Archetypes are not memories but energy centers or psychological functions that are apparent in the culture's use of symbols. The collective unconscious is therefore said to be inherited and contain material of an entire species rather than of an individual. The collective unconscious is, according to Jung, "[the] whole spiritual heritage of mankind's evolution, born anew in the brain structure of every individual".
In addition to the structure of the unconscious, Jung differed from Freud in that he did not believe that sexuality was at the base of all unconscious thoughts.
Dreams
Freud
The purpose of dreams, according to Freud, is to fulfill repressed wishes while simultaneously allowing the dreamer to remain asleep. The dream is a disguised fulfillment of the wish because the unconscious desire in its raw form would disturb the sleeper and can only avoid censorship by associating itself with elements that are not subject to repression. Thus Freud distinguished between the manifest content and latent content of the dream. The manifest content consists of the plot and elements of a dream as they appear to consciousness, particularly upon waking, as the dream is recalled. The latent content refers to the hidden or disguised meaning of the events and elements of the dream. It represents the unconscious psychic realities of the dreamer's current issues and childhood conflicts, the nature of which the analyst is seeking to understand through interpretation of the manifest content.

In Freud's theory, dreams are instigated by the events and thoughts of everyday life. In what he called the "dream-work", these events and thoughts, governed by the rules of language and the reality principle, become subject to the "primary process" of unconscious thought, which is governed by the pleasure principle, wish gratification and the repressed sexual scenarios of childhood. The dream-work involves a process of disguising these unconscious desires in order to preserve sleep.

This process occurs primarily by means of what Freud called condensation and displacement. Condensation is the focusing of the energy of several ideas into one, and displacement is the surrender of one idea's energy to another more trivial representative. The manifest content is thus thought to be a highly significant simplification of the latent content, capable of being deciphered in the analytic process, potentially allowing conscious insight into unconscious mental activity.
Neurobiological theory of dreams
Allan Hobson and colleagues developed what they called the activation-synthesis hypothesis, which proposes that dreams are simply the side effects of the neural activity in the brain that produces beta brain waves during REM sleep that are associated with wakefulness. According to this hypothesis, neurons fire periodically during sleep in the lower brain levels and thus send random signals to the cortex. The cortex then synthesizes a dream in reaction to these signals in order to try to make sense of why the brain is sending them. The hypothesis does not, however, state that dreams are meaningless; it merely downplays the role that emotional factors play in determining dreams.
Contemporary cognitive psychology
Research
There is an extensive body of research in contemporary cognitive psychology devoted to mental activity that is not mediated by conscious awareness. Most of this research on unconscious processes has been done in the academic tradition of the information processing paradigm. The cognitive tradition of research into unconscious processes does not rely on the clinical observations and theoretical bases of the psychoanalytic tradition; instead it is mostly data driven. Cognitive research reveals that individuals automatically register and acquire more information than they are consciously aware of or can consciously remember and report.
Much research has focused on the differences between conscious and unconscious perception. There is evidence that whether something is consciously perceived depends both on the incoming stimulus (bottom-up strength) and on top-down mechanisms such as attention. Recent research indicates that some unconsciously perceived information can become consciously accessible if there is cumulative evidence. Similarly, content that would normally be conscious can become unconscious through inattention (e.g. in the attentional blink) or through distracting stimuli like visual masking.
Unconscious processing of information about frequency
An extensive line of research conducted by Hasher and Zacks has demonstrated that individuals register information about the frequency of events automatically (outside conscious awareness and without engaging conscious information processing resources). Moreover, perceivers do this unintentionally, truly "automatically", regardless of the instructions they receive, and regardless of the information processing goals they have. The ability to unconsciously and relatively accurately tally the frequency of events appears to have little or no relation to the individual's age, education, intelligence, or personality. Thus it may represent one of the fundamental building blocks of human orientation in the environment and possibly the acquisition of procedural knowledge and experience, in general.
Criticism of the Freudian concept
The notion that the unconscious mind exists at all has been disputed.
Franz Brentano rejected the concept of the unconscious in his 1874 book Psychology from an Empirical Standpoint, although his rejection followed largely from his definitions of consciousness and unconsciousness.
Jean-Paul Sartre offers a critique of Freud's theory of the unconscious in Being and Nothingness, based on the claim that consciousness is essentially self-conscious. Sartre also argues that Freud's theory of repression is internally flawed. Philosopher Thomas Baldwin argues that Sartre's argument is based on a misunderstanding of Freud.
Erich Fromm contends that "The term 'the unconscious' is actually a mystification (even though one might use it for reasons of convenience, as I am guilty of doing in these pages). There is no such thing as the unconscious; there are only experiences of which we are aware, and others of which we are not aware, that is, of which we are unconscious. If I hate a man because I am afraid of him, and if I am aware of my hate but not of my fear, we may say that my hate is conscious and that my fear is unconscious; still my fear does not lie in that mysterious place: 'the' unconscious."
John Searle has offered a critique of the Freudian unconscious. He argues that the Freudian cases of shallow, consciously held mental states would be best characterized as 'repressed consciousness,' while the idea of more deeply unconscious mental states is more problematic. He contends that the very notion of a collection of "thoughts" that exist in a privileged region of the mind, such that they are in principle never accessible to conscious awareness, is incoherent. This is not to imply that there are not "nonconscious" processes that form the basis of much of conscious life. Rather, Searle simply claims that to posit the existence of something that is like a "thought" in every way except for the fact that no one can ever be aware of it (can never, indeed, "think" it) is an incoherent concept. To speak of "something" as a "thought" either implies that it is being thought by a thinker or that it could be thought by a thinker. Processes that are not causally related to the phenomenon called thinking are more appropriately called the nonconscious processes of the brain.
Other critics of the Freudian unconscious include David Stannard, Richard Webster, Ethan Watters, Richard Ofshe, and Eric Thomas Weber.
Some scientific researchers have proposed the existence of unconscious mechanisms that are very different from the Freudian ones. They speak of a "cognitive unconscious" (John Kihlstrom), an "adaptive unconscious" (Timothy Wilson), or a "dumb unconscious" (Loftus and Klinger), which executes automatic processes but lacks the complex mechanisms of repression and symbolic return of the repressed; Robert Langs has proposed a "deep unconscious system".
In modern cognitive psychology, many researchers have sought to strip the notion of the unconscious from its Freudian heritage, and alternative terms such as "implicit" or "automatic" have been used. These traditions emphasize the degree to which cognitive processing happens outside the scope of cognitive awareness, and show that things we are unaware of can nonetheless influence other cognitive processes as well as behavior. Active research traditions related to the unconscious include implicit memory (for example, priming), and Pawel Lewicki's nonconscious acquisition of knowledge.
See also
Adaptive unconscious
Consciousness
Dreams in analytical psychology
Introspection illusion
Philosophy of mind
Preconscious
Subconscious
Satanic panic
Social contagion
Unconscious cognition
Unconscious communication
Unconscious spirit
Mass delusion
Mass psychogenic illness
Mental health professional

A mental health professional is a health care practitioner or social and human services provider who offers services for the purpose of improving an individual's mental health or treating mental disorders. This broad category was developed as a name for community personnel who worked in the new community mental health agencies begun in the 1970s to assist individuals moving from state hospitals, to prevent admissions, and to provide support in homes, jobs, education, and community. These individuals (i.e., state office personnel, private sector personnel, and non-profit, now voluntary sector, personnel) were at the forefront of developing the community programs, which today may be referred to by names such as supported housing, psychiatric rehabilitation, supported or transitional employment, sheltered workshops, supported education, daily living skills, affirmative industries, dual diagnosis treatment, individual and family psychoeducation, adult day care, foster care, family services and mental health counseling.
Psychiatrists – physicians who use the biomedical model to treat mental health problems – may prescribe medication. The term counselor often refers to office-based professionals who offer therapy sessions to their clients, such as pastoral counselors (who may or may not work with long-term services clients) and family counselors. The term mental health counselor may refer to counselors working in residential services in the field of mental health in community programs.
As community professionals
As Dr. William Anthony, father of psychiatric rehabilitation, described, psychiatric nurses (RNMH, RMN, CPN), clinical psychologists (PsyD or PhD), clinical social workers (MSW or MSSW), mental health counselors (MA or MS), professional counselors, pharmacists, and many other professionals are often educated in "psychiatric fields" or, conversely, educated in a generic community approach (e.g. human services programs, or health and human services as of 2013). However, his primary concern is education that leads to a willingness to work with long-term services and supports in the community, leading to better quality of life for the individual, the family and the community.
The community support framework in the US of the 1970s is taken for granted as the base for newer treatment developments (e.g., eating disorders, drug addiction programs), which tend to be free-standing clinics for specific "disorders". Typically, the term "mental health professional" does not refer to other categorical disability areas, such as intellectual and developmental disability (a field which trains its own professionals, maintains its own journals, and has its own US state systems and institutions). Psychiatric rehabilitation has also been reintroduced in the transfer to behavioral health care systems.
As certified and licensed (across institutions and communities)
These professionals often deal with the same illnesses, disorders, conditions, and issues (though they may work in separate on-site locations, such as a hospital or the community, with the same clientele); however, their scope of practice differs, as do their positions and roles in the fields of mental health services and systems. The most significant difference among mental health professionals lies in the laws regarding required education and training across the various professions. The most significant recent change, however, has been the Supreme Court's Olmstead decision on the most integrated setting, which should further reduce state hospital utilization, even as some of the newer professions seek rights to impose community treatment orders and to administer medications (in the original community programs of the 1970s, residents were taught to self-administer medications).
As of 2013, new mental health practitioners are licensed or certified by states to work in the community (e.g., PhD-level education for private clinical practice). Degrees and certifications are offered in fields such as psychiatric rehabilitation (MS, PhD), and the path from a BA in psychology (liberal arts, experimental/clinical/existential/community) to MA-level licensing is now more popular. BA-to-PhD graduates serve in mid-level program management alongside qualified civil service professionals, while social workers (licensed by state, often with generic training) remain the mainstay of community admissions procedures in the US. Surprisingly, state direction has moved from psychiatry or clinical psychology to community leadership and the professionalization of community services management.
Entry-level recruitment and training remain a primary concern (since the 1970s, such positions have often competed with fast-food jobs), and the US direct support workforce includes an emphasis on training psychiatric aides, behavioral aides, and addictions aides to work in homes and communities. The Centers for Medicare and Medicaid Services have new provisions for "self-direction" in services, and new options are in place for individual plans for better life outcomes. Community programs increasingly use health care financing, such as Medicaid, and mental health parity is now law in the US.
Professional distinctions
Comparison of American mental health professionals
Additional sources/clarifications: these professionals now operate programs with health care financing in the community. The higher-paid medical and health services manager, who operates facilities only, is considered to have an easier role than dispersed services management in the community for long-term services and supports (LTSS), which is often handled by disability NGOs or state governments (civil service).
The mental health professional class has often not been included in these occupational schemas, in which occupational handbooks often separate human service management classes and professional classes from the term health care. Common salaries are in the $30,000–$40,000 range for the higher professional at a small community agency. These professionals are considered to be part of the federal health and human services professions. Their responsibilities at the higher grades are greater than those of a psychiatrist's assistant, who is responsible, to date, only to the psychiatrist. The occupational therapist is considered an aide to that professional level, as are the behavioral specialist hired by the agency and the nurse practitioner. Mental health workers in the community (e.g., workers with the homeless, in homes, families and jails, and in community programs such as group homes) may still be termed community support workers, with diverse degrees and qualifications [US Direct Support Professional Workforce].
Children's professionals in the field of mental health include inclusion educators (earning over $80,000 at the PhD level), who have been cross-educated in the fields, and "residential treatment" personnel, who need dual reviews of credentials (child care, family support, child welfare, independent living, special education and home life, residential skills training programs).
Treatment diversity and community mental health
Mental health professionals exist to improve the mental health of individuals, couples, families and the community at large. [In this generic use, mental health services are available to the entire population, similar to the usage of mental health associations.] Because mental health covers a wide range of elements, the scope of practice varies greatly between professionals. Some professionals may enhance relationships, while others treat specific mental disorders and illnesses; still others work on population-based health promotion or prevention activities. Often, as with psychiatrists and psychologists, the scope of practice may overlap, owing in part to common hiring and promotion practices by employers.
As indicated earlier, community mental health professionals have been involved in starting and operating community programs, which include ongoing efforts to improve life outcomes, originally through long-term services and supports (LTSS). Termed functional or competency-based programs, these services also stressed decision making and self-determination or empowerment as critical aspects. Community mental health professionals may also serve children, who have different needs, as do families, providing family therapy, financial assistance and support services. Community mental health professionals serve people of all ages, from young children with autism, to children with emotional (or behavioral) needs, to a grandmother with Alzheimer's disease or dementia who is living at home after her husband has died.
Most qualified mental health professionals will refer a patient or client to another professional if the specific type of treatment needed is outside of their scope of practice. The main community concern is "zero rejection" from community services for individuals who have been termed "hard to serve" (e.g., those diagnosed with schizophrenia or a dual diagnosis) or who have additional needs, such as mobility and sensory impairments. Additionally, mental health professionals may sometimes work together, using a variety of treatment options such as concurrent psychiatric medication, psychotherapy and supported housing. Specific mental health professionals may also be sought out based upon their cultural or religious background or experience, as part of a theory both of alternative medicines and of the nature of helping and ethnicity.
Primary care providers, such as internists, pediatricians, and family physicians, may provide initial components of mental health diagnosis and treatment for children and adults; however, family physicians in some states refuse even to prescribe a psychotropic medication, deferring to separately funded "medication management" services. Community programs in the categorical field of mental health were designed in the 1970s to have a personal family physician for every client in their programs, except in institutional settings and nursing facilities, which may have only one or two physicians for a large facility (1980, 2013).
In particular, family physicians are trained during residency in interviewing and diagnostic skills, and may be quite skilled in managing conditions such as ADHD in children and depression in adults. Likewise, many (but not all) pediatricians may be taught the basic components of ADHD diagnosis and treatment during residency. In many other circumstances, primary care physicians may receive additional training and experience in mental health diagnosis and treatment during their practice years.
Relative effectiveness
Both primary care physicians (PCPs), also known as general practitioners (GPs), and psychiatrists are equally effective (in terms of remission rates) for the treatment of depression. However, treatment-resistant depression, suicidal or homicidal ideation, psychosis and catatonia should be handled by mental health specialists. Treatment-resistant depression (or treatment-refractory depression) refers to depression that persists after at least two antidepressant medications have been trialed on their own.
Peer workers
Some think that mental health professionals are less credible when they have personal experience of mental illness. In fact, the mental health sector goes out of its way to hire people with such experience. Those in the mental health workforce with personal experience of mental illness are referred to as "peer support workers" or "peer support specialists". The balance of evidence appears to favor their employment: randomized controlled trials consistently demonstrate that peer staff produce outcomes on par with non-peer staff in ancillary roles, and they actually perform better in reducing hospitalization rates, engaging clients who are difficult to reach, and cutting substance use. There is research indicating that peer workers cultivate a perception among service users that the service is more responsive to non-treatment needs, and increase service users' hope, family satisfaction, self-esteem and sense of community belonging.
Psychiatrists
Psychiatrists are physicians, and are among the few professionals in the mental health industry who specialize and are certified in treating mental illness using the biomedical approach to mental disorders, including the use of medications. However, biological, genetic and social processes, as part of premedical training, have been the basis of education in other mental health fields since the 1970s, and as of 2013 such academic degrees may also include extensive work on brain and DNA research and their applications.
Psychiatrists may also go through significant training to conduct psychotherapy and cognitive behavioral therapy. The amount of training a psychiatrist holds in providing these types of therapies varies from program to program and also differs greatly based upon region. [Cognitive therapy also stems from cognitive rehabilitation techniques, and may involve long-term community clients with brain injuries seeking jobs, education and community housing.] In the 1970s, psychiatrists were considered to be hospital-based, assessment, and clinical education personnel which was not involved in establishing community programs.
Specialties of psychiatrists
As part of their evaluation of the patient, psychiatrists are one of only a few mental health professionals who may conduct physical examinations, order and interpret laboratory tests and EEGs, and may order brain imaging studies such as CT or CAT, MRI, and PET scanning. A medical professional must evaluate the patient for any medical problems or diseases that may be the cause of the mental illness.
Historically, psychiatrists have been the only mental health professionals with the power to prescribe medication to treat specific types of mental illness. Currently, physician assistants (who answer to, act in lieu of, and are supervised by a psychiatrist) and advanced practice psychiatric nurses may prescribe medications, including psychiatric medications. Clinical psychologists have gained the ability to prescribe psychiatric medications on a limited basis in a few U.S. states after completing additional training and passing an examination.
Educational requirements for psychiatrists
Typically the requirements to become a psychiatrist are substantial but differ from country to country. In general there is an initial period of several years of academic and clinical training and supervised work in different areas of medicine, in order to become a licensed medical doctor, followed by several years of supervised work and study in psychiatry, in order to become a licensed psychiatrist.
In the United States and Canada one must first complete a bachelor's degree. Students may typically choose any major subject, but must enroll in specific courses, usually outlined in a pre-medical program. One must then apply to and attend four years of medical school in order to earn an MD or DO and complete one's medical education. Psychiatrists must then pass three successive rigorous national board exams (United States Medical Licensing Examination "USMLE", Steps 1, 2, and 3), which draw questions from all fields of medicine and surgery, before gaining an unrestricted license to practice medicine. Following this, the individual must complete a four-year residency in psychiatry as a psychiatric resident and sit for annual national in-service exams. Psychiatry residents are required to complete at least four post-graduate months of internal medicine (pediatrics may be substituted for some or all of the internal medicine months for those planning to specialize in child and adolescent psychiatry) and two months of neurology, usually during the first year, but some programs require more. Occasionally, some prospective psychiatry residents will choose to do a transitional year internship in medicine or general surgery, in which case they may complete the two months of neurology later in their residency. After completing their training, psychiatrists take written and then oral specialty board examinations. The total amount of time required to qualify in the field of psychiatry in the United States is typically 4 to 5 years after obtaining the MD or DO (or in total 8 to 9 years minimum). Many psychiatrists pursue an additional 1–2 years in subspecialty fellowships on top of this, such as child psychiatry, geriatric psychiatry, and psychosomatic medicine.
In the United Kingdom, the Republic of Ireland, and most Commonwealth countries, the initial degree is the combined Bachelor of Medicine and Bachelor of Surgery, usually a single period of academic and clinical study lasting around five years. This degree is most often abbreviated 'MBChB', 'MB BS' or other variations, and is the equivalent of the American 'MD'. Following this the individual must complete a two-year foundation programme that mainly consists of supervised paid work as a Foundation House Officer within different specialties of medicine. Upon completion the individual can apply for "core specialist training" in psychiatry, which mainly involves supervised paid work as a Specialty Registrar in different subspecialties of psychiatry. After three years there is an examination for Membership of the Royal College of Psychiatrists (abbreviated MRCPsych), with which an individual may then work as a "Staff grade" or "Associate Specialist" psychiatrist, or pursue an academic psychiatry route via a PhD. If, after the MRCPsych, an additional 3 years of specialization known as "advanced specialist training" are taken (again mainly paid work), and a Certificate of Completion of Training is awarded, the individual can apply for a post taking independent clinical responsibility as a "consultant" psychiatrist.
Clinical psychologists
A clinical psychologist studies and applies psychology for the purpose of understanding, preventing and relieving psychologically based distress or dysfunction and to promote subjective well-being and personal development. In many countries it is a regulated profession that addresses moderate to more severe or chronic psychological problems, including diagnosable mental disorders. Clinical psychology includes a wide range of practices, such as research, psychological assessment, teaching, consultation, forensic testimony, and program development and administration. Central to clinical psychology is the practice of psychotherapy, which uses a wide range of techniques to change thoughts, feelings, or behaviors in service to enhancing subjective well-being, mental health, and life functioning. Unlike other mental health professionals, psychologists are trained to conduct psychological assessment. Clinical psychologists can work with individuals, couples, children, older adults, families, small groups, and communities.
Specialties of clinical psychologists
Clinical psychologists who focus on treating mental illness specialize in evaluating patients and providing psychotherapy. They do not prescribe medication, as this is the role of a psychiatrist (a physician who specializes in psychiatry). There are a wide variety of therapeutic techniques and perspectives that guide practitioners, although most fall into the major categories of psychodynamic, cognitive behavioral, existential-humanistic, and systems therapy (e.g. family or couples therapy).
In addition to therapy, clinical psychologists are also trained to administer and interpret psychological personality tests such as the MCMI, MMPI and the Rorschach inkblot test, and various standardized tests of intelligence, memory, and neuropsychological functioning. Common areas of specialization include: specific disorders (e.g. trauma), neuropsychological disorders, child and adolescent, family and relationship counseling. Internationally, psychologists are generally not granted prescription privileges. In the US, prescriptive rights have been granted to appropriately trained psychologists only in the states of New Mexico and Louisiana, with some limited prescriptive rights in Indiana and the US territory of Guam.
Educational requirements for clinical psychologists
Clinical psychologists, having completed an undergraduate degree usually in psychology or other social science, generally undergo specialist postgraduate training lasting at least two years (e.g. Australia), three years (e.g. UK), or four to six years depending how much research activity is included in the course (e.g. US). In countries where the course is of shorter duration, there may be an informal requirement for applicants to have undertaken prior work experience supervised by a clinical psychologist, and a proportion of applicants may also undertake a separate PhD research degree.
Today, in the U.S., about half of licensed psychologists are trained in the Scientist-Practitioner Model of Clinical Psychology (PhD)—a model that emphasizes both research and clinical practice and is usually housed in universities. The other half are being trained within a Practitioner-Scholar Model of Clinical Psychology (PsyD), which focuses on practice. A third training model called the Clinical Scientist Model emphasizes training in clinical psychology research. Outside of coursework, graduates of both programs generally are required to have had 2 to 3 years of supervised clinical experience, a certain amount of personal psychotherapy, and the completion of a dissertation (PhD programs usually require original quantitative empirical research, whereas the PsyD equivalent of dissertation research often consists of literature review and qualitative research, theoretical scholarship, program evaluation or development, critical literature analysis, or clinical application and analysis).
Continuing education requirements for clinical psychologists
Most states in the US require clinical psychologists to obtain a certain number of continuing education credits in order to renew their license. This was established to ensure that psychologists stay current with information and practices in their fields. The license renewal cycle varies, but renewal is generally required every two years.
The number of continuing education credits required for clinical psychologists varies between states. In Nebraska, psychologists are required to obtain 24 hours of approved continuing education credits in the 24 months before their license renewal. In California, the requirement is for 36 hours of credits. New York State does not have any continuing education requirements for license renewal at this time (2014).
Activities that count towards continuing education credits generally include completing courses, publishing research papers, teaching classes, home study, and attending workshops. Some states require that a certain number of the education credits be in ethics. Most states allow psychologists to self-report their credits but randomly audit individual psychologists to ensure compliance.
Counseling psychologist or psychotherapist
Counseling generally involves helping people with what might be considered "normal" or "moderate" psychological problems, such as the feelings of anxiety or sadness resulting from major life changes or events. As such, counseling psychologists often help people adjust to or cope with their environment or major events, although many also work with more serious problems as well.
One may practice as a counseling psychologist with a PhD or EdD, and as a counseling psychotherapist with a master's degree. Compared with clinical psychology, there are fewer counseling psychology graduate programs (which are commonly housed in departments of education), counselors tend to conduct more vocational assessment and less projective or objective assessment, and they are more likely to work in public service or university clinics (rather than hospitals or private practice). Despite these differences, there is considerable overlap between the two fields and distinctions between them continue to fade.
Mental health counselors and residential counselors are also the name for another class of counselors or mental health professionals who may work with long-term services and supports (LTSS) clients in the community. Such counselors may be advanced or senior staff members in a community program, and may be involved in developing skill teaching, active listening (and similar psychological and educational methods), and community participation programs. They also are often skilled in on-site intervention, redirection and emergency techniques. Supervisory personnel often advance from this class of workers in community programs.
Behavior analysts and community/institutional roles
Behavior analysts are licensed in five states to provide services for clients with substance abuse, developmental disabilities, and mental illness. This profession draws on the evidence base of applied behavior analysis, behavior therapy, and the philosophy of radical behaviorism. Behavior analysts have at least a master's degree in behavior analysis or in a mental health related discipline as well as at least five core courses in applied behavior analysis (narrow focus in psychological education). Many behavior analysts have a doctorate. Most programs have a formalized internship program and several programs are offered online. Most practitioners have passed the examination offered by the behavior analysis certification board or the examination in clinical behavior therapy by the World Association for Behavior Analysis. The model licensing act for behavior analysts can be found at the Association for Behavior Analysis International's website.
Behavior analysts (whose profession grew from the definition of mental health as a behavioral problem) often use community situational activities, life events, functional teaching, community "reinforcers", family and community staff as intervenors, and structured interventions as the base from which they may be called upon to provide skilled professional assistance. Person-centered approaches have been used to update the stricter, hospital-based interventions used by behavior analysts for applicability to community environments. Behavioral approaches have often been infused with efforts at client self-determination, have been aligned with community lifestyle planning, and have been criticized as "aversive technology", which was "outlawed" in the field of severe disabilities in the 1990s.
School psychologist and inclusion educators
School psychologists' primary concern is the academic, social, and emotional well-being of children within a scholastic environment. Unlike clinical psychologists, they receive much more training in education, child development and behavior, and the psychology of learning, often graduating with a post-master's educational specialist degree (EdS), EdD or Doctor of Philosophy (PhD) degree. Besides offering individual and group therapy with children and their families, school psychologists also evaluate school programs, provide cognitive assessment, help design prevention programs (e.g. reducing dropouts), and work with teachers and administrators to help maximize teaching efficacy, both in the classroom and systemically.
In today's world, the school psychologist remains the responsible party in "mental health" for children with emotional and behavioral needs, yet these needs have not always been met in the regular school environment. Inclusion (special) educators support participation in local school programs and after-school programs, including new initiatives such as Achieve My Plan by the Research and Training Center on Family Support and Children's Mental Health at Portland State University. Referrals to residential schools and certification of the personnel involved in residential schools and campuses have been a multi-decade concern, with counties often involved in national efforts to better support these children and youth in local schools, families, homes and communities.
Psychiatric rehabilitation
Psychiatric rehabilitation, similar to cognitive rehabilitation, is a designated field within rehabilitation, with personnel academically prepared either in schools of allied health and sciences (near the field of physical medicine and rehabilitation) or as rehabilitation counseling in schools of education. Both tracks were developed specifically to prepare community personnel (at the MA and PhD levels) and to aid in the transition to professionally competent and integrated community services. Psychiatric rehabilitation personnel have a base in community integration, support a recovery- and skills-based model of mental health, and may be involved with community programs based upon normalization and social role valorization throughout the US. Psychiatric rehabilitation personnel have been involved in upgrading the skills of staff in institutions in order to move clients into community settings. Most common in international fields are community rehabilitation personnel, who traditionally come from the rehabilitation counseling or community fields. The new "rehabilitation centers" (new campus buildings), designed similarly to hospital "rehab" (physical and occupational therapy, sports medicine), often have no designated personnel in the fields of mental health (now "senior behavioral services" or "residential treatment units"). Psychiatric rehabilitation textbooks currently on the market describe the community services that these personnel helped develop during the period commonly known as deinstitutionalization.
Psychiatric rehabilitation professionals (and psychosocial services) are the mainstay of community programs in the US, and the national service providers association itself may certify mental health staff in these areas. Psychiatric interventions that differ from behavioral ones are described in a review of their use in "residential, vocational, social or educational role functioning" as "preferred methods for helping individuals with serious psychiatric disabilities". Other competencies in education may involve working with families, user-directed planning methods and financing, housing and support, personal assistance services, transitional or supported employment, the Americans with Disabilities Act (ADA), supported housing, integrated approaches (e.g., substance use, or intellectual disabilities), and psychosocial interventions, among others. In addition, rehabilitation counselors (PhD, MS) may also be educated "generically" (for breadth and depth) or for all diagnostic groups, and can work in these fields; other personnel may have certifications in areas such as supported employment, which has been validated for use in psychiatric, neurological, traumatic brain injury, and intellectual disabilities, among others.
Social worker
Social workers in the area of mental health may assess, treat, develop treatment plans, provide case management and/or rights advocacy to individuals with mental health problems. They can work independently or within clinics/service agencies, usually in collaboration with other health care professionals.
In the US, they are often referred to as clinical social workers; each state specifies the responsibilities and limitations of this profession. State licensing boards and national certification boards require clinical social workers to have a master's or doctoral degree (MSW or DSW/PhD) from a university. The doctorate in social work requires submission of a major original contribution to the field in order to be awarded the degree.
In the UK, there is now a standardized three-year undergraduate social work degree, or a two-year postgraduate master's for those who already have an undergraduate social sciences degree (or another degree and relevant work experience). These courses include mandatory supervised work experience in social work, which may include mental health services. Successful completion allows an individual to register and work as a qualified social worker. There are various additional optional courses for gaining qualifications specific to mental health, for example training in psychotherapy or, in England and Wales, for the role of Approved Mental Health Professional (two years' training for a legal role in the assessment and detention of eligible mentally disordered people under the Mental Health Act (1983) as amended in 2007).
Social workers in England and Wales are now able to become Approved Clinicians (AC) under the Mental Health Act 2007 following a period of further training (likely at postgraduate degree/diploma or doctoral level). Historically, this role was reserved for psychiatrist medical doctors, but has now extended to registered mental health professionals, such as social workers, psychologists and mental health nurses.
In general, it is the psycho-social model, rather than or in addition to the dominant medical model, that is the underlying rationale for mental health social work. This may include a focus on social causation, labeling, critical theory and social constructionism. Many argue that social workers need to work with medical and health colleagues to provide an effective service, but they also need to be at the forefront of processes that include and empower service users.
Social workers may also train in social work administration and hold positions in human services systems as administrators or executives in the US. Social work, like psychiatric rehabilitation, updates its professional education programs based upon current developments in the field (e.g., support services) and serves a multicultural client base.
Educational requirements for social workers
In the United States, the minimum requirement for social workers is generally a bachelor's degree in social work, though a bachelor's degree in a related field such as sociology or psychology may qualify an applicant for certain jobs. Higher-level jobs typically require a master's degree in social work. Master's programs in social work usually last two years and consist of at least 900 hours of supervised instruction in the field. Regulatory boards generally require that degrees be obtained from programs that are accredited by the Council of Social Work Education (CSWE) or another nationally recognized accrediting agency for promotion and future collaboration.
Before social workers can practice, they are required to meet the licensing, certification, or registration requirements of the state. The requirements vary depending on the state but usually involve a minimum number of supervised hours in the field and passing of an exam. All states except California also require pre-licensure from the Association of Social Work Boards (ASWB).
The ASWB offers four categories of social work license. The lowest level is a Bachelors, for which a bachelor's degree in social work is required. The next level up is a Masters and a master's degree in social work is required. The Advanced Generalist category of social worker requires a master's degree in social work and two years of supervised post-degree experience. The highest ASWB category is a Clinical Social Worker which requires a master's degree in social work along with two years of post-master's direct experience in social work.
Continuing education requirements for social workers
Most states require social workers to acquire a minimum number of continuing education credits per license, certification, or registration renewal period. The purpose of these requirements is to ensure that social workers stay up-to-date with information and practices in their professions. In most states, the renewal process occurs every two or three years. The number of continuing education credits that is required varies between states but is generally 20 to 45 hours during the two- or three-year period prior to renewal.
Courses and programs that are approved as continuing education for social workers generally must be relevant to the profession and contribute to the advancement of professional competence. Qualifying activities often include continuing education courses, seminars, training programs, community service, research, publishing articles, or serving on a panel. Many states require that a minimum number of the credits cover topics such as ethics, HIV/AIDS, or domestic violence.
Psychiatric and mental health nurse
Psychiatric nurses and mental health nurse practitioners (MHNPs) work with people with a wide variety of mental health problems, often at the time of highest distress, and usually within hospital settings. These professionals work in primary care facilities, outpatient mental health clinics, hospitals, and community health centers. MHNPs evaluate and provide care for patients with conditions ranging from psychiatric disorders and medical conditions with mental symptoms to substance abuse problems. They are licensed to provide emergency psychiatric services, assess the psycho-social and physical state of their patients, create treatment plans, and continually manage their care. They may also serve as consultants or as educators for families and staff; however, the MHNP has a greater focus on psychiatric diagnosis (typically the province of the MD or PhD), including the differential diagnosis of medical disorders with psychiatric symptoms, and on medication treatment for psychiatric disorders.
Educational requirements for psychiatric and mental health nurses
Psychiatric and mental health nurses receive specialist education to work in this area. In some countries, it is required that a full course of general nurse training be completed prior to specializing as a psychiatric nurse. In other countries, such as the U.K., an individual completes a specific nurse training course that determines their area of work. As with other areas of nursing, it is becoming usual for psychiatric nurses to be educated to degree level and beyond. Psychiatric aides form part of the entry-level workforce that is projected to be needed in US communities in the coming decades.
In order to become a nurse practitioner in the U.S., at least six years of college education must be obtained. After earning the bachelor's degree (usually in nursing, although there are master's entry level nursing graduate programs intended for individuals with a bachelor's degree outside of nursing) the test for a license as a registered nurse (the NCLEX-RN) must be passed. Next, the candidate must complete a state-approved master's degree advanced nursing education program which includes at least 600 clinical hours. Several schools are now also offering further education and awarding a DNP (Doctor of Nursing Practice).
Individuals who choose a master's entry level pathway will spend an extra year at the start of the program taking classes necessary to pass the NCLEX-RN. Some schools will issue a BSN, others will issue a certificate. The student then continues with the normal MSN program.
Mental health care navigator
A mental health care navigator is an individual who assists patients and families to find appropriate mental health caregivers, facilities and services. Individuals who are care navigators are often also trained therapists and doctors. The need for mental health care navigators arises from the fragmentation of the mental health industry, which can often leave those in need with more questions than answers. Care navigators work closely with patients through discussion and collaboration to provide information on options and referrals to healthcare professionals, facilities, and organizations specializing in the patients' needs. The difference between other mental health professionals and a care navigator is that a care navigator provides information and directs a patient to the best help rather than offering diagnosis, prescription of medications or treatment.
Many mental health organizations use "navigator" and "navigation" to describe the service of providing guidance through the health care industry. Care navigators are also sometimes referred to as "system navigators". One type of care navigator is an "educational consultant".
Workforce shortage
Behavioral health disorders are prevalent in the United States, but accessing treatment can be challenging. Nearly 1 in 5 adults experience a mental health condition, yet only about 43% of them receive treatment. When asked about access to mental health treatment, two-thirds of primary care physicians reported that they were unable to secure outpatient mental health treatment for their patients. This is due, in part, to the workforce shortage in behavioral health: 55% of US counties, all of them rural, have no practicing psychiatrist, psychologist, or social worker. Overall, 77% of counties have a severe shortage of mental health workers, and 96% of counties have some unmet need.

Reasons for the workforce shortage include high turnover rates, high levels of work-related stress, and inadequate compensation. The annual turnover rate is 33% for clinicians and 23% for clinical supervisors, compared with an annual turnover rate of 7.1% for primary care physicians. Compensation in the behavioral health field is notably low: the average licensed clinical social worker, a position that requires a master's degree and 2,000 hours of post-graduate experience, earns $45,000 per year, while, as a point of reference, the average physical therapist earns $75,000 per year. Substance abuse counselor earnings are even lower, with an average salary of $34,000 per year. Job stress is another factor that may drive the high turnover rates and workforce shortage: an estimated 21-67% of mental health workers experience high levels of burnout, including emotional exhaustion, high levels of depersonalization, and a reduced sense of personal accomplishment. Researchers have offered various recommendations to reduce the critical workforce gaps in behavioral health, including expanding loan repayment programs to incentivize mental health providers to work in underserved (often rural) areas, integrating mental health into primary care, and increasing reimbursement to health care professionals.
Social workers also tend to experience competing work and family demands, which negatively affect their well-being and subsequently their job satisfaction, contributing to high turnover in the profession.
See also
Community integration
Community Psychology
Clinical Psychology
List of credentials in psychology
Psychologist
Psychotherapy
Clinical Associate (Psychology)
Global mental health
Health care providers
Inclusion (education)
Mental health
Mental illness
Psychiatric rehabilitation
Psychiatry
Anti-psychiatry
List of counseling topics
Supported housing
References
Further reading
Psychiatry
Mental health occupations
Goldilocks principle
The Goldilocks principle is named by analogy to the children's story "Goldilocks and the Three Bears", in which a young girl named Goldilocks tastes three different bowls of porridge and finds she prefers porridge that is neither too hot nor too cold but has just the right temperature. The concept of "just the right amount" is easily understood and applied to a wide range of disciplines, including developmental psychology, biology, astronomy, economics and engineering.
Applications
In cognitive science and developmental psychology, the Goldilocks effect or principle refers to an infant's preference to attend to events that are neither too simple nor too complex according to their current representation of the world. This effect has been observed in infants, who are less likely to look away from a visual sequence when the current event is moderately probable, as measured by an idealized learning model.
In astrobiology, the Goldilocks zone refers to the habitable zone around a star. As Stephen Hawking put it, "Like Goldilocks, the development of intelligent life requires that planetary temperatures be 'just right'." The Rare Earth hypothesis uses the Goldilocks principle in the argument that a planet must be neither too far away from nor too close to a star and the galactic centre to support life, as either extreme would result in a planet incapable of supporting life. Such a planet is colloquially called a "Goldilocks planet". Paul Davies has argued for the extension of the principle to cover the selection of our universe from a (postulated) multiverse: "Observers arise only in those universes where, like Goldilocks' porridge, things are by accident 'just right'."
In medicine, it can refer to a drug that has both antagonist (inhibitory) and agonist (excitatory) properties. For example, the antipsychotic aripiprazole causes not only antagonism of dopamine D2 receptors in areas such as the mesolimbic area of the brain (which shows increased dopamine activity in psychosis) but also agonism of dopamine receptors in areas of dopamine hypoactivity, such as the mesocortical area.
In economics, a Goldilocks economy sustains moderate economic growth and low inflation, which allows a market-friendly monetary policy. A Goldilocks market occurs when the price of commodities sits between a bear market and a bull market. Goldilocks pricing, also known as good–better–best pricing, is a marketing strategy that uses product differentiation to offer three versions of a product to corner different parts of the market: a high-end version, a middle version, and a low-end version.
In communication, the Goldilocks principle describes the amount, type, and detail of communication necessary in a system to maximise effectiveness while minimising redundancy and excessive scope on the "too much" side and avoiding incomplete or inaccurate communication on the "too little" side.
In statistics, the "Goldilocks Fit" references a linear regression model that represents the perfect flexibility to reduce the error caused by bias and variance.
In the design sprint, the "Goldilocks Quality" means to create a prototype with just enough quality to evoke honest reactions from customers.
In machine learning, the Goldilocks learning rate is the learning rate that results in an algorithm taking the fewest steps to achieve minimal loss. Algorithms with a learning rate that is too large often fail to converge at all, while those with too small a learning rate take too long to converge.
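To make the effect concrete, here is a minimal, illustrative sketch (not from the original text) that minimizes f(x) = x^2 by gradient descent with three learning rates; the too-small rate barely moves, the "just right" rate converges almost immediately, and the too-large rate diverges:

```python
def gradient_descent(lr, steps=100):
    """Run `steps` gradient-descent updates on f(x) = x**2
    (gradient f'(x) = 2x), starting from x = 10, and return
    the final iterate."""
    x = 10.0
    for _ in range(steps):
        x -= lr * 2 * x  # update: x <- x - lr * f'(x)
    return x

# Too small, "just right", and too large learning rates.
for lr in (0.001, 0.5, 1.1):
    print(f"lr={lr}: final x = {gradient_descent(lr):.4g}")
# lr=0.001 leaves x near 8.2 (barely converging), lr=0.5 reaches 0,
# and lr=1.1 blows up to roughly 8e+08 (diverges).
```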
See also
Cosmic Jackpot
Frugality
Anthropic principle
Big History
Fine-tuned universe
Golden mean (philosophy)
Anna Karenina principle
References
Astronomical hypotheses
Goldilocks and the Three Bears
Health geography
Health geography is the application of geographical information, perspectives, and methods to the study of health, disease, and health care. Medical geography, a sub-discipline of, or sister field of health geography, focuses on understanding spatial patterns of health and disease in relation to the natural and social environment. Conventionally, there are two primary areas of research within medical geography: the first deals with the spatial distribution and determinants of morbidity and mortality, while the second deals with health planning, help-seeking behavior, and the provision of health services.
Overview
Medical geography
The first area of study within medical geography has been described as geographical epidemiology or disease geography and is focused on the spatial patterns and processes of health and disease outcomes. This area of inquiry can be differentiated from the closely related discipline of epidemiology in that it uses concepts and methods from geography, allowing an ecologic perspective on health that considers how interactions between humans and the environment result in observed health outcomes. The second area of study focused on the planning and provision of health services, often with a focus on the spatial organization of health systems and exploration of how this arrangement affects accessibility of care.
Health geography
The study of health geography has been influenced by the repositioning of medical geography within the field of social geography due to a shift towards a social model in health care, rather than a medical model. This advocates for the redefinition of health and health care away from the prevention and treatment of illness only towards the promotion of well-being in general. Under this model, some previous illnesses (e.g., mental ill health) are recognized as behavior disturbances only, and other types of medicine (e.g., complementary or alternative medicine and traditional medicine) are studied by medical researchers, sometimes with the aid of health geographers without medical education. This shift changes the definition of care, no longer limiting it to spaces such as hospitals or doctors' offices. The social model also gives priority to intimate encounters performed in non-traditional spaces of medicine and healthcare, as well as to individuals as health consumers.
This alternative methodological approach means that medical geography is broadened to incorporate philosophies such as Marxian political economy, structuralism, social interactionism, humanism, feminism and queer theory.
History
Relationships between place and health have long been recognized throughout human history, predating modern health delivery systems and providing insights into the transmission of infectious agents, well before the germ theory paradigm shift in the late 1800s. Throughout history, there have been many examples of place and location playing major roles in shaping perceptions of health and risk. The associations between geographical characteristics and health outcomes, which essentially form the foundation of modern medical geography, were recognized more than 2,000 years ago by Hippocrates in his treatise On Airs, Waters, and Places (c. 400 BC).
The Industrial Revolution in the 1700s brought with it a plethora of novel public health issues stemming from rapid urban development and poor sanitation, conditions which fueled the development of disease mapping, or medical cartography. A precursor to medical geography, medical cartography arose from the need to communicate spatial discrepancies in risk for diseases of unknown cause, particularly urban outbreaks of cholera and yellow fever. One of the most prominent figures in both epidemiology and medical geography is John Snow, the physician who correctly identified the source of exposure during the 1854 Broad Street cholera outbreak. Snow's famous 1854 map of the cholera outbreak graphically demonstrates that cases were clustered around the Broad Street pump, the source of contaminated water that fueled the epidemic. This map led Snow to identify the contaminated pump and conclude that cholera was a waterborne illness, a remarkable feat given that bacteria were unknown to science at the time. While Snow's contributions to medical geography and epidemiology are irrefutable, the role of the map in this particular investigation is somewhat overstated. Dot maps of cases produced during the industrial period were powerful tools in communicating the findings of traditional epidemiological measures of association, but their role as analytic tools was restricted due to technological limitations.
Modern medical geography arose in the United States in the 1950s with the pioneering work of Jacques May, who worked as a surgeon in Thailand and Vietnam and noticed differences between the health experiences of his patients in these locations and in Europe. Although the notion that the environment could influence human health has been understood since Hippocrates, medical geography as envisioned by May built on this idea, describing medical geography as working to understand the nature of the relationships between pathogen transmission and geographical factors. May soon began mapping global distributions of disease and exploring the cultural and environmental factors that influenced these distributions.
Areas of study
Health geography is considered to be divided into two distinct elements. The first focuses on geographies of disease and ill health, involving descriptive research quantifying disease frequencies and distributions, and analytic research concerned with finding which characteristics make an individual or population susceptible to disease. This requires an understanding of epidemiology. The second component of health geography is the geography of health care, primarily facility location, accessibility, and utilization. This requires the use of spatial analysis and often borrows from behavioral economics.
Geographies of disease and ill health
Health geographers are concerned with the prevalence of different diseases along a range of spatial scales from the local to the global, and inspect the natural world, in all of its complexity, for correlations between diseases and locations. This situates health geography alongside other geographical sub-disciplines that trace human-environment relations. Health geographers use modern spatial analysis tools to map the dispersion of various diseases, as individuals spread them amongst themselves, and across wider spaces as they migrate. Health geographers also consider all types of spaces as presenting health risks, from natural disasters, to interpersonal violence, stress, and other potential dangers.
Geography of health care provision
Although healthcare is a public good, it is not equally available to all individuals, and demand for public services is continuously increasing. Health geography offers the advance knowledge and prediction technology that health planners need; telemedicine is a recent example of such technology. Many people in the United States are unable to access proper healthcare because of inequality in health insurance and in the means to afford medical care.
Mobility and disease tracking
With the advent and spread of mobile technology, it is now possible to track individual mobility. By correlating the movement of individuals, tracked through their devices via access towers or other tracking systems, it is now possible to monitor and even help control the spread of disease. While privacy laws call the legality of tracking individuals into question, commercial mobile service providers have used covert techniques or obtained government waivers that permit them to track people.
Methods
Geographic Information Systems (GIS) are used extensively in medical geography to visualize and analyze georeferenced health-related data. These spatial data can be vector (point, line, or polygon) or raster (continuous grid) format and are often presented in quantitative thematic maps. Disease outcomes and sociodemographic characteristics collected through surveillance systems and population censuses are frequently used as data sources in medical geography studies. In disease ecology studies, interpolated climate data, gridded land surveys, and remote sensing imagery are examples of data used to quantify the environmental characteristics of disease systems. Spatial statistics or analysis are applied to test hypotheses regarding patterns or relationships within these data, such as the property of spatial dependency (spatially closer entities are more similar or related than spatially distant entities) or spatial heterogeneity (locations are unique relative to other locations). Some examples of the spatial analyses used in medical geography include point pattern analysis, tests for spatial autocorrelation, geographically weighted regression (GWR), ecological niche modeling, spatial scan statistics, and network analysis.
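As an illustration of one of these techniques, the following minimal sketch (illustrative only, not from the original text; the toy regions and weights matrix are invented) computes global Moran's I, a common statistic for spatial autocorrelation, where similar values in neighbouring regions yield a positive result:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for a 1-D array of regional values and an
    n x n spatial weights matrix (weights[i, j] > 0 when regions
    i and j are neighbours, 0 otherwise)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                  # deviations from the mean
    s0 = w.sum()                      # total weight
    num = (w * np.outer(z, z)).sum()  # spatially weighted cross-products
    den = (z ** 2).sum()
    return (n / s0) * (num / den)

# Toy example: four regions along a line, each adjacent to its neighbours,
# with similar values clustered together.
vals = [1.0, 2.0, 8.0, 9.0]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(morans_i(vals, w))  # 0.4: positive, i.e. spatial autocorrelation
```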
Health geographers
Notable health geographers include:
Sarah Curtis
William C. Gorgas
Kelvyn Jones
John Snow
Mei-Po Kwan
Nadine Schuurman
See also
Cluster (epidemiology)
Social model of disability
Spatial epidemiology
Tobler's first law of geography
Tobler's second law of geography
References
External links
Social and Spatial Inequalities
GeoHealth Laboratory
Human geography
Global health
Spatial epidemiology
Disease ecology
Ontology (information science)
In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.
Every academic discipline or field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain, interoperability of data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages. For instance, the definition and ontology of economics is a primary concern in Marxist economics, but also in other subfields of economics. An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining what capital assets are at risk and by how much (see risk management).
What ontologies in both information science and philosophy have in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontology engineering (e.g., Quine and Kripke in philosophy, Sowa and Guarino in information science), and debates concerning to what extent normative ontology is possible (e.g., foundationalism and coherentism in philosophy, BFO and Cyc in artificial intelligence).
Applied ontology is considered by some to be a successor to prior work in philosophy. However, many current efforts are more concerned with establishing controlled vocabularies for narrow domains than with philosophical first principles, or with questions such as the mode of existence of fixed essences or whether enduring objects (see, e.g., perdurantism and endurantism) may be ontologically more primary than processes. Artificial intelligence has paid considerable attention to applied ontology in subfields like natural language processing within machine translation and knowledge representation, and ontology editors are now used in a range of fields, including biomedical informatics and industry. Such efforts often use ontology editing tools such as Protégé.
Ontology in philosophy
Ontology is a branch of philosophy and intersects areas such as metaphysics, epistemology, and philosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality. Metaphysics deals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those between particulars and universals, intrinsic and extrinsic properties, or essence and existence. Metaphysics has been an ongoing topic of discussion since recorded history.
Etymology
The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e. "being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. "logical discourse", see classical compounds for this type of word formation.
While the etymology is Greek, the oldest extant record of the word itself, the Neo-Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius).
The first occurrence in English of ontology as recorded by the OED (Oxford English Dictionary, online edition, 2008) came in Archeologia Philosophica Nova or New Principles of Philosophy by Gideon Harvey.
Formal ontology
Since the mid-1970s, researchers in the field of artificial intelligence (AI) have recognized that knowledge engineering is the key to building large and powerful AI systems. AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning, though this was only marginally successful. In the 1980s, the AI community began to use the term ontology to refer both to a theory of a modeled world and to a component of knowledge-based systems. In particular, David Powers introduced the word ontology to AI to refer to real-world or robotic grounding, publishing literature reviews in 1990 that emphasized grounded ontology in association with the call for papers for an AAAI Summer Symposium on Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.
In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber used ontology as a technical term in computer science closely related to the earlier ideas of semantic networks and taxonomies. Gruber introduced the term as a "specification of a conceptualization": an ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as a set of concept definitions, but more general. It is also a different sense of the word than its use in philosophy.
Attempting to distance ontologies from taxonomies and similar efforts in knowledge modeling that rely on classes and inheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions – that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world. To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms.
As refinement of Gruber's definition Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."
Formal ontology components
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes and relations.
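To make these components concrete, the following sketch (illustrative only; the example namespace URI and term names are hypothetical, and it assumes the third-party rdflib package is installed) declares a class, an attribute, an individual, and a relation, anticipating the playing-card example used below:

```python
# pip install rdflib  (assumed dependency)
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/poker#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Class (concept)
g.add((EX.PlayingCard, RDF.type, RDFS.Class))
# Attribute (property) with a human-readable label
g.add((EX.suit, RDF.type, RDF.Property))
g.add((EX.suit, RDFS.label, Literal("suit")))
# Individual (instance) and a relation linking it to an attribute value
g.add((EX.aceOfSpades, RDF.type, EX.PlayingCard))
g.add((EX.aceOfSpades, EX.suit, Literal("spades")))

print(g.serialize(format="turtle"))  # emit the tiny ontology as Turtle
```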
Types
Domain ontology
A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings.
Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.).
At present, merging ontologies that are not developed from a common upper ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of their entities can be merged with less effort. There are studies on generalized techniques for merging ontologies, but this area of research is still ongoing, and more recently the issue has been sidestepped by having multiple domain ontologies share the same upper ontology, as in the OBO Foundry.
Upper ontology
An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that overarches the terms and associated object descriptions as they are used in various relevant domain ontologies.
Standardized upper ontologies available for use include BFO, BORO method, Dublin Core, GFO, Cyc, SUMO, UMBEL, and DOLCE. WordNet has been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.
Hybrid ontology
The Gellish ontology is an example of a combination of an upper and a domain ontology.
Visualization
A survey of ontology visualization methods is presented by Katifori et al. An updated survey of ontology visualization methods and tools was published by Dudás et al. The most established ontology visualization methods, namely indented tree and graph visualization, are evaluated by Fu et al. A visual language for ontologies represented in OWL is specified by the Visual Notation for OWL Ontologies (VOWL).
Engineering
Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain. It is a subfield of knowledge engineering that studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.
Ontology engineering aims to make explicit the knowledge contained in software applications and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include:
Ensuring the ontology is current with domain knowledge and term use
Providing sufficient specificity and concept coverage for the domain of interest, thus minimizing the content completeness problem
Ensuring the ontology can support its use cases
Editors
Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages.
Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc.
Learning
Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.
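A drastically simplified sketch of the first step, term extraction, is shown below (illustrative only; real ontology-learning pipelines add part-of-speech tagging, phrase detection, and statistical weighting such as TF-IDF, but the core idea of ranking candidate domain terms is the same):

```python
import re
from collections import Counter

def candidate_terms(text, top_n=10):
    """Naive term extraction: lowercase tokens, drop stopwords,
    and rank the remaining words by frequency."""
    stopwords = {"the", "a", "an", "of", "and", "in", "to", "is", "are"}
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in stopwords)
    return counts.most_common(top_n)

sample = "The gene encodes a protein. The protein binds the receptor."
print(candidate_terms(sample))
# [('protein', 2), ('gene', 1), ('encodes', 1), ('binds', 1), ('receptor', 1)]
```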
Research
Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come to accept certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply put, epistemological assumptions force researchers to question how they arrive at the knowledge they have.
Languages
An ontology language is a formal language used to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based:
Common Algebraic Specification Language is a general logic-based specification language developed within the IFIP working group 1.3 "Foundations of System Specifications" and is a de facto standard language for software specifications. It is now being applied to ontology specifications in order to provide modularity and structuring mechanisms.
Common logic is ISO standard 24707, a specification of a family of ontology languages that can be accurately translated into each other.
The Cyc project has its own ontology language called CycL, based on first-order predicate calculus with some higher-order extensions.
DOGMA (Developing Ontology-Grounded Methods and Applications) adopts the fact-oriented modeling approach to provide a higher level of semantic stability.
The Gellish language includes rules for its own extension and thus integrates an ontology with an ontology language.
IDEF5 is a software engineering method to develop and maintain usable, accurate, domain ontologies.
KIF is a syntax for first-order logic that is based on S-expressions. SUO-KIF is a derivative version supporting the Suggested Upper Merged Ontology.
MOF and UML are standards of the OMG
Olog is a category theoretic approach to ontologies, emphasizing translations between ontologies using functors.
OBO, a language used for biological and biomedical ontologies.
OntoUML is an ontologically well-founded profile of UML for conceptual modeling of domain ontologies.
OWL is a language for making ontological statements, developed as a follow-on from RDF and RDFS, as well as earlier ontology language projects including OIL, DAML, and DAML+OIL. OWL is intended to be used over the World Wide Web, and all its elements (classes, properties and individuals) are defined as RDF resources, and identified by URIs.
Rule Interchange Format (RIF) and F-Logic combine ontologies and rules.
Semantic Application Design Language (SADL) captures a subset of the expressiveness of OWL, using an English-like language entered via an Eclipse Plug-in.
SBVR (Semantics of Business Vocabularies and Rules) is an OMG standard adopted in industry to build ontologies.
TOVE Project, TOronto Virtual Enterprise project
Published examples
Arabic Ontology, a linguistic ontology for Arabic, which can be used as an Arabic Wordnet but with ontologically-clean content.
AURUM – Information Security Ontology, An ontology for information security knowledge sharing, enabling users to collaboratively understand and extend the domain knowledge body. It may serve as a basis for automated information security risk and compliance management.
BabelNet, a very large multilingual semantic network and ontology, lexicalized in many languages
Basic Formal Ontology, a formal upper ontology designed to support scientific research
BioPAX, an ontology for the exchange and interoperability of biological pathway (cellular processes) data
BMO, an e-Business Model Ontology based on a review of enterprise ontologies and business model literature
SSBMO, a Strongly Sustainable Business Model Ontology based on a review of the systems based natural and social science literature (including business). Includes critique of and significant extensions to the Business Model Ontology (BMO).
CCO and GexKB, Application Ontologies (APO) that integrate diverse types of knowledge with the Cell Cycle Ontology (CCO) and the Gene Expression Knowledge Base (GexKB)
CContology (Customer Complaint Ontology), an e-business ontology to support online customer complaint management
CIDOC Conceptual Reference Model, an ontology for cultural heritage
COSMO, a Foundation Ontology (current version in OWL) that is designed to contain representations of all of the primitive concepts needed to logically specify the meanings of any domain entity. It is intended to serve as a basic ontology that can be used to translate among the representations in other ontologies or databases. It started as a merger of the basic elements of the OpenCyc and SUMO ontologies, and has been supplemented with other ontology elements (types, relations) so as to include representations of all of the words in the Longman dictionary defining vocabulary.
Computer Science Ontology, an automatically generated ontology of research topics in the field of computer science
Cyc, a large Foundation Ontology for formal representation of the universe of discourse
Disease Ontology, designed to facilitate the mapping of diseases and associated conditions to particular medical codes
DOLCE, a Descriptive Ontology for Linguistic and Cognitive Engineering
Drammar, ontology of drama
Dublin Core, a simple ontology for documents and publishing
Financial Industry Business Ontology (FIBO), a business conceptual ontology for the financial industry
Foundational, Core and Linguistic Ontologies
Foundational Model of Anatomy, an ontology for human anatomy
Friend of a Friend, an ontology for describing persons, their activities and their relations to other people and objects
Gene Ontology for genomics
Gellish English dictionary, an ontology that includes a dictionary and taxonomy that includes an upper ontology and a lower ontology that focusses on industrial and business applications in engineering, technology and procurement.
Geopolitical ontology, an ontology describing geopolitical information created by the Food and Agriculture Organization (FAO). The geopolitical ontology includes names in multiple languages (English, French, Spanish, Arabic, Chinese, Russian and Italian); maps standard coding systems (UN, ISO, FAOSTAT, AGROVOC, etc.); provides relations among territories (land borders, group membership, etc.); and tracks historical changes. In addition, FAO provides web services of the geopolitical ontology and a module maker to download modules of the geopolitical ontology in different formats (RDF, XML, and EXCEL). See more information at FAO Country Profiles.
GAO (General Automotive Ontology) – an ontology for the automotive industry that includes 'car' extensions
GOLD, General Ontology for Linguistic Description
GUM (Generalized Upper Model), a linguistically motivated ontology for mediating between clients systems and natural language technology
IDEAS Group, a formal ontology for enterprise architecture being developed by the Australian, Canadian, UK and U.S. Defence Depts.
Linkbase, a formal representation of the biomedical domain, founded upon Basic Formal Ontology.
LPL, Landmark Pattern Language
NCBO Bioportal, biological and biomedical ontologies and associated tools to search, browse and visualise
NIFSTD Ontologies from the Neuroscience Information Framework: a modular set of ontologies for the neuroscience domain.
OBO-Edit, an ontology browser for most of the Open Biological and Biomedical Ontologies
OBO Foundry, a suite of interoperable reference ontologies in biology and biomedicine
OMNIBUS Ontology, an ontology of learning, instruction, and instructional design
Ontology for Biomedical Investigations, an open-access, integrated ontology of biological and clinical investigations
ONSTR, Ontology for Newborn Screening Follow-up and Translational Research, Newborn Screening Follow-up Data Integration Collaborative, Emory University, Atlanta.
Plant Ontology for plant structures and growth/development stages, etc.
POPE, Purdue Ontology for Pharmaceutical Engineering
PRO, the Protein Ontology of the Protein Information Resource, Georgetown University
ProbOnto, knowledge base and ontology of probability distributions.
Program abstraction taxonomy
Protein Ontology for proteomics
RXNO Ontology, for name reactions in chemistry
SCDO, the Sickle Cell Disease Ontology, facilitates data sharing and collaborations within the SDC community, amongst other applications (see list on SCDO website).
Schema.org, for embedding structured data into web pages, primarily for the benefit of search engines
Sequence Ontology, for representing genomic feature types found on biological sequences
SNOMED CT (Systematized Nomenclature of Medicine – Clinical Terms)
Suggested Upper Merged Ontology, a formal upper ontology
Systems Biology Ontology (SBO), for computational models in biology
SWEET, Semantic Web for Earth and Environmental Terminology
SSN/SOSA, The Semantic Sensor Network Ontology (SSN) and Sensor, Observation, Sample, and Actuator Ontology (SOSA) are W3C Recommendation and OGC Standards for describing sensors and their observations.
ThoughtTreasure ontology
TIME-ITEM, Topics for Indexing Medical Education
Uberon, representing animal anatomical structures
UMBEL, a lightweight reference structure of 20,000 subject concept classes and their relationships derived from OpenCyc
WordNet, a lexical reference system
YAMATO, Yet Another More Advanced Top-level Ontology
YSO – General Finnish Ontology
The W3C Linking Open Data community project coordinates attempts to converge different ontologies into a worldwide Semantic Web.
Libraries
The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries.
The following are libraries of human-selected ontologies.
COLORE is an open repository of first-order ontologies in Common Logic with formal links between ontologies in the repository.
DAML Ontology Library maintains a legacy of ontologies in DAML.
Ontology Design Patterns portal is a wiki repository of reusable components and practices for ontology design, and also maintains a list of exemplary ontologies.
Protégé Ontology Library contains a set of OWL, Frame-based and other format ontologies.
SchemaWeb is a directory of RDF schemata expressed in RDFS, OWL and DAML+OIL.
The following are both directories and search engines.
OBO Foundry is a suite of interoperable reference ontologies in biology and biomedicine.
Bioportal (ontology repository of NCBO)
Linked Open Vocabularies
OntoSelect Ontology Library offers similar services for RDF/S, DAML and OWL ontologies.
Ontaria is a "searchable and browsable directory of semantic web data" with a focus on RDF vocabularies with OWL ontologies. (NB Project "on hold" since 2004).
Swoogle is a directory and search engine for all RDF resources available on the Web, including ontologies.
Open Ontology Repository initiative
ROMULUS is a foundational ontology repository aimed at improving semantic interoperability. Currently there are three foundational ontologies in the repository: DOLCE, BFO and GFO.
Examples of applications
In general, ontologies can be used beneficially in several fields.
Enterprise applications. A more concrete example is SAPPHIRE (Situational Awareness and Preparedness for Public Health Incidences and Reasoning Engines), a semantics-based health information system capable of tracking and evaluating situations and occurrences that may affect public health.
Geographic information systems bring together data from different sources and therefore benefit from ontological metadata, which helps to connect the semantics of the data.
Domain-specific ontologies are extremely important in biomedical research, which requires named entity disambiguation of various biomedical terms and abbreviations that have the same string of characters but represent different biomedical concepts. For example, CSF can represent Colony Stimulating Factor or Cerebral Spinal Fluid, both of which are represented by the same term, CSF, in biomedical literature. This is why a large number of public ontologies are related to the life sciences. Life science data science tools that fail to implement these types of biomedical ontologies will not be able to accurately determine causal relationships between concepts.
See also
Commonsense knowledge bases
Concept map
Controlled vocabulary
Classification scheme (information science)
Folksonomy
Formal concept analysis
Formal ontology
General Concept Lattice
Knowledge graph
Lattice
Ontology
Ontology alignment
Ontology chart
Open Semantic Framework
Semantic technology
Soft ontology
Terminology extraction
Weak ontology
Web Ontology Language
Related philosophical concepts
Alphabet of human thought
Characteristica universalis
Interoperability
Level of measurement
Metalanguage
Natural semantic metalanguage
References
Further reading
External links
Knowledge Representation at Open Directory Project
Library of ontologies (Archive, Unmaintained)
GoPubMed using Ontologies for searching
ONTOLOG (a.k.a. "Ontolog Forum") - an Open, International, Virtual Community of Practice on Ontology, Ontological Engineering and Semantic Technology
Use of Ontologies in Natural Language Processing
Ontology Summit - an annual series of events (first started in 2006) that involves the ontology community and communities related to each year's theme chosen for the summit.
Standardization of Ontologies
Knowledge engineering
Technical communication
Information science
Semantic Web
Knowledge representation
Knowledge bases
Ontology editors
Mentalism
Mentalism is a performing art in which its practitioners, known as mentalists, appear to demonstrate highly developed mental or intuitive abilities. Mentalists perform a theatrical act that includes special effects that may appear to employ psychic or supernatural forces but that are actually achieved by "ordinary conjuring means", natural human abilities (i.e. reading body language, refined intuition, subliminal communication, emotional intelligence), and an in-depth understanding of key principles from human psychology or other behavioral sciences. Performances may appear to include hypnosis, telepathy, clairvoyance, divination, precognition, psychokinesis, mediumship, mind control, memory feats, deduction, and rapid mathematics.
Mentalism is commonly classified as a subcategory of magic and, when performed by a stage magician, may also be referred to as mental magic. However, many professional mentalists today generally distinguish themselves from magicians, insisting that their art form leverages a distinct skillset. Instead of doing "magic tricks", mentalists argue that they produce psychological experiences for the mind and imagination, and expand reality with explorations of psychology, suggestion, and influence. Mentalists are also often considered psychic entertainers, although that category also contains non-mentalist performers such as psychic readers and bizarrists.
Some well-known magicians, such as Penn & Teller and James Randi, argue that a key differentiation between a mentalist and someone who purports to be an actual psychic is that the former is open about being a skilled artist or entertainer who accomplishes their feats through practice, study, and natural means, while the latter may claim to actually possess genuine supernatural, psychic, or extrasensory powers and, thus, operates unethically.
Renowned mentalist Joseph Dunninger, who also worked to debunk fraudulent mediums, captured this key sentiment when he explained his impressive abilities in the following way: "Any child of ten could do this – with forty years of experience." Like any performing art, mentalism requires years of dedication, extensive study, practice, and skill to perform well.
Background
Much of what modern mentalists perform in their acts can be traced back directly to "tests" of supernatural power that were carried out by mediums, spiritualists, and psychics in the 19th century. However, the history of mentalism goes back even further. Accounts of seers and oracles can be found in the Old Testament of the Bible and in works about ancient Greece. Paracelsus reiterated the theme, so reminiscent of the ancient Greeks, that three principias were incorporated into humanity: the spiritual, the physical, and mentalistic phenomena. The mentalist act generally cited as one of the earliest on record in the modern era was performed by diplomat and pioneering sleight-of-hand magician Girolamo Scotto in 1572. The performance of mentalism may utilize conjuring principles including sleights, feints, misdirection, and other skills of street or stage magic. Nonetheless, modern mentalists also now increasingly incorporate insights from human psychology and behavioral sciences to produce unexplainable experiences and effects for their audiences. Changing with the times, some mentalists incorporate an iPhone into their routine.
Techniques
Principle: sleight of hand and other traditional magicians' techniques
Mentalists typically seek to explain their effects as manifestations of psychology, hypnosis, an ability to influence by subtle verbal cues, an acute sensitivity to body language, etc. These are all genuine phenomena, but they are not sufficiently reliable or impressive to form the basis of a mentalism performance. These are in fact fake explanations - part of the mentalist's misdirection - and the true method being employed is classic magicians' trickery.
Written "billet"
A characteristic feature of "mind-reading" by a mentalist is that the spectator must write the thought down. Various justifications are given for this - to enable the spectator to focus on the thought, to show it to other audience members, and so on - but the real reason is to enable the mentalist then secretly to access the written-down information. There are various techniques the mentalist can use. A classic method is the "centre tear". The spectator is asked to commit her thought to writing on a small piece of paper (referred to by mentalists as a "billet"). She is told to write the thought in the centre of the billet, and sometimes a circle or a line will be added by the mentalist onto the billet to make sure she writes in the middle. She is then instructed to fold the billet up so that the writing cannot be seen by the mentalist. The mentalist then takes the billet and tears it into small pieces, which he may then burn, throw away, or return to the spectator's hand "for safe-keeping". Secretly, during the tearing process, the mentalist tears out and secretes the centre part of the billet which bears the written thought, and later finds an opportunity to read it covertly.

Alternatively, the mentalist may covertly peek at the written thought. There are a large number of detailed choreographies used by mentalists to achieve a peek. One popular version - known as the "acidus novus" peek - requires the spectator to write her thought on the bottom right-hand corner of the billet. Typically the mentalist will fill the other three quadrants of the billet with writing so that only the bottom right-hand quadrant is left clear. Once the thought is written and the billet folded, the mentalist will hold the billet up to the light to demonstrate that no writing can be seen through the paper. In the course of this action he is able, unobserved by the audience, to slip his thumb between the folds of the billet and expose a view of the bottom right-hand quadrant. He then gestures with the billet, bringing it at eye level across his field of vision, and in so doing is able secretly to peek at the spectator's written thought. This is only one of a number of such peek choreographies. Some involve placing the billet in a gimmicked wallet, which allows the mentalist covertly to see the writing. Others employ sleights of hand derived from card or coin magic.
Modern gimmicks
In addition to these traditional magicians' techniques, there is today a huge range of electronic, computer and other gimmicks available to the mentalist. These include dice which secretly transmit the numbers thrown, decks of cards which secretly transmit the cards chosen, notepads which secretly transmit what has been written etc. Smartphones have added an additional range of possibilities. For example, the mentalist can use concealed NFC tags to covertly download onto a spectator’s phone a fake version of a popular website such as Google Images, which allows him to know an image which the spectator believes has been chosen secretly.
Nail writing and its technological equivalents
Where a mind-reading performance does not involve the spectator writing the secret thought down, generally the method employed is that the mentalist purports to predict the secret thought by (apparently) writing an unseen prediction, often behind a clipboard or other hard surface, then he asks the spectator to reveal the thought, and the mentalist at that point quickly and covertly writes or completes his prediction using a nail writer or swami gimmick. These are small devices which allow the mentalist to write unseen with his thumb under cover of a clipboard or in his pocket.
Again, these traditional magicians’ devices have now been supplemented by technology. The mentalist can now buy blackboards and whiteboards which are capable of writing (apparently) handwritten messages fed to them remotely, small printers which can print a spectator’s chosen number and feed the printed paper into an (apparently) sealed jar, and a host of other technological gadgets which make it appear that the mentalist predicted the spectator’s thought, when in fact he simply waited for it to be disclosed by the spectator and then created the evidence of his “prediction”.
Pre-show work
Many mentalism effects rely on pre-show work. This involves the mentalist or his assistant interacting with certain members of the audience before the performance begins. This can be in a pre-show reception, in the auditorium itself as the audience take their seats, or even in the queue outside the performance venue. Pre-show work can take a number of forms. One type involves the mentalist talking to a spectator whom he will later, during his performance, involve in one of his effects. In this case the mentalist sets up the trick by covertly obtaining information from the spectator which he will later reveal during the performance. The interaction with the spectator may be made to seem like a casual "meet the audience" conversation, with no warning that the spectator is later to be involved in the performance. Alternatively, the mentalist may tell the spectator that he intends to involve her in his show. In that case the pre-show interaction is usually characterised as preparation "to save time during the show" or similar. Either way, the mentalist will use the occasion to obtain information from the spectator covertly for later revelation, either by traditional sleight-of-hand methods such as a billet peek, or by using electronic gimmickry such as a Parapad.

Alternatively, the mentalist may ask the spectator to make a choice (e.g. a number, a playing card, a selection from a list of items) and to recall that choice when later asked to participate during the performance. Typically the spectator will believe she has a free choice, but in fact it will be a choice forced by the mentalist. This can be done in a number of ways. One popular method is a proprietary device called a Svenpad. This is a notepad in which every second page is imperceptibly shorter. The long pages each have written on them an item from a list of choices (e.g. film stars, holiday destinations), but the short pages each bear the same item - the force choice. When riffled from front to back, only the long pages are visible, showing the full range of different choices; but when riffled from back to front, only the short pages are visible, each bearing the force selection. Other forcing methods include trick decks of cards in which all the cards are the same.

During the performance itself, when the mentalist involves the spectator in his effect, he will usually aim, by careful use of language, to avoid any mention of the pre-show interaction to the wider audience, either by himself or by the chosen spectator. His aim is to suggest that he and the spectator have not previously met. When the spectator's covertly obtained information or forced choice is revealed, this greatly enhances the effect from the point of view of the wider audience. It will usually seem that the mentalist has elicited a wholly uncommunicated thought from a random audience member. The chosen spectator herself, having participated in the pre-show encounter, perceives a different and less spectacular effect. This is an example of dual reality (discussed below) in mentalism.
Suggestion
This technique involves implanting an idea, thought, or impression in the mind of the spectator or participant. The mentalist does this by using subtle verbal cues, gestures, body language, and sometimes visual aids to influence their thoughts. For instance, asking someone to "think of any card in a normal deck" automatically plants the general idea of a playing card in their mind. Similarly, asking them to "visualize the card clearly in your mind" can put the image of a particular card in their imagination.
Misdirection
Also known as diversion, this technique aims to divert the audience's attention away from the secret method or process behind a mentalism effect. Magicians and mentalists frequently use grand gestures, animated movement, music, and chatter to distract attention from a sneaky maneuver that sets up the trick. For example, a mentalist may engage in lively conversation while secretly writing something on his palm. Or he may dramatically throw his jacket on a chair to cover up a hidden assistant in the audience.
Cold reading
This technique involves making calculated guesses and drawing logical conclusions about a person by carefully observing their appearance, responses, mannerisms, vocal tones, and other unconscious reactions. Mentalists leverage these cues along with high probability assumptions about human nature to come up with surprisingly accurate character insights and details about someone. They can then present this as if they magically knew the information through psychic powers.
Hot reading
Hot reading refers to the practice of gathering background information about the audience or participants before doing a mentalism act or seance. Mentalists can then astonish spectators by revealing something they could not possibly have known otherwise. However, doing hot readings without informing the audience is considered unethical. Ethical mentalists only do hot readings if they explicitly disclose it, or do it for entertainment with the participant's consent.
Psychological manipulation
Master mentalists have an in-depth understanding of human psychology which allows them to subtly manipulate thoughts, emotions, and behaviors. They use verbal suggestion, social pressure, visual cues and mental framing to influence perceptions and reactions. This lets them guide participants towards the responses, outcomes or choices they want. For instance, a mentalist may hint that choosing a certain number will lead to something positive.
Dual reality
This principle involves structuring a routine to present different experiences to the observer versus the participant. For example, a mentalist may have an audience member pick a "random" card that is actually forced by the mentalist's assistant. The participant believes they freely chose any card, while the audience knows it's manipulated.
Subtle artistry
The most skilled mentalists ensure their performances seem completely natural, organic and unrehearsed even though they are carefully planned. They structure their acts, patter and effects to come across as pure luck, coincidence or chance rather than as clever illusions or tricks. This 'invisible' artistry maintains the mystique around mentalist performances.
Performance approaches
Styles of mentalist presentation can vary greatly. In this vein, Penn & Teller explain that "[m]entalism is a genre of magic that exists across a spectrum of morality." In the past, some performers, such as Alexander and Uri Geller, have promoted themselves as genuine psychics.
Some contemporary performers, such as Derren Brown, explain that their results and effects are from using natural skills, including the ability to master magic techniques and showmanship, read body language, and influence audiences with psychological principles, such as suggestion. In this vein, Brown explains that he presents and stages "psychological experiments" through his performances. Mentalist and psychic entertainer Banachek also rejects that he possesses any supernatural or actual psychic powers, having worked with the James Randi Educational Foundation for many years to investigate and debunk fake psychics. He is clear with the public that the effects and experiences he creates through his stage performance are the result of his highly developed performance skills and magic techniques, combined with psychological principles and tactics.
Max Maven often presented his performances as creating interactive mysteries and explorations of the mysterious dimensions of the human mind. He is described as a "mentalist and master magician" as well as a "mystery theorist." Other mentalists and allied performers also promote themselves as "mystery entertainers".
There are mentalists, including Maurice Fogel, Kreskin, Chan Canasta, and David Berglas, who make no specific claims about how effects are achieved and may leave it up to the audience to decide, creating what has been described as "a wonderful sense of ambiguity about whether they possess true psychic ability or not."
Contemporary mentalists often take their shows onto the streets and perform for live, unsuspecting audiences, approaching random members of the public and asking to demonstrate so-called supernatural powers. However, some performers who adopt this method, such as Derren Brown, tell their audience before the trick starts that everything they see is an illusion and that they are not really "having their mind read." This has caused considerable controversy within magic, as some mentalists want their audience to believe that this type of magic is "real", while others think it is morally wrong to lie to a spectator.
Distinction from magicians
Professional mentalists generally do not mix "standard" magic tricks with their mental feats. Doing so associates mentalism too closely with the theatrical trickery employed by stage magicians. Many mentalists claim not to be magicians at all, arguing that it is a different art form altogether. The argument is that mentalism invokes belief and imagination that, when presented properly, may allow the audience to interpret a given effect as "real", or may at least provide enough ambiguity that it is unclear whether such a feat is actually achievable. This lack of certainty about the limits of what is real may lead individuals in an audience to reach different conclusions and beliefs about mentalist performers' claims – be they about their various so-called psychic abilities, photographic memory, being a "human calculator", power of suggestion, NLP, or other skills. In this way, mentalism may play on the senses and a spectator's perception or understanding of reality in a different way than conjuring techniques utilized in stage magic.
Magicians often ask the audience to suspend their disbelief, ignore natural laws, and allow their imagination to play with the various tricks they present. They admit that they are tricksters from the outset, and they know that the audience understands that everything is an illusion. Everyone knows that the magician cannot really achieve the impossible feats shown, such as sawing a person in half and putting them back together without injury, but that level of certainty does not generally exist among the mentalist's audience. Still, some mentalists believe it is unethical to portray their powers as real, adopting the same presentation philosophy as most magicians. These mentalists are honest about their deceptions, with some referring to this as "theatrical mentalism".
However, some magicians do still mix mentally-themed performance with magic illusions. For example, a mind-reading stunt might also involve the magical transposition of two different objects. Such hybrid feats of magic are often called mental magic by performers. Magicians who routinely mix magic with mental magic include David Copperfield, David Blaine, The Amazing Kreskin, and Dynamo.
Notable mentalists
Lior Suchard
The Amazing Kreskin
Uri Geller
Joseph Dunninger
Derren Brown
Alexander
Theodore Annemann
Banachek
Keith Barry
Guy Bavli
David Berglas
Nixon Pulladan
Paul Brook
Akshay Laxman
Chan Canasta
Bob Cassidy
The Clairvoyants
Corinda
Anna Eva Fay
Glenn Falkenstein
Maurice Fogel
Haim Goldenberg
Burling Hull
Al Koran
Nina Kulagina
Max Maven
Gerry McCambridge
Alexander J. McIvor-Tyndall
Wolf Messing
Alain Nu
Marc Paul
Richard Osterlind
The Piddingtons
Oz Pearlman
Princess Mysteria
Marc Salem
The Zancigs
Historical figures
Mentalism techniques have, on occasion, been allegedly used outside the entertainment industry to influence the actions of prominent people for personal and/or political gain. Famous examples of accused practitioners include:
Erik Jan Hanussen, alleged to have influenced Adolf Hitler
Grigori Rasputin, alleged to have influenced Tsaritsa Alexandra
Wolf Messing, alleged to have influenced Joseph Stalin
Count Alessandro di Cagliostro, accused of influencing members of the French aristocracy in the Affair of the Diamond Necklace
In his preface to Upton Sinclair's 1930 book on telepathy, Mental Radio, Albert Einstein supported his friend's endeavor to test the abilities of purported psychics and skeptically suggested: "So if somehow the facts here set forth rest not upon telepathy, but upon some unconscious hypnotic influence from person to person, this also would be of high psychological interest." In this way, Einstein alluded to techniques of modern mentalism.
See also
Cold reading
Memory sport
Mnemonist
Scientific skepticism
Thirteen Steps To Mentalism
The Mentalist
Muscle memory
References
Further reading
H. J. Burlingame (1891). "Mind-Readers and Their Tricks". In Leaves from Conjurers' Scrap Books: Or, Modern Magicians and Their Works. Chicago: Donohue, Henneberry & Co. pp. 108–127.
Derren Brown (2007). Tricks of the Mind. United Kingdom: Transworld Press.
Steve Drury (2016). Beyond Knowledge. Drury.
Max Maven (1992). Max Maven's Book of Fortunetelling. Prentice Hall General, 1st edition.
William V. Rauscher (2002). Mind Readers: Masters of Deception. Mystic Light Press.
Barry H. Wiley (2012). The Thought Reader Craze: Victorian Science at the Enchanted Boundary. McFarland.
Judgement
Judgement (or judgment; in a legal context, known as adjudication) is the evaluation of given circumstances to make a decision. Judgement is also the ability to make considered decisions. The term has at least five distinct uses.
Aristotle suggested thinking of the opposite of each use of a term, if one exists, to help determine whether the uses are in fact different. Some opposites help demonstrate that the uses are actually distinct:
Cognitive psychology: In cognitive psychology (and related fields like experimental philosophy, social psychology, behavioral economics, or experimental economics), judgement is part of a set of cognitive processes by which individuals reason, make decisions, and form beliefs and opinions (collectively, judgement and decision making, abbreviated JDM). This involves evaluating information, weighing evidence, making choices, and coming to conclusions. Judgements are often influenced by cognitive biases, heuristics, prior experience, social context, abilities (e.g., numeracy, probabilistic thinking), and psychological traits (e.g., a tendency toward analytical reasoning). In research, the Society for Judgment and Decision Making is an international academic society dedicated to the topic; it publishes the peer-reviewed journal Judgment and Decision Making.
Informal: Opinions expressed as facts.
Informal in psychology: Used in reference to the quality of the cognitive faculties and adjudicational capabilities of particular individuals, typically called wisdom or discernment. Opposite terms include foolishness or indiscretion.
Formal: The mental act of affirming or denying one statement or another through comparison. Judgements are communicated to others using agreed-upon terms, in the form of words or algebraic symbols, as meanings to form propositions relating the terms; their further asserted meanings "of relation" are interpreted by those trying to understand the judgement.
Legal: Used in the context of a legal trial to refer to a final finding, statement, or ruling, based on a considered weighing of evidence, called "adjudication". Opposites could be suspension or deferment of adjudication. See Judgment (law) for further explanation.
Additionally, judgement can mean personality judgment; a psychological phenomenon in which a person forms specific opinions of other people.
Formal judgement
One may use the power or faculty of judgement to render judgements when seeking to understand ideas and the things they represent, by means of ratiocination, exercising good or poor discernment. Each of these uses of the word judgement has a different sense, corresponding to the triad of mental power, act, and habit.
Whether habits can be classified or studied scientifically, and whether there is such a thing as human nature, are ongoing controversies.
Judging power or faculty
Aristotle observed that our power to judge takes two forms: making assertions and thinking about definitions. He defined these powers in distinctive terms. Making an assertion as a result of judging can affirm or deny something; it must be either true or false. In a judgement, one affirms a given relationship between two things, or one denies a relationship between two things exists. The kinds of definitions that are judgements are those that are the intersection of two or more ideas rather than those indicated only by usual examples — that is, constitutive definitions.
Later Aristotelians, like Mortimer Adler, questioned whether "definitions of abstraction" that come from merging examples in one's mind are really analytically distinct from judgements. The mind may automatically tend to form a judgement upon having been given such examples.
Distinction of parts
In informal use, words like "judgement" are often used imprecisely, even when keeping them separated by the triad of power, act, and habit.
Aristotle observed that while we interpret propositions drawn from judgements and call them "true" and "false", the objects that the terms try to represent are only "true" or "false"—with respect to the judging act or communicating that judgement—in the sense of "well-chosen" or "ill-chosen".
For example, we might say the proposition "the orange is round" is a true statement because we agree with the underlying judged relation between the objects of the terms, making us believe the statement to be faithful to reality. However, the object of the term "orange" is not a relation that can be judged true or false; the name taken separately as a term merely represents something brought to our attention, correctly or otherwise, for the sake of the judgement, with no further evaluation possible.
Or one might see "2 + 2 = 4" and call this statement derived from an arithmetical judgement true, but one would most likely agree that the objects of the number terms "2" and "4" are by themselves neither true nor false.
As a further example, consider the language of the math problem: "express composite number n in terms of prime factors". Once a composite number is separated into prime numbers as the objects of the assigned terms of the problem, one can see why they are called terms: their objects are the final components that arise at the point of certain judgements, as in the case of a "judgement of separation". Judgements of this type must terminate, because a point is reached where no further "judgements of reduction" of the required kind (in this case, non-unity integers dividing integers into non-unity integer quotients) can occur.
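For illustration only (the sketch below is not part of the original example, and the function name and choice of number are arbitrary), the terminating character of these judgements of reduction can be seen in a short Python routine that separates a composite number into prime factors:

# Separate a composite number into prime factors by repeated
# "reduction": divide out the smallest non-unity divisor until
# no further non-unity integer quotient can be produced.
def prime_factors(n):
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:   # a further reduction is still possible
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:                     # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]; the process terminates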
Judgement in religion
Christianity – Jesus warned about judging others in the Sermon on the Mount: "Do not judge, so that you may not be judged."
The Last Judgement is a significant concept in the Abrahamic religions (Judaism, Christianity, and Islam), and also found in the Frashokereti of Zoroastrianism.
See also
Judgment in Christianity
Mens sana in corpore sano
Mens sana in corpore sano is a Latin phrase, usually translated as "a healthy mind in a healthy body". The phrase is widely used in sporting and educational contexts to express the idea that physical exercise is an important or essential part of mental and psychological well-being.
History
The phrase comes from Satire X of the Roman poet Juvenal (10.356), where it is the first in a list of what is desirable in life: "orandum est ut sit mens sana in corpore sano" ("You should pray for a healthy mind in a healthy body").
Traditional commentators believe that Juvenal's intention was to teach his fellow Roman citizens that, in the main, their prayers for such things as long life are misguided, and that the gods had provided man with virtues, which he then lists for them.
Over time and separated from its context, the phrase has come to have a range of meanings. It can be construed to mean that only a healthy mind can lead to a healthy body, or equally that only a healthy body can produce or sustain a healthy mind. Its most general usage is to express the hierarchy of needs, with physical and mental health at the root.
An earlier, similar saying is attributed to the 6th-century BC Greek pre-Socratic philosopher Thales of Miletus.
Usages
Later usages
John Locke (1632–1704) uses the phrase in his book Some Thoughts Concerning Education, 1693.
Heinrich von Treitschke used the phrase in his work titled The Army to highlight a sound principle of his German nationalist doctrine; the work echoes the principles of late nineteenth-century Prussian society.
Its first use in an athletic context appears to have been by John Hulley in December 1861. In 1862, he chose it as the motto of the Liverpool Athletic Club and Liverpool Olympic Games.
The leader of the Public Health Council of the Netherlands during the Second World War used the phrase as a goal for public health care.
Slavic Sokol movement (f. 1862)
Usage as the motto of athletic clubs:
Liverpool Athletic Club
Paraná Clube
Gimnasia y Esgrima de Buenos Aires
Club de Gimnasia y Esgrima La Plata
Georgetown Hoyas
R.S.C. Anderlecht
C.D. Santa Clara
Associação de Educação Física e Desportiva
The American Turners organization and its local affiliates, such as the Los Angeles Turners
Carlton Football Club
Asociacion Atletica Argentinos Juniors
The Israeli Institute of Technology athletics teams
Mens Sana Basket
Beale Gaelic Football Club from County Kerry
Torrens Rowing Club
Sydney Rowing Club
Usage as the motto of military institutions:
Royal Marines physical training instructors (PTI).
Riverside Military Academy in Gainesville, Georgia
Hargrave Military Academy in Chatham, Virginia
Army Physical Training Corps (APTC)
PERI (Physical Education & Recreation Instructors), which is part of the Canadian Military
New Zealand Defence Force Physical Training Instructors.
Usage as the motto of educational institutions:
East Brisbane State School, Queensland, Australia
Windham High School (Ohio)
Hiranandani Foundation School, Mumbai, India
Teacher's College of Columbia University has this phrase engraved on its Horace Mann hall, on 120th Street in New York City
University College London Men's Rugby Football Club, based in Bloomsbury, London
Grant Medical College and Sir J.J. Hospital, Mumbai
Widener University and the State University of New York at Buffalo
The phrase appears in stone on the western facade of the School of Public Health at Indiana University in Bloomington, Indiana
The phrase appears in stone above the entranceway to the Athletic Center at Mount Allison University in Sackville, New Brunswick
Albert Schweitzer Pastoral Medicine Institute
Dhaka Physical Education College in Dhaka, Bangladesh
Sparta High School in Sparta, New Jersey
Charleston Female Seminary
Detroit Country Day School in Beverly Hills, Michigan
Erskine Academy in South China, Maine
Roger Bacon High School, St. Bernard, Ohio
Bjelke-Petersen School of Physical Culture, Australia
Bridgewater Junior Senior High School in Bridgewater, Nova Scotia
Kongsbakken videregående skole in Tromsø, Norway
Lakefield College School in Lakefield, Canada
The Polish sports association Sokół before World War I, in Galicia (then part of Austria)
The Internado Nacional Barros Arana in Santiago, Chile.
Used as a line in the school song of Bangor Grammar School, in Bangor, County Down, Northern Ireland.
Used as motto for Lundsbergs skola, an elite school in Sweden.
Used as motto for Foxcroft School, an all-girls' boarding school in Middleburg, Virginia.
Westholme School, an independent school set on the edge of the countryside of Blackburn, England
Loyola High School in Montreal, Quebec, Canada
Winsor School in Boston, Massachusetts uses the English translation as their motto.
Usage in other cases:
The phrase was a favorite of Harry S. Truman, the 33rd President of the United States.
The sneaker and sports equipment manufacturer Asics takes its name from an acronym of a variant: "anima sana in corpore sano" 'a healthy soul in a healthy body'.
Mensa, a high-IQ society, derives its name both from the Latin word for table, "mensa" and a pun on the phrase "mens sana".
Sound Body Sound Mind, a United States 501(c)(3) nonprofit organization that promotes self-confidence and healthy lifestyle choices among children.
A variant, in Danish En sund sjæl i et sundt legeme was the motto of Captain J.P. Jespersen, a Danish gymnastics educator/instructor.
Nikola Tesla, in his work titled "The Problem of Increasing Human Energy", supported the idea, recommending moderate exercise and warning against overemphasis on physical fitness.
"In corpore sano" is a song by the Serbian singer-songwriter Konstrakta, with which she represented Serbia in the Eurovision Song Contest 2022, finishing fifth.
Victoria Wood used it in the comedic parody "Mens Sana In Thingummy Doodah".
See also
Mind-body dualism
References
"Mens Sana in Corpore Sano? Body and Mind in Ancient Greece" by David C. Young. The International Journal of the History of Sport, Vol. 22, No. 1 (January 2005), pp. 22–41.
Parasocial interaction
Parasocial interaction (PSI) refers to a kind of psychological relationship experienced by an audience in their mediated encounters with performers in the mass media, particularly on television and on online platforms. Viewers or listeners come to consider media personalities as friends, despite having no or limited interactions with them. PSI is described as an illusory experience, such that media audiences interact with personas (e.g., talk show hosts, celebrities, fictional characters, social media influencers) as if they are engaged in a reciprocal relationship with them. The term was coined by Donald Horton and Richard Wohl in 1956.
A parasocial interaction, an exposure that garners interest in a persona, becomes a parasocial relationship after repeated exposure to the media persona causes the media user to develop illusions of intimacy, friendship, and identification. Positive information learned about the media persona results in increased attraction, and the relationship progresses. Parasocial relationships are enhanced due to trust and self-disclosure provided by the media persona.
Media users are loyal and feel directly connected to the persona, much as they are connected to their close friends, by observing and interpreting their appearance, gestures, voice, conversation, and conduct. Media personas have a significant amount of influence over media users, positive or negative, informing the way that they perceive certain topics or even their purchasing habits. Studies involving longitudinal effects of parasocial interactions on children are still relatively new, according to developmental psychologist Sandra L. Calvert.
Social media introduces additional opportunities for parasocial relationships to intensify because it provides more opportunities for intimate, reciprocal, and frequent interactions between the user and persona. These virtual interactions may involve commenting, following, liking, or direct messaging. The consistency in which the persona appears could also lead to a more intimate perception in the eyes of the user.
Evolution of the term
Parasocial interaction was first described from the perspective of media and communication studies. In 1956, Horton and Wohl explored the different interactions between mass media users and media figures and determined the existence of a parasocial relationship (PSR), where the user acts as though they are involved in a typical social relationship. However, parasocial interaction existed before mass media, when a person would establish a bond with political figures, gods or even spirits.
Since then, the term has been adopted by psychologists in furthering their studies of the social relationships that emerge between consumers of mass media and the figures they see represented there. Horton and Wohl suggested that for most people, parasocial interactions with personae complement their current social interactions, while some individuals exhibit extreme parasociality, substituting parasocial interactions for actual social interactions. Perse and Rubin (1989) contested this view, finding that parasocial interactions occurred as a natural byproduct of time spent with media figures.
Although the concept originated in psychology, extensive research on PSI has been carried out in mass communication, with manifold results. Psychologists began to show interest in the concept in the 1980s, and researchers began to develop it extensively within the field of communication science. The nature of these relationships raises important questions for social psychology that are problematic for existing theories in the field. The concept of parasocial interaction and detailed examination of the behavioral phenomena it seeks to explain have considerable potential for developing psychological theory.
The conceptual development of parasocial interaction (PSI) and parasocial relationship (PSR) are interpreted and employed in different ways in various literatures. When it is applied in the use-and-gratifications (U&G) approaches, the two concepts are typically treated interchangeably, with regard primarily to a special type of "interpersonal involvement" with media figures that includes different phenomena such as interaction and identification. In contrast to the U&G approaches, research domains such as media psychology and semiotics argue for a clear distinction between the terms.
PSI specifically means the "one-sided process of media person perception during media exposure", whereas PSR stands for "a cross-situational relationship that a viewer or user holds to a media person, which includes specific cognitive and affective components". Schmid & Klimmt (2011) further argue that PSI and PSR are progressive states such that what begins as a PSI has the potential to become a PSR. Dibble, Hartmann and Rosaen (2016) suggest that a PSR can develop without a PSI occurring, such as when the characters do not make a direct connection with the viewer.
In sum, the terms, definitions, and models explicating PSI and PSR differ across scientific backgrounds and traditions. For example, Dibble et al. (2016) argued that PSI and PSR are often "conflated conceptually and methodologically". To test their assertion, they tested for parasocial indicators with two different scales used for parasocial inquiry: the traditional PSI-Scale and the newer EPSI-Scale, and compared results between the two.
The traditional PSI-Scale, along with modified forms of it, is the most widely used measure of PSI assessment. However, Dibble et al. (2016) found evidence supporting their hypothesis that the newer EPSI-Scale was a better measure of PSIs and that the traditional scale merely revealed participants' liking of characters. Because of varying conceptions, it is difficult for researchers to reach a consensus.
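As a rough illustration of how such self-report instruments are typically scored (the six-item questionnaire, the 5-point response format, and the function name below are assumptions made for the example, not the actual content of the PSI-Scale or EPSI-Scale), a Python sketch might average a respondent's Likert ratings, flipping any reverse-worded items:

# Score one respondent's answers to a hypothetical parasocial
# interaction questionnaire. Ratings run from 1 to 5; a higher
# average is read as stronger parasocial interaction.
def psi_score(responses, reverse_coded=frozenset()):
    adjusted = [
        (6 - r) if i in reverse_coded else r   # flip reverse-worded items
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

# Made-up answers to six items, where the last item (index 5)
# is negatively worded and therefore reverse-coded.
print(psi_score([4, 5, 3, 4, 2, 1], reverse_coded={5}))  # about 3.83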
Scientific research
Studying social interaction, and by extension parasocial interaction (PSI), follows a social cognitive approach to defining individual cognitive activity. Accordingly, there are similar psychological processes at work in both parasocial relationships and face-to-face interactions. However, the parasocial relationship does not follow the process of the typical long-term relationship. The media user remains a stranger to the media figure, whereas this "strangeness" would gradually evaporate in typical social interaction.
Many parasocial relationships fulfill the needs of typical social interaction, but potentially reward insecurity. Those with a dismissive attachment style may find one-sided interaction preferable to dealing with others, while those who experience anxiety in typical interactions may find comfort in the consistent presence of celebrities' lives. Additionally, whatever a celebrity or online figure does can provoke emotional responses from their audiences, some of whom even suffer negative feelings because of it.
Research on PSI gained significant interest after the advent of the uses and gratifications approach to mass communication research in the early 1970s. An early study of soap operas identified two essential functions of PSI: companionship and personal identity. Rosengren and Windahl further argued that PSI could be identified in the process of viewers' interacting with media figures, but such interaction did not produce identification. This is an important distinction, because identification has a longer history than PSI. Subsequent research has indicated that PSI is evident when identification is not present.
During the last several decades, PSI has been documented in the research analyzing the relationship between audience members and television newscasters, TV and radio talk-show hosts, sitcom characters and other TV celebrities or performers. Research has also been conducted on how a favorable PSI can be facilitated between celebrities and their followers on social media, specifically through the interactions followers have with the celebrities posts on social media. Although different PSI scales have been employed in these studies, PSI was clearly documented with each persona.
Noticing the importance of media in the area of psychological research, academic David Giles asserted in his 2002 paper that there is a need for PSI research to move away from the field of mass communication and into the field of psychology. Studies in this area commonly focus on a key psychological issue for PSI: the similarity between parasocial relations and ordinary social relations. For example, academic John Turner adopted the idea of homophily (i.e., the tendency for friendships to form between people that are alike in some designated respect) to examine the interpersonal and psychological predictors of parasocial interaction with television performers. The author found that one dimension of homophily (i.e., attitude) was the best predictor of parasocial interaction.
Hataway indicated that although there seems to be a prevailing tendency to analyze PSI in the domain of social psychology, a solid connection to psychological theory and developmental theory has been missing. Hataway further suggested that more psychological research is needed in order to develop parasocial theory. Specific issues cited were "how parasocial relationships are derived from parasocial interaction and the way those relationships further influence media usage as well as a social construction of reality, and how parasocial interaction is cognitively produced". He saw it as a weakness that the majority of PSI research has been conducted by mass communication scholars, and called for psychologists to refer to Giles's 2002 paper for directions of study.
Another important consideration for the study of PSI at a psychological level is that a form of PSI exists even in interpersonal social situations. People may use fundamentally the same cognitive processes in both interpersonal and mediated communication. Giles's 2002 paper also suggested that the element of direct interaction that occurs in mediated settings, such as talking to a presenter or celebrity guest, may continue in imagined social interaction with a cartoon character or a fictional protagonist in the mind. This may ultimately constitute a new way of interpreting social interaction. A further consideration is the application of social cognitive approaches at the individual level. It is traditionally accepted that this approach is inadequate by itself for the study of relationships.
However, a growing body of literature on the role of imagination in social interaction suggests that some imaginative activity (e.g., imaginary friends) may be an influential factor in the outcome of real social interaction. PSI is nowadays regarded as an extension of normal social cognition, specifically in terms of the use of the imagination. Current PSI literature commonly acknowledges that the psychological processes acting at the individual level parallel those used in ordinary social activity and relationship building.
Psychological implications during childhood
The formation of parasocial relationships occurs frequently among adolescents, who often create one-sided and unreciprocated bonds with celebrities they encounter in the media. Parasocial interaction is best explored across a lifespan, which explains the growing focus on parasocial interaction in children and adolescents. Studies have found differences in how young girls and boys engage in parasocial behaviour: adolescent boys tend to favour male athletes, whereas adolescent girls prefer celebrities such as musicians or actresses.
Sex-role stereotyping is more common in children ages 5–6, but decreases in children age 10–11. Existing literature intimates that attachments, parasocial or otherwise, established in early childhood, are highly influential on relationships created later in life. Many studies have focused on adolescent girls because they are more likely to form a strong bond with a media figure and be influenced in terms of lifestyle choices.
Positive consequences
Identity formation
The primary effect is that of learning: consistent with Bandura's (1986) social cognitive theory, much evidence shows that children learn from positive and negative televised role models, and acquire norms and standards for conduct through media outlets such as television and video games. This is supported by a study by Cynthia Hoffner with children aged 7–12, which showed that the gender of children's favorite televised characters was strongly correlated to the gender of the children. The research showed "wishful identification" with parasocial relationships, namely, that boys preferred intelligence, while girls preferred attractiveness when picking favorite characters. These alternatives are both enhanced and mitigated by their separation from reality.
Parasocial interactions are particularly appealing to adolescents in the throes of identity formation and increasing autonomy from parents because these relationships provide idealized figures with whom the adolescent can envision total acceptance. The lack of actual contact with these idealized figures can offer positive social interactions without risk of rejection or consequent feelings of unworthiness. One cannot know everything about a media figure or icon, allowing adolescents to attach fantasized attributes onto these figures in order to meet their own specific wants or needs. On the other hand, entities far removed from reality tend to be less influential on children.
A study by Rosaen and Dibble examined correlation between realism of favorite television character and strength of parasocial relationships. Results showed a positive correlation between social realism (how realistic the character is) and strength of parasocial relationships. Results also show age-related differences among children. Older children tended to prefer more realistic characters, while younger children generally had more powerful parasocial relationships with any character. Age did not impact the correlation between social realism and strength of parasocial interaction, which suggests that more real characters are grounds for more powerful parasocial relationships in children of all ages.
Learning through the media
Parasocial relationships may be formed during an individual's early childhood. In particular, toddlers have a tendency to form parasocial connections with characters that they are exposed to from TV shows and film. Children's television shows, such as Dora the Explorer, involve the show's characters directly addressing the audience. The result is young children participating in "pseudo-conversations" with the on-screen characters. The process of engagement and interaction lead children to creating a one-sided bond where they believe that they have formed a relationship with these fictional characters, viewing them as friends. Exposure to this type of media often leads to opportunities for educating the child. Research has shown that children are more capable of grasping a concept if a character that they are parasocially connected to is the one to present it to them.
The ability to learn from parasocial relationships is directly correlated to the strength of the relationship, as has been shown in work by Sandra L. Calvert and colleagues. In a 2011 study by Lauricella, Gola, and Calvert, eight 21-month-old American infants were taught seriation sequencing (placing objects in the correct order—in this study, nesting a set of cups of various sizes) by one of two characters. One character, Elmo, is iconic in American culture and therefore socially meaningful, and the other, DoDo, although popular with children in Taiwan, is less well known in American media. Children were better able to learn from the socially meaningful character (Elmo) than from the character who was less easily recognized (DoDo).
Children could become better able to learn from less socially-relevant characters such as DoDo, by developing a parasocial relationship with that character. Accordingly, after children were given DoDo toys to play with, their ability to learn from that character increased. In a later study, this effect was found to be greatest when children showed stronger parasocial relationships: Children's success on the seriation task, and therefore their ability to learn from a less familiar character, was greatest for children who exhibited more emotional nurturing behaviors toward the DoDo toy during play.
Personalization of a character makes a child more likely to nurture the character, and thus more likely to form a parasocial relationship that would improve learning from videos featuring the character. In place of DoDo and Elmo, a 2014 study instead gave children Scout and Violet dolls. These interactive plush toy dogs can be programmed to say a child's name and have particular favorites (i.e., a favorite food, color, and song). 18-month-old children were given either personalized toys (matched for gender, programmed to say the child's name, and programmed to have the same favorites as the child) or non-personalized toys (the opposite gender, programmed to call the children "Pal" and have random favorites).
At the end of the study, children who had received personalized dolls were better able to learn from their characters than were children who had received non-personalized toys. Children also nurtured personalized toys more than non-personalized toys. It seems that perceived similarities increase children's interest and investment in the characters, which motivates the development of parasocial relationships and helps improve later screen-based learning.
Negative consequences
In the past two decades, people have become increasingly interested in the potential negative impacts media has on people's behavior and cognition. Many researchers have begun to look more closely at how people's relationships with various media outlets affect behavior, self-perception and attachment styles, and specifically in regards to creating parasocial relationships.
Body image
Further research has examined these relationships with regard to body image and self-perception. Interest in this more narrow area of research has increased as body image issues have become more prevalent in today's society.
A study was conducted to examine the relationship between media exposure and adolescents' body image. Specifically, researchers looked at parasocial relationships and the different motivations for self-comparison with a character. This study surveyed 391 7th and 8th grade students and found that media exposure negatively predicted body image. In addition to the direct negative impact, the study indicated that parasocial relationships with favorite characters, motivations to self-compare, and engagement in social comparison with characters amplified the negative effects on kids' body images. Furthermore, the researchers found that making social comparisons with favorite characters distorted actual, or ideal, body image and self-perception. Studies have been done exploring these effects across gender.
A study examined the parasocial relationships between men and superheroes; the study looked at muscular versus non-muscular superheroes and men who either did or did not develop a one-sided psychological bond with a superhero character. The results from this study indicated a significant impact on body image, particularly when exposed to muscular superhero characters. Research conducted by Ariana F. Young, Shira Gabriel, and Jordan L. Hollar in 2013 showed that men who did not form a parasocial relationship with a muscular superhero had poor self-perception and felt negative about their bodies after exposure to the muscular character. However, if the men had a PSR with the superhero, the negative effects on body satisfaction were eliminated.
The increasing presence of beauty filters on social media has also played a large role in users' body image. Within the first year that filters were available on Facebook, over 400,000 creators released over 1.2 million filters. These filters were consistently seen by billions of viewers, as more than 150 creators surpassed 1 billion views on their content. Such filters edit the appearance of the creator, which can present a false reality to viewers.
Aggression
Further studies have looked into parasocial relationships and more specifically at the impacts on violent and aggressive behavior. A study done by Keren Eyal and Alan M. Rubin examined aggressive and violent television characters and the potential negative impacts they may have on viewers. The study was based on social cognitive theory and looked at trait aggression in viewers and identification and parasocial interaction with aggressive characters. The researchers measured trait aggression in each of the participants and compared that to the level of identification with aggressive characters. The study found that more aggressive viewers were more likely to identify with aggressive characters and further develop parasocial relationships with the aggressive characters.
Parasocial interaction has been linked to psychological attachment theory and its consequences have seen the same dramatic effects as real relationship breakups. In considering the relationship between parasocial interaction and attachment styles, Jonathan Cohen found that individuals who were more anxious media consumers tended to be more invested in parasocial relationships.
Parasocial interaction involves no "normal" social interaction; it is a very one-sided relationship. The observing side has no direct control over the actions of the side it observes, and finds it very difficult to contact and influence it.
Parasocial breakup
While much research focuses on the formation and maintenance of parasocial relationships, other research has begun to focus on what happens when a parasocial relationship is dissolved. Eyal and Cohen, who examined responses to the end of the television series Friends, define parasocial breakup as "a situation where a character with whom a viewer has developed a PSR goes off the air". The distress that media consumers experienced after a parasocial breakup was quite similar to that of a social relationship. However, the emotional distress experienced after the parasocial breakup was weaker than that of the real life interpersonal relationship.
Lather and Moyer-Guse also considered the concept of parasocial breakup, but in a more temporary sense. While the study focused on parasocial breakups as a result of the writers' strike from 2007 to 2008, the researchers found that media consumers still experienced different levels of emotional distress. This study, like previous studies, showed that parasocial relationships operate very similarly to real-life relationships.
Gerace examined fans' reactions to the end of the long-running Australian television series Neighbours. Fans reported feelings of considerable grief and perceptions of a parasocial breakup with their favorite character. Fans who formed stronger parasocial relationships with their favorite character, self-identified strongly as a fan of the series, and viewed the series for motives such as entertainment and exposure to different lifestyles reported greater grief and distress at the end of the series. In this study, parasocial bonding with a favorite character involved empathizing with their on-screen experiences and imagining what they were thinking and feeling.
On the internet
In 1998, John Eighmey, from Iowa State University, and Lola McCord, from the University of Alabama, published a study titled "Adding Value in the Information Age: Uses and Gratifications of Sites on the World Wide Web." In the study, they observed that the presence of parasocial relationships constituted an important determinant of website visitation rates. "It appears," the study states, "that websites projecting a strong sense of personality may also encourage the development of a kind of parasocial relationship with website visitors."
In 1999, John Hoerner, from the University of Alabama, published a study titled "Scaling the Web: A Parasocial Interaction Scale for World Wide Web Sites", in which he proposed a method for measuring the effects of parasocial interaction on the Internet. The study explained that websites may feature "personae" that act as hosts to the sites' visitors in order to generate public interest.
Personae, in some cases, are nothing more than the online representations of actual people, often prominent public figures, but sometimes, according to the study, they are the fictional creations of the sites' webmasters. Personae "take on many of the characteristics of a [real-life] companion, including regular and frequent appearances, a sense of immediacy...and the feeling of a face-to-face meeting." The study makes the point that, even when no such personae have been created, parasocial relationships might still develop. Webmasters might foster parasocial interactions through a conversational writing style, extensive character development and opportunities for email exchange with the website's persona.
Hoerner used the Parasocial Interaction (PSI) scale, developed by Rubin, Perse, and Powell in 1985, and modified it to more accurately assess parasocial interactions on the Internet. He used the scale to gauge participants' reactions to a number of different websites and, more generally, to determine whether or not parasocial interaction theory could be linked to Internet use. The study concluded, first, that parasocial interaction is not dependent on the presence of a traditional persona on a website. Data showed that websites described as having "strong personae" did not attract significantly more hits than other websites selected by the study conductors. "The literal, mediated personality from the newscast or soap opera of the past [around which the original PSI-scale was framed] is gone. The design metaphor, flow of the web experience, and styles of textual and graphic presentations of the information all become elements of a website persona and encourage parasocial interaction by the visitor/user with that persona."
Social media
Though most literature has focused on parasocial interaction as a television and film phenomenon, new technologies, namely the Internet, have necessitated a closer look at such interactions. The applications of PSI to computer-mediated environments are continuously documented in literature from the early 2000s and 2010s. Many researchers concluded that, just as parasocial relationships are present in television and radio, they are also present in online environments such as blogs and other social networking sites. Through an exploration of followers on politicians' blogs, academics Kjerstin Thorson and Shelly Rodgers found that parasocial interaction with a politician influences people's opinions about that politician and encourages them to vote for him or her.
Social media is designed to be a new channel through which parasocial interaction/relationship can be formed. Research has shown that interacting with individuals through blogs and social media such as Twitter can influence the perceptions of those individuals. As Internet users become more active on social media platforms such as Facebook and Twitter, followers often feel more engaged with them, making the parasocial relationships stronger.
Social media is defined as "Internet-based applications that build on the ideological and technological foundations of Web 2.0 and that allow the creation and exchange of user-generated content". While the usage of social media for personal means is common, the use of social media by celebrities has given them an opportunity to have a larger platform for personal causes or brand promotion by facilitating word-of-mouth.
Social media networks inherit at least one key attribute from the Internet, in that they offer open accessibility for all users. Philip Drake and Andy Miah argue that the Internet, and therefore social networks and blogs, downsize the gatekeeping processes that exist in other mass media forms. They further state that this means that online information can spread unfiltered and thus does not rest on strict framework conditions such as those on television or in newspapers. This, however, remains subject to an ongoing debate within research.
Through presence on social media platforms, stars and celebrities attempt on the one hand to participate in the production of their image; on the other hand, they must remain present in these media in order to stay on the media's and consequently on the audience's agenda. According to German scholars such as Gregor Daschmann and Holger Schramm, celebrities all have to compete for the public's (limited) attention. In such a competitive environment a famous person must therefore remain present on all accessible media channels.
Twitter
Twitter is one of the most popular social media platforms and a common choice for celebrities who want to chat with their fans without divulging personal access information. In 2013, Stever and Lawson argued that Twitter can be used to learn about parasocial interaction, and their analysis provided a first step in that endeavor. The study included a sample of 12 entertainment media celebrities, 6 males and 6 females, all taken from 2009 to 2012 Twitter feeds.
The results showed that, although fans have limited means of communicating with celebrities via Twitter, the relationship is still parasocial, even if a fan might receive the occasional reply from the celebrity. Twitter can provide a direct connection between followers and celebrities or influencers that gives access to everyday information. For most fans it is an entertaining medium, since it enables them to feel part of a life that they enjoy.
The more followers one has on Twitter, the greater perceived social influence one has. This is particularly because tweets are broadcast to every follower, who may then retweet these posts to their own followers, which are then rebroadcast to thousands of other Twitter members. Seen as the equivalent of a movie earning a box-office hit or a single track reaching the top of the Billboard charts, the phenomenon of "trending" (i.e., words tagged at a higher rate than others on a social media platform) on Twitter grants users the ability to earn influence on the platform. Twitter, alongside other social media websites, can be utilized by its users as a form of gaining social capital.
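As a toy illustration of the "trending" idea described above (the tag names, counts, and scoring rule are invented for the example; the platform's actual algorithm is not being described here), a tag can be ranked by how far its recent usage rises above its usual share of activity:

from collections import Counter

# Invented counts: long-run baseline usage versus the last hour.
baseline = Counter({"#monday": 900, "#news": 800, "#cats": 750})
last_hour = Counter({"#monday": 30, "#news": 25, "#cats": 95})

def trending(last_hour, baseline, top_n=3):
    total_base = sum(baseline.values())
    scores = {
        tag: count / (baseline[tag] / total_base)  # recent count vs. usual share
        for tag, count in last_hour.items()
        if baseline[tag] > 0
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(trending(last_hour, baseline))  # "#cats" spikes above its baseline rate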
Online video and livestreaming
Academics at the 2022 Hawaii International Conference on System Sciences referred to interactions on livestreaming services as "cyber-social relations"; they stated that these interactions "take a middle position between" social relations (there is no spatial proximity and no bodily contact) and parasocial relations (as there is reciprocity and temporal proximity).
YouTube
YouTube, a social media platform dedicated to sharing video content produced by its users, has grown in popularity to become a form of media likened to television for the current generation. As content creators grant insight into their daily lives through the practice of vlogging, viewers form close one-sided relationships with these creators that manifest in comment chains, fan art, and consistent responses to the creator in question. Parasocial interactions and relationships commonly form between creators and their audiences because of the creators' desire to interact with their fanbase through comments or posts. Many creators share "personal" details of their lives, even if there is little authenticity in the polished identity they convey online.
The interaction between viewers and celebrities is not limited to product placement or branding – the viewers could socialize with celebrities or influencers that they might not have any chance to contact in reality. Megan Farokhmanesh, for The Verge, wrote that parasocial relationships "are vital to YouTubers' success, and they are what turns viewers into a loyal community. ... Viewers who feel friendship or intimacy with their favorite creators can also have higher expectations and stronger reactions when those expectations are disappointed. ... Because creators often earn money off their fans through memberships, Patreons, and other cash avenues, there are fans who feel entitled to specific details about the lives of creators or even specific content. ... The divide between creators' lives and their work is a fine line".
In a study conducted by Google in 2017, a reported 40% of millennial YouTube subscribers claimed that their favorite creator "understands them better than their friends." For many viewers, parasocial relationships satisfy the four factors defined by Mark Granovetter's "The Strength of Weak Ties" theory: intimacy is gained through the creator's sharing of personal details; viewers may react to these with emotional intensity; viewers dedicate time to watching the content the creator uploads; and what the creator posts, whether sponsored or not, may make the viewer feel as if they are being offered something in return, like a favor.
Twitch
Twitch, a video livestreaming service with focuses such as video game live streaming, creative content, and "in real life" streams, has also grown in popularity since launching in June 2011. Twitch's platform encourages creators to directly engage with their fans. According to research, a large draw towards the website is the aspect of users directly participating in a livestream through the chat function. In turn, streamers interact with their audience by greeting them by username or addressing their messages in comments.
As noted in one study, this type of interaction forms "a sense of community". Twitch livestreams create a digital "third place", a term coined by Ray Oldenburg describing public and informal gathering spots that are foundational to building a community. This sense of community is further enhanced when users become regular participants of a stream, either by watching live shows frequently or by subscribing to the creator.
Subscribing to a Twitch channel is another way in which viewers participate in a live stream. This is considered a form of digital patronage, where audiences pay money to financially support a creator. Forming what an audience member perceives as a personal relationship with their favourite streamer plays a large role in whether or not they choose to subscribe.
Wired stated that Twitch pioneered "the digital parasocial thing. More specifically, monetizing it on a massive scale". David Finch, in the book Implications and Impacts of eSports on Business and Society, highlighted that streamers on Twitch have many options to monetize their content such as donations through Twitch, channel subscriptions and ad revenue; additionally, Twitch is more associated with livestreaming than YouTube and has "a much higher degree of interaction" between the content creator and the viewer. Finch wrote that "the popularity of Twitch parallels other emerging digital media forms in that it is user-generated, draws on parasocial relationships established online and establishes intimacy in new ways. ... Twitch viewers might similarly regard their time on their favourite Twitch channels as familiar, hilarious and informative encounters with their gaming pals".
Academics Tim Wulf, Frank Schneider, and Stefan Beckert found that parasocial relationships are a key component of a Twitch streamer's success and the audience's enjoyment of Twitch; in particular, Twitch's chat features can foster this relationship. They highlighted that "professional streamers have a personal schedule of streaming times so that users can rely on seeing their friends again—similar to characters of a periodic TV show. Therefore, viewers are able to maintain their relationships to streamers. The stronger bonds between viewers and streamers grow, the more users may root for their favorite streamer's success". The Guardian also highlighted the interactive nature of Twitch and that the "format is extremely good at cultivating community, a virtual hangout spot for its millions of teenage and college-age users".
Twitch streamers have also discussed the negatives associated with these parasocial relationships such as harassment and stalking by fans. Cecilia D'Anastasio, for Kotaku, wrote that "Twitch streamers are like digital-age geisha. They host, they entertain, they listen, they respond. They perform their skill—gaming, in most cases—from behind a thick veneer of familiarity. Maybe it's because they let viewers into their homes, or because the live-streaming format feels candid or because of their unprecedented accessibility, but there's something about being an entertainer on Twitch that blurs the line between viewer and friend. It can be hard to keep a healthy distance from fans. And, for fans, it can occasionally be hard to tell the difference between entertainer and companion".
The Verge and the HuffPost have both specifically highlighted the harassment female Twitch streamers experience. Jesselyn Cook, for HuffPost, wrote that "most all women who earn a living on Twitch know what it's like to have male viewers who, after spending countless hours watching them in real time, develop obsessive feelings of romantic and sexual entitlement. The result is an environment where extreme harassment, rape and death threats, blackmailing, stalking and worse have become regular workplace hazards. Female streamers who spoke to HuffPost said they wish they'd known before joining Twitch that they were also signing up for a torrent of endless, dehumanizing harassment with little to no recourse".
Podcasters
Podcasts, episodic series of spoken-word digital audio files that a user can download to a personal device, are also known for fostering parasocial relationships between podcasters and listeners. As early as 2012, Robert C. MacDougall wrote "The podcast, and particularly the podcast listened to on the move, may be part of an evolution in parasocial phenomena and a fundamentally new part of mediated interpersonal communication. Podcasts enhance the personal feel and all attendant psychodynamic effects of Fessenden's primordial radio show."
Laith Zuraikat wrote on "The Parasocial Nature of the Podcast" in the book Radio's Second Century (2020). Author Wil Williams wrote, "there is a difference between feeling a friendship, a sense of comfort, between yourself and a podcaster, and assuming that friendship to be real. Something that makes podcasters appealing is that they have an everyman quality to them: anyone can make a podcast, and this means that in many genres, podcasters feel more "normal" than creators in other mediums. It's easy to feel like podcasters would be your friend if you ever met: it's likely the listener is of the same socioeconomic status as the podcaster, sharing not only other basic demographics as the podcaster, but also their interests, jokes, and philosophies."
In The Guardian, Rachel Aroesti wrote about how, during the lockdowns necessitated by the COVID-19 pandemic, "podcasters replaced our real friends […] providing companionship that is increasingly difficult to distinguish from the real thing." She wrote how "Podcasts are intimate, with no in-the-room audience to remind you of your own distance."
Mikhaela Nadora, a scholar at Portland State University, wrote that "[Parasocial relationships] with podcast hosts may cultivate the same way real-life relationships do. As social relationships are important to us, with the new self-autonomous and personalized advances in our media and technology landscape, we can have the same intimate relationships with media figures."
Commercial influences
Parasocial interaction (PSI) theory has been used to understand consumers' purchasing behavior in online contexts. With the development of social media such as Facebook, Twitter, Instagram, and YouTube, both companies and consumers have come to use social commerce platforms (SCPs) more frequently. Many studies indicate that, among the various factors affecting consumers' purchase decisions on SCPs, such as the credibility of products, parasocial interaction exerts a stronger influence on users' final decisions. Sokolova and Kefi conducted a study with a large data set (1,209 respondents) drawn from the audiences of four popular influencers in the beauty and fashion sector in France to discover the influence of parasocial interaction and credibility on consumers' purchase intention. Their study found that younger generations value parasocial interaction and their personal attachment to influencers more than credibility.
On social commerce platforms, users tend to build two kinds of parasocial interaction: with other users and with celebrities.
PSI with other users
Certain social media users, called influencers, are active creators of online content, such as personal experiences, ideas, or reviews for targeted audiences. Influencers can become experts, similar to celebrities to some extent, and their posts may promote products and brands and affect potential customers, i.e. their followers. Users on a social commerce platform "meet" other users and influencers through the images, videos, and feedback they share on social media. Over time, after multiple such "meetings", the imagined intimacy deepens, and users deliberately maintain the online friendship, which is a parasocial interaction. Influencers on social media platforms often comment on the products they have tested and promote them to other users, offering their impressions and personal experiences along with images and videos.
Some brands build Instagram influencer marketing strategies to increase followers' buying intention and perceptions of trustworthiness. Within a parasocial relationship, users tend to rely on the images and comments that influencers post about products, which influences consumers' final decisions. Thus, many social commerce merchants exploit this psychological effect, propagating attractive images and positive comments about products to give users a more intuitive shopping experience.
For instance, the popularity of vlogs creates a social space where strangers can share feelings and build intimate relationships. Followers compare their own tastes with those of the celebrities or influencers they watch, and PSI is generated between them. Many merchants pay vloggers to recommend products and persuade their subscribers to purchase them. Vlogs provide a vivid visual experience for viewers, and the audience can perceive a closeness with the vloggers that builds a virtual, almost face-to-face relationship through this medium. The audience may be influenced by those vlogs and vloggers in making their purchasing decisions.
PSI with celebrities
PSI relationships are more readily formed between social media users and celebrities. On social media, celebrities build and strengthen more intimate relationships with consumers and fans. Celebrities' self-disclosure allows fans and audiences to feel connected to them and stimulates the illusion of an in-person relationship. Simultaneously, the context of SCPs, supported by Web 2.0 social media technologies, stimulates users' parasocial interactions with celebrities and experts. Uncertainty reduction theory describes one way this can occur: repeated exposure to an individual gradually reduces the user's level of uncertainty, which increases the user's chances of liking the celebrity.
On some social shopping websites, users can follow celebrities and interact with them, generating an illusory bond between the celebrities and themselves. Repeated exposure to the celebrity gives users a sense of predictability about the celebrity's actions, which engenders a sense of loyalty. PSI thus increases celebrities' social attractiveness and the credibility that customers place in them.
Users who are immersed in celebrity–fan PSI may affirm their loyalty through various activities, including purchasing products endorsed by celebrities. Unlike influencers, celebrities prompt stronger impulse purchases among their fans. Targeted consumers (fans) desire to interact with celebrities rather than passively receiving information from them. By purchasing and supporting celebrity-endorsed products, fans may build a more intimate relationship with celebrities in their imagination.
In a 2014 journal article, Seung-A Annie Jin and Joe Phua described studies testing hypotheses about how the number of followers a celebrity had correlated with the trust imparted to consumers. The studies examined celebrities endorsing a product and the likelihood that consumers would purchase the product after seeing the promotion. Consumers perceived a celebrity with a high number of followers as more physically attractive, trustworthy, and competent.
A high number of followers on the celebrity endorser's profile also significantly increased consumers' intention to build an online friendship with the celebrity. The study found that if a celebrity with a higher number of followers was perceived as more trustworthy, the consumer exhibited significantly higher postexposure product involvement and buying intention as opposed to those who were exposed to a celebrity with a lower follower count.
Merchants on social commerce platforms see great potential in analyzing and applying parasocial interactions to shape consumers' purchase intentions. In addition to influencing fans to purchase products, celebrities can also influence fans to adopt similar conversational styles. Fans, or audience members, in parasocial relationships may "appear to be accommodating to characters' linguistic styles". As fans continue to interact in a parasocial relationship, they may come to mirror the conversational style of the celebrity while communicating on different social media platforms.
PSI with companies
As social media relationships grew between celebrities and influencers, businesses created social media profiles for audience engagement. Fast food restaurants have started comedic Twitter accounts to interact with their customers in a personal way. The companies' Twitter accounts respond to tweets from customers, tell jokes, and engage online in ways that create PSI with consumers. This strategy has proven effective.
A study by Lauren I. Labrecque in 2013 found that customers have higher loyalty intentions and are more likely to provide information for the brand when the brand fosters PSIs. The study also showed that these outcomes were less likely when the consumer felt the responses from the company's social media account were automated. Furthermore, including personal details and behind-the-scenes ideation in interactions with consumers also triggers PSI and has a positive impact.
Another application of parasocial interaction in organizational communication is that a CEO's social media image also contributes to the company's image and reputation. CEOs have therefore paid attention to their communication with customers, employees, and investors, cultivating their public profiles through social media to communicate with stakeholders.
In a 2023 study of the quick-service restaurant (QSR) industry, Banerjee, Sen, and Zahay found that customers' in-store engagement, in the form of their social media usage, can have strong predictive power. Social media posts containing product brand mentions, created by an engaged customer within a store's premises, can trigger parasocial interactions in the form of likes, retweets, and replies, which can in turn lead to increased competitive spillover. Such effects can increase or decrease depending on the competitor density in the area. Combining data from six different databases, the authors show how social media can be leveraged to influence competitive positions in local markets. They caution that seemingly positive customer testimonials posted from within a store can ultimately end up helping competitor brands, so store managers must be diligent in monitoring customer social media posts.
Livestreaming
According to Ko and Chen (2020), "Live streaming was originally used in broadcasting sporting events or news issues on TV. As the mobile Internet gets more and more popular, now the netizen and small companies can broadcast themselves via the use of live-streaming APP". Many platforms, such as Taobao.com and Facebook, have developed and launched live stream functions. For online retailers like Taobao.com or Tmall.com, users can follow and interact with hosts and celebrities as if they were friends with them.
"China had up to 433 million live streaming viewers in August 2019 [CNNIC 2019]. The use of live streaming to promote brands and products is "exploding" in the E-commerce field in China [Aliresearch 2020]. For example, during the "June 18" event in 2019, Taobao's live streaming platform drove sales of 13 billion yuan, with the number of merchants broadcasting live streaming increasing by nearly 120% year-on-year. The number of broadcasts grew by 150% year- on-year [CNNIC 2019]."
From the retailer's perspective, live streaming provides more opportunities for marketing, branding, improving customer service, and increasing revenue. For the customer, live streaming offers a more synchronous and interactive shopping experience than before. Interactions between streamers/sellers and consumers also help customers obtain higher-quality information about the products than traditional shopping methods provide.
According to Xu, Wu and Li (2020), "streaming commerce creates a novel shopping environment that provides multiple stimuli to motivate potential consumers to indulge in their shopping behaviors. It has emerged and shows great potential as a novel business model to add dynamic real-time interaction among sellers (streamers) and consumers (viewers), provide accurate information, and involve hedonic factors to attract consumers to indulge in consumption processes. Viewers are enabled to obtain dynamic and accurate information by watching live streams, develop virtual social relationships with streamers, and enjoy relaxing and entertaining hours while watching attractive streamers".
Livestreaming permits viewers and streamers to engage in real-time interaction that creates intimacy and closeness; credibility and trustworthiness are thus reinforced through dynamic interaction. In America, retailers such as Amazon and QVC have also built their own live streaming shopping platforms to take advantage of this. Interpersonal relations on livestreaming services occupy a position between social and parasocial relations, giving livestreaming an exceptional position in the entire landscape of social media.
Limitations
Most studies treat PSI as occurring only as friendship, which is overly restrictive both theoretically and practically. People commonly build parasocial interactions with media figures whom they do not consider "friends", such as a villain in a show. Though PSI with disliked figures is less likely to occur than with heroes and positive characters, "love to hate" relationships with disliked characters still arise. Some researchers have recognized that limiting PSI to friendship may preclude them from capturing the broader range of meaningful media user reactions.
In 2010, Tian and Hoffner conducted an online questionnaire measuring the responses of 174 participants to a liked, neutral, or disliked character from the ABC drama Lost. All participants reported the identification they perceived with the character, the parasocial interaction, and how they tried to change their own perspectives to be more like the character. Across the whole sample, perceived similarity was a significant positive predictor of both identification and parasocial interaction. Parasocial interaction was higher for liked than for neutral and disliked characters, yet it still appeared with liked, neutral, and disliked characters alike. The prevailing view of PSI as friendship is therefore not adequate on theoretical or empirical grounds, and many researchers have begun to improve the measurement of the PSI concept.
Future research
One direction for future PSI research concerns the advancement of methodology. As theories become more defined and complex, experiments will need to be employed in testing hypotheses. Because perception and emotion make up much of what parasocial interaction/relationship research is interested in, cause and effect are hard to distinguish and potential spuriousness is difficult to avoid. For example, whether similarity precedes PSI, and whether mediated interaction creates a sense of similarity, require experimental validation.
Cohen also suggested that different types of relationships be analyzed within different genres, which particularly challenges scholars examining mediated relationships in reality TV shows (e.g., Survivor). These prototypical reality shows are built around narratives, display many emotions that seem to solicit empathy and identification, and showcase the characters' skills in ways that develop fandom. Ratings and audience responses provide strong evidence that these reality shows create significant mediated relationships, but future inquiries should examine whether this kind of mediated interaction/relationship evolves into new patterns or conforms to existing ones.
The influence of media in childhood has received little attention from developmental psychologists, even though children have a high degree of exposure to media. While many studies and experiments have explored the nature of parasocial relationships, there are many opportunities for future research. For example, a potential future area of research could be the issue of reruns, where the relationships have outcomes which are already known or well-established. In addition, another area of research could focus on production techniques or televisual approaches. This would include techniques such as chiaroscuro or flat lighting, the strategic placement of close-ups or establishing shots, deductive or inductive shot sequences, hip hop editing, or desaturation. These techniques have long been theorized to have some sort of influence on the formation of parasocial relationships, but their influence has yet to be determined.
The prevailing use of social media and its impact on mediated relationships also requires further study of PSI. Different social media platforms provide channels through which celebrities communicate easily with their followers, making parasocial interaction/relationships seem less unidirectional and perhaps more satisfying and intense. As such, whether social media has made PSI more a part of everyday life needs further exploration. Technological development has raised questions about the role of PSI in our social lives, as media content becomes available in more places and at more times. Our mediated friends are never too far away; they rest in our pockets and sleep in our beds. Whether this means that we will spend more time and effort cultivating these relationships and become less dependent on real social relationships needs further exploration.
While parasocial relationships are typically seen as relationships with media personas whom the individual views positively, more research should be done surrounding parasocial relationships with media personas whom the individual views negatively. There are many instances of negative interaction on social media (negative sentiments expressed towards politicians, athletes, etc.), and it is worth understanding how these negative interactions and relationships can affect us and our other relationships. Additionally, more research should be done on the well-being consequences of parasocial relationships with media personas who may inspire hate or other negative emotions, such as towards a particular group of people.
The role that mediated communication and engagement played during the pandemic may have led to media personas being evaluated with similar (or the same) cognitive processes we use when interacting with real-life friends. This may continue to influence our parasocial relationships, and more should be learned regarding the long-term effects of COVID-19 on the functioning of parasocial relationships.
Other concerns include the continuity of media figures' representation across various media outlets, and the notion of parasocial interaction as compensation for a lack of social outlets. Popstars, for example, may appear not only on television but on several different television or radio programs, as either chat guests or performers; further repeated viewings of these stars would intensify the visual aspects of parasocial interaction with them. Most research has typically characterized the media user as a television viewer who is often solitary and in need of social interaction. The different types of user–figure interaction can be addressed by conceptualizing parasocial interaction as an extension of ordinary social interaction. Through close examination of social encounters that are significant for parasocial relationships, we can continue to distinguish between parasocial interaction as an isolated activity and as longer-term interaction.
Focus on relationships
Background
The terms parasocial interaction and parasocial relationship were coined by anthropologist Donald Horton and sociologist R. Richard Wohl in 1956, laying the foundation for the topic within the field of communication studies. Originating in psychology, the study of parasocial phenomena draws on a wide range of scientific backgrounds and methodological approaches. The study of parasocial relationships has increased with the growth of mass and social media such as Facebook, Twitter, and Instagram, particularly among those investigating advertising effectiveness and journalism. Horton and Wohl stated that television personas offer the media user a sense of intimacy and hold influence over them by using their appearance and gestures in a way that is seen as engaging, directly addressing the audience, and conversing with them in a friendly and personal manner. By viewing media personas regularly and feeling a sense of trust with the persona, parasocial relationships offer the media user a continuous relationship that intensifies.
Celebrity endorsements and advertising
Advertisers and marketers can increase brand awareness, keep media users engaged, and increase purchase intention by seeking out attractive media personas. If media personas show interest in and engage in rewarding interactions with media users, and the media user likes the persona, the user will reciprocate those interactions and over time form a parasocial relationship.
In the social media era, media users are able to have interactions with media personas that are more intimate, open, reciprocal, and frequent. More media personas are using social media platforms for personal communication, revealing their personal lives and thoughts to consumers. The more frequently and conversationally a media persona self-discloses via social media, the higher the levels of intimacy, loyalty, and friendship media users feel. Media users know that the chances of receiving a direct message or a retweet from a celebrity are very low, but the possibility gives fans a sense of intimacy and adds authenticity to one-sided parasocial relationships with their favorite personas.
Celebrity endorsements are so effective with purchase intention because parasocial relationships form such an influential bond of trust. The acceptance and trustworthiness that the media user feels towards the media persona is carried over into the brand that is being promoted. Media users feel that they understand media personas and appreciate their values and motives. This accumulated time and knowledge of the media persona translates into feelings of loyalty, which can then influence attitudes, voting decisions, prejudices, ideas about reality, willingness to donate, and purchases of advertised products. Celebrities and popular social media personalities who engage in social media endorsements are referred to as influencers.
Causes and impact
Parasocial relationships are a psychological attachment in which the media persona offers a continuing relationship to the media user. Users grow to depend on the persona, plan to interact with them, and count on them much like a close friend. They acquire a history with the persona and believe they know them better than others do. Media users are free to partake in the benefits of real relations with no responsibility or effort: they can control the experience or walk away from a parasocial relationship freely.
A media user's bond with media personas can lead to higher self-confidence, a stronger perception of problem-focused coping strategies, and a stronger sense of belonging. However, these one-sided relationships can also foster an unrealistic body image, reduce self-esteem, increase media consumption, and contribute to media addiction.
Parasocial relationships are seen frequently among media users of post-retirement age, due to high television consumption and the loss of social contacts or activities. However, adolescents are also prone to forming parasocial relationships; this is attributed to puberty, the discovery of sexuality and identity, and the idolization of media stars. Women are generally more likely than men to form strong parasocial relationships.
Some results indicate that parasocial relationships with media personas increase because the media user is lonely, dissatisfied, emotionally unstable, and/or has unattractive relationship alternatives. Some can use these parasocial relationships as a substitute for real social contact. A media user's personality affects how they use social media and may also shape an individual's pursuit of intimacy and approach to relationships; e.g., extroverts may prefer to seek social gratification through face-to-face interactions rather than mediated ones.
Media users use mediated communication to gratify personal needs, such as to relax, seek pleasure, relieve boredom, or simply out of habit. In the era of social media and the internet, media users have constant access to on-demand viewing, constant interaction on handheld mobile devices, and widespread Internet access.
Parasocial breakup
Experiencing negative emotional responses as a result of the end of a parasocial relationship—for example, the death of a television persona in a series—is known as a parasocial breakup. More intense parasocial breakups are predicted by loneliness and by watching media for companionship.
Jonathan Cohen, of the Department of Communication at the University of Haifa, links parasocial relationships and breakups to social relationship attachment styles. The results and lasting effects of a parasocial breakup may depend on the attachment style of the person experiencing and initiating the attachment, much as in social relationships. Individuals with an anxious attachment style experience more extreme reactions to parasocial breakups than avoidant and secure types.
The in-person/real life social relationship status of an individual does not affect the intensity of distress or discomfort felt in parasocial breakups.
Age, however, correlates with the intensity of distress at parasocial breakups: Cohen finds that young people (under the age of 20) are more susceptible to strong symptoms of parasocial breakup (PSB).
For some people, parasocial breakups can be as simple as avoiding the content concerning the subject of the parasocial tie.
According to Cohen, the level of distress to the individuals experiencing parasocial breakup depends on the strength and extremity of the bond.
Parasocial relationships with fictional characters
Parasocial relationships with fictional characters are more intense than with nonfictional characters, because of the feeling of being completely present in a fictional world. Narrative realism—the plausibility that a fictional world and its characters could exist—and external realism—the level at which aspects of the story map to a person's real world experiences—play a part in heightening one's connections to fictional characters. If a narrative can convince a viewer that a character is plausible and/or relatable, it creates a space for the viewer to form a parasocial relationship with said character. There is a desire for camaraderie that can be built through bonding over a fictional persona.
Due to the span and breadth of media franchises such as the Harry Potter, Disney, and Star Wars series, consumers are able to engage more deeply and form strong parasocial relationships. These fictional parasocial relationships can extend further than watching the movies or reading the books into official and fan fiction websites, social media, and even extend beyond media to have an in-person experience at national and international theme park attractions.
For individuals who become very attached to fictional characters, and may or may not depend on them emotionally, even the thought of that character being removed from the story in some way (death, being written out of the story, etc.) can be extremely painful.
The dread of a fictional character's death (the parasocial breakup) can be much stronger than that of a parasocial breakup with a public figure.
Parasocial relationships with fictional characters may be affected by external events relating to the actors who play them, and vice versa. For example, if a scandal were to involve an actor, individuals who had parasocial connections to the character they played might reevaluate their opinions of the character, and a parasocial breakup with the fictional character may occur as a result. The reverse, however, does not apply: a positive event involving the actor does not improve impressions of the character. Fictional characters, in this case, are seen as separate from the actor and from the actor's good contributions or personality outside of the role.
Theoretical connections and measurement instruments
Rubin analyzed the process of parasocial relationship development by applying principles of uncertainty reduction theory, which states that uncertainty about others is reduced over time through communication, allowing for increased attraction and relationship growth. Other theories that apply to parasocial relationships are social penetration theory, which is based on the premise that positive, intimate interactions produce further rewards in the relationship and the uses and gratifications theory, which states that media users are goal driven and want media to gratify their needs.
T.M. Newcomb's (1956) reinforcement theory explained that an attraction is formed following a rewarding interaction. A gratifying relationship is formed as a result of the social attraction and interactive environments created by the media persona.
The most used measurement instrument for parasocial phenomena is the Parasocial Interaction Scale (PSI Scale), which was developed by Rubin, Perse, and Powell in 1985 to assess interpersonal relationships with media personalities.
Mina Tsay and Brianna Bodine developed a revised version of Rubin's scale, recognizing that parasocial relationship engagement is dictated by a media user's personality and motivations. They identified four distinct dimensions that address engagement with media personas from affective, cognitive, and behavioral perspectives, assessing how people see media personas as role models, how they desire to communicate with them and learn more about them, and how familiar the personas feel to them. Tsay and Bodine noted that greater levels of interaction can form between the media user and the media persona because of the shift in media and mass communication in recent years. Media users can now choose how they want to interact with and initiate their own media experiences online, such as through fan groups, Twitter, and character blogs.
During the COVID-19 pandemic
Parasocial relationships have been studied in various contexts, but the COVID-19 pandemic created a unique environment in which to study them. Due to the global pandemic, people's social routines were abruptly interrupted; with the beginning of social-distancing and isolation protocols, people forcibly experienced a decrease in their face-to-face (FtF) interactions. To maintain communication with others, people turned to screens and media such as Zoom and FaceTime (“mediated” communication). This shift to mediated communication “blurred” the differences that existed between social and parasocial interactions and relationships. People would interact with their friends in ways similar to those in which they engaged with media personas with whom they had parasocial relationships. For example, one could like and comment on someone's Instagram posts in the same way in both scenarios.
This presented a unique opportunity to study the parasocial compensation hypothesis, which suggests that parasocial relationships can function as an alternative to typical social relationships.
Findings
One study found that individuals in a certain identity domain who lacked friends in real life made up for this deficiency by forming intense parasocial relationships (Bond, 2018), supporting this hypothesis. During the pandemic, studies found that parasocial relationships strengthened during the social distancing phase of COVID-19, further demonstrating how parasocial relationships may in fact have a compensation function.
The individuals who reported the strongest growth in their parasocial relationships were those who decreased their face-to-face social interactions, supporting the parasocial compensation hypothesis. However, they were not the only individuals whose parasocial relationships strengthened: those who increased their use of mediated communication to communicate with friends also demonstrated growth. Overall, parasocial relationships strengthened during the beginning phases of the pandemic.
Explaining increases in growth
One potential explanation for the growth in parasocial relationships is that the cognitive distinctions between social and parasocial interactions were no longer as well defined, so the social engagement of friends and of media personas was likely processed in more similar ways. When the only way to interact with others is through a screen, social engagement becomes much more similar to parasocial engagement. When using mediated communication, users can perceive greater distance (compared to real-life interactions), leading them to cognitively process their actual friends in a manner similar to liked media personas. This could lead to processing parasocial interactions with greater intention, which could develop into stronger parasocial relationships.
Another explanation is that individuals who need higher amounts of social connection increased their parasocial relationships while still maintaining their social relationships online. Individuals who spent more time in mediated communication with real-life friends were likely to experience growth in their parasocial relationships, especially if the media persona performed behaviors that increase their parasocial interaction potential. Parasocial interaction potential is how likely a media persona is to “engage” their audience in interactions.
This explanation would appear to refute the parasocial compensation hypothesis, suggesting that parasocial relationships cannot function as an alternative to social relationships but only as an accompaniment. However, the explanation is weakened by the observation that parasocial interaction potential should have had a moderating influence, which was not seen. When people experienced a decrease in face-to-face interactions, they experienced growth in their parasocial relationships regardless of the parasocial interaction potential of the media persona concerned. The parasocial compensation hypothesis is therefore still supported: during a time when options and alternatives were limited, individuals who experienced fewer face-to-face interactions with friends may have attempted to replace them with parasocial interactions, regardless of the parasocial interaction potential of their liked media personas.
During this time, people spent more time in parasocial interactions than in face-to-face social interactions. Parasocial relationships grew, especially when people were spending less time on their real-life social relationships, supporting the compensation function of parasocial relationships. Parasocial relationships could be particularly important for social connection during critical times such as the global pandemic. There may be other cases where individuals experience “social deficiencies” with their real-life friends, or are not able to be around many “like-others”, where parasocial relationships may function in similar capacities. The parasocial compensation hypothesis highlights how much value these types of relationships could have.
Overall, parasocial relationships with media personae grew during the global pandemic, especially for those who may have used these types of interactions to make up for the “social deficiencies” that were caused by COVID-19.
See also
Celebrity worship syndrome
Contact hypothesis
Parasocial contact hypothesis
Personal god
Simp
"Stan" – 2000 single by Eminem about a fictional fan who is obsessed with the rapper.
Stan Twitter
Uses and gratifications theory
Six-factor model of psychological well-being
The six-factor model of psychological well-being is a theory developed by Carol Ryff that identifies six factors contributing to an individual's psychological well-being, contentment, and happiness. Psychological well-being consists of self-acceptance, positive relationships with others, autonomy, environmental mastery, a feeling of purpose and meaning in life, and personal growth and development. Psychological well-being is attained by achieving a state of balance affected by both challenging and rewarding life events.
Measurement
The Ryff Scale of Measurement is a psychometric inventory consisting of two forms (either 54 or 84 items) in which respondents rate statements on a scale of 1 to 6, where 1 indicates strong disagreement and 6 indicates strong agreement. Ryff's model is not based on merely feeling happy, but is based on Aristotle's Nicomachean Ethics, "where the goal of life isn't feeling good, but is instead about living virtuously".
The Ryff Scale is based on six factors: autonomy, environmental mastery, personal growth, positive relations with others, purpose in life, and self-acceptance. Higher total scores indicate higher psychological well-being. Following are explanations of each criterion with an example statement from the Ryff Inventory; a minimal scoring sketch follows the list.
Autonomy: High scores indicate that the respondent is independent and regulates his or her behavior independent of social pressures. An example statement for this criterion is "I have confidence in my opinions, even if they are contrary to the general consensus".
Environmental Mastery: High scores indicate that the respondent makes effective use of opportunities and has a sense of mastery in managing environmental factors and activities, including managing everyday affairs and creating situations to benefit personal needs. An example statement for this criterion is "In general, I feel I am in charge of the situation in which I live".
Personal Growth: High scores indicate that the respondent continues to develop, is welcoming to new experiences, and recognizes improvement in behavior and self over time. An example statement for this criterion is "I think it is important to have new experiences that challenge how you think about yourself and the world".
Positive Relations with Others: High scores reflect the respondent's engagement in meaningful relationships with others that include reciprocal empathy, intimacy, and affection. An example statement for this criterion is "People would describe me as a giving person, willing to share my time with others".
Purpose in Life: High scores reflect the respondent's strong goal orientation and conviction that life holds meaning. An example statement for this criterion is "Some people wander aimlessly through life, but I am not one of them".
Self-Acceptance: High scores reflect the respondent's positive attitude about his or her self. An example statement for this criterion is "I like most aspects of my personality".
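Because the inventory is a set of 1-to-6 Likert items summed by subscale, the scoring logic is straightforward to express in code. The following Python sketch is illustrative only: the item-to-subscale assignment and the reverse-keying flags are hypothetical placeholders, not the published scoring key for the 54- or 84-item forms.

```python
# Minimal sketch of Likert scoring for a Ryff-style inventory.
# The item-to-subscale assignment and the reverse-keying flags below
# are illustrative placeholders, not the published scoring key.

SUBSCALES = [
    "autonomy", "environmental_mastery", "personal_growth",
    "positive_relations", "purpose_in_life", "self_acceptance",
]

def score_item(response: int, reverse: bool, scale_max: int = 6) -> int:
    # Ratings run 1..scale_max; reverse-keyed items are flipped so that
    # higher scores always mean higher well-being.
    if not 1 <= response <= scale_max:
        raise ValueError(f"response must be 1..{scale_max}, got {response}")
    return scale_max + 1 - response if reverse else response

def score_inventory(responses):
    # responses: {subscale: [(rating, is_reverse_keyed), ...]}
    per_scale = {
        name: sum(score_item(r, rev) for r, rev in items)
        for name, items in responses.items()
    }
    return per_scale, sum(per_scale.values())

# Hypothetical respondent: three items per subscale.
example = {name: [(5, False), (2, True), (4, False)] for name in SUBSCALES}
per_scale, total = score_inventory(example)
print(per_scale["autonomy"], total)  # 14 84 — higher totals = higher well-being
```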
Applications and research findings
Contributing factors
Positive contributing factors
Positive psychological well-being may emerge from numerous sources. A happy marriage is contributive, for example, as is a satisfying job or a meaningful relationship with another person. When marriages include forgiveness, optimistic expectations, positive thoughts about one's spouse, and kindness, a marriage significantly improves psychological well-being. A propensity to unrealistic optimism and over-exaggerated self-evaluations can be useful. These positive illusions are especially important when an individual receives threatening negative feedback, as the illusions allow for adaptation in these circumstances to protect psychological well-being and self-confidence (Taylor & Brown, 1988). Optimism also can help an individual cope with stresses to their well-being.
Negative contributing factors
Psychological well-being can also be affected negatively, as is the case with a degrading and unrewarding work environment, unfulfilling obligations, and unsatisfying relationships. Social interaction has a strong effect on well-being, as negative social outcomes are more strongly related to well-being than positive social outcomes are. Childhood traumatic experiences diminish psychological well-being throughout adult life and can damage psychological resilience in children, adolescents, and adults. Perceived stigma also diminishes psychological well-being, particularly stigma relating to obesity and other physical ailments or disabilities.
Extrinsic and intrinsic psychological needs
A study conducted in the early 1990s exploring the relationship between well-being and those aspects of positive functioning that were put forth in Ryff's model indicates that persons who aspired more for financial success relative to affiliation with others or their community scored lower on various measures of well-being.
Individuals who strive for a life defined by affiliation, intimacy, and contributing to one's community can be described as aspiring to fulfil their intrinsic psychological needs. In contrast, individuals who aspire to wealth and material possessions, social recognition, fame, image, or attractiveness can be described as aiming to fulfil their extrinsic psychological needs. The strength of an individual's intrinsic (relative to extrinsic) aspirations, as indicated by rankings of importance, correlates with an array of psychological outcomes. Positive correlations have been found with indicators of psychological well-being: positive affect, vitality, and self-actualization. Negative correlations have been found with indicators of psychological ill-being: negative affect, depression, and anxiety.
Relations with others
A more recent study confirming Ryff's notion of maintaining positive relations with others as a way of leading a meaningful life compared levels of self-reported life satisfaction and subjective well-being (positive/negative affect). Results suggested that individuals whose actions had underlying eudaimonic tendencies, as indicated by their self-reports (e.g., "I seek out situations that challenge my skills and abilities"), possessed higher subjective well-being and life satisfaction scores than participants who did not. Individuals were grouped according to their chosen paths/strategies to happiness as identified by their answers on an Orientation to Happiness Questionnaire. The questionnaire describes and differentiates individuals on the basis of three orientations to happiness which can be pursued, though some individuals do not pursue any. The "pleasure" orientation describes a path to happiness associated with adopting hedonistic life goals that satisfy only one's extrinsic needs. The engagement and meaning orientations describe a pursuit of happiness that integrates two positive psychology constructs, "flow/engagement" and "eudaimonia/meaning". Both of the latter orientations are also associated with aspiring to meet intrinsic needs for affiliation and community, and were amalgamated by Anić and Tončić into a single "eudaimonic" path to happiness that elicited high scores on all measures of well-being and life satisfaction. Importantly, Ryff also produced scales for assessing mental health. This factor structure has been debated, but has generated much research in well-being, health, and successful aging.
Personality
Meta-analytic research shows that psychological well-being scales correlate strongly with all of the Big Five personality traits. Neuroticism is the strongest Big Five predictor of psychological well-being, correlating negatively with psychological well-being. In particular, openness has strong connections with personal growth, agreeableness and extraversion are notably related to positive relations, and conscientiousness is notably related to environmental mastery and purpose in life.
Heritability
Individual differences, both in overall eudaimonia (identified loosely with self-control) and in the facets of eudaimonia, are heritable. Evidence from one study supports five independent genetic mechanisms underlying the Ryff facets of this trait, leading to a genetic construct of eudaimonia in terms of general self-control and four subsidiary biological mechanisms enabling the psychological capabilities of purpose, agency, growth, and positive social relations.
Well-being therapy
According to Seligman, positive interventions to attain positive human experience should not come at the expense of disregarding human suffering, weakness, and disorder. With this in mind, Fava and others developed a therapy based on Ryff's six elements.
See also
Flourishing
Subjective vitality
External links
Representative Publications by Carol Ryff (partly downloadable)
Carol Ryff's Model of Psychological Well-being. The Six Criteria of Well-Being
Tricia A. Seifert, The Ryff Scales of Psychological Well-Being
Ryff's Psychological Well-Being Scales (PWB), 42 Item version
Pharmacodynamics
Pharmacodynamics (PD) is the study of the biochemical and physiologic effects of drugs (especially pharmaceutical drugs). The effects can include those manifested within animals (including humans), microorganisms, or combinations of organisms (for example, infection).
Pharmacodynamics and pharmacokinetics are the main branches of pharmacology, itself a topic of biology concerned with the study of the interactions of both endogenous and exogenous chemical substances with living organisms.
In particular, pharmacodynamics is the study of how a drug affects an organism, whereas pharmacokinetics is the study of how the organism affects the drug. Both together influence dosing, benefit, and adverse effects. Pharmacodynamics is sometimes abbreviated as PD and pharmacokinetics as PK, especially in combined reference (for example, when speaking of PK/PD models).
Pharmacodynamics places particular emphasis on dose–response relationships, that is, the relationships between drug concentration and effect. One dominant example is drug-receptor interactions as modeled by
L + R <=> LR
where L, R, and LR represent ligand (drug), receptor, and ligand-receptor complex concentrations, respectively. This equation represents a simplified model of reaction dynamics that can be studied mathematically through tools such as free energy maps.
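As an illustration of how such a binding model can be studied numerically, the following Python sketch relaxes the L + R <=> LR reaction to equilibrium by simple Euler integration. The rate constants and concentrations are hypothetical, and the free ligand is assumed to be in excess so that [L] stays constant; at equilibrium the simulated occupancy matches the analytic value [L]/([L] + Kd) derived later in this article.

```python
# Minimal sketch: relax the binding reaction L + R <=> LR to equilibrium
# by Euler integration. Rate constants and concentrations are
# hypothetical, and free ligand is assumed in excess, so [L] is constant.

kon, koff = 1e6, 1e-2    # association (M^-1 s^-1) and dissociation (s^-1) rates
L = 1e-8                 # free ligand concentration (M), held constant
R, LR = 1e-9, 0.0        # free receptor and complex concentrations (M)

dt, steps = 0.01, 500_000   # 5000 s of simulated time
for _ in range(steps):
    dLR = (kon * L * R - koff * LR) * dt   # d[LR]/dt = kon[L][R] - koff[LR]
    R, LR = R - dLR, LR + dLR

Kd = koff / kon   # equilibrium dissociation constant (here 1e-8 M)
print(f"simulated occupancy   = {LR / (R + LR):.3f}")   # ~0.500
print(f"analytic [L]/([L]+Kd) = {L / (L + Kd):.3f}")    # 0.500
```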
Basics
There are four principal protein targets with which drugs can interact:
Enzymes (e.g. neostigmine and acetylcholinesterase)
Inhibitors
Inducers
Activators
Membrane carriers [Reuptake vs Efflux] (e.g. tricyclic antidepressants and catecholamine uptake-1)
Enhancer (RE)
Inhibitor (RI)
Releaser (RA)
Ion channels (e.g. nimodipine and voltage-gated Ca2+ channels)
Blocker
Opener
Receptor (e.g. Listed in table below)
Agonists can be full, partial or inverse.
Antagonists can be competitive, non-competitive, or uncompetitive.
Allosteric modulators can have three kinds of effect within a receptor. One is the capability or incapability to activate the receptor (2 possibilities). The other two are effects on agonist affinity and agonist efficacy, each of which may be increased, decreased, or unaffected (3 and 3 possibilities).
[Receptor examples table omitted. Abbreviations: NMBD = neuromuscular blocking drugs; NMDA = N-methyl-d-aspartate; EGF = epidermal growth factor.]
Effects on the body
The majority of drugs either mimic or inhibit normal physiological and biochemical processes, inhibit pathological processes in animals, or inhibit vital processes of parasites and microbial organisms.
There are 7 main drug actions:
stimulating action through direct receptor agonism and downstream effects
depressing action through direct receptor agonism and downstream effects (ex.: inverse agonist)
blocking/antagonizing action (as with silent antagonists), the drug binds the receptor but does not activate it
stabilizing action, the drug seems to act neither as a stimulant nor as a depressant (ex.: some drugs possess receptor activity that allows them to stabilize general receptor activation, like buprenorphine in opioid dependent individuals or aripiprazole in schizophrenia, all depending on the dose and the recipient)
exchanging/replacing substances or accumulating them to form a reserve (ex.: glycogen storage)
direct beneficial chemical reaction as in free radical scavenging
direct harmful chemical reaction which might result in damage or destruction of the cells, through induced toxic or lethal damage (cytotoxicity or irritation)
Desired activity
The desired activity of a drug is mainly due to successful targeting of one of the following:
Cellular membrane disruption
Chemical reaction with downstream effects
Interaction with enzyme proteins
Interaction with structural proteins
Interaction with carrier proteins
Interaction with ion channels
Ligand binding to receptors:
Hormone receptors
Neuromodulator receptors
Neurotransmitter receptors
General anesthetics were once thought to work by disordering the neural membranes, thereby altering the Na+ influx. Antacids and chelating agents combine chemically in the body. Enzyme–substrate binding is a way to alter the production or metabolism of key endogenous chemicals; for example, aspirin irreversibly inhibits the enzyme prostaglandin synthetase (cyclooxygenase), thereby preventing the inflammatory response. Colchicine, a drug for gout, interferes with the function of the structural protein tubulin, while digitalis, a drug still used in heart failure, inhibits the activity of the carrier molecule, the Na-K-ATPase pump. The widest class of drugs act as ligands that bind to receptors that determine cellular effects. Upon drug binding, receptors can elicit their normal action (agonist), blocked action (antagonist), or even action opposite to normal (inverse agonist).
In principle, a pharmacologist would aim for a target plasma concentration of the drug for a desired level of response. In reality, there are many factors affecting this goal. Pharmacokinetic factors determine peak concentrations, and concentrations cannot be maintained with absolute consistency because of metabolic breakdown and excretory clearance. Genetic factors may exist which would alter metabolism or drug action itself, and a patient's immediate status may also affect indicated dosage.
Undesirable effects
Undesirable effects of a drug include:
Increased probability of cell mutation (carcinogenic activity)
A multitude of simultaneous assorted actions which may be deleterious
Interaction (additive, multiplicative, or metabolic)
Induced physiological damage, or abnormal chronic conditions
Therapeutic window
The therapeutic window is the range between the amount of a medication that gives an effect (effective dose) and the amount that gives more adverse effects than desired effects. For instance, a medication with a narrow therapeutic window must be administered with care and control, e.g. by frequently measuring blood concentrations of the drug, since it easily loses effect or produces adverse effects.
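A minimal sketch of the monitoring check this implies is given below, with purely hypothetical concentration limits; in practice such limits come from clinical data.

```python
# Minimal sketch: checking a measured blood concentration against a
# therapeutic window. The limits and the measurement are hypothetical;
# real limits come from clinical data.

effective_conc = 10.0   # lowest concentration giving the desired effect (mg/L)
adverse_conc = 20.0     # concentration at which adverse effects dominate (mg/L)
measured = 14.0         # monitored blood concentration (mg/L)

if measured < effective_conc:
    print("below window: likely ineffective")
elif measured >= adverse_conc:
    print("above window: adverse effects likely")
else:
    print("within therapeutic window")
```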
Duration of action
The duration of action of a drug is the length of time that particular drug is effective. Duration of action is a function of several parameters including plasma half-life, the time to equilibrate between plasma and target compartments, and the off rate of the drug from its biological target.
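Under the common simplifying assumption of a one-compartment model with first-order elimination, the duration of action can be estimated as the time the plasma concentration remains above a minimum effective concentration (MEC). The Python sketch below uses hypothetical values for the peak concentration, half-life, and MEC.

```python
# Minimal sketch: duration of action as the time the plasma
# concentration stays above a minimum effective concentration (MEC),
# assuming a one-compartment model with first-order elimination.
# All numbers are hypothetical.
import math

C0 = 10.0        # peak plasma concentration (mg/L)
half_life = 4.0  # plasma half-life (h)
MEC = 2.0        # minimum effective concentration (mg/L)

k = math.log(2) / half_life        # elimination rate constant (1/h)
duration = math.log(C0 / MEC) / k  # solve C0 * exp(-k * t) = MEC for t

print(f"duration of action ~ {duration:.1f} h")  # ~9.3 h
```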
Recreational drug use
In the context of recreational psychoactive drug use, duration refers to the length of time over which the subjective effects of a psychoactive substance manifest themselves.
Duration can be broken down into 6 parts: (1) total duration, (2) onset, (3) come up, (4) peak, (5) offset, and (6) after effects. Depending upon the substance consumed, each of these occurs in a separate and continuous fashion.
Total
The total duration of a substance can be defined as the amount of time it takes for the effects of a substance to completely wear off into sobriety, starting from the moment the substance is first administered.
Onset
The onset phase can be defined as the period until the very first changes in perception (i.e. "first alerts") can be detected.
Come up
The "come up" phase can be defined as the period between the first noticeable changes in perception and the point of highest subjective intensity. This is colloquially known as "coming up."
Peak
The peak phase can be defined as period of time in which the intensity of the substance's effects are at its height.
Offset
The offset phase can be defined as the amount of time in between the conclusion of the peak and shifting into a sober state. This is colloquially referred to as "coming down."
After effects
The after effects can be defined as any residual effects which may remain after the experience has reached its conclusion. After effects depend on the substance and usage. This is colloquially known as a "hangover" for negative after effects of substances, such as alcohol, cocaine, and MDMA or an "afterglow" for describing a typically positive, pleasant effect, typically found in substances such as cannabis, LSD in low to high doses, and ketamine.
Receptor binding and effect
The binding of ligands (drugs) to receptors is governed by the law of mass action, which relates the large-scale equilibrium state to the rates of the underlying molecular association and dissociation processes. These rates can be used to determine the equilibrium concentration of bound receptors. For the binding reaction

$$L + R \rightleftharpoons LR$$

where L = ligand, R = receptor, and square brackets [] denote concentration, the equilibrium dissociation constant is defined by

$$K_d = \frac{[L][R]}{[LR]}$$

The fraction of bound receptors is

$$\frac{[LR]}{[R] + [LR]} = \frac{[L]}{[L] + K_d}$$

where the left-hand side is the fraction of the receptor population bound by the ligand.
This expression is one way to consider the effect of a drug, in which the response is related to the fraction of bound receptors (see: Hill equation). The fraction of bound receptors is known as occupancy. The relationship between occupancy and pharmacological response is usually non-linear. This explains the so-called receptor reserve phenomenon, i.e., the concentration producing 50% occupancy is typically higher than the concentration producing 50% of the maximum response. More precisely, receptor reserve refers to a phenomenon whereby stimulation of only a fraction of the whole receptor population apparently elicits the maximal effect achievable in a particular tissue.
The simplest interpretation of receptor reserve is that there are more receptors on the cell surface than are necessary to produce a full effect. Taking a more sophisticated approach, receptor reserve is an integrative measure of the response-inducing capacity of an agonist (in some receptor models it is termed intrinsic efficacy or intrinsic activity) and of the signal amplification capacity of the corresponding receptor (and its downstream signaling pathways). Thus, the existence (and magnitude) of receptor reserve depends on the agonist (efficacy), tissue (signal amplification ability) and measured effect (pathways activated to cause signal amplification). As receptor reserve is very sensitive to an agonist's intrinsic efficacy, it is usually defined only for full (high-efficacy) agonists.
Often the response is determined as a function of log[L] in order to consider many orders of magnitude of concentration. However, there is no biological or physical theory that relates effects to the log of concentration; it is simply convenient for graphing purposes. It is useful to note that 50% of the receptors are bound when [L] = Kd.
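The occupancy relation above is straightforward to compute. The following minimal Python sketch, using an arbitrary illustrative Kd, evaluates fractional occupancy over log-spaced concentrations spanning several orders of magnitude (mirroring the semi-log convention) and confirms that occupancy equals 50% when [L] = Kd.

```python
def occupancy(ligand_conc: float, kd: float) -> float:
    """Fraction of receptors bound at equilibrium: [L] / ([L] + Kd)."""
    return ligand_conc / (ligand_conc + kd)

kd = 1e-8  # illustrative dissociation constant, in molar units

# Log-spaced concentrations spanning several orders of magnitude around Kd.
for exponent in range(-11, -5):
    conc = 10.0 ** exponent
    print(f"[L] = 1e{exponent} M -> occupancy = {occupancy(conc, kd):.3f}")

print(occupancy(kd, kd))  # 0.5: half the receptors are bound when [L] = Kd
```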
Plotting the concentration–response curves of two hypothetical receptor agonists in semi-log fashion makes them easy to compare: the curve lying further toward the left represents the higher potency, since lower concentrations are needed for a given response, and in each case the effect increases as a function of concentration.
Multicellular pharmacodynamics
The concept of pharmacodynamics has been expanded to include Multicellular Pharmacodynamics (MCPD). MCPD is the study of the static and dynamic properties and relationships between a set of drugs and a dynamic and diverse multicellular four-dimensional organization. It is the study of the workings of a drug on a minimal multicellular system (mMCS), both in vivo and in silico. Networked Multicellular Pharmacodynamics (Net-MCPD) further extends the concept of MCPD to model regulatory genomic networks together with signal transduction pathways, as part of a complex of interacting components in the cell.
Toxicodynamics
Pharmacokinetics and pharmacodynamics are termed toxicokinetics and toxicodynamics in the field of ecotoxicology. Here, the focus is on toxic effects on a wide range of organisms. The corresponding models are called toxicokinetic-toxicodynamic models.
See also
Mechanism of action
Dose-response relationship
Pharmacokinetics
ADME
Antimicrobial pharmacodynamics
Pharmaceutical company
Schild regression
References
External links
Vijay (2003). Predictive software for drug design and development. Pharmaceutical Development and Regulation 1(3), 159–168.
Werner, E. (2003). In silico multicellular systems biology and minimal genomes. Drug Discovery Today 8(24), 1121–1127. (Introduces the concepts MCPD and Net-MCPD.)
Dr. David W. A. Bourne, OU College of Pharmacy Pharmacokinetic and Pharmacodynamic Resources.
Introduction to Pharmacokinetics and Pharmacodynamics (PDF)
Pharmacy
Medicinal chemistry
Life sciences industry
Emotional and behavioral disorders
Emotional and behavioral disorders (EBD; also known as behavioral and emotional disorders) refer to a disability classification used in educational settings that allows educational institutions to provide special education and related services to students who have displayed poor social and/or academic progress.
The classification is often given to students after conducting a Functional Behavior Analysis. These students need individualized behavior supports such as a Behavior Intervention Plan, to receive a free and appropriate public education. Students with EBD may be eligible for an Individualized Education Plan (IEP) and/or accommodations in the classroom through a 504 Plan.
History
Early history
Before any studies were done on the subject, mental illnesses were often thought to be a form of demonic possession or witchcraft. Since much was unknown, there was little to no distinction between the different types of mental illness and developmental disorders that we refer to today. Most often, they were dealt with by performing an exorcism on the person exhibiting signs of any mental illness. In the early to mid-1800s, asylums were introduced to America and Europe. There, patients were treated cruelly and often referred to as lunatics by doctors in the professional fields. The main focus of asylums was to shun people with mental illnesses from the public. In 1963, the Community Mental Health Centers Construction Act (Public Law 88–164) was passed by Congress and signed by John F. Kennedy, providing federal funding to community mental health centers. This legislation changed the way that mental health services were handled and also led to the closure of many large asylums. Many laws soon followed, assisting more and more people with EBDs. Public Law 94-142, enacted in 1975 and fully implemented by 1978, required a free and appropriate public education for all disabled children, including those with EBDs. An extension of PL 94-142, PL 99-457, was put into effect to provide services to all disabled children from the ages of 3 to 5 by the 1990–91 school year. PL 94-142 has since been renamed the Individuals with Disabilities Education Act (IDEA).
Use and development of the term
Various terms have been used to describe irregular emotional and behavioral disorders. Many terms, such as mental illness and psychopathology, were used to describe adults with such conditions. Mental illness was a label for most people with any type of disorder, and it was common for people with emotional and behavioral disorders to be labeled with a mental illness. However, those terms were avoided when describing children, as they seemed too stigmatizing. In the late 20th century, the term "behaviorally disordered" appeared. Some professionals in the field of special education accepted the term, while others felt it ignored emotional issues. In order to establish more uniform terminology, the National Mental Health and Special Education Coalition, which consists of over thirty professional and advocacy groups, coined the term "emotional and behavioral disorders" in 1988.
Criteria
According to the Individuals with Disabilities Education Act an EBD classification is required if one or more of the following characteristics is excessively observed in a student over a significant amount of time:
Learning challenges that cannot be explained by intellectual, sensory, or health factors.
Trouble building or maintaining satisfactory relationships with peers and teachers.
Inappropriate behavior (against self or others) or emotions (expressing the need to harm others or self, low self-worth) under normal conditions.
An overall attitude of unhappiness or depression.
A tendency to develop physical symptoms or fears associated with personal or school issues.
The term "EBD" includes students diagnosed with schizophrenia. However, it does not have any significant bearing on students who are socially maladjusted unless they also meet the above criteria.
Criticisms
Providing or failing to provide an EBD classification to a student may be controversial, as the IDEA does not clarify which children would be considered "socially maladjusted". Students with a psychiatric diagnosis of conduct disorder are not guaranteed to receive additional educational services under an EBD classification. Students with an EBD classification who meet the diagnostic criteria for various disruptive behavior disorders, including attention-deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), or conduct disorder (CD) do not have an automatic eligibility to receive an IEP or 504 Plan. Students considered "socially maladjusted", but ineligible for an EBD classification (i.e., students diagnosed with conduct disorder), often receive better educational services in special education classrooms or alternative schools with high structure, clear rules, and consistent consequences.
Student characteristics
Students with EBD are a diverse population with a wide range of intellectual and academic abilities. Males, African-Americans, and economically disadvantaged students are over-represented in the EBD population, and students with EBD are more likely to live in single-parent homes, foster homes, or other non-traditional living situations. These students also tend to have low rates of positive social interactions with peers in educational contexts. Students with EBD are often categorized as "internalizers" (e.g., have poor self-esteem, or are diagnosed with an anxiety disorder or mood disorder) or "externalizers" (e.g., disrupt classroom instruction, or are diagnosed with disruptive behavior disorders such as oppositional defiant disorder and conduct disorder). Male students may be over-represented in the EBD population because they appear to be more likely to exhibit disruptive externalizing behavior that interferes with classroom instruction. Females may be more likely to exhibit internalizing behavior that does not interfere with classroom instruction, though to what extent this perception is due to social expectations of differences in male and female behavior is unclear. In any case, both internalizing and externalizing behavior can and do occur in either sex. Students with EBD are also at an increased risk for learning disabilities, school dropout, substance abuse, and juvenile delinquency.
Internalizing and externalizing behavior
A person with EBD with "internalizing" behavior may have poor self-esteem, have depression, experience loss of interest in social, academic, and other life activities, and may exhibit non-suicidal self-injury or substance abuse. Students with internalizing behavior may also have a diagnosis of separation anxiety or another anxiety disorder, post-traumatic stress disorder (PTSD), specific or social phobia, obsessive–compulsive disorder (OCD), panic disorder, and/or an eating disorder. Teachers are more likely to write referrals for students that are overly disruptive. Screening tools used to detect students with high levels of "internalizing" behavior are not sensitive and are rarely used in practice. Students with EBD with "externalizing" behavior may be aggressive, non-compliant, extroverted, or disruptive.
Students with EBD who show externalizing behavior are often diagnosed with attention deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), conduct disorder, autism spectrum disorder, and/or bipolar disorder; however, this population can also include typically developing children who have learned to exhibit externalizing behavior for various reasons (e.g., escape from academic demands or access to attention). These students often have difficulty inhibiting emotional responses resulting from anger, frustration, and disappointment. Students who "externalize" exhibit behaviors such as insulting, provoking, threatening, bullying, cursing, and fighting, along with other forms of aggression. Male students with EBD exhibit externalizing behavior more often than their female counterparts.
Children and adolescents with ADD or ADHD may display different types of externalizing behavior and should receive either medication or behavioral treatment for their diagnosis. Adolescents with severe ADHD would likely benefit most from both medication and behavioral treatment. Younger children should go through behavioral treatment before being treated with medication. Another recommended form of treatment for children and adolescents diagnosed with ADHD is counseling from a mental health professional. Treatment can improve the performance of children and adolescents on emotion recognition tasks, specifically response time, as they show no difficulty recognizing human emotions. The degree of treatment required varies depending on the severity of the individual's ADD or ADHD.
Treatment for these types of behaviors should include the parents, as it is evident that their parenting skills affect how their child deals with their symptoms, especially at a younger age. Parents who went through a parenting skills training program reported a decrease in internalizing and externalizing behavior in their children after the program. The program included learning how to give positive attention, how to increase good behavior with small frequent rewards and specific praise, and how to decrease attention when the child behaved poorly.
Effects on cognition
In recent years, many researchers have been interested in exploring the relationship between emotional disorders and cognition. Evidence has revealed that there is a relationship between the two. Strauman (1989) investigated how emotional disorders shape a person's cognitive structure, that is, the mental processes people utilize to make sense of the world around them. He recruited three groups of individuals: those with social phobias, those with depression, and controls with no emotional disorder diagnosis. He wanted to determine whether these groups had a cognitive structure showing an actual/ideal (AI) discrepancy (referring to an individual not believing that they have achieved their personal desires) or actual/own/other (AOO) discrepancy (referring to an individual's actions not living up to what their significant other believes that they need to be). He found that depressed individuals had the highest AI discrepancy and social phobics had the greatest AOO discrepancy, while the controls were lower or in between the two for both discrepancies.
Specific cognitive processes (e.g., attention) may differ in those with emotional disorders. MacLeod, Mathews, and Tata (1986) tested the reaction times of 32 participants, some of whom were diagnosed with generalized anxiety disorder, when presented with threatening words. They found that when threatening words were presented, people with greater anxiety tended to show increased selective attention, meaning that they reacted more quickly to a stimulus in an area where a threatening word had just been presented (32–59 ms faster). Subjects in the control group, by contrast, reacted more slowly when a threatening word preceded the stimulus (16–32 ms slower).
Emotional disorders can also alter the way people regulate their emotions. Joormann and Gotlib (2010) conducted a study with depressed, or previously depressed, individuals to test this. They found that, when compared to individuals who have never had a depressive episode, previously and currently depressed individuals tended to use maladaptive emotion regulation strategies (such as rumination or brooding) more. They also found that when depressed individuals displayed cognitive inhibition (slowing of response to a variable that had been previously ignored) when asked to describe a negative word (ignored variable was a positive word), they were less likely to ruminate or brood. When they displayed cognitive inhibition when asked to describe a positive word (ignored variable was a negative word), they were more likely to reflect.
Services in the United States
There are many types of services available to EBD students, referenced below. One service is one-on-one support (an aide) who assists with everyday activities and academics. Another is behavior services and counseling support offered by outside foundations. Some services include dedicated classrooms that focus on educational foundations and on building the student up positively. States also offer dedicated schools with multiple resources that help students with EBD excel and transition (back) into local schools.
Texas
The state of Texas has the Texas Behavior Support Initiative (TBSI), authorized by Senate Bill 1196 and Texas Administrative Code §89.1053. It is designed to provide knowledge for the use of constructive behavior interventions and to aid students, including students with disabilities. TBSI meets the legislative requirements for the use of restraint and time-out, along with providing the baseline work for behavior strategies and prevention throughout each environment.
New York
The state of New York has Foundations Behavioral Health, an approved out-of-state educational and residential provider with the New York State Education Department. Foundations offers academic and behavioral health services to students between the ages of 14 and 21. This program allows strategic interventions within students' educational experience to aid their social and behavioral functioning. Some of the program's highlights include Functional Behavioral Assessment (FBA), Behavioral Intervention Plan (BIP), and Community Based Instruction (CBI).
California
The state of California has Spectrum Center classrooms in Los Angeles and the San Francisco area that provide emotional disabilities and behavioral services. They provide academic classrooms for students who are actively working to meet grade-level standards and toward earning their high school diploma. The main practice is the use of Positive Behavior Interventions and Supports (PBIS). PBIS instructional practices help students determine their skill level and progress, restore their skills through direct instruction and knowledge of grade-level standards, and offer small-group counseling.
Michigan
The state of Michigan has a Behavioral Education Center (BEC) in Bangor. Its purpose is to aid local school districts with students between the ages of 5 and 26 who have EBDs, helping students use appropriate behaviors and skills to successfully return to their local school setting. Classroom programs, consultation, coaching, and professional development services are available within the school districts.
Florida
The state of Florida has the Students with Emotional/Behavioral Disabilities Network (SEDNET). SEDNET projects across the state aid local school districts in working with those at risk of EBDs. "Dealing with adverse behavior in the educational environment," it serves students who function poorly at home, school, or in the community due to drug and substance abuse or mental health issues. SEDNET 2A services include: the Family Services Planning Team (FSPT), in which agencies, school officials, and SEDNET meet with parents to address a child's poor performance at school and home; Positive Behavior Support, providing technical assistance to promote positive behavior; and Classroom Observation/Teacher Consultation, working with EBD children using successful strategies and tips in a classroom environment.
References
External links
Behaviour Management (EBD) Review Group: Published reviews
School and classroom behaviour
Disability by type
Mental disorders diagnosed in childhood
Emotional issues
Somatic symptom disorder
Somatic symptom disorder, also known as somatoform disorder or somatization disorder, is defined by one or more chronic physical symptoms that coincide with excessive and maladaptive thoughts, emotions, and behaviors connected to those symptoms. The symptoms are not deliberately produced or feigned, and they may or may not coexist with a known medical ailment.
Manifestations of somatic symptom disorder are variable; symptoms can be widespread, specific, and often fluctuate. Somatic symptom disorder corresponds to the way an individual views and reacts to symptoms rather than the symptoms themselves. Somatic symptom disorder may develop in those who suffer from an existing chronic illness or medical condition.
Several studies have found a high rate of comorbidity with major depressive disorder, generalized anxiety disorder, and phobias. Somatic symptom disorder is frequently associated with functional pain syndromes like fibromyalgia and IBS. Somatic symptom disorder typically leads to poor functioning, interpersonal issues, unemployment or problems at work, and financial strain as a result of excessive health-care visits.
The cause of somatic symptom disorder is unknown. Symptoms may result from a heightened awareness of specific physical sensations paired with a tendency to interpret these experiences as signs of a medical ailment. The diagnosis is controversial, as people with a medical illness can be mislabeled as mentally ill. This is especially true for women, who are more often dismissed when they present with physical symptoms.
Signs and symptoms
Somatic symptom disorder can be detected by an ambiguous and often inconsistent history of symptoms that are rarely relieved by medical treatments. Additional signs of somatic symptom disorder include interpreting normal sensations for medical ailments, avoiding physical activity, being disproportionately sensitive to medication side effects, and seeking medical care from several physicians for the same concerns.
Manifestations of somatic symptom disorder are highly variable. Recurrent ailments usually begin before the age of 30; most patients have many somatic symptoms, while others only experience one. The severity may fluctuate, but symptoms rarely go away completely for long periods of time. Symptoms might be specific, such as regional pain and localized sensations, or general, such as fatigue, muscle aches, and malaise.
Those suffering from somatic symptom disorder experience recurring and obsessive feelings and thoughts concerning their well-being. Common examples include severe anxiety regarding potential ailments, misinterpreting normal sensations as indications of severe illness, believing that symptoms are dangerous and serious despite lacking medical basis, claiming that medical evaluations and treatment have been inadequate, fearing that engaging in physical activity will harm the body, and spending a disproportionate amount of time thinking about symptoms.
Somatic symptom disorder pertains to how an individual interprets and responds to symptoms, as opposed to the symptoms themselves. Somatic symptom disorder can occur even in those who have an underlying chronic illness or medical condition. When a somatic symptom disorder coexists with another medical ailment, people overreact to the ailment's adverse effects. They may be unresponsive to treatment or unusually sensitive to drug side effects. Those with somatic symptom disorder who also have another physical ailment may experience significant impairment beyond what is expected from the condition.
Comorbidities
Most research that looked at additional mental illnesses or self-reported psychopathological symptoms among those with somatic symptom disorder identified significant rates of comorbidity with depression and anxiety, but other psychiatric comorbidities were not usually looked at. Major depression, generalized anxiety disorder, and phobias were the most common concurrent conditions.
In studies evaluating different physical ailments, 41.5% of people with semantic dementia, 11.2% of subjects with Alzheimer's disease, 25% of female patients suffering from non-HIV lipodystrophy, and 18.5% of patients with congestive heart failure fulfilled somatic symptom disorder criteria. The 25.6% of fibromyalgia patients who met the somatic symptom disorder criteria exhibited higher depression rates than those who did not. In one study, 28.8% of those with somatic symptom disorder had asthma, 23.1% had a heart condition, and 13.5% had gout, rheumatoid arthritis, or osteoarthritis.
Complications
Alcohol and drug abuse are frequently observed, and sometimes used to alleviate symptoms, increasing the risk of dependence on controlled substances. Other complications include poor functioning, problems with relationships, unemployment or difficulties at work, and financial stress due to excessive hospital visits.
Causes
Somatic symptoms can stem from a heightened awareness of sensations in the body, alongside the tendency to interpret those sensations as ailments. Studies suggest that risk factors of somatic symptoms include childhood neglect, sexual abuse, a chaotic lifestyle, and a history of substance and alcohol abuse. Psychosocial stressors, such as unemployment and reduced job performance, may also be risk factors. There could also be a genetic element. A study of monozygotic and dizygotic twins found that genetic components contributed 7% to 21% of somatic symptoms, with the remainder related to environmental factors. In another study, various single nucleotide polymorphisms were linked to somatic symptoms.
Psychological
Evidence suggests that along with more broad factors such as early childhood trauma or insecure attachment, negative psychological factors including catastrophizing, negative affectivity, rumination, avoidance, health anxiety, or a poor physical self-concept have a significant impact on the shift from unproblematic somatic symptoms to a severely debilitating somatic symptom disorder. Those who experience more negative psychological characteristics may regard medically unexplained symptoms to be more threatening and, therefore, exhibit stronger cognitive, emotional, and behavioral awareness of such symptoms. In addition, evidence suggests that negative psychological factors have a significant impact on the impairments and behaviors of people suffering from somatic symptom disorder, as well as the long-term stability of such symptoms.
Psychosocial
Psychosocial stresses and cultural norms influence how patients present to their physicians. Americans and Koreans participated in a study measuring somatization within a cultural context. It was discovered that Korean participants used more body-related phrases when discussing their connections with stressful events and experienced more sympathy when asked to read texts using somatic expressions to discuss their emotions.
Those raised in environments where expressing emotions during stages of development is discouraged face the highest risk of somatization. In primary care settings, studies indicated that somaticizing patients had much greater rates of unemployment and decreased occupational functioning than non-somaticizing patients.
Traumatic life events may cause the development of somatic symptom disorder. Most people with somatic symptom disorder originate from dysfunctional homes. A meta-analysis study revealed a connection between sexual abuse and functional gastrointestinal syndromes, chronic pain, non-epileptic seizures, and chronic pelvic pain.
Physiological
The hypothalamic–pituitary–adrenal (HPA) axis has a crucial role in the stress response. While the HPA axis may become more active in depression, there is evidence of hypocortisolism in somatization. In somatic symptom disorder, there is a negative correlation between elevated pain scores and levels of 5-hydroxyindoleacetic acid (5-HIAA) and tryptophan.
It has been suggested that proinflammatory processes may have a role in somatic symptom disorder, such as an increase in non-specific somatic symptoms and sensitivity to painful stimuli. Proinflammatory activation and anterior cingulate cortex activity have been shown to be linked in those who experienced stressful life events for an extended period of time. It is further claimed that increased activity of the anterior cingulate cortex, which acts as a bridge between attention and emotion, leads to increased sensitivity to unwanted stimuli and bodily sensations.
Pain is a multifaceted experience, not just a sensation. While nociception refers to afferent neural activity that transmits sensory information in response to stimuli that may cause tissue damage, pain is a conscious experience requiring cortical activity and can occur in the absence of nociception. Those with somatic symptoms are thought to exaggerate their somatic symptoms through selective perception and to perceive them in accordance with an ailment. This cognitive style has been termed "somatosensory amplification". The term "central sensitization" was coined to describe the neurobiological notion that those predisposed to somatization have an overly sensitive neural network. After central sensitization, harmless and mild stimuli stimulate the nociception-specific dorsal horn cells. As a result, pain is felt in response to stimuli that would not typically cause pain.
Neuroimaging evidence
Some literature reviews of cognitive–affective neuroscience on somatic symptom disorder suggested that catastrophization in patients with somatic symptom disorders tends to present a greater vulnerability to pain. The relevant brain regions include the dorsolateral prefrontal, insular, rostral anterior cingulate, premotor, and parietal cortices.
Genetic
Genetic investigations have suggested modifications connected to the monoaminergic system, in particular, may be relevant while a shared genetic source remains unknown. Researchers take into account the various processes involved in the development of somatic symptoms as well as the interactions between various biological and psychosocial factors. Given the high occurrence of trauma, particularly throughout childhood, it has been suggested that the epigenetic changes could be explanatory. Another study found that the glucocorticoid receptor gene (NR3C1) is hypomethylated in those with somatic symptom disorder and in those with depression.
Diagnosis
Because those with somatic symptom disorder typically have had comprehensive previous workups, minimal laboratory testing is encouraged. Excessive testing increases the possibility of false-positive results, which may lead to further interventions, associated risks, and greater expense. While some practitioners order tests to reassure patients, research shows that diagnostic testing fails to alleviate somatic symptoms.
Specific tests, such as thyroid function assessments, urine drug screens, restricted blood studies, and minimal radiological imaging, may be conducted to rule out somatization because of medical issues.
Somatic Symptom Scale – 8
The Somatic Symptom Scale – 8 (SSS-8) is a short self-report questionnaire that is used to evaluate somatic symptoms. It examines the perceived severity of common somatic symptoms. The SSS-8 is a condensed version of the well-known Patient Health Questionnaire-15 (PHQ-15).
On a five-point scale, respondents rate how much stomach or digestive issues, back discomfort, pain in the legs, arms, or joints, headaches, chest pain or shortness of breath, dizziness, feeling tired or having low energy, and trouble sleeping impacted them in the preceding seven days. Ratings are added together to provide a sum score that ranges from 0 to 32 points.
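Since scoring is a plain sum of eight 0–4 item ratings, it is easy to express in code. The sketch below is a hypothetical implementation for illustration; the item wordings are paraphrased from the list above, and the severity bands in the final comment are commonly published cutoffs that should be verified against the original instrument before any real use.

```python
SSS8_ITEMS = [
    "stomach or digestive problems",
    "back pain",
    "pain in arms, legs, or joints",
    "headaches",
    "chest pain or shortness of breath",
    "dizziness",
    "feeling tired or having low energy",
    "trouble sleeping",
]

def sss8_score(ratings):
    """Sum of eight item ratings, each on a 0-4 scale (total 0-32)."""
    if len(ratings) != len(SSS8_ITEMS):
        raise ValueError("SSS-8 requires exactly eight item ratings")
    if any(not (isinstance(r, int) and 0 <= r <= 4) for r in ratings):
        raise ValueError("Each rating must be an integer from 0 to 4")
    return sum(ratings)

# Hypothetical respondent: some digestive trouble, marked fatigue, poor sleep.
print(sss8_score([2, 1, 0, 1, 0, 0, 4, 3]))  # 11
# Assumed severity bands: 0-3 minimal, 4-7 low, 8-11 medium,
# 12-15 high, 16-32 very high somatic symptom burden.
```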
DSM-5
The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) modified the entry titled "somatoform disorders" to "somatic symptom and related disorders", and modified other diagnostic labels and criteria.
The DSM-5 criteria for somatic symptom disorder includes "one or more somatic symptoms which are distressing or result in substantial impairment of daily life". Additional criteria, often known as B criteria, include "excessive thoughts, feelings, or behaviors regarding somatic symptoms or corresponding health concerns manifested by disproportionate and persistent thoughts about the severity of one's symptoms". It continues: "Although any one somatic symptom might not be consistently present, one's state of being symptomatic is continuous (typically lasting more than 6 months)."
The DSM includes five distinct descriptions for somatic symptom disorder. These include somatic symptom disorder with predominant pain, formally referred to as pain disorder, as well as classifications for mild, moderate, and severe symptoms.
International Classification of Diseases
The ICD-11 classifies somatic symptoms as "Bodily distress disorder". Bodily distress disorder is characterized by the presence of distressing bodily symptoms and excessive attention devoted to those symptoms. The ICD-11 further specifies that if another health condition is causing or contributing to the symptoms, the level of attention must be clearly excessive in relation to the nature and course of the condition.
Differential diagnosis
Somatic symptom disorder's widespread, non-specific symptoms may conceal and mimic the manifestations of other medical disorders, making diagnosis and therapy challenging. Adjustment disorder, body dysmorphic disorder, obsessive-compulsive disorder, and illness anxiety disorder may all exhibit excessive and exaggerated emotional and behavioral responses. Other functional diseases of unknown etiology, such as fibromyalgia and irritable bowel syndrome, tend not to present with excessive thoughts, feelings, or maladaptive behavior.
Somatic symptom disorder overlaps with illness anxiety disorder and conversion disorder. Illness anxiety disorder is characterized by an obsession with having or developing a dangerous, undetected medical ailment, despite the absence of bodily symptoms. Conversion disorder may present with one or more symptoms of various sorts. Motor symptoms involve weakness or paralysis; aberrant movements including tremor or dystonic movements; abnormal gait patterns; and abnormal limb posture. The presenting symptom in conversion disorder is loss of function, but in somatic symptom disorder, the emphasis is on the discomfort that specific symptoms produce. Conversion disorder often lacks the overwhelming thoughts, feelings, and behaviors that characterize somatic symptom disorder.
Treatment
Rather than focusing on treating the symptoms, the key objective is to support the patient in coping with symptoms, including both physical symptoms and psychological/behavioral ones (such as health anxiety and harmful behaviors).
Early psychiatric treatment is advised. Evidence suggests that SSRIs and SNRIs can lower pain perception. Because patients with somatic symptoms may have a low threshold for adverse reactions, medication should be started at the lowest possible dose and gradually increased to produce a therapeutic effect.
Cognitive-behavioral therapy (CBT) has been linked to significant improvements in patient-reported function and somatic symptoms, a reduction in health-care expenses, and a reduction in symptoms of depression. CBT aims to help patients realize their ailments are not catastrophic and to enable them to gradually return to activities they previously engaged in, without fear of "worsening their symptoms". Consultation and collaboration with the primary care physician have also demonstrated some effectiveness. Furthermore, brief psychodynamic interpersonal psychotherapy (PIT) for patients with somatic symptom disorder has been shown to improve physical quality of life over time in patients with multiple, difficult-to-treat, medically unexplained symptoms.
CBT can help in some of the following ways:
Learn to reduce stress
Learn to cope with physical symptoms
Learn to deal with depression and other psychological issues
Improve quality of life
Reduce preoccupation with symptoms
Electroconvulsive therapy (ECT) has been used in treating somatic symptom disorder among the elderly; however, the results remain debatable, with some concerns about the side effects of ECT. Overall, psychologists recommend addressing a common difficulty in patients with somatic symptom disorder: reading their own emotions. This may be a central feature of treatment, as is developing a close collaboration between the GP, the patient, and the mental health practitioner.
Outlook
Somatic symptom disorder is typically persistent, with symptoms that wax and wane. Chronic limitations in general function, substantial psychological impairment, and a reduction in quality of life are all common. Some investigations suggest people can recover; the natural history of the illnesses implies that around 50% to 75% of patients with medically unexplained symptoms improve, whereas 10% to 30% deteriorate. Fewer physical symptoms and better baseline functioning are stronger prognostic indicators. A strong, positive relationship between the physician and the patient is crucial, and it should be accompanied by frequent, supportive visits to avoid the temptation to medicate or test when these interventions are not obviously necessary.
Epidemiology
Somatic symptom disorder affects 5% to 7% of the general population, with a higher female representation, and can arise throughout childhood, adolescence, or adulthood. Evidence suggests that the emergence of prodromal symptoms often begins in childhood and that symptoms fitting the criteria for somatic symptom disorder are common during adolescence. A community study of adolescents found that 5% had persistent distressing physical symptoms paired with psychological concerns. In the primary care patient population, the rate rises to around 17%. Patients with functional illnesses such as fibromyalgia, irritable bowel syndrome, and chronic fatigue syndrome have a greater prevalence of somatic symptom disorder. The reported frequency of somatic symptom disorder, as defined by DSM-5 criteria, ranges from 25 to 60% among these patients.
There are cultural differences in the prevalence of somatic symptom disorder. For example, somatic symptom disorder and symptoms were found to be significantly more common in Puerto Rico. In addition the diagnosis is also more prevalent among African Americans and those with less than a high school education or lower socioeconomic status.
There is usually co-morbidity with other psychological disorders, particularly mood disorders or anxiety disorders. Research also showed comorbidity between somatic symptom disorder and personality disorders, especially antisocial, borderline, narcissistic, histrionic, avoidant, and dependent personality disorder.
About 10–20 percent of female first-degree relatives also have somatic symptom disorder, and male relatives have increased rates of alcoholism and sociopathy.
History
Somatization is an idea that physicians have been attempting to comprehend since the dawn of time. The Egyptians and Sumerians were reported to have utilized the notions of melancholia and hysteria as early as 2600 BC. For many years, somatization was used in conjunction with the terms hysteria, melancholia, and hypochondriasis.
During the 17th century, knowledge of the central nervous system grew, giving rise to the notion that numerous inexplicable illnesses could be linked to the brain. Thomas Willis, widely regarded as the father of neurology, recognized hysteria in women and hypochondria in males as brain disorders. Thomas Sydenham contributed significantly to the belief that hysteria and hypochondria are mental rather than physical illnesses. The term "English Malady" was used by George Cheyne to denote that hysteria and hypochondriasis are brain and/or mind-related disorders.
Wilhelm Stekel, an Austrian psychoanalyst, was the first to introduce the term somatization, and Paul Briquet was the first to characterize what is now known as somatic symptom disorder. Briquet described patients who had been unwell for most of their lives and complained of a variety of symptoms from various organ systems, whose symptoms persisted despite many appointments, hospitalizations, and tests. Somatic symptom disorder was later dubbed "Briquet syndrome" in his honor. Over time, the concept of hysteria was used in place of a personality or character type, conversion responses, phobia, and anxiety to accompany psychoneuroses, and its incorporation into everyday English as a negative word led to a distancing from the concept.
Controversy
Somatic symptom disorder has long been a contentious diagnosis because it was based solely on negative criteria, namely the absence of a medical explanation for the presenting physical problems. As a result, any person suffering from a poorly understood illness may meet the criteria for this psychological diagnosis, regardless of whether they exhibit psychiatric symptoms in the traditional sense.
Misdiagnosis
In the opinion of Allen Frances, chair of the DSM-IV task force, the DSM-5's somatic symptom disorder brings with it a risk of mislabeling a sizable proportion of the population as mentally ill.
See also
Conversion disorder
Jurosomatic illness
Munchausen syndrome
Nocebo
Psychosomatic medicine
Psychoneuroimmunology
Functional neurological disorder
References
Further reading
Somatic psychology
Cognitive miser
In psychology, the human mind is considered to be a cognitive miser due to the tendency of humans to think and solve problems in simpler and less effortful ways rather than in more sophisticated and effortful ways, regardless of intelligence. Just as a miser seeks to avoid spending money, the human mind often seeks to avoid spending cognitive effort. The cognitive miser theory is an umbrella theory of cognition that brings together previous research on heuristics and attributional biases to explain when and why people are cognitive misers.
The term cognitive miser was first introduced by Susan Fiske and Shelley Taylor in 1984. It is an important concept in social cognition theory and has been influential in other social sciences such as economics and political science.
Assumption
The metaphor of the cognitive miser assumes that the human mind is limited in time, knowledge, attention, and cognitive resources. Usually people do not think rationally or cautiously, but use cognitive shortcuts to make inferences and form judgments. These shortcuts include the use of schemas, scripts, stereotypes, and other simplified perceptual strategies instead of careful thinking. For example, people tend to engage in correspondent reasoning, believing that behaviors correlate with, or are representative of, stable underlying characteristics.
Background
The naïve scientist and attribution theory
Before Fiske and Taylor's cognitive miser theory, the predominant model of social cognition was the naïve scientist. First proposed in 1958 by Fritz Heider in The Psychology of Interpersonal Relations, this theory holds that humans think and act with dispassionate rationality whilst engaging in detailed and nuanced thought processes for both complex and routine actions. In this way, humans were thought to think like scientists, albeit naïve ones, measuring and analyzing the world around them. Applying this framework to human thought processes, naïve scientists seek the consistency and stability that comes from a coherent view of the world and need for environmental control.
In order to meet these needs, naïve scientists make attributions. Thus, attribution theory emerged from the study of the ways in which individuals assess causal relationships and mechanisms. Through the study of causal attributions, led by Harold Kelley and Bernard Weiner amongst others, social psychologists began to observe that subjects regularly demonstrate several attributional biases including but not limited to the fundamental attribution error.
The study of attributions had two effects: it created further interest in testing the naive scientist and opened up a new wave of social psychology research that questioned its explanatory power. This second effect helped to lay the foundation for Fiske and Taylor's cognitive miser.
Stereotypes
According to Walter Lippmann's arguments in his classic book Public Opinion, people are not equipped to deal with complexity. Attempting to observe things freshly and in detail is mentally exhausting, especially amid busy affairs. The term stereotype is thus introduced: people have to reconstruct a complex situation on a simpler model before they can cope with it, and that simpler model can be regarded as a stereotype. Stereotypes are formed from outside sources identified with people's interests, and they can be reinforced because people are most impressed by facts that fit their philosophy.
On the other hand, in Lippmann's view, people are told about the world before they see it. People's behavior is based not on direct and certain knowledge but on pictures made by or given to them. Hence, the influence of external factors in shaping people's stereotypes cannot be neglected. "The subtlest and most pervasive of all influences are those which create and maintain the repertory of stereotypes." That is to say, people live in a second-hand world of mediated reality, where the simplified model for thinking (i.e., stereotypes) can be created and maintained by external forces. Lippmann suggested that the public "cannot be wise", since they can be easily misled by an overly simplified reality that is consistent with their pre-existing pictures in mind, and any disturbance of the existing stereotypes will seem like "an attack upon the foundation of the universe".
Although Lippmann did not directly define the term cognitive miser, stereotypes have an important function in simplifying people's thinking processes. As a cognitive simplification, the stereotype is useful for the economical management of reality; otherwise, people would be overwhelmed by the complexity of the real world. Stereotype, as a phenomenon, has become a standard topic in sociology and social psychology.
Heuristics
Much of the cognitive miser theory is built upon work done on heuristics in judgment and decision-making, most notably the results Amos Tversky and Daniel Kahneman published in a series of influential articles. Heuristics can be defined as the "judgmental shortcuts that generally get us where we need to go—and quickly—but at the cost of occasionally sending us off course." In their work, Kahneman and Tversky demonstrated that people rely upon different types of heuristics or mental short cuts in order to save time and mental energy. However, in relying upon heuristics instead of detailed analysis, like the information processing employed by Heider's naïve scientist, biased information processing is more likely to occur. Some of these heuristics include:
representativeness heuristic (the inclination to assign specific attributes to an individual the more he/she matches the prototype of that group).
availability heuristic (the inclination to judge the likelihood of something occurring because of the ease of thinking of examples of that event occurring).
anchoring and adjustment heuristic (the inclination to overweight the importance and influence of an initial piece of information, and then adjusting one's answer away from this anchor).
The frequency with which Kahneman, Tversky, and other attribution researchers found that individuals employed mental shortcuts to make decisions and assessments laid important groundwork for the overarching idea that individuals and their minds act efficiently rather than analytically.
Cognitive miser theory
The wave of research on attributional biases done by Kahneman, Tversky and others effectively ended the dominance of Heider's naïve scientist within social psychology. Fiske and Taylor, building upon the prevalence of heuristics in human cognition, offered their theory of the cognitive miser. It is, in many ways, a unifying theory of ad-hoc decision-making which suggests that humans engage in economically prudent thought processes instead of acting like scientists who rationally weigh cost and benefit data, test hypotheses, and update expectations based upon the results of the discrete experiments that are our everyday actions. In other words, humans are more inclined to act as cognitive misers using mental short cuts to make assessments and decisions regarding issues and ideas about which they know very little, including issues of great salience. Fiske and Taylor argue that it is rational to act as a cognitive miser due to the sheer volume and intensity of information and stimuli humans intake. Given the limited information processing capabilities of individuals, people try to adopt strategies that economise complex problems. Cognitive misers usually act in two ways: by disregarding part of the information to reduce their own cognitive load, or by overusing some kind of information to avoid the burden of finding and processing more information.
Other psychologists also argue that the cognitively miserly tendency of humans is a primary reason why "humans are often less than rational". This view holds that evolution has made the brain extremely frugal in its allocation and use of cognitive resources. The basic principle is to save mental energy as much as possible, even when one is required to "use your head". Unless the cognitive environment meets certain criteria, we will, by default, try to avoid thinking as much as possible.
Implications
The implications of this theory raise important questions about both cognition and human behavior. In addition to streamlining cognition in complicated, analytical tasks, the cognitive miser approach is also used when dealing with unfamiliar issues and issues of great importance.
Politics
Voting behavior in democracies is an arena in which the cognitive miser is at work. Acting as a cognitive miser should lead those with expertise in an area to more efficient information processing and streamlined decision making. However, as Lau and Redlawsk note, acting as a cognitive miser who employs heuristics can have very different results for high-information and low-information voters. They write, "...cognitive heuristics are at times employed by almost all voters, and that they are particularly likely to be used when the choice situation facing voters is complex... heuristic use generally increases the probability of a correct vote by political experts but decreases the probability of a correct vote by novices." In democracies, where no vote is weighted more or less because of the expertise behind its casting, low-information voters acting as cognitive misers can make choices with broad and potentially deleterious consequences for a society.
Samuel Popkin argues that voters make rational choices by using information shortcuts that they receive during campaigns, usually using something akin to a drunkard's search. Voters use small amounts of personal information to construct a narrative about candidates. Essentially, they ask themselves this: "Based on what I know about the candidate personally, what is the probability that this presidential candidate was a good governor? What is the probability that he will be a good president?" Popkin's analysis is based on one main premise: voters use low information rationality gained in their daily lives, through the media and through personal interactions, to evaluate candidates and facilitate electoral choices.
Economics
Cognitive miserliness could also be one of the contributors to the prisoner's dilemma in game theory. To save cognitive energy, cognitive misers tend to assume that other people are similar to themselves: habitual cooperators assume that most others are cooperators, and habitual defectors assume that most others are defectors. Experimental research has shown that, since cooperators offer to play more often and fellow cooperators more often accept their offers, cooperators can have a higher expected payoff than defectors when certain boundary conditions are met.
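The expected-payoff logic can be made concrete with a small calculation. The sketch below uses standard prisoner's dilemma payoffs (temptation > reward > punishment > sucker) with arbitrarily chosen values, and treats the degree to which cooperators end up matched with other cooperators as an assumed parameter; none of the numbers come from the cited experiments.

```python
# Standard PD payoff ordering T > R > P > S; the values are illustrative.
T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation, reward, punishment, sucker

def expected_payoffs(p_coop_partner_for_coop, p_coop_partner_for_defect):
    """Expected one-shot payoffs for a cooperator and a defector, given the
    probability that each faces a cooperating partner."""
    coop = p_coop_partner_for_coop * R + (1 - p_coop_partner_for_coop) * S
    defect = p_coop_partner_for_defect * T + (1 - p_coop_partner_for_defect) * P
    return coop, defect

# With assortment (cooperators mostly meet cooperators), cooperation pays:
print(expected_payoffs(0.9, 0.2))  # (2.7, 1.8) -> cooperators earn more
# With random mixing, defection dominates, as in the classic dilemma:
print(expected_payoffs(0.5, 0.5))  # (1.5, 3.0)
```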
Mass communication
Lack of public support for emerging technologies is commonly attributed to a lack of relevant information and low scientific literacy among the public. Known as the knowledge deficit model, this point of view rests on the idealistic assumption that educating for science literacy can increase public support for science, and that the focus of science communication should be increasing scientific understanding among the lay public. However, the relationship between information and attitudes toward scientific issues is not empirically supported.
Based on the assumption that human beings are cognitive misers who tend to minimize cognitive costs, low-information rationality was introduced as an empirically grounded alternative for explaining decision making and attitude formation. Rather than drawing on an in-depth understanding of scientific topics, people make decisions based on shortcuts or heuristics such as ideological predispositions or cues from mass media, due to the subconscious compulsion to use only as much information as necessary. The less expertise citizens have on an issue initially, the more likely they are to rely on these shortcuts. Further, people spend less cognitive effort in buying toothpaste than they do when picking a new car, and that difference in information-seeking is largely a function of the costs.
The cognitive miser theory thus has implications for persuading the public: attitude formation is a competition between people's value systems and predispositions (or their own interpretive schemata) on a certain issue, and how public discourses frame it. Framing theory suggests that the same topic will result in different interpretations among the audience if the information is presented in different ways. Audiences' attitude change is closely connected with relabeling or re-framing the issue in question. In this sense, effective communication can be achieved if media provide audiences with cognitive shortcuts or heuristics that resonate with underlying audience schemata.
Risk assessment
The metaphor of the cognitive miser can assist people in drawing lessons from risks, where risk is the possibility that an undesirable state of reality may occur. People apply a number of shortcuts or heuristics in making judgments about the likelihood of an event, because the rapid answers provided by heuristics are often right. Yet these shortcuts may overlook certain pitfalls. A practical example of the cognitively miserly way of thinking, in the context of a risk assessment of the Deepwater Horizon explosion, is presented below.
People have trouble in imagining how small failings can pile up to form a catastrophe;
People tend to get accustomed to risk. Due to the seemingly smooth current situation, people unconsciously adjust their acceptance of risk;
People tend to over-express their faith and confidence in backup systems and safety devices;
People regard complicated technical systems in line with complicated governing structures;
When concerned with a certain issue, people tend to spread good news and hide bad news;
People tend to think alike if they are in the same field (see also: echo chamber), regardless of their position in a project's hierarchy.
Psychology
The theory that human beings are cognitive misers also sheds light on the dual process theory in psychology. Dual process theory proposes that there are two types of cognitive processes in the human mind. Daniel Kahneman described these as intuitive (System 1) and reasoning (System 2), respectively.
When processing with System 1, which starts automatically and without control, people expend little to no effort, but can generate complex patterns of ideas. When processing with System 2, people actively consider how best to distribute mental effort to accurately process data, and can construct thoughts in an orderly series of steps. These two cognitive processing systems are not separate and can have interactions with each other. Here is an example of how people's beliefs are formed under the dual process model:
System 1 generates suggestions for System 2, with impressions, intuitions, intentions or feelings;
If System 1's proposal is endorsed by System 2, those impressions and intuitions will turn into beliefs, and the sudden inspiration generated by System 1 will turn into voluntary actions;
When everything goes smoothly (as is often the case), System 2 adopts the suggestions of System 1 with little or no modification. Herein there is a window for bias to form, as System 2 may be trained to incorrectly regard the accuracy of data derived from observations gathered via System 1.
The reasoning process can be activated to help with the intuition when:
A question arises, but System 1 does not generate an answer;
An event is detected to violate the model of the world that System 1 maintains.
Conflicts also exist in this dual process. A brief example provided by Kahneman is that when we try not to stare at the oddly dressed couple at the neighboring table in a restaurant, our automatic reaction (System 1) makes us stare at them, and conflict emerges as System 2 tries to control this behavior.
The dual processing system can produce cognitive illusions. System 1 always operates automatically, taking the easiest shortcut but often committing errors; System 2 may have no clue that an error has occurred. Errors can be prevented only by enhanced monitoring by System 2, which costs a great deal of cognitive effort.
Limitations
Omission of motivation
The cognitive miser theory did not originally specify the role of motivation. Fiske's subsequent research recognized this omission of the role of intent from the metaphor of the cognitive miser. Motivation does affect the activation and use of stereotypes and prejudices.
Updates and later research
Motivated tactician
People tend to use heuristic shortcuts when making decisions. But a problem remains: although these shortcuts cannot match effortful thought in accuracy, people need some criterion for selecting the most adequate shortcut. Kruglanski proposed that people are a combination of naïve scientists and cognitive misers: flexible social thinkers who choose among multiple cognitive strategies (i.e., speed/ease vs. accuracy/logic) based on their current goals, motives, and needs.
Later models suggest that the cognitive miser and the naïve scientist form two poles of social cognition that are too monolithic on their own. Instead, Fiske, Taylor, Arie W. Kruglanski, and other social psychologists offer an alternative explanation of social cognition: the motivated tactician. According to this view, people employ either shortcuts or thoughtful analysis depending on the context and salience of a particular issue. In other words, humans are, in fact, both naïve scientists and cognitive misers. In this sense, people are strategic rather than passive when allocating their cognitive effort, and they can therefore decide to be naïve scientists or cognitive misers depending on their goals.
See also
Bounded rationality
Low-information voter
Motivated reasoning
Representativeness heuristic
Path of least resistance
References
Further reading
Cognition
Psychological theories
Information
Social theories
Chemistry
Chemistry is the scientific study of the properties and behavior of matter. It is a physical science within the natural sciences that studies the chemical elements that make up matter and compounds made of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during reactions with other substances. Chemistry also addresses the nature of chemical bonds in chemical compounds.
In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant growth (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the Moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics).
Chemistry has existed under various names since ancient times. It has evolved, and now chemistry encompasses various areas of specialisation, or subdisciplines, that continue to increase in number and interrelate to create further interdisciplinary fields of study. The applications of various fields of chemistry are used frequently for economic purposes in the chemical industry.
Etymology
The word chemistry comes from a modification during the Renaissance of the word alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism, and medicine. Alchemy is often associated with the quest to turn lead or other base metals into gold, though alchemists were also interested in many of the questions of modern chemistry.
The modern word alchemy in turn is derived from the Arabic word al-kīmiyā (الكيمياء). This may have Egyptian origins, since al-kīmiyā is derived from the Ancient Greek khēmia (χημία), which is in turn derived from Kemet, the ancient name of Egypt in the Egyptian language. Alternately, al-kīmiyā may derive from the Ancient Greek khumeia (χυμεία), 'cast together'.
Modern principles
The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. Matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. The interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. Such behaviors are studied in a chemistry laboratory.
The chemistry laboratory stereotypically uses various forms of laboratory glassware. However glassware is not central to chemistry, and a great deal of experimental (as well as applied/industrial) chemistry is done without it.
A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. The number of atoms on the left and the right in the equation for a chemical transformation is equal. (When the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay.) The type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws.
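To make the balancing rule concrete, here is a minimal illustration (a standard textbook example, not drawn from this article): the combustion of methane is written

\[ \mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O} \]

with one carbon atom, four hydrogen atoms and four oxygen atoms on each side of the arrow, as the conservation rule above requires.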
Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. They can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry; some of them are:
Matter
In chemistry, matter is defined as anything that has rest mass and volume (it takes up space) and is made up of particles. The particles that make up matter have rest mass as well, though not all particles have rest mass; the photon, for example, has none. Matter can be a pure chemical substance or a mixture of substances.
Atom
The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons (together called nucleons), while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons. The nucleus is dense; the mass of a nucleon is approximately 1,836 times that of an electron, yet the radius of an atom is about 10,000 times that of its nucleus.
The atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent).
Element
A chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus. Although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13.
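To make the arithmetic explicit (a standard relation, supplied here for clarity), the mass number A is the sum of the proton number Z and the neutron number N:

\[ A = Z + N \]

Thus carbon-12 (Z = 6, N = 6) and carbon-13 (Z = 6, N = 7) are isotopes of the same element with different mass numbers.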
The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. The periodic table is useful in identifying periodic trends.
Compound
A compound is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, then they are divided into two classes, the electropositive and the electronegative components. In addition the Chemical Abstracts Service has devised a method to index chemical substances. In this scheme each chemical substance is identifiable by a number known as its CAS registry number.
Molecule
A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below). Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs.
Thus, molecules exist as electrically neutral units, unlike ions. When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well-separated form, such as a directed beam in a vacuum in a mass spectrometer. Charged polyatomic collections residing in solids (for example, common sulfate or nitrate ions) are generally not considered "molecules" in chemistry. Some molecules contain one or more unpaired electrons, creating radicals. Most radicals are comparatively reactive, but some, such as nitric oxide (NO), can be stable.
The "inert" or noble gas elements (helium, neon, argon, krypton, xenon and radon) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. Identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals.
However, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. These other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. Examples of such substances are mineral salts (such as table salt), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite.
One of the main characteristics of a molecule is its geometry, often called its structure. While the structure of diatomic, triatomic or tetra-atomic molecules may be trivial (linear, angular, pyramidal etc.), the structure of polyatomic molecules, constituted of more than six atoms (of several elements), can be crucial for their chemical nature.
Substance and mixture
A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture. Examples of mixtures are air and alloys.
Mole and amount of substance
The mole is a unit of measurement that denotes an amount of substance (also called chemical amount). One mole is defined to contain exactly 6.02214076 × 10^23 particles (atoms, molecules, ions, or electrons), where the number of particles per mole is known as the Avogadro constant. Molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol/dm3.
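A brief worked example (with quantities invented purely for illustration): molar concentration c is the amount of substance n divided by the volume V of solution,

\[ c = \frac{n}{V} \]

so dissolving n = 0.1 mol of NaCl in V = 0.5 dm3 of solution gives c = 0.2 mol/dm3.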
Phase
In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. For the most part, the chemical classifications are independent of these bulk phase classifications; however, some more exotic phases are incompatible with certain chemical properties. A phase is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature.
Physical properties, such as density and refractive index tend to fall within values characteristic of the phase. The phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions.
Sometimes the distinction between phases can be continuous instead of having a discrete boundary; in this case the matter is considered to be in a supercritical state. When three states meet based on the conditions, it is known as a triple point, and since this is invariant, it is a convenient way to define a set of conditions.
The most familiar examples of phases are solids, liquids, and gases. Many substances exhibit multiple solid phases. For example, there are three phases of solid iron (alpha, gamma, and delta) that vary based on temperature and pressure. A principal difference between solid phases is the crystal structure, or arrangement, of the atoms. Another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution (that is, in water).
Less familiar phases include plasmas, Bose–Einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. While most familiar phases deal with three-dimensional systems, it is also possible to define analogs in two-dimensional systems, which has received attention for its relevance to systems in biology.
Bonding
Atoms sticking together in molecules or crystals are said to be bonded with one another. A chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom.
The chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or simply a result of the Van der Waals force. Each of these kinds of bonds is ascribed to some potential. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, valence bond theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to explain molecular structure and composition.
An ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non-metal atom, becoming a negatively charged anion. The two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. For example, sodium (Na), a metal, loses one electron to become an Na+ cation while chlorine (Cl), a non-metal, gains this electron to become Cl−. The ions are held together due to electrostatic attraction, and that compound sodium chloride (NaCl), or common table salt, is formed.
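Written as half-equations (standard notation, added here as a sketch of the electron transfer just described), the formation of the two ions is

\[ \mathrm{Na} \rightarrow \mathrm{Na^+} + e^- \qquad\qquad \mathrm{Cl} + e^- \rightarrow \mathrm{Cl^-} \]

and the electrostatic attraction between the resulting Na+ and Cl− ions constitutes the ionic bond in NaCl.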
In a covalent bond, one or more pairs of valence electrons are shared by two atoms: the resulting electrically neutral group of bonded atoms is termed a molecule. Atoms will share valence electrons in such a way as to create a noble gas electron configuration (eight electrons in their outermost shell) for each atom. Atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. However, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell.
Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. See diagram on electronic orbitals.
Energy
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants.
A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. A reaction is said to be exothermic if the reaction releases heat to the surroundings; in the case of endothermic reactions, the reaction absorbs heat from the surroundings.
Chemical reactions are not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/RT) – that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation.
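Written out in full (the standard form of the Arrhenius equation, supplied for clarity rather than quoted from this article), the rate constant k varies with temperature as

\[ k = A\, e^{-E/(RT)} \]

where E is the activation energy, R the gas constant, T the absolute temperature, and A a pre-exponential factor; the exponential is the Boltzmann population factor mentioned above.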
The activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound.
A related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction in chemical thermodynamics. A reaction is feasible only if the total change in the Gibbs free energy is negative, ΔG ≤ 0; if it is equal to zero, the chemical reaction is said to be at equilibrium.
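For reference (standard thermodynamic notation, added for clarity), the Gibbs free energy change combines enthalpy and entropy changes as

\[ \Delta G = \Delta H - T\,\Delta S \]

so a reaction is feasible when ΔG ≤ 0 and at equilibrium when ΔG = 0, in line with the criterion above.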
There exist only limited possible states of energy for electrons, atoms and molecules. These are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. The atoms/molecules in a higher energy state are said to be excited. The molecules/atoms of substance in an excited energy state are often much more reactive; that is, more amenable to chemical reactions.
The phase of a substance is invariably determined by its energy and the energy of its surroundings. When the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid, as is the case with water (H2O), a liquid at room temperature because its molecules are bound by hydrogen bonds. By contrast, hydrogen sulfide (H2S) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole–dipole interactions.
The transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy.
The existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. Different kinds of spectra are often used in chemical spectroscopy, e.g. IR, microwave, NMR, ESR, etc. Spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra.
The term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances.
Reaction
When a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. A chemical reaction is therefore a concept related to the "reaction" of a substance when it comes in close contact with another, whether as a mixture or a solution; exposure to some form of energy, or both. It results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels—often laboratory glassware.
Chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. Oxidation, reduction, dissociation, acid–base neutralization and molecular rearrangement are some examples of common chemical reactions.
A chemical reaction can be symbolically depicted through a chemical equation. While in a non-nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons.
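As an illustration of this conservation rule (Rutherford's classic nitrogen transmutation, mentioned again in the History section below; standard notation, not quoted from this article), a nuclear equation balances nucleon number and charge rather than atom counts:

\[ {}^{14}_{7}\mathrm{N} + {}^{4}_{2}\mathrm{He} \rightarrow {}^{17}_{8}\mathrm{O} + {}^{1}_{1}\mathrm{H} \]

Here the mass numbers (14 + 4 = 17 + 1) and the atomic numbers (7 + 2 = 8 + 1) agree on both sides.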
The sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. A chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. Many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. Reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. Many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. Several empirical rules, like the Woodward–Hoffmann rules often come in handy while proposing a mechanism for a chemical reaction.
According to the IUPAC Gold Book, a chemical reaction is "a process that results in the interconversion of chemical species." Accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. An additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. Such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities (i.e. 'microscopic chemical events').
Ions and salts
An ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. When an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. When an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. Cations and anions can form a crystalline lattice of neutral salts, such as the Na+ and Cl− ions forming sodium chloride, or NaCl. Examples of polyatomic ions that do not split up during acid–base reactions are hydroxide (OH−) and phosphate (PO43−).
Plasma is composed of gaseous matter that has been completely ionized, usually through high temperature.
Acidity and basicity
A substance can often be classified as an acid or a base. There are several different theories which explain acid–base behavior. The simplest is Arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. According to Brønsted–Lowry acid–base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction; by extension, a base is the substance which receives that hydrogen ion.
A third common theory is Lewis acid–base theory, which is based on the formation of new chemical bonds. Lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. There are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept.
Acid strength is commonly measured by two methods. One measurement, based on the Arrhenius definition of acidity, is pH, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. Thus, solutions that have a low pH have a high hydronium ion concentration and can be said to be more acidic. The other measurement, based on the Brønsted–Lowry definition, is the acid dissociation constant (Ka), which measures the relative ability of a substance to act as an acid under the Brønsted–Lowry definition of an acid. That is, substances with a higher Ka are more likely to donate hydrogen ions in chemical reactions than those with lower Ka values.
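In symbols (standard definitions in the simplified concentration form used here, added for clarity), the two measures are

\[ \mathrm{pH} = -\log_{10}[\mathrm{H_3O^+}] \qquad\qquad K_a = \frac{[\mathrm{A^-}][\mathrm{H_3O^+}]}{[\mathrm{HA}]} \]

so a solution with [H3O+] = 10^−3 mol/dm3 has pH 3, and an acid HA with a larger Ka donates its hydrogen ion more readily.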
Redox
Redox (reduction–oxidation) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons (reduction) or losing electrons (oxidation). Substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. An oxidant removes electrons from another substance. Similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers.
A reductant transfers electrons to another substance and is thus oxidized itself. And because it "donates" electrons it is also called an electron donor. Oxidation and reduction properly refer to a change in oxidation number—the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number.
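A standard example (not drawn from this article) makes the bookkeeping explicit. In the reaction

\[ \mathrm{Zn} + \mathrm{Cu^{2+}} \rightarrow \mathrm{Zn^{2+}} + \mathrm{Cu} \]

zinc is oxidized (its oxidation number rises from 0 to +2, so it acts as the reductant), while the copper ion is reduced (from +2 to 0, so it acts as the oxidant).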
Equilibrium
Although the concept of equilibrium is widely used across sciences, in the context of chemistry, it arises whenever a number of different states of the chemical composition are possible, as for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase.
A system of chemical substances at equilibrium, even though having an unchanging composition, is most often not static; molecules of the substances continue to react with one another thus giving rise to a dynamic equilibrium. Thus the concept describes the state in which the parameters such as chemical composition remain unchanged over time.
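In symbols (the standard law of mass action, supplied for clarity rather than quoted from this article), for a reaction aA + bB ⇌ cC + dD the dynamic equilibrium is characterized by the equilibrium constant

\[ K = \frac{[\mathrm{C}]^{c}\,[\mathrm{D}]^{d}}{[\mathrm{A}]^{a}\,[\mathrm{B}]^{b}} \]

whose value remains unchanged over time even though the forward and reverse reactions continue to occur.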
Chemical laws
Chemical reactions are governed by certain laws, which have become fundamental concepts in chemistry. Some of them are listed below; the three classical gas laws among them are also written out as formulas after the list:
Avogadro's law
Beer–Lambert law
Boyle's law (1662, relating pressure and volume)
Charles's law (1787, relating volume and temperature)
Fick's laws of diffusion
Gay-Lussac's law (1809, relating pressure and temperature)
Le Chatelier's principle
Henry's law
Hess's law
Law of conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics.
Law of conservation of mass: mass continues to be conserved in isolated systems, even in modern physics. However, special relativity shows that, due to mass–energy equivalence, whenever non-material "energy" (heat, light, kinetic energy) is removed from a non-isolated system, some mass will be lost with it. High energy losses result in the loss of weighable amounts of mass, an important topic in nuclear chemistry.
Law of definite composition, although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction.
Law of multiple proportions
Raoult's law
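As flagged above, the three classical gas laws can be written compactly (standard formulations, supplied for clarity); for a fixed amount of gas,

\[ pV = \text{const.}\ (T\ \text{fixed}), \qquad \frac{V}{T} = \text{const.}\ (p\ \text{fixed}), \qquad \frac{p}{T} = \text{const.}\ (V\ \text{fixed}) \]

for Boyle's, Charles's and Gay-Lussac's laws respectively; combined with Avogadro's law, they yield the ideal gas law pV = nRT.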
History
The history of chemistry spans a period from the ancient past to the present. Since several millennia BC, civilizations were using technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze.
Chemistry was preceded by its protoscience, alchemy, which operated a non-scientific approach to understanding the constituents of matter and their interactions. Despite being unsuccessful in explaining the nature of matter and its transformations, alchemists set the stage for modern chemistry by performing experiments and recording the results. Robert Boyle, although skeptical of elements and convinced of alchemy, played a key part in elevating the "sacred art" as an independent, fundamental and philosophical discipline in his work The Sceptical Chymist (1661).
While both alchemy and chemistry are concerned with matter and its transformations, the crucial difference was given by the scientific method that chemists employed in their work. Chemistry, as a body of knowledge distinct from alchemy, became an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry afterwards is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs.
Definition
The definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. The term "chymistry", in the view of noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1663, the chemist Christopher Glaser described "chymistry" as a scientific art, by which one learns to dissolve bodies, and draw from them the different substances on their composition, and how to unite them again, and exalt them to a higher perfection.
The 1730 definition of the word "chemistry", as used by Georg Ernst Stahl, meant the art of resolving mixed, compound, or aggregate bodies into their principles; and of composing such bodies from those principles. In 1837, Jean-Baptiste Dumas considered the word "chemistry" to refer to the science concerned with the laws and effects of molecular forces. This definition further evolved until, in 1947, it came to mean the science of substances: their structure, their properties, and the reactions that change them into other substances – a characterization accepted by Linus Pauling. More recently, in 1998, Professor Raymond Chang broadened the definition of "chemistry" to mean the study of matter and the changes it undergoes.
Background
Early civilizations, such as the Egyptians, Babylonians, and Indians, amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but did not develop a systematic theory.
A basic chemical hypothesis first emerged in Classical Greece with the theory of four elements as propounded definitively by Aristotle, stating that fire, air, earth and water were the fundamental elements from which everything is formed as a combination. Greek atomism dates back to 440 BC, arising in works by philosophers such as Democritus and Epicurus. In 50 BC, the Roman philosopher Lucretius expanded upon the theory in his poem De rerum natura (On The Nature of Things). Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments.
An early form of the idea of conservation of mass is the notion that "Nothing comes from nothing" in Ancient Greek philosophy, which can be found in Empedocles (approx. 4th century BC): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." and Epicurus (3rd century BC), who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be".
In the Hellenistic world the art of alchemy first proliferated, mingling magic and occultism into the study of natural substances with the ultimate goal of transmuting elements into gold and discovering the elixir of eternal life. Work, particularly the development of distillation, continued in the early Byzantine period with the most famous practitioner being the 4th century Greek-Egyptian Zosimos of Panopolis. Alchemy continued to be developed and practised throughout the Arab world after the Muslim conquests, and from there, and from the Byzantine remnants, diffused into medieval and Renaissance Europe through Latin translations.
The Arabic works attributed to Jabir ibn Hayyan introduced a systematic classification of chemical substances, and provided instructions for deriving an inorganic compound (sal ammoniac or ammonium chloride) from organic substances (such as plants, blood, and hair) by chemical means. Some Arabic Jabirian works (e.g., the "Book of Mercy", and the "Book of Seventy") were later translated into Latin under the Latinized name "Geber", and in 13th-century Europe an anonymous writer, usually referred to as pseudo-Geber, started to produce alchemical and metallurgical writings under this name. Later influential Muslim philosophers, such as Abū al-Rayhān al-Bīrūnī and Avicenna disputed the theories of alchemy, particularly the theory of the transmutation of metals.
Improvements in the refining of ores and their extraction to smelt metals were a widely used source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his major work De re metallica in 1556. His work, describing the highly developed and complex processes of mining metal ores and metal extraction, was the pinnacle of metallurgy during that time. His approach removed the mysticism associated with the subject, creating the practical base upon which others could and would build. The work describes the many kinds of furnace used to smelt ore, and stimulated interest in minerals and their composition. Agricola has been described as the "father of metallurgy" and the founder of geology as a scientific discipline.
Under the influence of the new empirical methods propounded by Sir Francis Bacon and others, a group of chemists at Oxford (Robert Boyle, Robert Hooke and John Mayow) began to reshape the old alchemical traditions into a scientific discipline. Boyle in particular questioned some commonly held chemical theories and argued for chemical practitioners to be more "philosophical" and less commercially focused in The Sceptical Chymist. He formulated Boyle's law, rejected the classical "four elements" and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment.
In the following decades, many important discoveries were made, such as the nature of 'air' which was discovered to be composed of many different gases. The Scottish chemist Joseph Black and the Flemish Jan Baptist van Helmont discovered carbon dioxide, or what Black called 'fixed air' in 1754; Henry Cavendish discovered hydrogen and elucidated its properties and Joseph Priestley and, independently, Carl Wilhelm Scheele isolated pure oxygen. The theory of phlogiston (a substance at the root of all combustion) was propounded by the German Georg Ernst Stahl in the early 18th century and was only overturned by the end of the century by the French chemist Antoine Lavoisier, the chemical analogue of Newton in physics. Lavoisier did more than any other to establish the new science on proper theoretical footing, by elucidating the principle of conservation of mass and developing a new system of chemical nomenclature used to this day.
English scientist John Dalton proposed the modern theory of atoms; that all substances are composed of indivisible 'atoms' of matter and that different atoms have varying atomic weights.
The development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, Jöns Jacob Berzelius and Humphry Davy, made possible by the prior invention of the voltaic pile by Alessandro Volta. Davy discovered nine new elements including the alkali metals by extracting them from their oxides with electric current.
The British chemist William Prout first proposed ordering all the elements by their atomic weight, on the view that all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. J.A.R. Newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by Dmitri Mendeleev and independently by several other scientists including Julius Lothar Meyer. The inert gases, later called the noble gases, were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the table.
At the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. In 1897, J.J. Thomson of the University of Cambridge discovered the electron and soon after the French scientist Becquerel as well as the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles.
His work on atomic structure was improved on by his students, the Danish physicist Niels Bohr, the Englishman Henry Moseley and the German Otto Hahn, who went on to father the emerging nuclear chemistry and discovered nuclear fission. The electronic theory of chemical bonds and molecular orbitals was developed by the American scientists Linus Pauling and Gilbert N. Lewis.
Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea. Other crucial 19th century advances were: an understanding of valence bonding (Edward Frankland in 1852) and the application of thermodynamics to chemistry (J. W. Gibbs and Svante Arrhenius in the 1870s).
The year 2011 was declared by the United Nations as the International Year of Chemistry. It was an initiative of the International Union of Pure and Applied Chemistry and of the United Nations Educational, Scientific, and Cultural Organization; it involved chemical societies, academics, and institutions worldwide and relied on individual initiatives to organize local and regional activities.
Practice
In the practice of chemistry, pure chemistry is the study of the fundamental principles of chemistry, while applied chemistry applies that knowledge to develop technology and solve real-world problems.
Subdisciplines
Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry.
Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry.
Biochemistry is the study of the chemicals, chemical reactions and interactions that take place at a molecular level in living organisms. Biochemistry is highly interdisciplinary, covering medicinal chemistry, neurochemistry, molecular biology, forensics, plant science and genetics.
Inorganic chemistry is the study of the properties and reactions of inorganic compounds, such as metals and minerals. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry.
Materials chemistry is the preparation, characterization, and understanding of solid state components or devices with a useful current or future function. The field is a new breadth of study in graduate programs, and it integrates elements from all classical areas of chemistry like organic chemistry, inorganic chemistry, and crystallography with a focus on fundamental issues that are unique to materials. Primary systems of study include the chemistry of condensed phases (solids, liquids, polymers) and interfaces between different phases.
Neurochemistry is the study of neurochemicals; including transmitters, peptides, proteins, lipids, sugars, and nucleic acids; their interactions, and the roles they play in forming, maintaining, and modifying the nervous system.
Nuclear chemistry is the study of how subatomic particles come together and make nuclei. Modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field. In addition to medical applications, nuclear chemistry encompasses nuclear engineering, which explores the topic of using nuclear power sources for generating energy.
Organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton. Organic compounds can be classified, organized and understood in reactions by their functional groups, unit atoms or molecules that show characteristic chemical properties in a compound.
Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap.
Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics.
Others subdivisions include electrochemistry, femtochemistry, flavor chemistry, flow chemistry, immunohistochemistry, hydrogenation chemistry, mathematical chemistry, molecular mechanics, natural product chemistry, organometallic chemistry, petrochemistry, photochemistry, physical organic chemistry, polymer chemistry, radiochemistry, sonochemistry, supramolecular chemistry, synthetic chemistry, and many others.
Interdisciplinary
Interdisciplinary fields include agrochemistry, astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemical biology, chemo-informatics, environmental chemistry, geochemistry, green chemistry, immunochemistry, marine chemistry, materials science, mechanochemistry, medicinal chemistry, molecular biology, nanotechnology, oenology, pharmacology, phytochemistry, solid-state chemistry, surface science, thermochemistry, and many others.
Industry
The chemical industry represents an important economic activity worldwide. The global top 50 chemical producers in 2013 had sales of US$980.5 billion with a profit margin of 10.3%.
Professional societies
American Chemical Society
American Society for Neurochemistry
Chemical Institute of Canada
Chemical Society of Peru
International Union of Pure and Applied Chemistry
Royal Australian Chemical Institute
Royal Netherlands Chemical Society
Royal Society of Chemistry
Society of Chemical Industry
World Association of Theoretical and Computational Chemists
List of chemistry societies
See also
Comparison of software for molecular mechanics modeling
Glossary of chemistry terms
International Year of Chemistry
List of chemists
List of compounds
List of important publications in chemistry
List of unsolved problems in chemistry
Outline of chemistry
Periodic systems of small molecules
Philosophy of chemistry
Science tourism
References
Bibliography
Further reading
Popular reading
Atkins, P. W. Galileo's Finger (Oxford University Press)
Atkins, P. W. Atkins' Molecules (Cambridge University Press)
Kean, Sam. The Disappearing Spoon – and Other True Tales from the Periodic Table (Black Swan) London, England, 2010
Levi, Primo The Periodic Table (Penguin Books) [1975] translated from the Italian by Raymond Rosenthal (1984)
Stwertka, A. A Guide to the Elements (Oxford University Press)
Introductory undergraduate textbooks
Atkins, P.W., Overton, T., Rourke, J., Weller, M. and Armstrong, F. Shriver and Atkins Inorganic Chemistry (4th ed.) 2006 (Oxford University Press)
Chang, Raymond. Chemistry 6th ed. Boston, Massachusetts: James M. Smith, 1998.
Voet and Voet. Biochemistry (Wiley)
Advanced undergraduate-level or graduate textbooks
Atkins, P. W. Physical Chemistry (Oxford University Press)
Atkins, P. W. et al. Molecular Quantum Mechanics (Oxford University Press)
McWeeny, R. Coulson's Valence (Oxford Science Publications)
Pauling, L. The Nature of the chemical bond (Cornell University Press)
Pauling, L., and Wilson, E. B. Introduction to Quantum Mechanics with Applications to Chemistry (Dover Publications)
Smart and Moore. Solid State Chemistry: An Introduction (Chapman and Hall)
Stephenson, G. Mathematical Methods for Science Students (Longman)
External links
General Chemistry principles, patterns and applications.
Desire
Desires are states of mind that are expressed by terms like "wanting", "wishing", "longing" or "craving". A great variety of features is commonly associated with desires. They are seen as propositional attitudes towards conceivable states of affairs. They aim to change the world by representing how the world should be, unlike beliefs, which aim to represent how the world actually is. Desires are closely related to agency: they motivate the agent to realize them. For this to be possible, a desire has to be combined with a belief about which action would realize it. Desires present their objects in a favorable light, as something that appears to be good. Their fulfillment is normally experienced as pleasurable in contrast to the negative experience of failing to do so. Conscious desires are usually accompanied by some form of emotional response. While many researchers roughly agree on these general features, there is significant disagreement about how to define desires, i.e. which of these features are essential and which ones are merely accidental. Action-based theories define desires as structures that incline us toward actions. Pleasure-based theories focus on the tendency of desires to cause pleasure when fulfilled. Value-based theories identify desires with attitudes toward values, like judging or having an appearance that something is good.
Desires can be grouped into various types according to a few basic distinctions. Intrinsic desires concern what the subject wants for its own sake while instrumental desires are about what the subject wants for the sake of something else. Occurrent desires are either conscious or otherwise causally active, in contrast to standing desires, which exist somewhere in the back of one's mind. Propositional desires are directed at possible states of affairs while object-desires are directly about objects. Various authors distinguish between higher desires associated with spiritual or religious goals and lower desires, which are concerned with bodily or sensory pleasures. Desires play a role in many different fields. There is disagreement whether desires should be understood as practical reasons or whether we can have practical reasons without having a desire to follow them. According to fitting-attitude theories of value, an object is valuable if it is fitting to desire this object or if we ought to desire it. Desire-satisfaction theories of well-being state that a person's well-being is determined by whether that person's desires are satisfied.
Marketing and advertising companies have used psychological research on how desire is stimulated to find more effective ways to induce consumers into buying a given product or service. Techniques include creating a sense of lack in the viewer or associating the product with desirable attributes. Desire plays a key role in art. The theme of desire is at the core of romance novels, which often create drama by showing cases where human desire is impeded by social conventions, class, or cultural barriers. Melodrama films use plots that appeal to the heightened emotions of the audience by showing "crises of human emotion, failed romance or friendship", in which desire is thwarted or unrequited.
Theories
Theories of desire aim to define desires in terms of their essential features. A great variety of features are ascribed to desires, like that they are propositional attitudes, that they lead to actions, that their fulfillment tends to bring pleasure, etc. Across the different theories of desires, there is a broad agreement about what these features are. Their disagreement concerns which of these features belong to the essence of desires and which ones are merely accidental or contingent. Traditionally, the two most important theories define desires in terms of dispositions to cause actions or concerning their tendency to bring pleasure upon being fulfilled. An important alternative of more recent origin holds that desiring something means seeing the object of desire as valuable.
General features
A great variety of features is ascribed to desires. They are usually seen as attitudes toward conceivable states of affairs, often referred to as propositional attitudes. They differ from beliefs, which are also commonly seen as propositional attitudes, by their direction of fit. Both beliefs and desires are representations of the world. But while beliefs aim at truth, i.e. to represent how the world actually is, desires aim to change the world by representing how the world should be. These two modes of representation have been termed mind-to-world and world-to-mind direction of fit respectively. Desires can be either positive, in the sense that the subject wants a desirable state to be the case, or negative, in the sense that the subject wants an undesirable state not to be the case. It is usually held that desires come in varying strengths: some things are desired more strongly than other things. We desire things in regard to some features they have but usually not in regard to all of their features.
Desires are also closely related to agency: we normally try to realize our desires when acting. It is usually held that desires by themselves are not sufficient for actions: they have to be combined with beliefs. The desire to own a new mobile phone, for example, can only result in the action of ordering one online if paired with the belief that ordering it would contribute to the desire being fulfilled. The fulfillment of desires is normally experienced as pleasurable in contrast to the negative experience of failing to do so. But independently of whether the desire is fulfilled or not, there is a sense in which the desire presents its object in a favorable light, as something that appears to be good. Besides causing actions and pleasures, desires also have various effects on the mental life. One of these effects is to frequently move the subject's attention to the object of desire, specifically to its positive features. Another effect of special interest to psychology is the tendency of desires to promote reward-based learning, for example, in the form of operant conditioning.
Action-based theories
Action-based or motivational theories have traditionally been dominant. They can take different forms but they all have in common that they define desires as structures that incline us toward actions. This is especially relevant when ascribing desires, not from a first-person perspective, but from a third-person perspective. Action-based theories usually include some reference to beliefs in their definition, for example, that "to desire that P is to be disposed to bring it about that P, assuming one's beliefs are true". Despite their popularity and their usefulness for empirical investigations, action-based theories face various criticisms. These criticisms can roughly be divided into two groups. On the one hand, there are inclinations to act that are not based on desires. Evaluative beliefs about what we should do, for example, incline us toward doing it, even if we do not want to do it. There are also mental disorders that have a similar effect, like the tics associated with Tourette syndrome. On the other hand, there are desires that do not incline us toward action. These include desires for things we cannot change, for example, a mathematician's desire that the number Pi be a rational number. In some extreme cases, such desires may be very common, for example, a totally paralyzed person may have all kinds of regular desires but lacks any disposition to act due to the paralysis.
Pleasure-based theories
It is one important feature of desires that their fulfillment is pleasurable. Pleasure-based or hedonic theories use this feature as part of their definition of desires. According to one version, "to desire p is ... to be disposed to take pleasure in it seeming that p and displeasure in it seeming that not-p". Hedonic theories avoid many of the problems faced by action-based theories: they allow that other things besides desires incline us to actions and they have no problems explaining how a paralyzed person can still have desires. But they also come with new problems of their own. One is that it is usually assumed that there is a causal relation between desires and pleasure: the satisfaction of desires is seen as the cause of the resulting pleasure. But this is only possible if cause and effect are two distinct things, not if they are identical. Apart from this, there may also be bad or misleading desires whose fulfillment does not bring the pleasure they originally seemed to promise.
Value-based theories
Value-based theories are of more recent origin than action-based theories and hedonic theories. They identify desires with attitudes toward values. Cognitivist versions, sometimes referred to as desire-as-belief theses, equate desires with beliefs that something is good, thereby categorizing desires as one type of belief. But such versions face the difficulty of explaining how we can have beliefs about what we should do despite not wanting to do it. A more promising approach identifies desires not with value-beliefs but with value-seemings. On this view, to desire to have one more drink is the same as it seeming good to the subject to have one more drink. But such a seeming is compatible with the subject having the opposite belief that having one more drink would be a bad idea. A closely related theory is due to T. M. Scanlon, who holds that desires are judgments of what we have reasons to do. Critics have pointed out that value-based theories have difficulties explaining how animals, like cats or dogs, can have desires, since they arguably cannot represent things as being good in the relevant sense.
Others
A great variety of other theories of desires have been proposed. Attention-based theories take the tendency of attention to keep returning to the desired object as the defining feature of desires. Learning-based theories define desires in terms of their tendency to promote reward-based learning, for example, in the form of operant conditioning. Functionalist theories define desires in terms of the causal roles played by internal states while interpretationist theories ascribe desires to persons or animals based on what would best explain their behavior. Holistic theories combine various of the aforementioned features in their definition of desires.
Types
Desires can be grouped into various types according to a few basic distinctions. Something is desired intrinsically if the subject desires it for its own sake. Otherwise, the desire is instrumental or extrinsic. Occurrent desires are causally active while standing desires exist somewhere in the back of one's mind. Propositional desires are directed at possible states of affairs, in contrast to object-desires, which are directly about objects.
Intrinsic and instrumental
The distinction between intrinsic and instrumental or extrinsic desires is central to many issues concerning desires. Something is desired intrinsically if the subject desires it for its own sake. Pleasure is a common object of intrinsic desires. According to psychological hedonism, it is the only thing desired intrinsically. Intrinsic desires have a special status in that they do not depend on other desires. They contrast with instrumental desires, in which something is desired for the sake of something else. For example, Haruto enjoys movies, which is why he has an intrinsic desire to watch them. But in order to watch them, he has to step into his car, navigate through the traffic to the nearby cinema, wait in line, pay for the ticket, etc. He desires to do all these things as well, but only in an instrumental manner. He would not do all these things were it not for his intrinsic desire to watch the movie. It is possible to desire the same thing both intrinsically and instrumentally at the same time. So if Haruto was a driving enthusiast, he might have both an intrinsic and an instrumental desire to drive to the cinema. Instrumental desires are usually about causal means to bring the object of another desire about. Driving to the cinema, for example, is one of the causal requirements for watching the movie there. But there are also constitutive means besides causal means. Constitutive means are not causes but ways of doing something. Watching the movie while sitting in seat 13F, for example, is one way of watching the movie, but not an antecedent cause. Desires corresponding to constitutive means are sometimes termed "realizer desires".
Occurrent and standing
Occurrent desires are desires that are currently active. They are either conscious or at least have unconscious effects, for example, on the subject's reasoning or behavior. Desires we engage in and try to realize are occurrent. But we have many desires that are not relevant to our present situation and do not influence us currently. Such desires are called standing or dispositional. They exist somewhere in the back of our minds and are different from not desiring at all despite lacking causal effects at the moment. If Dhanvi is busy convincing her friend to go hiking this weekend, for example, then her desire to go hiking is occurrent. But many of her other desires, such as selling her old car or talking with her boss about a promotion, are merely standing during this conversation. Standing desires remain part of the mind even while the subject is sound asleep. It has been questioned whether standing desires should be considered desires at all in a strict sense. One motivation for raising this doubt is that desires are attitudes toward contents but a disposition to have a certain attitude is not automatically an attitude itself. Desires can be occurrent even if they do not influence our behavior. This is the case, for example, if the agent has a conscious desire to do something but successfully resists it. This desire is occurrent because it plays some role in the agent's mental life, even if it is not action-guiding.
Propositional desires and object-desires
The dominant view is that all desires are to be understood as propositional attitudes. But a contrasting view allows that at least some desires are directed not at propositions or possible states of affairs but directly at objects. This difference is also reflected on a linguistic level. Object-desires can be expressed through a direct object, for example, Louis desires an omelet. Propositional desires, on the other hand, are usually expressed through a that-clause, for example, Arielle desires that she has an omelet for breakfast. Propositionalist theories hold that direct-object-expressions are just a short form for that-clause-expressions while object-desire-theorists contend that they correspond to a different form of desire. One argument in favor of the latter position is that talk of object-desire is very common and natural in everyday language. But one important objection to this view is that object-desires lack proper conditions of satisfaction necessary for desires. Conditions of satisfaction determine under which situations a desire is satisfied. Arielle's desire is satisfied if the that-clause expressing her desire has been realized, i.e. she is having an omelet for breakfast. But Louis's desire is not satisfied by the mere existence of omelets nor by his coming into possession of an omelet at some indeterminate point in his life. So it seems that, when pressed for the details, object-desire-theorists have to resort to propositional expressions to articulate what exactly these desires entail. This threatens to collapse object-desires into propositional desires.
Higher and lower
In religion and philosophy, a distinction is sometimes made between higher and lower desires. Higher desires are commonly associated with spiritual or religious goals in contrast to lower desires, sometimes termed passions, which are concerned with bodily or sensory pleasures. This difference is closely related to John Stuart Mill's distinction between the higher pleasures of the mind and the lower pleasures of the body. In some religions, all desires are outright rejected as a negative influence on our well-being. The second Noble Truth in Buddhism, for example, states that desiring is the cause of all suffering. A related doctrine is also found in the Hindu tradition of karma yoga, which recommends that we act without a desire for the fruits of our actions, referred to as "Nishkam Karma". But other strands in Hinduism explicitly distinguish lower or bad desires for worldly things from higher or good desires for closeness or oneness with God. This distinction is found, for example, in the Bhagavad Gita or in the tradition of bhakti yoga. A similar line of thought is present in the teachings of Christianity. In the doctrine of the seven deadly sins, for example, various vices are listed, which have been defined as perverse or corrupt versions of love. Explicit reference to bad forms of desiring is found, for example, in the sins of lust, gluttony and greed. The seven sins are contrasted with the seven virtues, which include the corresponding positive counterparts. A desire for God is explicitly encouraged in various doctrines. Existentialists sometimes distinguish between authentic and inauthentic desires. Authentic desires express what the agent truly wants from deep within. An agent wants something inauthentically, on the other hand, if the agent is not fully identified with this desire, despite having it.
Roles
Desire is a quite fundamental concept. As such, it is relevant for many different fields. Various definitions and theories of other concepts have been expressed in terms of desires. Actions depend on desires and moral praiseworthiness is sometimes defined in terms of being motivated by the right desire. A popular contemporary approach defines value as that which it is fitting to desire. Desire-satisfaction theories of well-being state that a person's well-being is determined by whether that person's desires are satisfied. It has been suggested that to prefer one thing to another is just to have a stronger desire for the former thing. An influential theory of personhood holds that only entities with higher-order desires can be persons.
Action, practical reasons and morality
Desires play a central role in actions as what motivates them. It is usually held that a desire by itself is not sufficient: it has to be combined with a belief that the action in question would contribute to the fulfillment of the desire. The notion of practical reasons is closely related to motivation and desire. Some philosophers, often from a Humean tradition, simply identify an agent's desires with the practical reasons he has. A closely related view holds that desires are not reasons themselves but present reasons to the agent. A strength of these positions is that they can give a straightforward explanation of how practical reasons can act as motivation. But an important objection is that we may have reasons to do things without a desire to do them. This is especially relevant in the field of morality. Peter Singer, for example, suggests that most people living in developed countries have a moral obligation to donate a significant portion of their income to charities. Such an obligation would constitute a practical reason to act accordingly even for people who feel no desire to do so.
A closely related issue in morality asks not what reasons we have but for what reasons we act. This idea goes back to Immanuel Kant, who holds that doing the right thing is not sufficient from the moral perspective. Instead, we have to do the right thing for the right reason. He refers to this distinction as the difference between legality (Legalität), i.e. acting in accordance with outer norms, and morality (Moralität), i.e. being motivated by the right inward attitude. On this view, donating a significant portion of one's income to charities is not a moral action if the motivating desire is to improve one's reputation by convincing other people of one's wealth and generosity. Instead, from a Kantian perspective, it should be performed out of a desire to do one's duty. These issues are often discussed in contemporary philosophy under the terms of moral praiseworthiness and blameworthiness. One important position in this field is that the praiseworthiness of an action depends on the desire motivating this action.
Value and well-being
It is common in axiology to define value in relation to desire. Such approaches fall under the category of fitting-attitude theories. According to them, an object is valuable if it is fitting to desire this object or if we ought to desire it. This is sometimes expressed by saying that the object is desirable, appropriately desired or worthy of desire. Two important aspects of this type of position are that it reduces values to deontic notions, or what we ought to feel, and that it makes values dependent on human responses and attitudes. Despite their popularity, fitting-attitude theories of value face various theoretical objections. An often-cited one is the wrong kind of reason problem, which is based on the consideration that facts independent of the value of an object may affect whether this object ought to be desired. In one thought experiment, an evil demon threatens the agent to kill her family unless she desires him. In such a situation, it is fitting for the agent to desire the demon in order to save her family, despite the fact that the demon does not possess positive value.
Well-being is usually considered a special type of value: the well-being of a person is what is ultimately good for this person. Desire-satisfaction theories are among the major theories of well-being. They state that a person's well-being is determined by whether that person's desires are satisfied: the higher the number of satisfied desires, the higher the well-being. One problem for some versions of desire theory is that not all desires are good: some desires may even have terrible consequences for the agent. Desire theorists have tried to avoid this objection by holding that what matters are not actual desires but the desires the agent would have if she was fully informed.
Preferences
Desires and preferences are two closely related notions: they are both conative states that determine our behavior. The difference between the two is that desires are directed at one object while preferences concern a comparison between two alternatives, of which one is preferred to the other. The focus on preferences instead of desires is very common in the field of decision theory. It has been argued that desire is the more fundamental notion and that preferences are to be defined in terms of desires. For this to work, desire has to be understood as involving a degree or intensity. Given this assumption, a preference can be defined as a comparison of two desires. That Nadia prefers tea over coffee, for example, just means that her desire for tea is stronger than her desire for coffee. One argument for this approach is due to considerations of parsimony: a great number of preferences can be derived from a very small number of desires. One objection to this theory is that our introspective access is much more immediate in cases of preferences than in cases of desires. So it is usually much easier for us to know which of two options we prefer than to know the degree with which we desire a particular object. This consideration has been used to suggest that maybe preference, and not desire, is the more fundamental notion.
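The reduction of preferences to graded desires lends itself to a small formal sketch. The following Python fragment is only an illustration of the idea described above, not an implementation from the decision-theory literature; the names and numeric intensities are invented for the example.

```python
# A minimal sketch of the view that preferences reduce to graded desires.
# Desire strengths are represented as numbers; a preference is then just
# a comparison of two such strengths. All names and values are illustrative.

desire_strength = {
    "tea": 0.8,     # Nadia's desire for tea (hypothetical intensity)
    "coffee": 0.5,  # Nadia's desire for coffee
    "juice": 0.3,
}

def prefers(a: str, b: str) -> bool:
    """Return True if the desire for `a` is stronger than the desire for `b`."""
    return desire_strength[a] > desire_strength[b]

# From three desire intensities we can derive three pairwise preferences,
# illustrating the parsimony argument: n desires yield n*(n-1)/2 preferences.
assert prefers("tea", "coffee")
assert prefers("coffee", "juice")
```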
Persons, personhood and higher-order desires
Personhood is the status of being a person. There are various theories about what constitutes personhood. Most agree that being a person has to do with having certain mental abilities and is connected to having a certain moral and legal status. An influential theory of persons is due to Harry Frankfurt. He defines persons in terms of higher-order desires. Many of the desires we have, like the desire to have ice cream or to take a vacation, are first-order desires. Higher-order desires, on the other hand, are desires about other desires. They are most prominent in cases where a person has a desire he does not want to have. A recovering addict, for example, may have both a first-order desire to take drugs and a second-order desire not to follow this first-order desire. Or a religious ascetic may still have sexual desires while at the same time wanting to be free of these desires. According to Frankfurt, having second-order volitions, i.e. second-order desires about which first-order desires are followed, is the mark of personhood. It is a form of caring about oneself, of being concerned with who one is and what one does. Not all entities with a mind have higher-order volitions. Frankfurt terms them "wantons" in contrast to "persons". On his view, animals and maybe also some human beings are wantons.
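Frankfurt's hierarchy can be pictured as a simple recursive structure in which the object of a desire may itself be another desire. The sketch below is only an illustrative rendering of that idea; the class and method names are hypothetical and are not drawn from Frankfurt's own work.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Desire:
    """A desire whose object is either a state of affairs (a string) or
    another desire, allowing arbitrarily high orders. Illustrative only."""
    subject: str
    object: Union[str, "Desire"]

    def order(self) -> int:
        # First-order desires are about states of affairs; a desire about
        # a desire of order n has order n + 1.
        if isinstance(self.object, Desire):
            return self.object.order() + 1
        return 1

# The recovering addict: a first-order desire to take drugs, and a
# second-order desire directed at that first-order desire (here standing
# in for the wish not to act on it).
craving = Desire("addict", "take drugs")
volition = Desire("addict", craving)

assert craving.order() == 1 and volition.order() == 2
```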
Formation
Both psychology and philosophy are interested in where desires come from or how they form. An important distinction for this investigation is between intrinsic desires, i.e. what the subject wants for its own sake, and instrumental desires, i.e. what the subject wants for the sake of something else. Instrumental desires depend for their formation and existence on other desires. For example, Aisha has a desire to find a charging station at the airport. This desire is instrumental because it is based on another desire: to keep her mobile phone from dying. Without the latter desire, the former would not have come into existence. As an additional requirement, a possibly unconscious belief or judgment is necessary to the effect that the fulfillment of the instrumental desire would somehow contribute to the fulfillment of the desire it is based on. Instrumental desires usually pass away after the desires they are based on cease to exist. But defective cases are possible where, often due to absentmindedness, the instrumental desire remains. Such cases are sometimes termed "motivational inertia". Something like this might be the case when the agent finds himself with a desire to go to the kitchen, only to realize upon arriving that he does not know what he wants there.
Intrinsic desires, on the other hand, do not depend on other desires. Some authors hold that all or at least some intrinsic desires are inborn or innate, for example, desires for pleasure or for nutrition. But other authors suggest that even these relatively basic desires may depend to some extent on experience: before we can desire a pleasurable object, we have to learn, through a hedonic experience of this object for example, that it is pleasurable. But it is also conceivable that reason by itself generates intrinsic desires. On this view, reasoning to the conclusion that it would be rational to have a certain intrinsic desire causes the subject to have this desire. It has also been proposed that instrumental desires may be transformed into intrinsic desires under the right conditions. This could be possible through processes of reward-based learning. The idea is that whatever reliably predicts the fulfillment of intrinsic desires may itself become the object of an intrinsic desire. So a baby may initially only instrumentally desire its mother because of the warmth, hugs and milk she provides. But over time, this instrumental desire may become an intrinsic desire.
The death-of-desire thesis holds that desires cannot continue to exist once their object is realized. This would mean that an agent cannot desire to have something if he believes that he already has it. One objection to the death-of-desire thesis comes from the fact that our preferences usually do not change upon desire-satisfaction. So if Samuel prefers to wear dry clothes rather than wet clothes, he would continue to hold this preference even after having come home from a rainy day and having changed his clothes. This would indicate against the death-of-desire thesis that no change on the level of the agent's conative states takes place.
Philosophy
In philosophy, desire has been identified as a philosophical problem since Antiquity. In The Republic, Plato argues that individual desires must be postponed in the name of the higher ideal. In De Anima, Aristotle claims that desire is implicated in animal interactions and the propensity of animals to motion; at the same time, he acknowledges that reasoning also interacts with desire.
Thomas Hobbes (1588–1679) proposed the concept of psychological hedonism, which asserts that the "fundamental motivation of all human action is the desire for pleasure." Baruch Spinoza (1632–1677) had a view which contrasted with Hobbes, in that "he saw natural desires as a form of bondage" that are not chosen by a person of their own free will. David Hume (1711–1776) claimed that desires and passions are non-cognitive, automatic bodily responses, and he argued that reasoning is "capable only of devising means to ends set by [bodily] desire".
Immanuel Kant (1724–1804) called any action based on desires a hypothetical imperative, meaning it is a command of reason that applies only if one desires the goal in question. Kant also established a relation between the beautiful and pleasure in Critique of Judgment. Georg Wilhelm Friedrich Hegel claimed that "self-consciousness is desire".
Because desire can cause humans to become obsessed and embittered, it has been called one of the causes of woe for mankind.
Religion
Buddhism
In Buddhism, craving (see taṇhā) is thought to be the cause of all suffering that one experiences in human existence. The eradication of craving leads one to ultimate happiness, or Nirvana. However, desire for wholesome things is seen as liberating and enhancing. While the stream of desire for sense-pleasures must be cut eventually, a practitioner on the path to liberation is encouraged by the Buddha to "generate desire" for the fostering of skillful qualities and the abandoning of unskillful ones.
For an individual to effect his or her liberation, the flow of sense-desire must be cut completely; however, while training, he or she must work with motivational processes based on skillfully applied desire.
Christianity
Within Christianity, desire is seen as something that can either lead a person towards God or away from him. Desire is not considered to be a bad thing in and of itself; rather, it is a powerful force within the human that, once submitted to the Lordship of Christ, can become a tool for good, for advancement, and for abundant living.
Hinduism
In Hinduism, the Rig Veda's creation myth Nasadiya Sukta states regarding the one (ekam) spirit: "In the beginning there was Desire (kama) that was first seed of mind. Poets found the bond of being in non-being in their heart's thought".
Psychology
Neuropsychology
While desires are often classified as emotions by laypersons, psychologists often describe desires as ur-emotions, or feelings that do not quite fit the category of basic emotions. For psychologists, desires arise from bodily structures and functions (e.g., the stomach needing food and the blood needing oxygen). On the other hand, emotions arise from a person's mental state. A 2008 study by the University of Michigan indicated that, while humans experience desire and fear as psychological opposites, they share the same brain circuit. A 2008 study entitled "The Neural Correlates of Desire" showed that the human brain categorizes stimuli according to their desirability by activating three different brain areas: the superior orbitofrontal cortex, the mid-cingulate cortex, and the anterior cingulate cortex.
In affective neuroscience, "desire" and "wanting" are operationally defined as motivational salience; the form of "desire" or "wanting" associated with a rewarding stimulus (i.e., a stimulus which acts as a positive reinforcer, such as palatable food, an attractive mate, or an addictive drug) is called "incentive salience" and research has demonstrated that incentive salience, the sensation of pleasure, and positive reinforcement are all derived from neuronal activity within the reward system. Studies have shown that dopamine signaling in the nucleus accumbens shell and endogenous opioid signaling in the ventral pallidum are at least partially responsible for mediating an individual's desire (i.e., incentive salience) for a rewarding stimulus and the subjective perception of pleasure derived from experiencing or "consuming" a rewarding stimulus (e.g., pleasure derived from eating palatable food, sexual pleasure from intercourse with an attractive mate, or euphoria from using an addictive drug). Research also shows that the orbitofrontal cortex has connections to both the opioid and dopamine systems, and stimulating this cortex is associated with subjective reports of pleasure.
Psychoanalysis
Austrian neurologist Sigmund Freud, who is best known for his theories of the unconscious mind and the defense mechanism of repression and for creating the clinical practice of psychoanalysis, proposed the notion of the Oedipus complex, which holds that desire for the mother creates neuroses in sons. Freud used the Greek myth of Oedipus to argue that people desire incest and must repress that desire. He claimed that children pass through several stages, including a stage in which they fixate on the mother as a sexual object.
That this "complex" is universal has long since been disputed. Even if it were true, that would not explain those neuroses in daughters, but only in sons. While it is true that sexual confusion can be aberrative in a few cases, there is no credible evidence to suggest that it is a universal scenario. While Freud was correct in labeling the various symptoms behind most compulsions, phobias and disorders, he was largely incorrect in his theories regarding the etiology of what he identified.
French psychoanalyst and psychiatrist Jacques Lacan (1901–1981) argues that desire first occurs during a "mirror phase" of a baby's development, when the baby sees an image of wholeness in a mirror which gives them a desire for that being. As a person matures, Lacan claims that they still feel separated from themselves by language, which is incomplete, and so a person continually strives to become whole. He uses the term "jouissance" to refer to the lost object or feeling of absence (see manque) which a person believes to be unobtainable. Gilles Deleuze rejects the idea, defended by Lacan and other psychoanalysts, that desire is a form of lack related to incompleteness or a lost object. Instead, he holds that it should be understood as a positive reality in the form of an affirmative vital force.
Marketing
In the field of marketing, desire is the human appetite for a given object of attention. Desire for a product is stimulated by advertising, which attempts to give buyers a sense of lack or wanting. In retail stores, merchants attempt to increase the desire of the buyer by showcasing the product attractively, in the case of clothes or jewellery, or, for food stores, by offering samples. With print, TV, and radio advertising, desire is created by giving the potential buyer a sense of lacking ("Are you still driving that old car?") or by associating the product with desirable attributes, either by showing a celebrity using or wearing the product, or by giving the product a "halo effect" by showing attractive models with the product. Nike's "Just Do It" ads for sports shoes appeal to consumers' desire for self-betterment.
In some cases, the potential buyer already has the desire for the product before they enter the store, as in the case of a decorating buff entering their favorite furniture store. The role of the salespeople in these cases is simply to guide the customer towards making a choice; they do not have to try to "sell" the general idea of making a purchase, because the customer already wants the products. In other cases, the potential buyer does not have a desire for the product or service, and so the company has to create the sense of desire. An example of this situation is life insurance. Most young adults are not thinking about dying, so they do not naturally think about needing accidental death insurance. Life insurance companies, though, attempt to create a desire for life insurance with advertising that shows pictures of children and asks "If anything happens to you, who will pay for the children's upkeep?".
Marketing theorists call desire the third stage in the hierarchy of effects, which occurs when the buyer develops a sense that, if they felt the need for the type of product in question, the advertised product is what would satisfy their desire.
Artworks
Texts
The theme of desire is at the core of written fiction, especially romance novels. Novels built around the theme of desire, which can range from a long aching feeling to an unstoppable torrent, include Madame Bovary by Gustave Flaubert; Love in the Time of Cholera by Gabriel García Márquez; Lolita by Vladimir Nabokov; Jane Eyre by Charlotte Brontë; and Dracula by Bram Stoker. Brontë's characterization of Jane Eyre depicts her as torn by an inner conflict between reason and desire, because "customs" and "conventionalities" stand in the way of her romantic desires. E.M. Forster's novels use homoerotic codes to describe same-sex desire and longing. Close male friendships with subtle homoerotic undercurrents occur in every novel, subverting the conventional, heterosexual plot of the novels. In the Gothic-themed Dracula, Stoker depicts the theme of desire coupled with fear. When the character Lucy is seduced by Dracula, she describes her sensations in the graveyard as a mixture of fear and blissful emotion.
Poet W. B. Yeats depicts the positive and negative aspects of desire in poems such as "The Rose of the World", "Adam's Curse", "No Second Troy", "All Things can Tempt me", and "Meditations in Time of Civil War". Some poems depict desire as a poison for the soul; Yeats worked through his desire for his beloved, Maud Gonne, and realized that "Our longing, our craving, our thirsting for something other than Reality is what dissatisfies us". In "The Rose of the World", he admires her beauty, but feels pain because he cannot be with her. In the poem "No Second Troy", Yeats overflows with anger and bitterness because of their unrequited love. Poet T. S. Eliot dealt with the themes of desire and homoeroticism in his poetry, prose and drama. Other poems on the theme of desire include John Donne's poem "To His Mistress Going to Bed", Carol Ann Duffy's longings in "Warming Her Pearls"; Ted Hughes' "Lovesong" about the savage intensity of desire; and Wendy Cope's humorous poem "Song".
Philippe Borgeaud's novels analyse how emotions such as erotic desire and seduction are connected to fear and wrath by examining cases where people are worried about issues of impurity, sin, and shame.
Films
Just as desire is central to the written fiction genre of romance, it is the central theme of melodrama films, which are a subgenre of the drama film. Like drama, a melodrama depends mostly on in-depth character development, interaction, and highly emotional themes. Melodramatic films tend to use plots that appeal to the heightened emotions of the audience. Melodramatic plots often deal with "crises of human emotion, failed romance or friendship, strained familial situations, tragedy, illness, neuroses, or emotional and physical hardship." Film critics sometimes use the term "pejoratively to connote an unrealistic, bathos-filled, campy tale of romance or domestic situations with stereotypical characters (often including a central female character) that would directly appeal to feminine audiences." Such films are also called "women's movies", "weepies", tearjerkers, or "chick flicks".
"Melodrama… is Hollywood's fairly consistent way of treating desire and subject identity", as can be seen in well-known films such as Gone with the Wind, in which "desire is the driving force for both Scarlett and the hero, Rhett". Scarlett desires love, money, the attention of men, and the vision of being a virtuous "true lady". Rhett Butler desires to be with Scarlett, which builds to a burning longing that is ultimately his undoing, because Scarlett keeps refusing his advances; when she finally confesses her secret desire, Rhett is worn out and his longing is spent.
In Cathy Cupitt's article on "Desire and Vision in Blade Runner", she argues that film, as a "visual narrative form, plays with the voyeuristic desires of its audience". Focusing on the dystopian 1980s science fiction film Blade Runner, she calls the film an "Object of Visual Desire", in which it plays to an "expectation of an audience's delight in visual texture, with the 'retro-fitted' spectacle of the post-modern city to ogle" and with the use of the "motif of the 'eye'". In the film, "desire is a key motivating influence on the narrative of the film, both in the 'real world', and within the text."
See also
Affect
Feeling
Impulse (psychology)
Motivation
Saudade
Taṇhā
Trishna (Vedic thought)
Valence (psychology)
References
Further reading
Marks, Joel. The Ways of Desire: New Essays in Philosophical Psychology on the Concept of Wanting. Transaction Publishers, 1986
Jadranka Skorin-Kapov, The Aesthetics of Desire and Surprise: Phenomenology and Speculation. Lexington Books 2015
Eight disciplines problem solving

Eight Disciplines Methodology (8D) is a method or model developed at Ford Motor Company used to approach and resolve problems, typically employed by quality engineers or other professionals. Focused on product and process improvement, its purpose is to identify, correct, and eliminate recurring problems. It establishes a permanent corrective action based on statistical analysis of the problem and on the origin of the problem by determining the root causes. Although it originally comprised eight stages, or 'disciplines', it was later augmented by an initial planning stage. 8D follows the logic of the PDCA cycle. The disciplines are:
D0: Preparation and Emergency Response Actions: Plan for solving the problem and determine the prerequisites. Provide emergency response actions.
D1: Use a Team: Establish a team of people with product/process knowledge. Teammates provide new perspectives and different ideas when it comes to problem solving.
D2: Describe the Problem: Specify the problem by identifying in quantifiable terms the who, what, where, when, why, how, and how many (5W2H) for the problem.
D3: Develop Interim Containment Plan: Define and implement containment actions to isolate the problem from any customer.
D4: Determine and Verify Root Causes and Escape Points: Identify all applicable causes that could explain why the problem has occurred. Also identify why the problem was not noticed at the time it occurred. All causes shall be verified or proved. One can use five whys or Ishikawa diagrams to map causes against the effect or problem identified.
D5: Verify Permanent Corrections (PCs) for Problem that will resolve the problem for the customer: Using pre-production programs, quantitatively confirm that the selected correction will resolve the problem. (Verify that the correction will actually solve the problem).
D6: Define and Implement Corrective Actions: Define and implement the best corrective actions. Also, validate corrective actions with empirical evidence of improvement.
D7: Prevent Recurrence / System Problems: Modify the management systems, operation systems, practices, and procedures to prevent recurrence of this and similar problems.
D8: Congratulate the Main Contributors to your Team: Recognize the collective efforts of the team. The team needs to be formally thanked by the organization.
8Ds has become a standard in the automotive, assembly, and other industries that require a thorough structured problem-solving process using a team approach.
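Because each discipline produces a concrete artifact (a team roster, a problem description, a containment plan, and so on), organizations often track an 8D as a structured record. The sketch below shows one possible in-memory representation; the field names are hypothetical and are not taken from the Ford manual or any particular quality-management system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EightDReport:
    """One possible representation of an 8D report, one field per discipline.
    An illustrative sketch, not a standard schema."""
    d0_emergency_response: str = ""                            # D0: plan, emergency actions
    d1_team: List[str] = field(default_factory=list)           # D1: team members
    d2_problem_description: str = ""                           # D2: 5W2H problem statement
    d3_containment: str = ""                                   # D3: interim containment plan
    d4_root_causes: List[str] = field(default_factory=list)    # D4: verified root causes
    d4_escape_point: str = ""                                  # D4: failed control point
    d5_permanent_corrections: List[str] = field(default_factory=list)  # D5: verified PCs
    d6_corrective_actions: List[str] = field(default_factory=list)     # D6: implemented actions
    d7_prevention: str = ""                                    # D7: system changes
    d8_recognition: str = ""                                   # D8: team recognition

    def is_closed(self) -> bool:
        """Treat a report as closed once corrective actions and prevention exist."""
        return bool(self.d6_corrective_actions and self.d7_prevention)

report = EightDReport(d1_team=["quality engineer", "process engineer"])
report.d2_problem_description = "Leaking seal on 3% of units, line 2, May batch"
assert not report.is_closed()  # D6 and D7 are still open
```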
Ford Motor Company's team-oriented problem solving
The executives of the Powertrain Organization (transmissions, chassis, engines) wanted a methodology where teams (design engineering, manufacturing engineering, and production) could work on recurring chronic problems. In 1986, the assignment was given to develop a manual and a subsequent course that would achieve a new approach to solving identified engineering design and manufacturing problems. The manual for this methodology was documented and defined in Team Oriented Problem Solving (TOPS), first published in 1987. The manual and subsequent course material were piloted at Ford World Headquarters in Dearborn, Michigan. Ford refers to their current variant as G8D (Global 8D). The Ford 8Ds manual is extensive and covers chapter by chapter how to go about addressing, quantifying, and resolving engineering issues. It begins with a cross-functional team and concludes with a successful demonstrated resolution of the problem. Containment actions may or may not be needed based on where the problem occurred in the life cycle of the product.
Usage
Many disciplines are typically involved in the "8Ds" methodology. The tools used can be found in textbooks and reference materials used by quality assurance professionals. For example, an "Is/Is Not" worksheet is a common tool employed at D2, and Ishikawa, or "fishbone," diagrams and "5-why analysis" are common tools employed at step D4.
In the late 1990s, Ford developed a revised version of the 8D process that they call "Global 8D" (G8D), which is the current global standard for Ford and many other companies in the automotive supply chain. The major revisions to the process are as follows:
Addition of a D0 (D-Zero) step as a gateway to the process. At D0, the team documents the symptoms that initiated the effort along with any emergency response actions (ERAs) that were taken before formal initiation of the G8D. D0 also incorporates standard assessing questions meant to determine whether a full G8D is required. The assessing questions are meant to ensure that in a world of limited problem-solving resources, the efforts required for a full team-based problem-solving effort are limited to those problems that warrant these resources.
Addition of the notion of escape points to D4 through D6. An 'escape point' is the earliest control point in the control system following the root cause of a problem that should have detected that problem but failed to do so. The idea here is to consider not only the root cause, but also what went wrong with the control system in allowing this problem to escape. Global 8D requires the team to identify and verify an escape point at D4. Then, through D5 and D6, the process requires the team to choose, verify, implement, and validate permanent corrective actions to address the escape point.
Recently, the 8D process has been employed significantly outside the auto industry. As part of lean initiatives and continuous-improvement processes it is employed extensively in the food manufacturing, health care, and high-tech manufacturing industries.
Benefits
The benefits of the 8D methodology include effective approaches to finding a root cause, developing proper actions to eliminate root causes, and implementing the permanent corrective action. The 8D methodology also helps to explore the control systems that allowed the problem to escape. The escape point is studied to improve the ability of the control system to detect the failure or its cause if it should occur again.
Finally, the prevention loop explores the systems that permitted the condition which allowed the failure and cause mechanism to exist in the first place.
Prerequisites
8D requires training in the 8D problem-solving process as well as in appropriate data-collection and analysis tools such as Pareto charts, fishbone diagrams, and process maps.
Problem solving tools
The following tools can be used within 8D; a worked example of one of them is sketched after the list:
Ishikawa diagrams also known as cause-and-effect or fishbone diagrams
Pareto charts or Pareto diagrams
5 Whys
5W and 2H (who, what, where, when, why, how, how many or how much)
Statistical process control
Scatter plots
Design of experiments
Check sheet
Histograms
FMEA
Flowcharts or process maps
Background of common corrective actions to dispose of nonconforming items
The 8D methodology was first described in a Ford manual in 1987. The manual describes the eight-step methodology to address chronic product and process problems. The 8Ds included several concepts of effective problem solving, including taking corrective actions and containing nonconforming items. These two steps have been very common in most manufacturing facilities, including government and military installations. In 1974, the U.S. Department of Defense (DOD) released "MIL-STD 1520 Corrective Action and Disposition System for Nonconforming Material". This 13-page standard defines the establishment of corrective actions and the taking of containment actions on nonconforming material or items; it is focused on inspection for defects and disposing of them. The standard was officially cancelled in 1995, but the underlying concepts of corrective action and containment of defectives had also long been familiar to Ford Motor Company, a major supplier to the government in World War II. Corrective actions and containment of poor-quality parts were part of the manual and course for the automotive industry and are well known to many companies. Ford's 60-page manual covers, chapter by chapter, the details associated with each step of 8D problem solving and the actions to take to deal with identified problems.
Military usage
The exact history of the 8D method remains disputed, as many publications and websites state that it originates from the US military. Indeed, MIL-STD-1520C outlines a set of requirements for contractors on how they should organize themselves with respect to non-conforming materials. The standard was developed in 1974 and cancelled in February 1995 as part of the Perry memo; it is best compared to the ISO 9001 standard that exists today, as it expresses the same philosophy. The military standard does outline some aspects that appear in the 8D method; however, it does not provide the same structure that the 8D methodology offers. Taking into account the fact that the Ford Motor Company played an instrumental role in producing army vehicles during the Second World War and in the decades after, it could very well be the case that MIL-STD-1520C served as a model for today's 8D method.
Relationship between 8D and FMEA
FMEA (failure mode and effect analysis) is a tool generally used in the planning of product or process design. The relationships between 8D and FMEA are outlined below:
The problem statements and descriptions are sometimes linked between both documents. An 8D can utilize pre-brainstormed information from a FMEA to assist in looking for potential problems.
Possible causes in a FMEA can immediately be used to jump-start 8D fishbone or Ishikawa diagrams. Brainstorming information that is already known is not a good use of time or resources.
Data and brainstorming collected during an 8D can be placed into a FMEA for future planning of new product or process quality. This allows a FMEA to take actual failures into account as failure modes and causes, making it more effective and complete.
The design or process controls in a FMEA can be used in verifying the root cause and Permanent Corrective Action in an 8D.
The FMEA and 8D should reconcile each failure and cause by cross documenting failure modes, problem statements and possible causes. Each FMEA can be used as a database of possible causes of failure as an 8D is developed.
See also
Complaint system
Corrective and preventive action
Failure mode and effects analysis
Fault tree analysis
Quality management system (QMS)
Eight dimensions of quality
Problem solving
References
External links
8-D Problem Solving Overview from the Ford Motor Company
Laurie Rambaud (2011), 8D Structured Problem Solving: A Guide to Creating High Quality 8D Reports, PHRED Solutions, Second Edition, ISBN 978-0979055317
Society of Manufacturing Engineers (SME)
Chris S.P. Visser (2017), 8D Problem solving explained – Turning operational failures into knowledge to drive your strategic and competitive advantages
Mentalization-based treatment

Mentalization-based treatment (MBT) is an integrative form of psychotherapy, bringing together aspects of psychodynamic, cognitive-behavioral, systemic and ecological approaches. MBT was developed and manualised by Peter Fonagy and Anthony Bateman, designed for individuals with borderline personality disorder (BPD). Some of these individuals suffer from disorganized attachment and failed to develop a robust mentalization capacity. Fonagy and Bateman define mentalization as the process by which we implicitly and explicitly interpret the actions of oneself and others as meaningful on the basis of intentional mental states. An alternative and simpler definition is "Seeing others from the inside and ourselves from the outside." The object of treatment is that patients with BPD increase their mentalization capacity, which should improve affect regulation, thereby reducing suicidality and self-harm, as well as strengthening interpersonal relationships.
More recently, a range of mentalization-based treatments using the "mentalizing stance" defined in MBT has been under development by groups mainly gravitating around the Anna Freud National Centre for Children and Families, directed at children (MBT-C), families (MBT-F) and adolescents (MBT-A), along with AMBIT (adaptive mentalization-based integrative treatment) for chaotic multi-problem youth. Moreover, the MBT model has been used in treating patients with eating disorders (MBT-ED).
The treatment should be distinguished from and has no connection with mindfulness-based stress reduction (MBSR) therapy developed by Jon Kabat-Zinn.
Goals
The major goals of MBT are:
better behavioral control
increased affect regulation
more intimate and gratifying relationships
the ability to pursue life goals
This is believed to be accomplished through increasing the patient's capacity for mentalization in order to stabilize the client's sense of self and to enhance stability in emotions and relationships.
Focus of treatment
A distinctive feature of MBT is placing the enhancement of mentalizing itself as focus of treatment. The aim of therapy is not developing insight, but the recovery of mentalizing. Therapy examines mainly the present moment, attending to events of the past only insofar as they affect the individual in the present. Other core aspects of treatment include a stance of curiosity, partnership with the patient rather than an 'expert' type role, monitoring and regulating emotional arousal, and identifying the affect focus. Transference is not included in the MBT model. MBT does encourage consideration of the patient-therapist relationship, but without necessarily generalizing to other relationships, past or present.
Treatment procedure
MBT should be offered to patients twice per week with sessions alternating between group therapy and individual treatment. During sessions the therapist works to stimulate or nurture mentalizing. Particular techniques are employed to lower or raise emotional arousal as needed, to interrupt non-mentalizing and to foster flexibility in perspective-taking. Activation occurs through the elaboration of current attachment relationships, the therapist's encouragement and regulation of the patient's attachment bond with the therapist and the therapist's attempts to create attachment bonds between members of the therapy group.
Mechanisms of change
The safe attachment relationship with the therapist provides a relational context in which it is safe for the patient to explore the mind of the other. Fonagy and Bateman have recently proposed that MBT (and other evidence-based therapies) works by providing ostensive cues that stimulate epistemic trust. The increase in epistemic trust, together with a persistent focus on mentalizing in therapy, appear to facilitate change by leaving people more open to learning outside of therapy, in the social interactions of their day-to-day lives.
Efficacy
Fonagy, Bateman, and colleagues have done extensive outcome research on MBT for borderline personality disorder. The first randomized, controlled trial was published in 1999, concerning MBT delivered in a partial hospital setting. The results showed real-world clinical effectiveness that compared favorably with existing treatments for BPD. A follow-up study published in 2003 demonstrated that MBT is cost-effective. Encouraging results were also found in an 18-month study, in which subjects were randomly assigned to an outpatient MBT treatment condition versus a structured clinical management (SCM) treatment. The lasting efficacy of MBT was demonstrated in an 8-year follow-up of patients from the original trial, comparing MBT versus treatment as usual. In that research, patients who had received MBT had less medication use, fewer hospitalizations and longer periods of employment compared to patients who received standard care. Replication studies have been published by other European investigators. Researchers have also demonstrated the effectiveness of MBT for adolescents as well as that of a group-only format of MBT.
References
Further reading
Allen, J.G., Fonagy, P. (2006). Handbook of mentalization-based treatment. Chichester, UK: John Wiley. .
Allen, J.G., Fonagy, P., Bateman, A.W. (2008) Mentalizing in clinical practice. Arlington, USA: American Psychiatric Publishing. .
Science studies

Science studies is an interdisciplinary research area that seeks to situate scientific expertise in broad social, historical, and philosophical contexts. It uses various methods to analyze the production, representation and reception of scientific knowledge and its epistemic and semiotic role.
Similarly to cultural studies, science studies are defined by the subject of their research and encompass a large range of different theoretical and methodological perspectives and practices. The interdisciplinary approach may include and borrow methods from the humanities, natural and formal sciences, from scientometrics to ethnomethodology or cognitive science.
Science studies have a certain importance for evaluation and science policy. The field overlaps with that of science, technology and society; its practitioners study the relationship between science and technology, and the interaction of expert and lay knowledge in the public realm.
Scope
The field started with a tendency toward navel-gazing: it was extremely self-conscious in its genesis and applications. From early concerns with scientific discourse, practitioners soon started to deal with the relation of scientific expertise to politics and lay people. Practical examples include bioethics, bovine spongiform encephalopathy (BSE), pollution, global warming, biomedical sciences, physical sciences, natural hazard predictions, the (alleged) impact of the Chernobyl disaster in the UK, the generation and review of science policy, and risk governance and its historical and geographic contexts. While remaining a discipline with multiple metanarratives, the fundamental concern is about the role of the perceived expert in providing governments and local authorities with information from which they can make decisions.
The approach poses various important questions about what makes an expert, how experts and their authority are to be distinguished from the lay population, and how expertise interacts with the values and the policy-making process in liberal democratic societies.
Practitioners examine the forces within and through which scientists investigate specific phenomena such as
technological milieus, epistemic instruments and cultures, and laboratory life (compare Karin Knorr-Cetina, Bruno Latour, Hans-Jörg Rheinberger)
science and technology (e.g. Wiebe Bijker, Trevor Pinch, Thomas P. Hughes)
science, technology and society (e.g. Peter Weingart, Ulrike Felt, Helga Nowotny and Reiner Grundmann)
language and rhetoric of science (e.g. Charles Bazerman, Alan G. Gross, Greg Myers)
aesthetics of science and visual culture in science (among others, Peter Geimer), the role of aesthetic criteria in scientific practice (compare mathematical beauty) and the relation between emotion, cognition and rationality in the development of science.
semiotic studies of creative processes, as in the discovery, conceptualization, and realization of new ideas, or the interaction and management of different forms of knowledge in cooperative research.
large-scale research and research institutions, e.g. particle colliders (Sharon Traweek)
research ethics, science policy, and the role of the university.
History of the field
In 1935, in a celebrated paper, the Polish sociologist couple Maria Ossowska and Stanisław Ossowski proposed the founding of a "science of science" to study the scientific enterprise, its practitioners, and the factors influencing their work. Earlier, in 1923, the Polish sociologist Florian Znaniecki had made a similar proposal.
Fifty years before Znaniecki, in 1873, Aleksander Głowacki, better known in Poland by his pen name "Bolesław Prus", had delivered a public lecture – later published as a booklet – On Discoveries and Inventions, in which he anticipated such a discipline.
It is striking that, while early 20th-century sociologist proponents of a discipline to study science and its practitioners wrote in general theoretical terms, Prus had already half a century earlier described, with many specific examples, the scope and methods of such a discipline.
Thomas Kuhn's Structure of Scientific Revolutions (1962) increased interest both in the history of science and in science's philosophical underpinnings. Kuhn posited that the history of science was less a linear succession of discoveries than a succession of paradigms within the philosophy of science. Paradigms are broader, socio-intellectual constructs that determine which types of truth claims are permissible.
Science studies seeks to identify key dichotomies – such as those between science and technology, nature and culture, theory and experiment, and science and fine art – leading to the differentiation of scientific fields and practices.
The sociology of scientific knowledge arose at the University of Edinburgh, where David Bloor and his colleagues developed what has been termed "the strong programme". It proposed that both "true" and "false" scientific theories should be treated the same way. Both are informed by social factors such as cultural context and self-interest.
Human knowledge, abiding as it does within human cognition, is ineluctably influenced by social factors.
It proved difficult, however, to address natural-science topics with sociological methods, as was abundantly evidenced by the US science wars. Applying a deconstructive approach (as used in relation to works of art or religion) to the natural sciences risked endangering not only the "hard facts" of the natural sciences but also the objectivity and positivist tradition of sociology itself. The view of scientific knowledge production as an (at least partially) social construct was not easily accepted. Latour and others identified a dichotomy crucial for modernity: the division between nature (things, objects), seen as transcendent and there to be discovered, and society (the subject, the state), seen as immanent, artificial, and constructed. The dichotomy allowed for the mass production of things (technical-natural hybrids) and for large-scale global issues that endangered the distinction as such. In We Have Never Been Modern, for example, Latour calls for reconnecting the social and natural worlds, returning to the pre-modern use of "thing": addressing objects as hybrids made and scrutinized through the public interaction of people, things, and concepts.
As early as the 1980s, science studies scholars such as Trevor Pinch and Steve Woolgar began to involve "technology" and called their field "science, technology and society". This "turn to technology" brought science studies into communication with academics in science, technology, and society programs.
More recently, a novel approach known as mapping controversies has been gaining momentum among science studies practitioners and has been introduced as a course for students in engineering and architecture schools. In 2002 Harry Collins and Robert Evans called for a third wave of science studies (a pun on The Third Wave), namely studies of expertise and experience, responding to recent tendencies to dissolve the boundary between experts and the public.
Application to natural and man-made hazards
Sheep farming after Chernobyl
A showcase of the rather complex problems of scientific information and its interaction with lay persons is Brian Wynne's study of sheep farming in Cumbria after the Chernobyl disaster. He elaborated on the responses of sheep farmers in Cumbria, who had been subjected to administrative restrictions because of radioactive contamination allegedly caused by the nuclear accident at Chernobyl in 1986. The sheep farmers suffered economic losses, and their resistance against the imposed regulation was deemed irrational and inadequate. It turned out that the source of radioactivity was actually the Sellafield nuclear reprocessing complex; thus, the experts who were responsible for the duration of the restrictions were completely mistaken. The example led to attempts to better involve local knowledge and lay-persons' experience and to assess its often highly geographically and historically defined background.
Science studies on volcanology
Donovan et al. (2012) used social studies of volcanology to investigate the generation of knowledge and expert advice on various active volcanoes. Their study contains a survey of volcanologists carried out during 2008 and 2009 and interviews with scientists in the UK, Montserrat, Italy and Iceland during fieldwork seasons. Donovan et al. (2012) asked the experts about the perceived purpose of volcanology and what they considered the most important eruptions in historical time. The survey tries to identify eruptions that had an influence on volcanology as a science and to assess the role of scientists in policymaking.
A main focus was the impact of the 1997 Montserrat eruption. The eruption, a classic example of a black swan event, directly killed (only) 19 people. However, it had major impacts on the local society and destroyed important infrastructure, such as the island's airport. About 7,000 people, or two-thirds of the population, left Montserrat, 4,000 of them for the United Kingdom.
The Montserrat case put immense pressure on volcanologists, as their expertise suddenly became the primary driver of various public policy approaches. The science studies approach provided valuable insights in that situation. There were various miscommunications among scientists, and matching scientific uncertainty (typical of volcanic unrest) with the demand for a single unified voice of political advice was a challenge. The Montserrat volcanologists began to use statistical elicitation models to estimate the probabilities of particular events, a rather subjective method, but one that allows consensus and experience-based expertise to be synthesized step by step. It also incorporated local knowledge and experience.
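To make the elicitation step concrete, the sketch below shows linear opinion pooling, one common family of expert-elicitation methods in which each expert's probability estimate for an event is combined using performance-based weights. This is a hedged toy illustration: the probabilities, weights, and pooling rule are assumptions for demonstration, not the actual procedure used on Montserrat.

```python
# Toy sketch of linear opinion pooling, one common expert-elicitation method.
# All numbers are hypothetical; this is not the actual Montserrat procedure.

def pool_estimates(probs, weights):
    """Weighted average of expert probabilities; weights need not sum to 1."""
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

# Three experts' probability estimates for a hazardous event, with
# calibration weights favouring the historically better-calibrated experts.
expert_probs = [0.10, 0.25, 0.15]
calibration_weights = [2.0, 1.0, 1.5]

pooled = pool_estimates(expert_probs, calibration_weights)
print(f"Pooled probability: {pooled:.3f}")  # -> 0.150
```

The point of weighting is that consensus can be built stepwise: experts who have proved better calibrated on past test questions pull the pooled estimate further than poorly calibrated ones.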
Volcanology as a science currently faces a shift in its epistemological foundations. The discipline has started to involve more research into risk assessment and risk management, which requires new, integrated methodologies for knowledge collection that transcend scientific disciplinary boundaries and combine qualitative and quantitative outcomes in a structured whole.
Experts and democracy
Science has become a major force in Western democratic societies, which depend on innovation and technology (compare risk society) to address their risks. Beliefs about science can differ greatly from those of scientists themselves, for reasons of, e.g., moral values, epistemology, or political motivation. The designation of expertise as authoritative in interactions with lay people and decision makers of all kinds is nevertheless challenged in contemporary risk societies, as suggested by scholars who follow Ulrich Beck's theorisation. The role of expertise in contemporary democracies is an important theme for debate among science studies scholars. Some argue for a more widely distributed, pluralist understanding of expertise (Sheila Jasanoff and Brian Wynne, for example), while others argue for a more nuanced understanding of the idea of expertise and its social functions (Collins and Evans, for example).
See also
Logology (study of science)
Merton thesis
Public awareness of science
Science and technology studies
Science and technology studies in India
Social construction of technology
Sociology of scientific knowledge
Sokal affair
References
Bibliography
Science studies, general
Bauchspies, W., Jennifer Croissant and Sal Restivo: Science, Technology, and Society: A Sociological Perspective (Oxford: Blackwell, 2005).
Biagioli, Mario, ed. The Science Studies Reader (New York: Routledge, 1999).
Bloor, David; Barnes, Barry & Henry, John, Scientific Knowledge: A Sociological Analysis (Chicago: University of Chicago Press, 1996).
Gross, Alan. Starring the Text: The Place of Rhetoric in Science Studies. Carbondale: SIU Press, 2006.
Fuller, Steve, The Philosophy of Science and Technology Studies (New York: Routledge, 2006).
Hess, David J. Science Studies: An Advanced Introduction (New York: NYU Press, 1997).
Jasanoff, Sheila, ed. Handbook of science and technology studies (Thousand Oaks, Calif.: SAGE Publications, 1995).
Latour, Bruno, "The Last Critique," Harper's Magazine (April 2004): 15–20.
Latour, Bruno. Science in Action. Cambridge. 1987.
Latour, Bruno, "Do You Believe in Reality: News from the Trenches of the Science Wars," in Pandora's Hope (Cambridge: Harvard University Press, 1999)
Vinck, Dominique. The Sociology of Scientific Work. The Fundamental Relationship between Science and Society (Cheltenham: Edward Elgar, 2010).
Wyer, Mary; Donna Cookmeyer; Mary Barbercheck, eds. Women, Science and Technology: A Reader in Feminist Science Studies (New York: Routledge, 2001).
Haraway, Donna J. "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective," in Simians, Cyborgs, and Women: the Reinvention of Nature (New York: Routledge, 1991), 183–201. Originally published in Feminist Studies, Vol. 14, No. 3 (Autumn, 1988), pp. 575–599. (available online)
Foucault, Michel, "Truth and Power," in Power/Knowledge (New York: Pantheon Books, 1997), 109–133.
Porter, Theodore M. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton: Princeton University Press, 1995).
Restivo, Sal: "Science, Society, and Values: Toward a Sociology of Objectivity" (Lehigh PA: Lehigh University Press, 1994).
Media, culture, society and technology
Hancock, Jeff. Deception and design: the impact of communication technology on lying behavior
Lessig, Lawrence. Free Culture. Penguin USA, 2004.
MacKenzie, Donald. The Social Shaping of Technology Open University Press: 2nd ed. 1999.
Mitchell, William J. Rethinking Media Change Thorburn and Jennings eds. Cambridge, Massachusetts : MIT Press, 2003.
Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. Penguin USA, 1985.
Rheingold, Howard. Smart Mobs: The Next Social Revolution. Cambridge: Mass., Perseus Publishing. 2002.
External links
Sociology of Science, an introductory article by Joseph Ben-David & Teresa A. Sullivan, Annual Review of Sociology, 1975
The Incommensurability of Scientific and Poetic Knowledge
University of Washington Science Studies Network
Historiography of science
Philosophy of science
Pedagogy
Science and technology studies
Health care
Health care, or healthcare, is the improvement of health via the prevention, diagnosis, treatment, amelioration or cure of disease, illness, injury, and other physical and mental impairments in people. Health care is delivered by health professionals and allied health fields. Medicine, dentistry, pharmacy, midwifery, nursing, optometry, audiology, psychology, occupational therapy, physical therapy, athletic training, and other health professions all constitute health care. The term includes work done in providing primary care, secondary care, tertiary care, and public health.
Access to healthcare may vary across countries, communities, and individuals, influenced by social and economic conditions and health policies. Providing health care services means "the timely use of personal health services to achieve the best possible health outcomes". Factors to consider in terms of healthcare access include financial limitations (such as insurance coverage), geographical and logistical barriers (such as additional transportation costs and the ability to take paid time off work to use such services), sociocultural expectations, and personal limitations (lack of ability to communicate with health care providers, poor health literacy, low income). Limitations on health care services negatively affect the use of medical services, the efficacy of treatments, and overall outcomes (well-being, mortality rates).
Health systems are the organizations established to meet the health needs of targeted populations. According to the World Health Organization (WHO), a well-functioning healthcare system requires a financing mechanism, a well-trained and adequately paid workforce, reliable information on which to base decisions and policies, and well-maintained health facilities to deliver quality medicines and technologies.
An efficient healthcare system can contribute to a significant part of a country's economy, development, and industrialization. Health care is an important determinant in promoting the general physical and mental health and well-being of people around the world. An example of this was the worldwide eradication of smallpox in 1980, which the WHO declared the first disease in human history to be eliminated by deliberate healthcare interventions.
Delivery
The delivery of modern health care depends on groups of trained professionals and paraprofessionals coming together as interdisciplinary teams. This includes professionals in medicine, psychology, physiotherapy, nursing, dentistry, midwifery and allied health, along with many others such as public health practitioners, community health workers and assistive personnel, who systematically provide personal and population-based preventive, curative and rehabilitative care services.
While the definitions of the various types of health care vary depending on the different cultural, political, organizational, and disciplinary perspectives, there appears to be some consensus that primary care constitutes the first element of a continuing health care process and may also include the provision of secondary and tertiary levels of care. Health care can be defined as either public or private.
Primary care
Primary care refers to the work of health professionals who act as a first point of consultation for all patients within the health care system. The primary care model supports first-contact, accessible, continuous, comprehensive and coordinated person-focused care. Such a professional would usually be a primary care physician, such as a general practitioner or family physician. Another professional would be a licensed independent practitioner such as a physiotherapist, or a non-physician primary care provider such as a physician assistant or nurse practitioner. Depending on the locality and health system organization, the patient may see another health care professional first, such as a pharmacist or nurse. Depending on the nature of the health condition, patients may be referred for secondary or tertiary care.
Primary care is often used as the term for the health care services that play a role in the local community. It can be provided in different settings, such as urgent care centers that provide same-day appointments or services on a walk-in basis.
Primary care involves the widest scope of health care, including all ages of patients, patients of all socioeconomic and geographic origins, patients seeking to maintain optimal health, and patients with all types of acute and chronic physical, mental and social health issues, including multiple chronic diseases. Consequently, a primary care practitioner must possess a wide breadth of knowledge in many areas. Continuity is a key characteristic of primary care, as patients usually prefer to consult the same practitioner for routine check-ups and preventive care, health education, and every time they require an initial consultation about a new health problem. The International Classification of Primary Care (ICPC) is a standardized tool for understanding and analyzing information on interventions in primary care based on the reason for the patient's visit.
Common chronic illnesses usually treated in primary care may include, for example, hypertension, diabetes, asthma, COPD, depression and anxiety, back pain, arthritis or thyroid dysfunction. Primary care also includes many basic maternal and child health care services, such as family planning services and vaccinations. In the United States, the 2013 National Health Interview Survey found that skin disorders (42.7%), osteoarthritis and joint disorders (33.6%), back problems (23.9%), disorders of lipid metabolism (22.4%), and upper respiratory tract disease (22.1%, excluding asthma) were the most common reasons for accessing a physician.
In the United States, primary care physicians have begun to deliver primary care outside of the managed care (insurance-billing) system through direct primary care which is a subset of the more familiar concierge medicine. Physicians in this model bill patients directly for services, either on a pre-paid monthly, quarterly, or annual basis, or bill for each service in the office. Examples of direct primary care practices include Foundation Health in Colorado and Qliance in Washington.
In the context of global population aging, with increasing numbers of older adults at greater risk of chronic non-communicable diseases, rapidly increasing demand for primary care services is expected in both developed and developing countries. The World Health Organization regards the provision of essential primary care as an integral component of an inclusive primary health care strategy.
Secondary care
Secondary care includes acute care: necessary treatment for a short period of time for a brief but serious illness, injury, or other health condition. This care is often found in a hospital emergency department. Secondary care also includes skilled attendance during childbirth, intensive care, and medical imaging services.
The term "secondary care" is sometimes used synonymously with "hospital care". However, many secondary care providers, such as psychiatrists, clinical psychologists, occupational therapists, most dental specialties or physiotherapists, do not necessarily work in hospitals. Some primary care services are delivered within hospitals. Depending on the organization and policies of the national health system, patients may be required to see a primary care provider for a referral before they can access secondary care.
In countries that operate under a mixed market health care system, some physicians limit their practice to secondary care by requiring patients to see a primary care provider first. This restriction may be imposed under the terms of the payment agreements in private or group health insurance plans. In other cases, medical specialists may see patients without a referral, and patients may decide whether self-referral is preferred.
In other countries patient self-referral to a medical specialist for secondary care is rare as prior referral from another physician (either a primary care physician or another specialist) is considered necessary, regardless of whether the funding is from private insurance schemes or national health insurance.
Allied health professionals, such as physical therapists, respiratory therapists, occupational therapists, speech therapists, and dietitians, also generally work in secondary care, accessed through either patient self-referral or through physician referral.
Tertiary care
Tertiary care is specialized consultative health care, usually for inpatients and on referral from a primary or secondary health professional, in a facility that has personnel and facilities for advanced medical investigation and treatment, such as a tertiary referral hospital.
Examples of tertiary care services are cancer management, neurosurgery, cardiac surgery, plastic surgery, treatment for severe burns, advanced neonatology services, palliative care, and other complex medical and surgical interventions.
Quaternary care
The term quaternary care is sometimes used as an extension of tertiary care in reference to advanced levels of medicine which are highly specialized and not widely accessed. Experimental medicine and some types of uncommon diagnostic or surgical procedures are considered quaternary care. These services are usually only offered in a limited number of regional or national health care centers.
Home and community care
Many types of health care interventions are delivered outside of health facilities. They include many interventions of public health interest, such as food safety surveillance, distribution of condoms and needle-exchange programs for the prevention of transmissible diseases.
They also include the services of professionals in residential and community settings in support of self-care, home care, long-term care, assisted living, treatment for substance use disorders among other types of health and social care services.
Community rehabilitation services can assist with mobility and independence after the loss of limbs or loss of function. This can include prostheses, orthotics, or wheelchairs.
Many countries are dealing with aging populations, so one of the priorities of the health care system is to help seniors live full, independent lives in the comfort of their own homes. There is an entire section of health care geared to providing seniors with help in day-to-day activities at home, such as transportation to and from doctor's appointments, along with many other activities that are essential for their health and well-being. Although family members and care workers cooperate in providing home care for older adults, they may harbor diverging attitudes and values towards their joint efforts. This state of affairs presents a challenge for the design of ICT (information and communication technology) for home care.
Because statistics show that over 80 million Americans have taken time off from their primary employment to care for a loved one, many countries have begun offering programs such as the Consumer Directed Personal Assistant Program to allow family members to take care of their loved ones without giving up their entire income.
With obesity in children rapidly becoming a major concern, health services often set up programs in schools aimed at educating children about nutritional eating habits, making physical education a requirement and teaching young adolescents to have a positive self-image.
Ratings
Health care ratings are ratings or evaluations of health care used to evaluate the process of care and health care structures and/or outcomes of health care services. This information is translated into report cards generated by quality organizations, nonprofit organizations, consumer groups, and media. This evaluation of quality is based on measures of:
health plan quality
hospital quality
patient experience
physician quality
quality for other health professionals
Related sectors
Health care extends beyond the delivery of services to patients, encompassing many related sectors, and is set within a bigger picture of financing and governance structures.
Health system
A health system, also sometimes referred to as health care system or healthcare system, is the organization of people, institutions, and resources that deliver health care services to populations in need.
Industry
The healthcare industry incorporates several sectors that are dedicated to providing health care services and products. As a basic framework for defining the sector, the United Nations' International Standard Industrial Classification categorizes health care as generally consisting of hospital activities, medical and dental practice activities, and "other human health activities." The last class involves activities of, or under the supervision of, nurses, midwives, physiotherapists, scientific or diagnostic laboratories, pathology clinics, residential health facilities, patient advocates or other allied health professions.
In addition, according to industry and market classifications, such as the Global Industry Classification Standard and the Industry Classification Benchmark, health care includes many categories of medical equipment, instruments and services including biotechnology, diagnostic laboratories and substances, drug manufacturing and delivery.
For example, pharmaceuticals and other medical devices are the leading high technology exports of Europe and the United States. The United States dominates the biopharmaceutical field, accounting for three-quarters of the world's biotechnology revenues.
Research
The quantity and quality of many health care interventions are improved through the results of science, such as advances through the medical model of health, which focuses on the eradication of illness through diagnosis and effective treatment. Many important advances have been made through health research, biomedical research and pharmaceutical research, which form the basis for evidence-based medicine and evidence-based practice in health care delivery. Health care research frequently engages directly with patients, and as such, questions of whom to engage and how to engage them become important when seeking to actively include them in studies. While no single best practice exists, the results of a systematic review on patient engagement suggest that research methods for patient selection need to account for both patient availability and willingness to engage.
Health services research can lead to greater efficiency and equitable delivery of health care interventions, as advanced through the social model of health and disability, which emphasizes the societal changes that can be made to make populations healthier. Results from health services research often form the basis of evidence-based policy in health care systems. Health services research is also aided by initiatives in the field of artificial intelligence for the development of systems of health assessment that are clinically useful, timely, sensitive to change, culturally sensitive, low-burden, low-cost, built into standard procedures, and involve the patient.
Financing
There are generally five primary methods of funding health care systems:
General taxation to the state, county or municipality
Social health insurance
Voluntary or private health insurance
Out-of-pocket payments
Donations to health charities
In most countries, there is a mix of all five models, but this varies across countries and over time within countries. Aside from financing mechanisms, an important question is always how much to spend on health care. For the purposes of comparison, this is often expressed as the percentage of GDP spent on health care. In OECD countries, for every extra $1,000 spent on health care, life expectancy falls by 0.4 years. A similar correlation is seen in the analysis carried out each year by Bloomberg. Clearly this kind of analysis is flawed in that life expectancy is only one measure of a health system's performance, but equally, the notion that more funding is better is not supported.
In 2011, the health care industry consumed an average of 9.3 percent of GDP, or US$3,322 (PPP-adjusted) per capita, across the 34 OECD member countries. The US (17.7%, or US$ PPP 8,508), the Netherlands (11.9%, 5,099), France (11.6%, 4,118), Germany (11.3%, 4,495), Canada (11.2%, 5,669), and Switzerland (11%, 5,634) were the top spenders; however, life expectancy of the total population at birth was highest in Switzerland (82.8 years), Japan and Italy (82.7), Spain and Iceland (82.4), France (82.2) and Australia (82.0), while the OECD average exceeded 80 years for the first time in 2011: 80.1 years, a gain of 10 years since 1970. The US (78.7 years) ranked only 26th among the 34 OECD member countries, but had by far the highest costs. All OECD countries had achieved universal (or almost universal) health coverage, except the US and Mexico (see also international comparisons).
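As a rough, self-contained check, the snippet below correlates per-capita spending with life expectancy using only the three countries above for which both figures are quoted (the US, France, and Switzerland). Three data points prove nothing on their own, but the sign of the result matches the negative association described earlier.

```python
# Correlation between the 2011 per-capita health spending (US$ PPP) and life
# expectancy figures quoted above, for the three countries with both values.
# Far too few points to be conclusive; illustrative arithmetic only.
from statistics import correlation  # available since Python 3.10

spend = [8508, 4118, 5634]            # US, France, Switzerland
life_expectancy = [78.7, 82.2, 82.8]  # years at birth

print(f"Pearson r: {correlation(spend, life_expectancy):.2f}")  # -> -0.89
```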
In the United States, where around 18% of GDP is spent on health care, the Commonwealth Fund analysis of spending and quality shows a clear correlation between worse quality and higher spending.
Administration and regulation
The management and administration of health care is vital to the delivery of health care services. In particular, the practice of health professionals and the operation of health care institutions is typically regulated by national or state/provincial authorities through appropriate regulatory bodies for purposes of quality assurance. Most countries have credentialing staff in regulatory boards or health departments who document the certification or licensing of health workers and their work history.
Health information technology
Health information technology (HIT) is "the application of information processing involving both computer hardware and software that deals with the storage, retrieval, sharing, and use of health care information, data, and knowledge for communication and decision making."
Health information technology components:
Electronic health record (EHR) – An EHR contains a patient's comprehensive medical history, and may include records from multiple providers.
Electronic Medical Record (EMR) – An EMR contains the standard medical and clinical data gathered in one provider's office; the toy sketch after this list contrasts the single-provider EMR with the cross-provider EHR.
Health information exchange (HIE) – Health Information Exchange allows health care professionals and patients to appropriately access and securely share a patient's vital medical information electronically.
Medical practice management software (MPM) – is designed to streamline the day-to-day tasks of operating a medical facility. Also known as practice management software or practice management system (PMS).
Personal health record (PHR) – A PHR is a patient's medical history that is maintained privately, for personal use.
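As a rough sketch of how the single-provider EMR and the cross-provider EHR differ in scope, consider the toy data model below. The field names are illustrative assumptions only and do not follow any real interchange standard such as HL7 FHIR.

```python
# Toy data model contrasting an EMR entry (created by a single provider) with
# an EHR (which aggregates entries across providers). Illustrative only; the
# fields are assumptions and follow no real standard such as HL7 FHIR.
from dataclasses import dataclass, field

@dataclass
class EMREntry:
    provider: str      # the single practice that created this record
    diagnosis: str
    treatment: str

@dataclass
class EHR:
    patient_id: str
    entries: list = field(default_factory=list)  # pooled across providers

    def add(self, entry: EMREntry) -> None:
        self.entries.append(entry)

record = EHR("patient-001")
record.add(EMREntry("family practice", "hypertension", "lisinopril"))
record.add(EMREntry("cardiology clinic", "arrhythmia", "beta blocker"))
print(len(record.entries))  # -> 2 entries from two different providers
```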
See also
:Category:Health care by country
Global health
Health equity
Health policy
Healthcare system / Health professionals
Tobacco control laws
Universal health care
References
External links
Primary care
Public services
Health
Public health
Universal health care
Health economics
Health sciences
Cognitive style
Cognitive style or thinking style is a concept used in cognitive psychology to describe the way individuals think, perceive and remember information. Cognitive style differs from cognitive ability (or level), the latter being measured by aptitude tests or so-called intelligence tests. There is controversy over the exact meaning of the term "cognitive style" and whether it is a single or multiple dimension of human personality. However it remains a key concept in the areas of education and management. If a pupil has a cognitive style that is similar to that of his/her teacher, the chances are improved that the pupil will have a more positive learning experience (Kirton, 2003). Likewise, team members with similar cognitive styles likely feel more positive about their participation with the team (Kirton, 2003). While matching cognitive styles may make participants feel more comfortable when working with one another, this alone cannot guarantee the success of the outcome.
Multi-dimensional models and measures
A popular multi-dimensional instrument for the measure of cognitive style is the Myers–Briggs Type Indicator.
Riding (1991) developed a two-dimensional cognitive style instrument, his Cognitive Style Analysis (CSA), a computer-presented test that measures individuals' position on two orthogonal dimensions – Wholist-Analytic (W-A) and Verbal-Imagery (V-I). The W-A dimension reflects how individuals organise and structure information. Individuals described as Analytics will deconstruct information into its component parts, whereas individuals described as Wholists will retain a global or overall view of information. The V-I dimension describes individuals' mode of information representation in memory during thinking – Verbalisers represent information in words or verbal associations, and Imagers represent information in mental pictures. The CSA test is broken down into three sub-tests, all of which are based on a comparison between response times to different types of stimulus items. Some scholars argue that this instrument, being at least in part reliant on the ability of the respondent to answer at speed, really measures a mix of cognitive style and cognitive ability (Kirton, 2003). This is said to contribute to the unreliability of this instrument.
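Published descriptions of the CSA indicate that each dimension is scored as a ratio of response times to the two contrasting item types. The toy sketch below works under that assumption; the data, the use of medians, and the 1.0 cut-off are hypothetical, since the instrument's exact scoring rules are not public.

```python
# Toy response-time ratio score in the spirit of the CSA's Verbal-Imagery
# dimension. Data, the use of medians, and the cut-off are hypothetical;
# the instrument's exact scoring rules are not public.
from statistics import median

verbal_rts = [1.8, 2.1, 1.9, 2.4]    # seconds per verbal item (hypothetical)
imagery_rts = [1.2, 1.5, 1.3, 1.4]   # seconds per imagery item (hypothetical)

vi_ratio = median(verbal_rts) / median(imagery_rts)
# Relatively slower responses to verbal items suggest a preference for imagery.
style = "Imager" if vi_ratio > 1.0 else "Verbaliser"
print(f"V-I ratio: {vi_ratio:.2f} -> {style}")  # -> 1.48 -> Imager
```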
Bipolar, one-dimensional models and measures
The field dependence-independence model, invented by Herman Witkin, identifies an individual's perceptive behaviour while distinguishing object figures from the content field in which they are set. Two similar instruments to do this were produced, the Embedded Figures Test (EFT) and the Group Embedded Figures Test (GEFT) (1971). In both cases, the content field is a distracting or confusing background. These instruments are designed to distinguish field-independent from field-dependent cognitive types; a rating which is claimed to be value-neutral. Field-independent people tend to be more autonomous when it comes to the development of restructuring skills; that is, those skills required during technical tasks with which the individual is not necessarily familiar. They are, however, less autonomous in the development of interpersonal skills. The EFT and GEFT continue to enjoy support and usage in research and practice. However, they, too, are criticised by scholars as containing an element of ability and so may not measure cognitive style alone.
Liam Hudson (Carey, 1991) identified two cognitive styles: convergent thinkers, good at accumulating material from a variety of sources relevant to a problem's solution, and divergent thinkers who proceed more creatively and subjectively in their approach to problem-solving. Hudson's Converger-diverger construct attempts to measure the processing rather than the acquisition of information by an individual. It aims to differentiate convergent from divergent thinkers; the former being persons who think rationally and logically while the latter tend to be more flexible and to base reasoning more on heuristic evidence.
In contrast, cognitive complexity theories as proposed by James Bieri (1961) attempt to identify individuals who are more complex in their approach to problem-solving against those who are simpler. The instruments used to measure this concept of "cognitive style" are either Driver's Decision Style Exercise (DDSE) (Carey, 1991) or the Complexity Self-Test Description Instrument, which are somewhat ad hoc and so are little used at present.
Gordon Pask (Carey, 1991) extended these notions in a discussion of strategies and styles of learning. In this, he classifies learning strategies as either holist or serialist. When confronted with an unfamiliar type of problem, holists gather information randomly within a framework, while serialists approach problem-solving step-wise, proceeding from the known to the unknown.
Robert Ornstein's Hemispherical lateralisation concept (Carey, 1991), commonly called left-brain/right-brain theory, posits that the left hemisphere of the brain controls logical and analytical operations while the right hemisphere controls holistic, intuitive and pictorial activities. Cognitive style is thus claimed to be a single dimension on a scale from extreme left-brain to extreme right-brain types, depending on which associated behaviour dominates in the individual, and by how much.
Taggart's (1988) "Whole-brain human information processing theory" classifies the brain as having six divisions, three per hemisphere, which in a sense is a refined model of the hemispherical lateralisation theory discussed above.
The Allinson-Hayes (1996) Cognitive Style Index (CSI) has features of Ornstein's left-brain/right-brain theory. Recent evidence suggests that it may be the most widely used measure of cognitive style in academic research in the fields of management and education (Cools, Armstrong and Verbrigghe, 2014; Evans, Cools and Charlesworth, 2010). The CSI contains 38 items, each rated using a 3-point scale (true; uncertain; false). Certain scholars have questioned its construct validity on the grounds of theoretical and methodological approaches associated with its development. Allinson and Hayes (2012), however, have refuted these claims on the basis of other independent studies of its psychometric properties. Research has indicated both gender and cultural differences in CSI scores. While this may complicate some management and educational applications, previous investigations have suggested it is entirely plausible that cognitive style is related to these social factors.
Kirton's model of cognitive style
Another popular model of cognitive style was devised by Michael Kirton (1976, 2003). His model, called Adaption-Innovation theory, claims that an individual's preferred approach to problem solving can be placed on a continuum ranging from high adaptation to high innovation. He suggests that some human beings, called adaptors, tend to prefer the adaptive approach to problem-solving, while others (innovators) prefer the reverse. Adaptors use what is given to solve problems by time-honoured techniques. Alternatively, innovators look beyond what is given to solve problems with the aid of innovative technologies. Kirton suggests that while adaptors prefer to do well within a given paradigm, innovators would rather do differently, thereby striving to transcend existing paradigms.
Kirton also invented an instrument to measure cognitive style (at least in accordance with this model) known as the Kirton Adaption-innovation Inventory (KAI). This requires the respondent to rate themselves against thirty-two personality traits. A drawback of all the other efforts to measure cognitive style discussed above is their failure to separate out cognitive style and cognitive level. As the items on the KAI are expressed in clear and simple language, cognitive level plays no significant role. Scores on the A-I continuum are normally distributed between the extreme cognitive styles of high innovation and high adaptation.
Another important concept associated with A-I theory is that of bridging in teams. Kirton (2003) defines bridging as "reaching out to people in the team and helping them be part of it so that they may contribute even if their contribution is outside the mainstream". Bridging is thus a task and a role, which has to be learnt. It is not a cognitive style. Bridging is also not leading, although the skilled leader may make use of persons they recognise as good bridgers to maintain group cohesion, that is, to keep the group aware of the importance of its members working well together. Kirton (2003) suggests that it is easier for a person to learn and assume a bridging role if their cognitive style is an intermediate one. If person B assumes a bridging role which assists persons A and C to work well together in a team, then B's KAI score is recommended to be between those of A and C. Of course, it is only recommended that B's score lies between the scores of A and C, not that B's score lies near the KAI mean. All of A, B and C could be high-scoring innovators or, for that matter, high-scoring adaptors.
See also
Barnum effect or Forer effect
Differential psychology
Fluid and crystallized intelligence
Learning styles
List of thought processes
References
Allinson, C.W., and Hayes, J. "The cognitive style index: a measure of intuition-analysis for organisational research", Journal of Management Studies (33:1), January 1996, pp 119–135.
Allinson, C. W., and Hayes, J. The Cognitive Style Index: Technical manual and user guide, 2012.
Atherton, J.S. "Learning and Teaching: Pask and Laurillard", 2003. Retrieved 28 June 2003, from https://web.archive.org/web/20070603195811/http://www.dmu.ac.uk/%7Ejamesa/learning/pask.htm#serialists.
Bieri, J. "Complexity-simplicity as a personality variable in cognitive and preferential behaviour" (Homewood, IL: Dorsey Press, 1961).
Bobic, M., Davis, E., and Cunningham, R. "The Kirton adaptation-innovation inventory", Review of Public Personnel Administration (19:2), Spring 1999, pp 18–31.
Carey, J.M. "The issue of cognitive style in MIS/DSS research", 1991.
Cools, E., Armstrong, S.J., & Verbrigghe, J. (2014). Methodological practices in cognitive style research: Insights and recommendations from the field of business and psychology. European Journal of Work and Organisational Psychology, Vol 23, Iss 4, pp. 627–641.
Evans, C., Cools, E., and Charlesworth, Z. M. (2010). "Learning in higher education – how cognitive and learning styles matter". Teaching in Higher Education, 15, 469–480.
Kirton, M. "Adaptors and innovators: a description and measure", Journal of Applied Psychology (61:5) 1976, pp 622–629.
Kirton, M.J. "Field Dependence and Adaptation Innovation Theories", Perceptual and Motor Skills, 1978, 47, pp 1239–1245.
Kirton, M.J. Adaptation and innovation in the context of diversity and change Routledge, London, 2003, p. 392
Mullany, M.J. "Using cognitive style measurements to forecast user resistance", 14th Annual conference of the National Advisory Committee on Computing Qualifications, Napier, New Zealand, 2001, pp. 95–100.
Peterson, E.R., & Deary, I.J. (2006). Examining wholistic-analytic style using preferences in early information processing. Personality and Individual Differences, 41, 3–14.
Pask, G. "Styles and Strategies of Learning", British Journal of Educational Psychology (46:II) 1976, pp 128–148.
Riding, R.J., and Cheema, I. "Cognitive styles - An overview and integration", Educational Psychology (11:3/4) 1991, pp 193–215.
Riding, R.J., and Sadler-Smith, E. "Type of instructional material, cognitive style and learning performance", Educational Studies (18:3) 1992, pp 323–340.
Salmani Nodoushan, M. A. (2007). Is Cognitive Style A Precursor To EFL Reading Performance? i-Manager's Journal of Educational Technology, 4(1), 66–86.
Sternberg, R.J., & Zhang, L.F. (2001). "Perspectives on thinking, learning, and cognitive styles" (Edited). Mahwah, NJ: Lawrence Erlbaum.
Witkin, H.A., Moore, C.A., Goodenough, D.R., and Cox, P.W. "Field dependent and field independent cognitive styles and their educational implications", Review of Educational Research (47:1), Winter 1977, pp 1–64.
Zhang, L.F., & Sternberg, R.J. (2006). "The nature of intellectual styles". Mahwah, NJ: Lawrence Erlbaum.
External links
Cognition
Cognitive psychology
Educational psychology
Savior complex
In psychology, a savior complex is an attitude and demeanor in which a person believes they are responsible for assisting other people. A person with a savior complex will often experience empathic episodes and commit to impulsive decisions such as volunteering, donating, or advocating for a cause. A person with the complex will usually make an attempt to assist or continue to assist even if they are not helpful or are detrimental to the situation, others, or themselves. It is often associated with other disorders, such as schizophrenia and bipolar disorder, and is commonly used interchangeably with the similar term 'Messiah complex'.
See also
Hero syndrome
Messiah complex
Superman complex
White knight
White savior
References
Complex (psychology)
Popular psychology
Developmental disorder
Developmental disorders comprise a group of psychiatric conditions originating in childhood that involve serious impairment in different areas. There are several ways of using this term. The most narrow concept is used in the category "Specific Disorders of Psychological Development" in the ICD-10. These disorders comprise developmental language disorder, learning disorders, developmental coordination disorders, and autism spectrum disorders (ASD). In broader definitions, attention deficit hyperactivity disorder (ADHD) is included, and the term used is neurodevelopmental disorders. Yet others include antisocial behavior and schizophrenia that begins in childhood and continues through life. However, these two latter conditions are not as stable as the other developmental disorders, and there is not the same evidence of a shared genetic liability.
Developmental disorders are present from early life onward. Most improve as the child grows older, but some entail impairments that continue throughout life. These disorders differ from pervasive developmental disorders (PDD), which uniquely describe a group of five developmental diagnoses, one of which is autism spectrum disorder (ASD). Pervasive developmental disorders reference a limited number of conditions, whereas developmental disorders are a broad network of social, communicative, physical, genetic, intellectual, behavioral, and language concerns and diagnoses.
Emergence
Learning disabilities are often diagnosed when children are young and just beginning school. Most learning disabilities are identified before the age of 9.
Young children with communication disorders may not speak at all, or may have a limited vocabulary for their age. Some children with communication disorders have difficulty understanding simple directions or cannot name objects. Most children with communication disorders can speak by the time they enter school, however, they continue to have problems with communication. School-aged children often have problems understanding and formulating words. Teens may have more difficulty with understanding or expressing abstract ideas.
Causes
The scientific study of the causes of developmental disorders involves many theories. Some of the major differences between these theories involves whether environment disrupts normal development, if abnormalities are pre-determined, or if they are products of human evolutionary history which become disorders in modern environments (see evolutionary psychiatry).
Normal development occurs with a combination of contributions from both the environment and genetics. The theories vary in the part each factor has to play in normal development, thus affecting how the abnormalities are caused.
One theory that supports environmental causes of developmental disorders involves stress in early childhood. Researcher and child psychiatrist Bruce D. Perry, M.D., Ph.D., theorizes that developmental disorders can be caused by early childhood traumatization. In his works, he compares developmental disorders in traumatized children to adults with post-traumatic stress disorder, linking extreme environmental stress to the cause of developmental difficulties. Other stress theories suggest that even small stresses can accumulate to result in emotional, behavioral, or social disorders in children.
A 2017 study tested all 20,000 genes in about 4,300 families with children with rare developmental difficulties in the UK and Ireland in order to identify if these difficulties had a genetic cause. They found 14 new developmental disorders caused by spontaneous genetic mutations not found in either parent (such as a fault in the CDK13 gene). They estimated that about one in 300 children are born with spontaneous genetic mutations associated with rare developmental disorders.
Types
Autism spectrum disorder (ASD)
Diagnosis
The first diagnosed case of ASD was published in 1943 by American psychiatrist Leo Kanner. Because cases of ASD vary widely in presentation and severity, the first signs of ASD are very hard to detect. A diagnosis of ASD can be made accurately before the child is 3 years old, but the diagnosis is not commonly confirmed until the child is somewhat older. The age of diagnosis can range from 9 months to 14 years, and the mean age is 4 years old in the USA. On average, each case of ASD is assessed at three different diagnostic centers before being confirmed. Early diagnosis of the disorder can diminish familial stress, speed up referral to special educational programs and influence family planning. The occurrence of ASD in one child can increase the risk of the next child having ASD by 50 to 100 times.
Abnormalities in the brain
The cause of ASD is still uncertain. What is known is that a child with ASD has a pervasive problem with how the brain is wired. Genes related to neurotransmitter receptors (serotonin and gamma-aminobutyric acid [GABA]) and CNS structural control (HOX genes) are potential target genes affected in ASD. Autism spectrum disorder is a disorder of many parts of the brain. Structural changes are observed in the cortex, which controls higher functions, sensation, muscle movements, and memory. Structural defects are also seen in the cerebellum, which affect motor and communication skills. Sometimes the left lobe of the brain is affected, causing neuropsychological symptoms. The distribution of white matter, the nerve fibers that link diverse parts of the brain, is abnormal. The corpus callosum, the band of nerve fibers that connects the left and right hemispheres of the brain, is also affected in ASD. A study also found that 33% of people who have AgCC (agenesis of the corpus callosum), a condition in which the corpus callosum is partially or completely absent, had scores higher than the autism screening cut-off.
An ASD child's brain grows at a very rapid rate and is almost fully grown by the age of 10. Recent fMRI studies have also found altered connectivity within the social brain areas due to ASD and may be related to the social impairments encountered in ASD.
Symptoms
The symptoms have a wide range of severity. The symptoms of ASD can be broadly categorised as the following:
Persistent issues in social interactions and communications
These are predominantly seen by unresponsiveness in conversations, lesser emotional sharing, inability to initiate conversations, inability to interpret body language, avoidance of eye-contact and difficulty maintaining relationships.
Repetitive behavioral patterns
These patterns can be seen in the form of repeated movements of the hand or the phrases used while talking. A rigid adherence to schedules and inflexibility to adapt even if a minor change is made to their routine is also one of the behavioral symptoms of ASD. They could also display sensory patterns such as extreme aversion to certain odors or indifference to pain or temperature.
There are also different symptoms at different ages based on developmental milestones. Children between 0 and 36 months with ASD show a lack of eye contact, seem to be deaf, lack a social smile, do not like being touched or held, show unusual sensory behavior, and show a lack of imitation. Children between 12 and 24 months with ASD show a lack of gestures, prefer to be alone, do not point to objects to indicate interest, are easily frustrated with challenges, and lack functional play. Finally, children between 24 and 36 months of age with ASD show a lack of symbolic play and an unusual interest in certain objects, or moving objects.
Treatment
There is no specific treatment for autism spectrum disorders, but there are several types of therapy effective in easing the symptoms of autism, such as Applied Behavior Analysis (ABA), Speech-language therapy, Occupational therapy or Sensory integration therapy.
Applied behavioral analysis (ABA) is considered the most effective therapy for autism spectrum disorders by the American Academy of Pediatrics. ABA focuses on teaching adaptive behaviors like social skills, play skills, or communication skills and diminishing problematic behaviors like self-injury. This is done by creating a specialized plan that uses behavioral therapy techniques, such as positive or negative reinforcement, to encourage or discourage certain behaviors over time.
Occupational therapy helps autistic children and adults learn everyday skills that help them with daily tasks, such as personal hygiene and movement. These skills are then integrated into their home, school, and work environments. Therapists will oftentimes help patients learn to adapt their environment to their skill level. This type of therapy could help autistic people become more engaged in their environment. An occupational therapist will create a plan based on the patient's needs and desires and work with them to achieve their set goals.
Speech-language therapy can help those with autism who need to develop or improve communication skills. According to the organization Autism Speaks, “speech-language therapy is designed to coordinate the mechanics of speech with the meaning and social use of speech”. People with low-functioning autism may not be able to communicate with spoken words. Speech-language Pathologists (SLP) may teach someone how to communicate more effectively with others or work on starting to develop speech patterns. The SLP will create a plan that focuses on what the child needs.
Sensory integration therapy helps people with autism adapt to different kinds of sensory stimuli. Many children with autism can be oversensitive to certain stimuli, such as lights or sounds, causing them to overreact. Others may not react to certain stimuli, such as someone speaking to them. Many types of therapy activities involve a form of play, such as using swings, toys and trampolines to help engage the patients with sensory stimuli. Therapists will create a plan that focuses on the type of stimulation the person needs integration with.
Attention deficit hyperactivity disorder (ADHD)
Attention deficit hyperactivity disorder is a neurodevelopmental disorder that occurs in early childhood. ADHD affects 8 to 11% of school-age children. ADHD is characterised by significant levels of hyperactivity, inattentiveness, and impulsiveness. There are three subtypes of ADHD: predominantly inattentive, predominantly hyperactive, and combined (which presents as both hyperactive and inattentive subtypes). ADHD is twice as common in boys as in girls, but the hyperactive/impulsive type is more common in boys while the inattentive type affects both sexes equally.
Symptoms
Symptoms of ADHD include inattentiveness, impulsiveness, and hyperactivity. Many of the behaviors that are associated with ADHD include poor control over actions resulting in disruptive behavior and academic problems. Another area that is affected by these disorders is the social arena for the person with the disorder. Many children that have this disorder exhibit poor interpersonal relationships and struggle to fit in socially with their peers. Behavioral study of these children can show a history of other symptoms such as temper tantrums, mood swings, sleep disturbances and aggressiveness.
Treatment options
The treatment of Attention Deficit Hyperactivity Disorder (ADHD) commonly involves a multimodal approach, combining various strategies to address the complex nature of the disorder. This comprehensive approach includes psychological, behavioral, pharmaceutical, and educational interventions tailored to the individual's specific needs. Here's a breakdown of the different components:
Psychological Interventions:
Counseling and Psychoeducation - Individuals with ADHD may benefit from counseling sessions that provide a safe space to discuss challenges, develop coping strategies, and improve self-esteem. Psychoeducation helps individuals and their families understand the nature of ADHD and learn effective management techniques.
Cognitive Behavioral Therapy (CBT) - CBT aims to modify negative thought patterns and behaviors associated with ADHD. It helps individuals develop organizational skills, time management, and problem-solving abilities.
Behavioral Interventions:
Parent Training - Parents often participate in training programs to learn behavior management techniques. This may involve setting clear expectations, using positive reinforcement, and implementing consistent consequences for behavior.
Behavioral Modification Programs - These programs focus on shaping positive behaviors and reducing impulsive or disruptive behaviors in various settings, including home and school.
Pharmaceutical Interventions:
Stimulant Medications - Stimulant medications, such as methylphenidate (e.g., Ritalin) and amphetamines (e.g., Adderall), are commonly prescribed to manage symptoms of ADHD. These medications enhance the activity of neurotransmitters like dopamine and norepinephrine, helping to improve attention and impulse control.
Non-stimulant Medications - In cases where stimulants are not suitable or effective, non-stimulant medications like atomoxetine (Strattera) or guanfacine (Intuniv) may be prescribed.
Educational Interventions:
Individualized Education Plans (IEPs) - In educational settings, IEPs are developed to accommodate the unique learning needs of students with ADHD. This may involve classroom modifications, additional support, and specific teaching strategies.
504 Plans - These plans outline accommodations for students with ADHD in mainstream educational settings, such as extended test-taking time or preferential seating.
The effectiveness of the treatment plan depends on the individual's specific challenges and responses to interventions. A collaborative and multidisciplinary approach involving parents, educators, mental health professionals, and healthcare providers is crucial for developing and implementing a successful ADHD management plan. Regular monitoring and adjustments to the treatment plan may be necessary to meet the evolving needs of individuals with ADHD.
Behavioral therapy
Counselling sessions, cognitive behavioral therapy (CBT), and environmental changes to reduce noise and visual stimulation are some of the behavioral management techniques used. However, it has been observed that behavioral therapy alone is less effective than therapy with stimulant drugs alone.
Drug therapy
Medications commonly utilized in the treatment of Attention Deficit Hyperactivity Disorder (ADHD) include stimulants like methylphenidate and lisdexamfetamine, as well as non-stimulants such as atomoxetine. These medications can effectively manage ADHD symptoms by targeting neurotransmitter imbalances. However, it is important to be aware of potential side effects associated with these medications. Common side effects may include headaches, which can often be mitigated by adjusting the dosage or administration timing. Gastrointestinal discomfort, including stomach pain or nausea, is another possible side effect, and taking the medication with food or modifying the dosage may help alleviate these symptoms. Additionally, while rare, changes in mood such as feelings of depression have been reported. Careful monitoring and communication with healthcare providers are essential to address and manage any side effects, ensuring the overall effectiveness and well-being of individuals undergoing ADHD treatments.
SSRI antidepressants may be unhelpful, and could worsen symptoms of ADHD. However ADHD is often misdiagnosed as depression, particularly when no hyperactivity is present.
Other disorders
Learning disabilities, such as Dysgraphia
Communication disorders and Auditory processing disorder
Developmental coordination disorder
Genetic disorders, such as Down syndrome or Williams syndrome
Tic disorders such as Tourette syndrome
Stuttering
Intellectual disability
See also
Developmental disability
References
External links
Developmental disabilities
Philosophy of education
The philosophy of education is the branch of applied philosophy that investigates the nature of education as well as its aims and problems. It also examines the concepts and presuppositions of education theories. It is an interdisciplinary field that draws inspiration from various disciplines both within and outside philosophy, like ethics, political philosophy, psychology, and sociology. Many of its theories focus specifically on education in schools but it also encompasses other forms of education. Its theories are often divided into descriptive theories, which provide a value-neutral description of what education is, and normative theories, which investigate how education should be practiced.
A great variety of topics is discussed in the philosophy of education. Some studies provide a conceptual analysis of the fundamental concepts of education. Others center around the aims or purpose of education, like passing on knowledge and the development of the abilities of good reasoning, judging, and acting. An influential discussion concerning the epistemic aims of education is whether education should focus mainly on the transmission of true beliefs or rather on the abilities to reason and arrive at new knowledge. In this context, many theorists emphasize the importance of critical thinking in contrast to indoctrination. Another debate about the aims of education is whether the primary beneficiary is the student or the society to which the student belongs.
Many of the more specific discussions in the philosophy of education concern the contents of the curriculum. This involves the questions of whether, when, and in what detail a certain topic, like sex education or religion, should be taught. Other debates focus on the specific contents and methods used in moral, art, and science education. Some philosophers investigate the relation between education and power, often specifically regarding the power used by modern states to compel children to attend school. A different issue is the problem of the equality of education and factors threatening it, like discrimination and unequal distribution of wealth. Some philosophers of education promote a quantitative approach to educational research, which follows the example of the natural sciences by using wide experimental studies. Others prefer a qualitative approach, which is closer to the methodology of the social sciences and tends to give more prominence to individual case studies.
Various schools of philosophy have developed their own perspective on the main issues of education. Existentialists emphasize the role of authenticity while pragmatists give particular prominence to active learning and discovery. Feminists and postmodernists often try to uncover and challenge biases and forms of discrimination present in current educational practices. Other philosophical movements include perennialism, classical education, essentialism, critical pedagogy, and progressivism. The history of the philosophy of education started in ancient philosophy but only emerged as a systematic branch of philosophy in the latter half of the 20th century.
Definition
The philosophy of education is the branch of philosophy that examines the nature, aims, and problems of education. As the philosophical study of education, it investigates its topic similar to how other discipline-specific branches of philosophy, like the philosophy of science or the philosophy of law, study their topics. A central task for the philosophy of education is to make explicit the various fundamental assumptions and disagreements at work in its field and to evaluate the arguments raised for and against the different positions. The issue of education has a great many manifestations in various fields. Because of this, both the breadth and the influence of the philosophy of education are significant and wide-ranging, touching many other branches of philosophy, such as ethics, political philosophy, epistemology, metaphysics, and philosophy of mind. Its theories are often formulated from the perspective of these other philosophical disciplines. But due to its interdisciplinary nature, it also attracts contributions from scholars belonging to fields outside the domain of philosophy.
While there is wide agreement on the general topics discussed in the philosophy of education, it has proven difficult to give a precise definition of it. The philosophy of education belongs mainly to applied philosophy. According to some definitions, it can be characterized as an offshoot of ethics. But not everyone agrees with this characterization since the philosophy of education also has a more theoretical side, which includes the examination of the fundamental concepts and theories of education as well as their philosophical implications. These two sides are sometimes referred to as the outward-looking and inward-looking natures of the philosophy of education. Its topics can range from very general questions, like the nature of the knowledge worth teaching, to more specific issues, like how to teach art or whether public schools should implement standardized curricula and testing.
The problem of education was already an important topic in ancient philosophy and has remained so to the present day. But it only emerged as a distinct branch of philosophy in the latter half of the 20th century, when it became the subject of a systematic study and analysis. The term "education" can refer either to the process of educating or to the field of study investigating education as this process. This ambiguity is also reflected on the level of the philosophy of education, which encompasses the study of the philosophical presuppositions and issues both of education as a process and as a discipline. Many works in the philosophy of education focus explicitly or implicitly on the education happening in schools. But in its widest sense, education takes place in various other fields as well, such as at home, in libraries, in museums, or in the public media. Different types of education can be distinguished, such as formal and informal education or private and public education.
Subdivisions
Different subdivisions of the philosophy of education have been suggested. One categorization distinguishes between descriptive and normative issues. Descriptive theories aim to describe what education is and how to understand its related concepts. This also includes epistemological questions, which ask not whether a theory about education is true or false, but how one can arrive at the knowledge to answer such questions. Normative theories, on the other hand, try to give an account of how education should be practiced or what the right form of education is. Some normative theories are built on a wider ethical framework of what is right or good and then arrive at their educational normative theories by applying this framework to the practice of education. But the descriptive and the normative approaches are intertwined and cannot always be clearly separated since descriptive findings often directly imply various normative attitudes.
Another categorization divides topics in the philosophy of education into the nature and aims of education on the one hand, and the methods and circumstances of education on the other hand. The latter section may again be divided into concrete normative theories and the study of the conceptual and methodological presuppositions of these theories. Other classifications additionally include areas for topics such as the role of reasoning and morality as well as issues pertaining to social and political topics and the curriculum.
The theories within the philosophy of education can also be subdivided based on the school of philosophy they belong to. Various schools of philosophy, such as existentialism, pragmatism, Marxism, postmodernism, and feminism, have developed their own perspective on the main issues of education. They often include normative theories about how education should or should not be practiced and are in most cases controversial.
Another approach is to simply list all topics discussed in the philosophy of education. Among them are the issues and presuppositions concerning sex education, science education, aesthetic education, religious education, moral education, multicultural education, professional education, theories of teaching and learning, the measurement of learning, knowledge and its value, cultivating reason, epistemic and moral aims of education, authority, fallibilism, and fallibility.
Finally, yet another way that philosophy of education is often tacitly divided is in terms of western versus non-western and “global south” perspectives. For many generations, philosophy of education has maintained a relatively ethnocentric orientation, with little attention paid to ideas from outside Europe and North America, but this is starting to change in the 21st century due to decolonization and related movements.
Main topics
Fundamental concepts of education
The starting point of many philosophical inquiries into a field is the examination and clarification of the fundamental concepts used in this field, often in the form of conceptual analysis. This approach is particularly prominent in the analytic tradition. It aims to make ambiguities explicit and to uncover various implicit and potentially false assumptions associated with these terms.
Theorists in this field often emphasize the importance of this form of investigation since all subsequent work on more specific issues has to assume, at least implicitly, what its central terms mean in order to demarcate the field. For example, in order to study what constitutes good education, one has to have a notion of what the term "education" means and how to achieve, measure, and evaluate it. Definitions of education can be divided into thin and thick definitions. Thin definitions are neutral and descriptive. They usually emphasize the role of the transmission of knowledge and understanding in education. Thick definitions include additional normative components, for example, by stating that the process in question has to have certain positive results to be called education. According to one thick definition, education means that the person educated has acquired knowledge and intellectual skills, values these factors, and has thus changed for the better. These characteristics can then be used to distinguish education from other closely related terms, such as "indoctrination". Other fundamental notions in the philosophy of education include the concepts of teaching, learning, student, schooling, and rearing.
Aims of education
A central question in the philosophy of education concerns the aims of education, i.e. the question of why people should be educated and what goals should be pursued in the process of education. This issue is highly relevant for evaluating educational practices and products by assessing how well they manage to realize these goals. There is a lot of disagreement and various theories have been proposed concerning the aims of education. Prominent suggestions include that education should foster knowledge, curiosity, creativity, rationality, and critical thinking while also promoting the tendency to think, feel, and act morally. The individual should thereby develop as a person, and achieve self-actualization by realizing their potential. Some theorists emphasize the cultivation of liberal ideals, such as freedom, autonomy, and open-mindedness, while others stress the importance of docility, obedience to authority, and ideological purity, sometimes also with a focus on piety and religious faith. Many suggestions concern the social domain, such as fostering a sense of community and solidarity and thus turning the individual into a productive member of society while protecting them from the potentially negative influences of society. The discussion of these positions and the arguments cited for and against them often include references to various disciplines in their justifications, such as ethics, psychology, anthropology, and sociology.
There is wide consensus concerning certain general aims of education, such as that it should support all students, help them develop their ability to reason, and guide them in how to judge and act. But these general characteristics are usually too vague to be of much help, and there are many disagreements about the more specific suggestions of what education should aim for. Some attempts have been made to provide an overarching framework of these different aims. According to one approach, education should at its core help the individual lead a good life. All the different, more specific goals are aims of education to the extent that they serve this ultimate purpose. On this view, it may be argued that fostering rationality and autonomy in students are aims of education to the extent that increased rationality and autonomy will result in the student leading a better life.
The different theories of the aims of education are sometimes divided into goods-based, skills-based, and character-based accounts. Goods-based accounts hold that the ultimate aim of education is to produce some form of epistemic good, such as truth, knowledge, and understanding. Skills-based accounts, on the other hand, see the development of certain skills, like rationality as well as critical and independent thinking as the goal of education. For character-based accounts, the character traits or virtues of the learner play the central role, often with an emphasis on moral and civic traits like kindness, justice, and honesty.
Epistemic
Many theories emphasize the epistemic aims of education. According to the epistemic approach, the central aim of education has to do with knowledge, for example, to pass on the knowledge accumulated by society from one generation to the next. This process may be seen both as the development of the student's mind and as the transmission of a valuable heritage. Such an approach is sometimes rejected by pragmatists, who emphasize experimentation and critical thinking over the transmission of knowledge. Others have argued that this constitutes a false dichotomy: that the transmission of knowledge and the development of a rational and critical mind are intertwined aims of education that depend on and support each other. In this sense, education also aims at fostering the ability to acquire new knowledge. This includes both instilling true beliefs in the students and teaching the methods and forms of evidence responsible for verifying existing beliefs and arriving at new knowledge. It promotes the epistemic autonomy of students and may help them challenge unwarranted claims by epistemic authorities. In its widest sense, the epistemic approach includes various related goals, such as imparting true beliefs or knowledge to the students as well as teaching dispositions and abilities, such as rationality, critical thinking, understanding, and other intellectual virtues.
Critical thinking and indoctrination
Critical thinking is often cited as one of the central aims of education. There is no generally accepted definition of critical thinking. But there is wide agreement that it is reasonable, reflective, careful, and focused on determining what to believe or how to act. It has clarity and rationality as its standards and includes a metacognitive component monitoring not just the solution of the problem at hand but also ensuring that it complies with its own standards in the process. In this sense, education is not just about conveying many true beliefs to the students. Instead, the students' ability to arrive at conclusions by themselves and the disposition to question pre-existing beliefs should also be fostered, often with the goal of benefitting not just the student but society at large. But not everyone agrees with the positive role ascribed to critical thinking in education. Objections are often based on disagreements about what it means to reason well. Some critics argue that there is no universally correct form of reasoning. According to them, education should focus more on teaching subject-specific skills and less on imparting a universal method of thinking. Other objections focus on the allegation that critical thinking is not as neutral, universal, and presuppositionless as some of its proponents claim. On this view, it involves various implicit biases, like egocentrism or distanced objectivity, and culture-specific values arising from its roots in the philosophical movement of the European Enlightenment.
The problem of critical thinking is closely connected to that of indoctrination. Many theorists hold that indoctrination is in important ways different from education and should be avoided in education. But others contend that indoctrination should be part of education or even that there is no difference between the two. These different positions depend a lot on how "indoctrination" is to be defined. Most definitions of indoctrination agree that its goal is to get the student to accept and embrace certain beliefs. It has this in common with most forms of education but differs from it in other ways. According to one definition, the belief acquisition in indoctrination happens without regard for the evidential support of these beliefs, i.e. without presenting proper arguments and reasons for adopting them. According to another, the beliefs are instilled in such a way as to discourage the student from questioning or assessing for themselves the believed contents. In this sense, the goals of indoctrination are exactly opposite to certain other aims of education, such as rationality and critical thinking: education tries not just to impart beliefs but also to make students more open-minded and conscious of human fallibility. An intimately related issue is whether the aim of education is to mold the mind of the pupil or to liberate it by strengthening its capacity for critical and independent inquiry.
An important consequence of this debate concerns the problem of testimony, i.e. to what extent students should trust the claims of teachers and books. It has been argued that this issue depends a lot on the age and the intellectual development of the student. In the earlier stages of education, a high level of trust on the side of the students may be necessary. But the more their intellectual capacities develop, the more they should use them when trying to assess the plausibility of claims and the reasons for and against them. In this regard, it has been argued that, especially for young children, weaker forms of indoctrination may be necessary while they still lack the intellectual capacities to evaluate the reasons for and against certain claims and thus to critically assess them. In this sense, one can distinguish unavoidable or acceptable forms of indoctrination from their avoidable or unacceptable counterparts. But this distinction is not always affirmed and some theorists contend that all forms of indoctrination are bad or unacceptable.
Individual and society
A recurrent source of disagreement about the aims of education concerns the question of who is the primary beneficiary of education: the individual educated or the society having this individual as its member. In many cases, the interests of both are aligned. On the one hand, many new opportunities in life open to the individual through education, especially concerning their career. On the other hand, education makes it more likely that the person becomes a good, law-abiding, and productive member of society. But this issue becomes more problematic in cases where the interests of the individual and society conflict with each other. This poses the question of whether individual autonomy should take precedence over communal welfare. According to comprehensive liberals, for example, education should emphasize the self-directedness of the students. On this view, it is up to the student to choose their own path in life. The role of education is to provide them with the necessary resources but it does not direct the student with respect to what constitutes an ethically good path in life. This position is usually rejected by communitarians, who stress the importance of social cohesion by being part of the community and sharing a common good.
Curriculum
An important and controversial issue in the philosophy of education concerns the contents of the curriculum, i.e. the question of what should be taught to students. This includes both the selection of subjects to be taught and the consideration of arguments for and against the inclusion of a particular topic. This issue is intimately tied to the aims of education: one may argue that a certain subject should be included in the curriculum because it serves one of the aims of education.
While many positions about what subjects to include in the curriculum are controversial, some particular issues stand out where these controversies go beyond the academic discourse to a wide public discourse, like questions about sexual and religious education. Controversies in sex education involve both biological aspects, such as the functioning of sex organs, and social aspects, such as sexual practices and gender identities. Disagreements in this area concern which aspects are taught and in which detail as well as to which age groups these teachings should be directed. Debates on religious education include questions like whether religion should be taught as a distinct subject and, if so, whether it should be compulsory. Other questions include which religion or religions should be taught and to what degree religious views should influence other topics, such as ethics or sex education.
Another prominent topic in this field concerns the subject of moral education. This field is sometimes referred to as "educational ethics". Disagreements in this field concern which moral beliefs and values should be taught to the students. In this way, many of the disagreements in moral philosophy are reflected in the field of moral education. Some theorists in the Kantian tradition emphasize the importance of moral reasoning and enabling children to become morally autonomous agents who can tell right from wrong. Theorists in the Aristotelian tradition, on the other hand, focus more on moral habituation through the development of virtues that concern perception, affect, and judgment in regard to moral situations. A related issue, heavily discussed in ancient philosophy, is the extent to which morality can be taught at all instead of just being an inborn disposition.
Various discussions also concern the role of art and aesthetics in public education. It has been argued that the creativity learned in these areas can be applied to various other fields and may thereby benefit the student in various ways. Aesthetic education is also said to have indirect effects on various other issues, such as shaping the student's sensibilities in the fields of morality and politics and heightening their awareness of self and others.
Some researchers reject the possibility of objectivity in general. They use this claim to argue against universal forms of education, which they see as hiding particular worldviews, beliefs, and interests under a false cover. This is sometimes utilized to advance an approach focused on more diversity, for example, by giving more prominence in education to the great variety of cultures, customs, languages, and lifestyles without giving preference to any of them.
Different approaches to solving these disputes are employed. In some cases, psychology in the field of child development, learning, and motivation can provide important general insights. More specific questions about the curriculum of a particular subject, such as mathematics, are often strongly influenced by the philosophy of this specific discipline, such as the philosophy of mathematics.
Power
The problem of power is another issue in the philosophy of education. Of specific interest on this topic is that modern states compel children to attend school, so-called compulsory education. The children and their parents usually have few to no ways of opting out or changing the established curriculum. An important question in this respect is why or whether modern states are justified in using this form of power. For example, various liberationist movements belonging to the fields of deschooling and unschooling reject this power and argue that the children's welfare is best served in the absence of compulsory schooling in general. This is sometimes based on the idea that the best form of learning does not happen while studying but instead occurs as a side-effect while doing something else. This position is often rejected by pointing out that it is based on overly optimistic presuppositions about children's natural and unguided development of rationality. While some objections focus on compulsory education in general, a less radical and more common criticism concerns specific compulsory topics in the curriculum, for example, in relation to sexuality or religion. Another contemporary debate in the United States concerns the practice of standardized testing: it has been argued that this discriminates against certain racial, cultural, or religious minorities since standardized tests may implicitly assume various presuppositions not shared by these minorities. Other issues in relation to power concern the authority and responsibility teachers have towards their students.
Postmodern theorists often see established educational practices as instruments of power used by elites in society to further their own interests. Important aspects in this regard are the unequal power relation between the state and its institutions in contrast to the individual as well as the control that can thus be employed due to the close connection between power and knowledge, specifically the knowledge passed on through education.
Equality
A recurrent demand on public education is that all students should be treated equally and in a fair manner. One reason for this demand is that education plays a central role for the child's path and prospects in life, which should not be limited by unfair or arbitrary external circumstances. But there are various disagreements about how this demand is best understood and whether it is applicable in all cases. An initial problem concerns what is meant by "equality". In the field of education, it is often understood as equality of opportunity. In this sense, the demand for equality implies that education should open the same opportunities to everyone. This means, among other things, that students from higher social classes should not enjoy a competitive advantage over others. One difficulty with this demand, when understood in a wide sense, is that there are many sources of educational inequality and it is not always in the best interest to eliminate all of them. For example, parents who are concerned with their young children's education may read them bedtime stories early on and thereby provide them with a certain advantage over other children who do not enjoy this privilege. But disallowing such practices to level the field would have serious negative side-effects. A weaker position on this issue does not demand full equality but holds instead that educational policies should ensure that certain factors, like race, native language, and disabilities, do not pose obstacles to the equality of opportunity.
A closely related topic is whether all students, both high and low performers, should be treated equally. According to some, more resources should be dedicated to low performers, to help them get to an average level, while others recommend a preferential treatment for high performers in order to help them fully develop their exceptional abilities and thereby benefit society at large. A similar problem is the issue of specialization. It concerns the question of whether all students should follow the same curriculum or to what extent they should specialize early on in specific fields according to their interests and skills.
Marxist critiques of the school systems in capitalist societies often focus on the inequality they cause by sorting students for different economic positions. While overtly this process happens based on individual effort and desert, they argue that this just masks and reinforces the underlying influence of the preexisting social class structure. This is sometimes integrated into a wider Marxist perspective on society which holds that education in capitalist societies plays the role of upholding this inequality and thereby reproduces the capitalist relations of production.
Other criticisms of the dominant paradigms in education are often voiced by feminist and postmodern theorists. They usually point to alleged biases and forms of discrimination present in current practices that should be eliminated. Feminists often hold that traditional education is overly man-oriented and thereby oppresses women in some form. This bias was present to severe degrees in earlier forms of education and a lot of progress has been made towards more gender-equal forms of education. Nonetheless, feminists often contend that certain problems still persist in contemporary education. Some argue, for example, that this manifests itself in the prominence given to cognitive development in education, which is said to be associated primarily with masculinity in contrast to a more feminine approach based on emotion and intuition. A related criticism holds that there is an overemphasis on abilities belonging to the public sphere, like reason and objectivity, in contrast to equally important characteristics belonging to the private sphere, like compassion and empathy.
Epistemology
The philosophy of education is also interested in the epistemology of education. This term is often used to talk about the epistemic aims of education, i.e. questions like whether educators should aim at transmitting justified true beliefs rather than merely true beliefs or should additionally foster other epistemic virtues like critical thinking. In a different sense, the epistemology of education concerns the issue of how we arrive at knowledge on educational matters. This is especially relevant in the field of educational research, which is an active field of investigation with many studies being published on a regular basis. It is also quite influential in regard to educational policy and practice. Epistemological questions in this field concern the objectivity of its insights.
An important methodological divide in this area, often referred to as the "paradigm wars", is between the quantitative or statistical approach and the qualitative or ethnographic approach. The quantitative approach usually focuses on wide experimental studies and employs statistical methods to uncover the general causal factors responsible for educational phenomena. It has been criticized based on the claim that its method, which is inspired by the natural sciences, is inappropriate for understanding the complex cultural and motivational patterns investigated by the social sciences. The qualitative approach, on the other hand, gives more weight to particular case studies for reaching its conclusions. Its opponents hold that this approach lacks the methodological rigor to arrive at well-warranted knowledge. Mixed-methods research is a more recent approach that combines the methods of both camps. The question of the most promising approach is relevant to how funding budgets are spent on research, which in turn has important implications for policymaking.
Others
One question concerns how the learners are to be conceptualized. John Locke sees the mind as a blank slate or a tabula rasa that passively absorbs information and is filled with contents through experience. This view contrasts with a more pragmatist perspective, which in its emphasis on practice sees students not as passive absorbers but as active learners that should be encouraged to discover and learn things by themselves.
Another disputed topic is the role of testing in public education. Some theorists have argued that it is counterproductive since it puts undue pressure on the students. But testing also plays various critical roles, such as providing feedback on learning progress to students, their parents, and their teachers. Concrete discussions on the role of testing often focus less on whether it should be done at all and more on how much importance should be ascribed to the test results. This also includes questions about the form of testing, for example, whether it should be standardized. Standardized tests present the same questions and scoring system to all students taking the test and are often motivated by a desire for objective and fair evaluations both of students and schools. Opponents have argued that this approach tends to favor certain social groups over others and severely limits the creativity and effectiveness of teachers.
Philosophical movements
Existentialist
The existentialist sees the world as one's personal subjectivity, where goodness, truth, and reality are individually defined. Reality is a world of existing, truth subjectively chosen, and goodness a matter of freedom. The subject matter of existentialist classrooms should be a matter of personal choice. Teachers view the individual as an entity within a social context in which the learner must confront others' views to clarify his or her own. Character development emphasizes individual responsibility for decisions. Real answers come from within the individual, not from outside authority. Examining life through authentic thinking involves students in genuine learning experiences. Existentialists are opposed to thinking about students as objects to be measured, tracked, or standardized. Such educators want the educational experience to focus on creating opportunities for self-direction and self-actualization. They start with the student, rather than with curriculum content.
Perennialism
Perennialists believe that one should teach the things that one deems to be of everlasting importance to all people everywhere. They believe that the most important topics develop a person. Since details of fact change constantly, these cannot be the most important. Therefore, one should teach principles, not facts. Since people are human, one should teach first about humans, not machines or techniques. Since people are people first, and workers second if at all, one should teach liberal topics first, not vocational topics. The focus is primarily on teaching reasoning and wisdom rather than facts, the liberal arts rather than vocational training.
Classical education
The Classical education movement advocates a form of education based in the traditions of Western culture, with a particular focus on education as understood and taught in the Middle Ages. The term "classical education" has been used in English for several centuries, with each era modifying the definition and adding its own selection of topics. By the end of the 18th century, in addition to the trivium and quadrivium of the Middle Ages, the definition of a classical education embraced study of literature, poetry, drama, philosophy, history, art, and languages. In the 20th and 21st centuries it is used to refer to a broad-based study of the liberal arts and sciences, as opposed to a practical or pre-professional program. Classical education can be described as rigorous and systematic, separating children and their learning into three rigid categories: grammar, dialectic, and rhetoric.
Essentialism
According to educational essentialism, there are certain essential facts about the world that every student needs to learn and master. It is a form of traditional education that relies on long-standing and established subjects and teaching methods. Essentialists usually focus on subjects like reading, writing, mathematics, and science, usually starting with very basic skills while progressively increasing complexity. They prefer a teacher-centered approach, meaning that the teacher acts as the authority figure guiding the learning activity while students are expected to follow their lead.
Social reconstructionism and critical pedagogy
Critical pedagogy is an "educational movement, guided by passion and principle, to help students develop consciousness of freedom, recognize authoritarian tendencies, and connect knowledge to power and the ability to take constructive action." Based in Marxist theory, critical pedagogy draws on radical democracy, anarchism, feminism, and other movements for social justice.
Democratic education
Democratic education is a theory of learning and school governance in which students and staff participate freely and equally in a school democracy. In a democratic school, there is typically shared decision-making among students and staff on matters concerning living, working, and learning together.
Progressivism
Educational progressivism is the belief that education must be based on the principle that humans are social animals who learn best in real-life activities with other people. Progressivists, like proponents of most educational theories, claim to rely on the best available scientific theories of learning. Most progressive educators believe that children learn as if they were scientists, following a process similar to John Dewey's model of learning known as "the pattern of inquiry": 1) Become aware of the problem. 2) Define the problem. 3) Propose hypotheses to solve it. 4) Evaluate the consequences of the hypotheses from one's past experience. 5) Test the likeliest solution.
Unschooling
Unschooling is a range of educational philosophies and practices centered on allowing children to learn through their natural life experiences, including child directed play, game play, household responsibilities, work experience, and social interaction, rather than through a more traditional school curriculum. Unschooling encourages exploration of activities led by the children themselves, facilitated by the adults. Unschooling differs from conventional schooling principally in the thesis that standard curricula and conventional grading methods, as well as other features of traditional schooling, are counterproductive to the goal of maximizing the education of each child.
Contemplative education
Contemplative education focuses on bringing introspective practices such as mindfulness and yoga into curricular and pedagogical processes for diverse aims grounded in secular, spiritual, religious and post-secular perspectives. Contemplative approaches may be used in the classroom, especially in tertiary or (often in modified form) in secondary education. Parker Palmer is a recent pioneer in contemplative methods. The Center for Contemplative Mind in Society founded a branch focusing on education, The Association for Contemplative Mind in Higher Education.
Contemplative methods may also be used by teachers in their preparation; Waldorf education was one of the pioneers of the latter approach. In this case, inspiration for enriching the content, format, or teaching methods may be sought through various practices, such as consciously reviewing the previous day's activities; actively holding the students in consciousness; and contemplating inspiring pedagogical texts. Zigler suggested that only through focusing on their own spiritual development could teachers positively impact the spiritual development of students.
History
Ancient
Plato
Plato's educational philosophy was grounded in a vision of an ideal Republic wherein the individual was best served by being subordinated to a just society, a shift in emphasis that departed from his predecessors. The mind and body were to be considered separate entities. In the dialogue Phaedo, written in his "middle period" (360 BCE), Plato expressed his distinctive views about the nature of knowledge, reality, and the soul:

When the soul and body are united, then nature orders the soul to rule and govern, and the body to obey and serve. Now which of these two functions is akin to the divine? and which to the mortal? Does not the divine appear ... to be that which naturally orders and rules, and the mortal to be that which is subject and servant?

On this premise, Plato advocated removing children from their mothers' care and raising them as wards of the state, with great care being taken to differentiate children suitable to the various castes, the highest receiving the most education, so that they could act as guardians of the city and care for the less able. Education would be holistic, including facts, skills, physical discipline, and music and art, which he considered the highest form of endeavor.
Plato believed that talent was distributed non-genetically and thus must be found in children born in any social class. He built on this by insisting that those suitably gifted were to be trained by the state so that they might be qualified to assume the role of a ruling class. What this established was essentially a system of selective public education premised on the assumption that an educated minority of the population were, by virtue of their education (and inborn educability), sufficient for healthy governance.
Plato's writings contain some of the following ideas:
Elementary education would be confined to the guardian class till the age of 18, followed by two years of compulsory military training and then by higher education for those who qualified. While elementary education made the soul responsive to the environment, higher education helped the soul to search for truth which illuminated it. Both boys and girls were to receive the same kind of education. Elementary education consisted of music and gymnastics, designed to train and blend gentle and fierce qualities in the individual and create a harmonious person.
At the age of 20, a selection was made. The best students would take an advanced course in mathematics, geometry, astronomy and harmonics. The first course in the scheme of higher education would last for ten years. It would be for those who had a flair for science. At the age of 30 there would be another selection; those who qualified would study dialectics and metaphysics, logic and philosophy for the next five years. After accepting junior positions in the army for 15 years, a man would have completed his theoretical and practical education by the age of 50.
Aristotle
Only fragments of Aristotle's treatise On Education are still in existence. We thus know of his philosophy of education primarily through brief passages in other works. Aristotle considered human nature, habit and reason to be equally important forces to be cultivated in education. Thus, for example, he considered repetition to be a key tool to develop good habits. The teacher was to lead the student systematically; this differs, for example, from Socrates' emphasis on questioning his listeners to bring out their own ideas (though the comparison is perhaps incongruous since Socrates was dealing with adults).
Aristotle placed great emphasis on balancing the theoretical and practical aspects of subjects taught. Subjects he explicitly mentions as being important included reading, writing and mathematics; music; physical education; literature and history; and a wide range of sciences. He also mentioned the importance of play.
One of education's primary missions for Aristotle, perhaps its most important, was to produce good and virtuous citizens for the polis: "All who have meditated on the art of governing mankind have been convinced that the fate of empires depends on the education of youth."
Medieval
Ibn Sina
In the medieval Islamic world, an elementary school was known as a maktab, which dates back to at least the 10th century. Like madrasahs (which referred to higher education), a maktab was often attached to a mosque. In the 11th century, Ibn Sina (known as Avicenna in the West) wrote a chapter dealing with the maktab entitled "The Role of the Teacher in the Training and Upbringing of Children", as a guide to teachers working at maktab schools. He wrote that children can learn better if taught in classes instead of through individual tuition from private tutors, and he gave a number of reasons why this is the case, citing the value of competition and emulation among pupils as well as the usefulness of group discussions and debates. Ibn Sina described the curriculum of a maktab school in some detail, outlining the curricula for two stages of education.
Ibn Sina wrote that children should be sent to a maktab school from the age of 6 and taught primary education until they reach the age of 14. During this time, they should be taught the Qur'an, Islamic metaphysics, language, literature, Islamic ethics, and manual skills (which could refer to a variety of practical skills).
Ibn Sina refers to the secondary education stage of maktab schooling as the period of specialization, when pupils should begin to acquire manual skills, regardless of their social status. He writes that children after the age of 14 should be allowed to choose and specialize in subjects they have an interest in, whether it be reading, manual skills, literature, preaching, medicine, geometry, trade and commerce, craftsmanship, or any other subject or profession they would be interested in pursuing as a future career. He wrote that this was a transitional stage and that there needs to be flexibility regarding the age at which pupils graduate, as the student's emotional development and chosen subjects need to be taken into account.
The empiricist theory of 'tabula rasa' was also developed by Ibn Sina. He argued that the "human intellect at birth is rather like a tabula rasa, a pure potentiality that is actualized through education and comes to know" and that knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts", which is developed through a "syllogistic method of reasoning; observations lead to propositional statements, which when compounded lead to further abstract concepts." He further argued that the intellect itself "possesses levels of development from the material intellect (al-‘aql al-hayulani), that potentiality that can acquire knowledge to the active intellect (al-‘aql al-fa‘il), the state of the human intellect in conjunction with the perfect source of knowledge."
Ibn Tufail
In the 12th century, the Andalusian-Arabian philosopher and novelist Ibn Tufail (known as "Abubacer" or "Ebn Tophail" in the West) demonstrated the empiricist theory of 'tabula rasa' as a thought experiment through his Arabic philosophical novel, Hayy ibn Yaqzan, in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. Some scholars have argued that the Latin translation of his philosophical novel, Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in "An Essay Concerning Human Understanding".
Modern
Michel de Montaigne
Child education was among the psychological topics that Michel de Montaigne wrote about. His essays On the Education of Children, On Pedantry, and On Experience explain the views he had on child education. Some of his views on child education are still relevant today.
Montaigne's views on the education of children were opposed to the common educational practices of his day. He found fault both with what was taught and how it was taught. Much of the education during Montaigne's time was focused on the reading of the classics and learning through books. Montaigne disagreed with learning strictly through books. He believed it was necessary to educate children in a variety of ways. He also disagreed with the way information was being presented to students: it was presented in a way that encouraged students to take the information taught to them as absolute truth. Students were denied the chance to question the information, and therefore could not truly learn. Montaigne believed that, to learn truly, a student had to take the information and make it their own.
At the foundation of his approach, Montaigne believed that the selection of a good tutor was important for the student to become well educated. Education by a tutor was to be conducted at the pace of the student. He believed that a tutor should be in dialogue with the student, letting the student speak first. The tutor also should allow for discussions and debates to be had. Such a dialogue was intended to create an environment in which students would teach themselves. They would be able to realize their mistakes and make corrections to them as necessary.
Individualized learning was integral to his theory of child education. He argued that the student combines information already known with what is learned and forms a unique perspective on the newly learned information. Montaigne also thought that tutors should encourage the natural curiosity of students and allow them to question things. He postulated that successful students were those who were encouraged to question new information and study it for themselves, rather than simply accepting what they had heard from the authorities on any given topic. Montaigne believed that a child's curiosity could serve as an important teaching tool when the child is allowed to explore the things that the child is curious about.
Experience also was a key element of learning for Montaigne. Tutors needed to teach students through experience rather than through the mere memorization of information often practised in book learning. He argued that students taught by rote would become passive adults, blindly obeying and lacking the ability to think on their own; nothing of importance would be retained and no abilities would be learned. He believed that learning through experience was superior to learning through the use of books. For this reason he encouraged tutors to educate their students through practice, travel, and human interaction. In doing so, he argued, students would become active learners who could claim knowledge for themselves.
Montaigne's views on child education continue to have an influence in the present. Variations of Montaigne's ideas on education are incorporated into modern learning in some ways. He argued against the popular way of teaching in his day, encouraging individualized learning. He believed in the importance of experience, over book learning and memorization. Ultimately, Montaigne postulated that the point of education was to teach a student how to have a successful life by practicing an active and socially interactive lifestyle.
John Locke
In Some Thoughts Concerning Education and Of the Conduct of the Understanding, John Locke composed an outline on how to educate the mind in order to increase its powers and activity:
"The business of education is not, as I think, to make them perfect in any one of the sciences, but so to open and dispose their minds as may best make them capable of any, when they shall apply themselves to it."
"If men are for a long time accustomed only to one sort or method of thoughts, their minds grow stiff in it, and do not readily turn to another. It is therefore to give them this freedom, that I think they should be made to look into all sorts of knowledge, and exercise their understandings in so wide a variety and stock of knowledge. But I do not propose it as a variety and stock of knowledge, but a variety and freedom of thinking, as an increase of the powers and activity of the mind, not as an enlargement of its possessions."
Locke expressed the belief that education maketh the man, or, more fundamentally, that the mind is an "empty cabinet", with the statement, "I think I may say that of all the men we meet with, nine parts of ten are what they are, good or evil, useful or not, by their education."
Locke also wrote that "the little and almost insensible impressions on our tender infancies have very important and lasting consequences." He argued that the "associations of ideas" that one makes when young are more important than those made later because they are the foundation of the self: they are, put differently, what first mark the tabula rasa. In his Essay, in which is introduced both of these concepts, Locke warns against, for example, letting "a foolish maid" convince a child that "goblins and sprites" are associated with the night for "darkness shall ever afterwards bring with it those frightful ideas, and they shall be so joined, that he can no more bear the one than the other."
"Associationism", as this theory would come to be called, exerted a powerful influence over eighteenth-century thought, particularly educational theory, as nearly every educational writer warned parents not to allow their children to develop negative associations. It also led to the development of psychology and other new disciplines with David Hartley's attempt to discover a biological mechanism for associationism in his Observations on Man (1749).
Jean-Jacques Rousseau
Jean-Jacques Rousseau, though he paid his respects to Plato's philosophy, rejected it as impractical due to the decayed state of society. Rousseau also had a different theory of human development; where Plato held that people are born with skills appropriate to different castes (though he did not regard these skills as being inherited), Rousseau held that there was one developmental process common to all humans. This was an intrinsic, natural process, of which the primary behavioral manifestation was curiosity. This differed from Locke's 'tabula rasa' in that it was an active process deriving from the child's nature, which drove the child to learn and adapt to its surroundings.
Rousseau wrote in his book Emile that all children are perfectly designed organisms, ready to learn from their surroundings so as to grow into virtuous adults, but due to the malign influence of corrupt society, they often fail to do so. Rousseau advocated an educational method which consisted of removing the child from society—for example, to a country home—and alternately conditioning him through changes to his environment and setting traps and puzzles for him to solve or overcome.
Rousseau was unusual in that he recognized and addressed the potential of a problem of legitimation for teaching. He advocated that adults always be truthful with children, and in particular that they never hide the fact that the basis for their authority in teaching was purely one of physical coercion: "I'm bigger than you." Once children reached the age of reason, at about 12, they would be engaged as free individuals in the ongoing process of their own education.
He once said that a child should grow up without adult interference and that the child must be guided to suffer from the experience of the natural consequences of his own acts or behaviour. When he experiences the consequences of his own acts, he instructs himself.
"Rousseau divides development into five stages (a book is devoted to each). Education in the first two stages seeks to the senses: only when Émile is about 12 does the tutor begin to work to develop his mind. Later, in Book 5, Rousseau examines the education of Sophie (whom Émile is to marry). Here he sets out what he sees as the essential differences that flow from sex. 'The man should be strong and active; the woman should be weak and passive' (Everyman edn: 322). From this difference comes a contrasting education. They are not to be brought up in ignorance and kept to housework: Nature means them to think, to will, to love to cultivate their minds as well as their persons; she puts these weapons in their hands to make up for their lack of strength and to enable them to direct the strength of men. They should learn many things, but only such things as suitable' (Everyman edn.: 327)."
Immanuel Kant
Immanuel Kant believed that education differs from training in that the former involves thinking whereas the latter does not. In addition to educating reason, of central importance to him was the development of character and teaching of moral maxims. Kant was a proponent of public education and of learning by doing.
Charlotte Mason
Charlotte Mason was a British educator who invested her life in improving the quality of children's education. Her ideas led to a method used by some homeschoolers. Mason's philosophy of education is probably best summarized by the principles given at the beginning of each of her books. Two key mottos taken from those principles are "Education is an atmosphere, a discipline, a life" and "Education is the science of relations." She believed that children were born persons and should be respected as such; they should also be taught the Way of the Will and the Way of Reason. Her motto for students was "I am, I can, I ought, I will." Charlotte Mason believed that children should be introduced to subjects through living books, not through the use of "compendiums, abstracts, or selections." She used abridged books only when the content was deemed inappropriate for children. She preferred that parents or teachers read aloud those texts (such as Plutarch and the Old Testament), making omissions only where necessary.
20th and 21st century
Rudolf Steiner (Waldorf education)
Waldorf education (also known as Steiner or Steiner-Waldorf education) is a humanistic approach to pedagogy based upon the educational philosophy of the Austrian philosopher Rudolf Steiner, the founder of anthroposophy. Now known as Waldorf or Steiner education, his pedagogy emphasizes a balanced development of cognitive, affective/artistic, and practical skills (head, heart, and hands). Schools are normally self-administered by faculty; emphasis is placed upon giving individual teachers the freedom to develop creative methods.
Steiner's theory of child development divides education into three discrete developmental stages that predate, but closely resemble, the stages of development described by Piaget. Early childhood education occurs through imitation; teachers provide practical activities and a healthy environment. Steiner believed that young children should meet only goodness. Elementary education is strongly arts-based, centered on the teacher's creative authority; the elementary school-age child should meet beauty. Secondary education seeks to develop the judgment, intellect, and practical idealism; the adolescent should meet truth.
Learning is interdisciplinary, integrating practical, artistic, and conceptual elements. The approach emphasizes the role of the imagination in learning, developing thinking that includes a creative as well as an analytic component. The educational philosophy's overarching goals are to provide young people the basis on which to develop into free, morally responsible and integrated individuals, and to help every child fulfill his or her unique destiny, the existence of which anthroposophy posits. Schools and teachers are given considerable freedom to define curricula within collegial structures.
John Dewey
In Democracy and Education: An Introduction to the Philosophy of Education, John Dewey stated that education, in its broadest sense, is the means of the "social continuity of life" given the "primary ineluctable facts of the birth and death of each one of the constituent members in a social group". Education is therefore a necessity, for "the life of the group goes on." Dewey was a proponent of Educational Progressivism and was a relentless campaigner for reform of education, pointing out that the authoritarian, strict, pre-ordained knowledge approach of modern traditional education was too concerned with delivering knowledge, and not enough with understanding students' actual experiences.
In 1896, Dewey opened the Laboratory School at the University of Chicago in an institutional effort to pursue together rather than apart "utility and culture, absorption and expression, theory and practice, [which] are [indispensable] elements in any educational scheme." As the unified head of the departments of Philosophy, Psychology and Pedagogy, Dewey articulated a desire to organize an educational experience where children could be more creative than the best of progressive models of his day. Transactionalism as a pragmatic philosophy grew out of the work he did in the Laboratory School. The two most influential works that stemmed from his research and study were The Child and the Curriculum (1902) and Democracy and Education (1916). Dewey wrote of the dualisms that plagued educational philosophy in the latter book: "Instead of seeing the educative process steadily and as a whole, we see conflicting terms. We get the case of the child vs. the curriculum; of the individual nature vs. social culture." Dewey found that the preoccupation with facts as knowledge in the educative process led students to memorize "ill-understood rules and principles"; while second-hand knowledge learned in mere words is a beginning in study, mere words can never replace the ability to organize knowledge into both useful and valuable experience.
Maria Montessori
The Montessori method arose from Dr. Maria Montessori's discovery, in 1907, of what she referred to as "the child's true normal nature", made in the course of her experimental observation of young children given freedom in an environment prepared with materials designed for their self-directed learning activity. The method itself aims to reproduce this experimental observation of children in order to bring about, sustain, and support their true natural way of being.
William Heard Kilpatrick
William Heard Kilpatrick was an American philosopher of education and a colleague and successor of John Dewey. He was a major figure in the progressive education movement of the early 20th century. Kilpatrick developed the Project Method for early childhood education, a form of Progressive Education that organized curriculum and classroom activities around a subject's central theme. He believed that the role of a teacher should be that of a "guide" as opposed to an authoritarian figure. Kilpatrick believed that children should direct their own learning according to their interests and should be allowed to explore their environment, experiencing their learning through the natural senses. Proponents of Progressive Education and the Project Method reject traditional schooling that focuses on memorization, rote learning, strictly organized classrooms (desks in rows; students always seated), and typical forms of assessment.
William Chandler Bagley
William Chandler Bagley taught in elementary schools before becoming a professor of education at the University of Illinois, where he served as the Director of the School of Education from 1908 until 1917. He was a professor of education at Teachers College, Columbia, from 1917 to 1940. An opponent of pragmatism and progressive education, Bagley insisted on the value of knowledge for its own sake, not merely as an instrument, and he criticized his colleagues for their failure to emphasize systematic study of academic subjects. Bagley was a proponent of educational essentialism.
A. S. Neill
A. S. Neill founded Summerhill School, the oldest existing democratic school, in Suffolk, England, in 1921. He wrote a number of books that now define much of contemporary democratic education philosophy. Neill believed that the happiness of the child should be the paramount consideration in decisions about the child's upbringing, and that this happiness grew from a sense of personal freedom. He felt that deprivation of this sense of freedom during childhood, and the consequent unhappiness experienced by the repressed child, was responsible for many of the psychological disorders of adulthood.
Martin Heidegger
Martin Heidegger's philosophizing about education was primarily related to higher education. He believed that teaching and research in the university should be unified and that students should be taught "to focus on and explicitly investigate the ontological presuppositions which implicitly guide research in each domain of knowledge," an approach he believed would "encourage revolutionary transformation in the sciences and humanities."
Jean Piaget
Jean Piaget was a Swiss developmental psychologist known for his epistemological studies with children. His theory of cognitive development and epistemological view are together called "genetic epistemology". Piaget placed great importance on the education of children. As the Director of the International Bureau of Education, he declared in 1934 that "only education is capable of saving our societies from possible collapse, whether violent, or gradual." Piaget created the International Centre for Genetic Epistemology in Geneva in 1955 and directed it until 1980. According to Ernst von Glasersfeld, Jean Piaget is "the great pioneer of the constructivist theory of knowing."
Piaget described himself as an epistemologist, interested in the process of the qualitative development of knowledge. As he says in the introduction to his book Genetic Epistemology: "What genetic epistemology proposes is discovering the roots of the different varieties of knowledge, from their elementary forms up through the higher levels, including scientific knowledge."
Mortimer Jerome Adler
Mortimer Jerome Adler was an American philosopher, educator, and popular author. As a philosopher he worked within the Aristotelian and Thomistic traditions. He lived for the longest stretches in New York City, Chicago, San Francisco, and San Mateo, California. He worked for Columbia University, the University of Chicago, Encyclopædia Britannica, and Adler's own Institute for Philosophical Research. Adler was married twice and had four children. Adler was a proponent of educational perennialism.
Harry S. Broudy
Harry S. Broudy's philosophical views were based on the tradition of classical realism, dealing with truth, goodness, and beauty. However, he was also influenced by the modern philosophies of existentialism and instrumentalism. In his textbook Building a Philosophy of Education, two major ideas anchor his philosophical outlook: the first is truth, and the second is the universal structures to be found in humanity's struggle for education and the good life. Broudy also studied society's demands on schools. He thought education could provide a link to unify a diverse society, and he urged society to place more trust in, and commitment to, its schools and a good education.
Jerome Bruner
Another important contributor to the inquiry method in education is Jerome Bruner. His books The Process of Education and Toward a Theory of Instruction are landmarks in conceptualizing learning and curriculum development. He argued that any subject can be taught in some intellectually honest form to any child at any stage of development. This notion was an underpinning for his concept of the "spiral" (helical) curriculum which posited the idea that a curriculum should revisit basic ideas, building on them until the student had grasped the full formal concept. He emphasized intuition as a neglected but essential feature of productive thinking. He felt that interest in the material being learned was the best stimulus for learning rather than external motivation such as grades. Bruner developed the concept of discovery learning which promoted learning as a process of constructing new ideas based on current or past knowledge. Students are encouraged to discover facts and relationships and continually build on what they already know.
Paulo Freire
A Brazilian philosopher and educator committed to the cause of educating the impoverished peasants of his nation and collaborating with them in the pursuit of their liberation from what he regarded as "oppression", Paulo Freire is best known for his attack on what he called the "banking concept of education", in which the student was viewed as an empty account to be filled by the teacher. Freire also suggests that a deep reciprocity be inserted into our notions of teacher and student; he comes close to suggesting that the teacher-student dichotomy be completely abolished, instead promoting the roles of the participants in the classroom as the teacher-student (a teacher who learns) and the student-teacher (a learner who teaches). In its early, strong form this kind of classroom has sometimes been criticized on the grounds that it can mask rather than overcome the teacher's authority.
Aspects of the Freirian philosophy have been highly influential in academic debates over "participatory development" and development more generally. Freire's emphasis on what he describes as "emancipation" through interactive participation has been used as a rationale for the participatory focus of development, as it is held that 'participation' in any form can lead to empowerment of poor or marginalised groups. Freire was a proponent of critical pedagogy.
"He participated in the import of European doctrines and ideas into Brazil,
assimilated them to the needs of a specific socio-economic situation, and thus expanded and refocused them in a thought-provoking way"
John Holt
In 1964 John Holt published his first book, How Children Fail, asserting that the academic failure of schoolchildren was not despite the efforts of the schools, but actually because of the schools. How Children Fail ignited a firestorm of controversy. Holt was catapulted into the American national consciousness to the extent that he made appearances on major TV talk shows, wrote book reviews for Life magazine, and was a guest on the To Tell The Truth TV game show. In his follow-up work, How Children Learn, published in 1967, Holt tried to elucidate the learning process of children and why he believed school short-circuits that process.
Nel Noddings
Nel Noddings' first sole-authored book Caring: A Feminine Approach to Ethics and Moral Education (1984) followed close on the 1982 publication of Carol Gilligan's ground-breaking work in the ethics of care In a Different Voice. While her work on ethics continued, with the publication of Women and Evil (1989) and later works on moral education, most of her later publications have been on the philosophy of education and educational theory. Her most significant works in these areas have been Educating for Intelligent Belief or Unbelief (1993) and Philosophy of Education (1995).
Noddings' contribution to educational philosophy centers on the ethic of care. She believed that a caring teacher-student relationship would lead the teacher to design a differentiated curriculum for each student, based on that student's particular interests and needs. The teacher's claim to care must be based not on a one-time virtuous decision but on an ongoing interest in the students' welfare.
See also
Education sciences
Methodology
Learning theory (education)
Outline of educational aims
Pedagogy
Philosophy education
References
Further reading
Classic and Contemporary Readings in the Philosophy of Education, by Steven M. Cahn, 1997.
A Companion to the Philosophy of Education (Blackwell Companions to Philosophy), ed. by Randall Curren, Paperback edition, 2006.
The Blackwell Guide to the Philosophy of Education, ed. by Nigel Blake, Paul Smeyers, Richard Smith, and Paul Standish, Paperback edition, 2003.
Philosophy of Education (Westview Press, Dimension of Philosophy Series), by Nel Noddings, Paperback edition, 1995.
Education in Retrospect: Policy and Implementation Since 1990, by Andre Kraak and Michael Young.
The Necessity of Education, by Daan Thoomes. In: The History of Education and Childhood. Radboud University, Nijmegen, 2000.
External links
"Philosophy of Education". In Stanford Encyclopedia of Philosophy
Encyclopedia of Philosophy of Education
Thinkers of Education. UNESCO-International Bureau of Education website
Education studies
Positivism
Positivism is a philosophical school that holds that all genuine knowledge is either true by definition or positive, meaning a posteriori facts derived by reason and logic from sensory experience. Other ways of knowing, such as intuition, introspection, or religious faith, are rejected or considered meaningless.
Although the positivist approach has been a recurrent theme in the history of western thought, modern positivism was first articulated in the early 19th century by Auguste Comte. His school of sociological positivism holds that society, like the physical world, operates according to general laws. After Comte, positivist schools arose in logic, psychology, economics, historiography, and other fields of thought. Generally, positivists attempted to introduce scientific methods to their respective fields. Since the turn of the 20th century, positivism, although still popular, has declined under criticism in parts of the social sciences from antipositivists and critical theorists, among others, for its alleged scientism, reductionism, overgeneralizations, and methodological limitations.
Etymology
The English noun positivism was imported in the 19th century from the French word positivisme, derived from positif in its philosophical sense of 'imposed on the mind by experience'. The corresponding adjective has been used in a similar sense to discuss law (positive law compared to natural law) since the time of Chaucer.
Background
Kieran Egan argues that positivism can be traced to the philosophy side of what Plato described as the quarrel between philosophy and poetry, later reformulated by Wilhelm Dilthey as a quarrel between the natural sciences and the human sciences (Saunders, T. J. Introduction to Ion. London: Penguin Books, 1987, p. 46).
In the early nineteenth century, massive advances in the natural sciences encouraged philosophers to apply scientific methods to other fields. Thinkers such as Henri de Saint-Simon, Pierre-Simon Laplace and Auguste Comte believed that the scientific method, the circular dependence of theory and observation, must replace metaphysics in the history of thought.
Positivism in the social sciences
Comte's positivism
Auguste Comte (1798–1857) first described the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842. These texts were followed in 1844 by A General View of Positivism (published in French in 1848, in English in 1865). The first three volumes of the Course dealt chiefly with the physical sciences already in existence (mathematics, astronomy, physics, chemistry, biology), whereas the latter two emphasized the inevitable coming of social science. Observing the circular dependence of theory and observation in science, and classifying the sciences in this way, Comte may be regarded as the first philosopher of science in the modern sense of the term. For him, the physical sciences had necessarily to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. His View of Positivism therefore set out to define the empirical goals of sociological method.
Comte offered an account of social evolution, proposing that society undergoes three phases in its quest for the truth according to a general "law of three stages". Comte intended to develop a secular-scientific ideology in the wake of European secularisation.
Comte's stages were (1) the theological, (2) the metaphysical, and (3) the positive. The theological phase of man was based on whole-hearted belief in all things with reference to God. God, Comte says, had reigned supreme over human existence pre-Enlightenment. Humanity's place in society was governed by its association with the divine presences and with the church. The theological phase deals with humankind's accepting the doctrines of the church (or place of worship) rather than relying on its rational powers to explore basic questions about existence. It dealt with the restrictions put in place by the religious organization at the time and the total acceptance of any "fact" adduced for society to believe.
Comte describes the metaphysical phase of humanity as stretching from the Enlightenment, a time steeped in logical rationalism, to the period right after the French Revolution. This second phase holds that the universal rights of humanity are most important. The central idea is that humanity is invested with certain rights that must be respected. In this phase, democracies and dictators rose and fell in attempts to maintain the innate rights of humanity.
The final stage of Comte's universal law is the scientific, or positive, stage. The central idea of this phase is that individual rights are more important than the rule of any one person. Comte stated that the idea of humanity's ability to govern itself makes this stage inherently different from the rest. No higher power governs the masses; individuals can achieve anything on the basis of their own free will. The third principle is the most important in the positive stage. Comte calls these three phases the universal rule in relation to society and its development. Neither the second nor the third phase can be reached without the completion and understanding of the preceding stage. All stages must be completed in order.
Comte believed that the appreciation of the past and the ability to build on it towards the future was key in transitioning from the theological and metaphysical phases. The idea of progress was central to Comte's new science, sociology. Sociology would "lead to the historical consideration of every science" because "the history of one science, including pure political history, would make no sense unless it was attached to the study of the general progress of all of humanity". As Comte would say: "from science comes prediction; from prediction comes action". It is a philosophy of human intellectual development that culminated in science. The irony of this series of phases is that though Comte attempted to prove that human development has to go through these three stages, it seems that the positivist stage is far from being realized. This is due to two difficulties: the positivist phase requires a complete understanding of the universe and the world around us, and society can never know whether it has reached this phase. Anthony Giddens argues that since humanity constantly uses science to discover and research new things, humanity never progresses beyond the second, metaphysical phase.
Comte's fame today owes in part to Emile Littré, who founded The Positivist Review in 1867. As an approach to the philosophy of history, positivism was appropriated by historians such as Hippolyte Taine. Many of Comte's writings were translated into English by the Whig writer Harriet Martineau, regarded by some as the first female sociologist. Debates continue to rage as to how much Comte appropriated from the work of his mentor, Saint-Simon. He was nevertheless influential: Brazilian thinkers turned to Comte's ideas about training a scientific elite in order to flourish in the industrialization process. Brazil's national motto, Ordem e Progresso ("Order and Progress"), was taken from the positivist motto "Love as principle, order as the basis, progress as the goal", which was also influential in Poland.
In later life, Comte developed a 'religion of humanity' for positivist societies in order to fulfil the cohesive function once held by traditional worship. In 1849, he proposed a calendar reform called the 'positivist calendar'. For his close associate John Stuart Mill, it was possible to distinguish between a "good Comte" (the author of the Course in Positive Philosophy) and a "bad Comte" (the author of the secular-religious system). The system itself was unsuccessful, but together with the publication of Darwin's On the Origin of Species it influenced the proliferation of various secular humanist organizations in the 19th century, especially through the work of secularists such as George Holyoake and Richard Congreve. Although Comte's English followers, including George Eliot and Harriet Martineau, for the most part rejected the full gloomy panoply of his system, they liked the idea of a religion of humanity and his injunction to "vivre pour autrui" ("live for others", from which comes the word "altruism").
The early sociology of Herbert Spencer came about broadly as a reaction to Comte; writing after various developments in evolutionary biology, Spencer attempted (in vain) to reformulate the discipline in what we might now describe as socially Darwinistic terms.
Early followers of Comte
Within a few years, other scientific and philosophical thinkers began creating their own definitions for positivism. These included Émile Zola, Emile Hennequin, Wilhelm Scherer, and Dimitri Pisarev. Fabien Magnin was the first working-class adherent to Comte's ideas, and became the leader of a movement known as "Proletarian Positivism". Comte appointed Magnin as his successor as president of the Positive Society in the event of Comte's death. Magnin filled this role from 1857 to 1880, when he resigned. Magnin was in touch with the English positivists Richard Congreve and Edward Spencer Beesly. He established the Cercle des prolétaires positivistes in 1863 which was affiliated to the First International. Eugène Sémérie was a psychiatrist who was also involved in the Positivist movement, setting up a positivist club in Paris after the foundation of the French Third Republic in 1870. He wrote: "Positivism is not only a philosophical doctrine, it is also a political party which claims to reconcile order—the necessary basis for all social activity—with Progress, which is its goal."
Durkheim's positivism
The modern academic discipline of sociology began with the work of Émile Durkheim (1858–1917). While Durkheim rejected much of the details of Comte's philosophy, he retained and refined its method, maintaining that the social sciences are a logical continuation of the natural ones into the realm of human activity, and insisting that they may retain the same objectivity, rationalism, and approach to causality. Durkheim set up the first European department of sociology at the University of Bordeaux in 1895, publishing his Rules of the Sociological Method (1895). In this text he argued: "[o]ur main goal is to extend scientific rationalism to human conduct... What has been called our positivism is but a consequence of this rationalism."
Durkheim's seminal monograph, Suicide (1897), a case study of suicide rates amongst Catholic and Protestant populations, distinguished sociological analysis from psychology or philosophy. By carefully examining suicide statistics in different police districts, he attempted to demonstrate that Catholic communities have a lower suicide rate than Protestants, something he attributed to social (as opposed to individual or psychological) causes. He developed the notion of objective sui generis "social facts" to delineate a unique empirical object for the science of sociology to study. Through such studies, he posited, sociology would be able to determine whether a given society is 'healthy' or 'pathological', and seek social reform to negate organic breakdown or "social anomie". Durkheim described sociology as the "science of institutions, their genesis and their functioning".
David Ashley and David M. Orenstein have alleged, in a consumer textbook published by Pearson Education, that accounts of Durkheim's positivism are possibly exaggerated and oversimplified; Comte was the only major sociological thinker to postulate that the social realm may be subject to scientific analysis in exactly the same way as natural science, whereas Durkheim saw a far greater need for a distinctly sociological scientific methodology. His lifework was fundamental in the establishment of practical social research as we know it today: techniques which continue beyond sociology and form the methodological basis of other social sciences, such as political science, as well as of market research and other fields.
Historical positivism
In historiography, historical or documentary positivism is the belief that historians should pursue the objective truth of the past by allowing historical sources to "speak for themselves", without additional interpretation. In the words of the French historian Fustel de Coulanges, as a positivist, "It is not I who am speaking, but history itself". The heavy emphasis placed by historical positivists on documentary sources led to the development of methods of source criticism, which seek to expunge bias and uncover original sources in their pristine state.
The origin of the historical positivist school is particularly associated with the 19th-century German historian Leopold von Ranke, who argued that the historian should seek to describe historical truth "wie es eigentlich gewesen ist" ("as it actually was")—though subsequent historians of the concept, such as Georg Iggers, have argued that its development owed more to Ranke's followers than Ranke himself.
Historical positivism was critiqued in the 20th century by historians and philosophers of history from various schools of thought, including Ernst Kantorowicz in Weimar Germany—who argued that "positivism ... faces the danger of becoming Romantic when it maintains that it is possible to find the Blue Flower of truth without preconceptions"—and Raymond Aron and Michel Foucault in postwar France, who both posited that interpretations are always ultimately multiple and there is no final objective truth to recover. In his posthumously published 1946 The Idea of History, the English historian R. G. Collingwood criticized historical positivism for conflating scientific facts with historical facts, which are always inferred and cannot be confirmed by repetition, and argued that its focus on the "collection of facts" had given historians "unprecedented mastery over small-scale problems", but "unprecedented weakness in dealing with large-scale problems".
Historicist arguments against positivist approaches in historiography include that history differs from sciences like physics and ethology in subject matter and method; that much of what history studies is nonquantifiable, and therefore to quantify is to lose in precision; and that experimental methods and mathematical models do not generally apply to history, so that it is not possible to formulate general (quasi-absolute) laws in history.
Other subfields
In psychology, the positivist movement was influential in the development of operationalism. In particular, Percy Bridgman's 1927 philosophy of science book The Logic of Modern Physics, originally intended for physicists, coined the term operational definition, which went on to dominate psychological method for the whole century.
In economics, practicing researchers tend to emulate the methodological assumptions of classical positivism, but only in a de facto fashion: the majority of economists do not explicitly concern themselves with matters of epistemology. Economic thinker Friedrich Hayek (see "Law, Legislation and Liberty") rejected positivism in the social sciences as hopelessly limited in comparison to evolved and divided knowledge. For example, much (positivist) legislation falls short in contrast to pre-literate or incompletely defined common or evolved law.
In jurisprudence, "legal positivism" essentially refers to the rejection of natural law; thus its common meaning with philosophical positivism is somewhat attenuated and in recent generations generally emphasizes the authority of human political structures as opposed to a "scientific" view of law.
Logical positivism
Logical positivism (later and more accurately called logical empiricism) is a school of philosophy that combines empiricism, the idea that observational evidence is indispensable for knowledge of the world, with a version of rationalism, the idea that our knowledge includes a component that is not derived from observation.
Logical positivism grew from the discussions of a group called the "First Vienna Circle", which gathered at the Café Central before World War I. After the war Hans Hahn, a member of that early group, helped bring Moritz Schlick to Vienna. Schlick's Vienna Circle, along with Hans Reichenbach's Berlin Circle, propagated the new doctrines more widely in the 1920s and early 1930s.
It was Otto Neurath's advocacy that made the movement self-conscious and more widely known. A 1929 pamphlet written by Neurath, Hahn, and Rudolf Carnap summarized the doctrines of the Vienna Circle at that time. These included the opposition to all metaphysics, especially ontology and synthetic a priori propositions; the rejection of metaphysics not as wrong but as meaningless (i.e., not empirically verifiable); a criterion of meaning based on Ludwig Wittgenstein's early work (which he himself later set out to refute); the idea that all knowledge should be codifiable in a single standard language of science; and above all the project of "rational reconstruction," in which ordinary-language concepts were gradually to be replaced by more precise equivalents in that standard language. However, the project is widely considered to have failed.
After moving to the United States, Carnap proposed a replacement for the earlier doctrines in his Logical Syntax of Language. This change of direction, and the somewhat differing beliefs of Reichenbach and others, led to a consensus that the English name for the shared doctrinal platform, in its American exile from the late 1930s, should be "logical empiricism." While the logical positivist movement is now considered dead, it has continued to influence philosophical development.
Criticism
Historically, positivism has been criticized for its reductionism, i.e., for contending that all "processes are reducible to physiological, physical or chemical events," "social processes are reducible to relationships between and actions of individuals," and that "biological organisms are reducible to physical systems."
The consideration that laws in physics may not be absolute but relative, and, if so, this might be even more true of social sciences, was stated, in different terms, by G. B. Vico in 1725 (Giambattista Vico, Principi di scienza nuova, in Opere, ed. Fausto Nicolini, Milan: R. Ricciardi, 1953, pp. 365–905). Vico, in contrast to the positivist movement, asserted the superiority of the science of the human mind (the humanities, in other words), on the grounds that natural sciences tell us nothing about the inward aspects of things.
Wilhelm Dilthey fought strenuously against the assumption that only explanations derived from science are valid. He reprised Vico's argument that scientific explanations do not reach the inner nature of phenomena and it is humanistic knowledge that gives us insight into thoughts, feelings and desires. Dilthey was in part influenced by the historism of Leopold von Ranke (1795–1886).
The contesting views over positivism are reflected both in older debates (see the Positivism dispute) and current ones over the proper role of science in the public sphere. Public sociology—especially as described by Michael Burawoy—argues that sociologists should use empirical evidence to display the problems of society so they might be changed.
Antipositivism
At the turn of the 20th century, the first wave of German sociologists formally introduced methodological antipositivism, proposing that research should concentrate on human cultural norms, values, symbols, and social processes viewed from a subjective perspective. Max Weber, one such thinker, argued that while sociology may be loosely described as a 'science' because it is able to identify causal relationships (especially among ideal types), sociologists should seek relationships that are not as "ahistorical, invariant, or generalizable" as those pursued by natural scientists. Weber regarded sociology as the study of social action, using critical analysis and verstehen techniques. The sociologists Georg Simmel, Ferdinand Tönnies, George Herbert Mead, and Charles Cooley were also influential in the development of sociological antipositivism, whilst neo-Kantian philosophy, hermeneutics, and phenomenology facilitated the movement in general.
Critical rationalism and postpositivism
In the mid-twentieth century, several important philosophers and philosophers of science began to critique the foundations of logical positivism. In his 1934 work The Logic of Scientific Discovery, Karl Popper argued against verificationism. A statement such as "all swans are white" cannot actually be empirically verified, because it is impossible to know empirically whether all swans have been observed. Instead, Popper argued that at best an observation can falsify a statement (for example, observing a black swan would prove that not all swans are white). Popper also held that scientific theories talk about how the world really is (not about phenomena or observations experienced by scientists), and critiqued the Vienna Circle in his Conjectures and Refutations (see Karl Popper, The Logic of Scientific Discovery, 1934; first English edition 1959). W. V. O. Quine and Pierre Duhem went even further. The Duhem–Quine thesis states that it is impossible to experimentally test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses); thus, unambiguous scientific falsifications are also impossible. Thomas Kuhn, in his 1962 book The Structure of Scientific Revolutions, put forward his theory of paradigm shifts. He argued that it is not simply individual theories but whole worldviews that must occasionally shift in response to evidence.
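The logical core of the Duhem–Quine point can be made explicit in a short schematic derivation. The following is a minimal sketch in LaTeX notation; the symbols H (hypothesis under test), A (auxiliary assumptions), and O (predicted observation) are illustrative choices, not drawn from any particular text:

% Schematic of the Duhem-Quine thesis in propositional logic.
% H = hypothesis under test, A = auxiliary assumptions, O = predicted observation.
% A hypothesis yields a testable prediction only jointly with its auxiliaries:
\[ (H \land A) \rightarrow O \]
% If the prediction fails, modus tollens refutes only the conjunction:
\[ \lnot O \rightarrow \lnot (H \land A) \]
% which, by De Morgan's law, is equivalent to:
\[ \lnot H \lor \lnot A \]
% A failed test alone therefore cannot settle whether the hypothesis H
% or one of the auxiliary assumptions in A is at fault.

On this schematic reading, a falsification always strikes a theory together with its background assumptions, which is precisely the opening that Duhem and Quine exploited.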
Together, these ideas led to the development of critical rationalism and postpositivism. Postpositivism is not a rejection of the scientific method, but rather a reformation of positivism to meet these critiques. It reintroduces the basic assumptions of positivism: the possibility and desirability of objective truth, and the use of experimental methodology. Postpositivism of this type is described in social science guides to research methods. Postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed. Postpositivists pursue objectivity by recognizing the possible effects of biases. While positivists emphasize quantitative methods, postpositivists consider both quantitative and qualitative methods to be valid approaches.
In the early 1960s, the positivism dispute arose between the critical theorists (see below) and the critical rationalists over the correct solution to the value judgment dispute (Werturteilsstreit). While both sides accepted that sociology cannot avoid a value judgement that inevitably influences subsequent conclusions, the critical theorists accused the critical rationalists of being positivists; specifically, of asserting that empirical questions can be severed from their metaphysical heritage and refusing to ask questions that cannot be answered with scientific methods. This contributed to what Karl Popper termed the "Popper Legend", a misconception among critics and admirers of Popper that he was, or identified himself as, a positivist.
Critical theory
Although Karl Marx's theory of historical materialism drew upon positivism, the Marxist tradition would also go on to influence the development of antipositivist critical theory. Critical theorist Jürgen Habermas critiqued pure instrumental rationality (in its relation to the cultural "rationalisation" of the modern West) as a form of scientism, or science "as ideology". He argued that positivism may be espoused by "technocrats" who believe in the inevitability of social progress through science and technology (Outhwaite, William, 1988, Habermas: Key Contemporary Thinkers, Polity Press, second edition 2009, p. 68). New movements, such as critical realism, have emerged in order to reconcile postpositivist aims with various so-called 'postmodern' perspectives on the social acquisition of knowledge.
Max Horkheimer criticized the classic formulation of positivism on two grounds. First, he claimed that it falsely represented human social action: positivism systematically failed to appreciate the extent to which the so-called social facts it yielded did not exist 'out there', in the objective world, but were themselves a product of socially and historically mediated human consciousness. Positivism ignored the role of the 'observer' in the constitution of social reality and thereby failed to consider the historical and social conditions affecting the representation of social ideas. Positivism falsely represented the object of study by reifying social reality as existing objectively and independently of the labour that actually produced those conditions. Secondly, he argued, the representation of social reality produced by positivism was inherently and artificially conservative, helping to support the status quo rather than challenging it. This character may also explain the popularity of positivism in certain political circles. Horkheimer argued, in contrast, that critical theory possessed a reflexive element lacking in positivistic traditional theory.
Some scholars today hold the beliefs critiqued in Horkheimer's work, but since the time of his writing critiques of positivism, especially from philosophy of science, have led to the development of postpositivism. This philosophy greatly relaxes the epistemological commitments of logical positivism and no longer claims a separation between the knower and the known. Rather than dismissing the scientific project outright, postpositivists seek to transform and amend it, though the exact extent of their affinity for science varies vastly. For example, some postpositivists accept the critique that observation is always value-laden, but argue that the best values to adopt for sociological observation are those of science: skepticism, rigor, and modesty. Just as some critical theorists see their position as a moral commitment to egalitarian values, these postpositivists see their methods as driven by a moral commitment to these scientific values. Such scholars may see themselves as either positivists or antipositivists.
Other criticisms
During the later twentieth century, positivism began to fall out of favor with scientists as well. Later in his career, the German theoretical physicist Werner Heisenberg, Nobel laureate for his pioneering work in quantum mechanics, distanced himself from positivism: "The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can anyone conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies."
In the early 1970s, urbanists of the quantitative school like David Harvey started to question the positivist approach itself, saying that the arsenal of scientific theories and methods developed so far in their camp were "incapable of saying anything of depth and profundity" on the real problems of contemporary cities.
According to the Catholic Encyclopedia, positivism has also come under fire on religious and philosophical grounds; critics in this vein hold that truth begins in sense experience, but does not end there. Positivism fails to prove that there are not abstract ideas, laws, and principles beyond particular observable facts and relationships and necessary principles, or that we cannot know them. Nor does it prove that material and corporeal things constitute the whole order of existing beings, and that our knowledge is limited to them. According to positivism, our abstract concepts or general ideas are mere collective representations of the experimental order; for example, the idea of "man" is a kind of blended image of all the men observed in our experience. This runs contrary to a Platonic or Christian understanding, in which an idea can be abstracted from any concrete determination and may be applied identically to an indefinite number of objects of the same class. From this perspective, Platonism is more precise: defining an idea as a sum of collective images is imprecise and more or less confused, and becomes more so as the collection represented increases, whereas an explicitly defined idea always remains clear.
Other new movements, such as critical realism, have emerged in opposition to positivism. Critical realism seeks to reconcile the overarching aims of social science with postmodern critiques. Experientialism, which arose with second-generation cognitive science, asserts that knowledge begins and ends with experience itself (Lakoff, G., & Johnson, M. (1999). Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. Basic Books). In other words, it rejects the positivist assertion that a portion of human knowledge is a priori.
Positivism today
Echoes of the "positivist" and "antipositivist" debate persist today, though this conflict is hard to define. Authors writing in different epistemological perspectives do not phrase their disagreements in the same terms and rarely actually speak directly to each other. To complicate the issues further, few practising scholars explicitly state their epistemological commitments, and their epistemological position thus has to be guessed from other sources such as choice of methodology or theory. However, no perfect correspondence between these categories exists, and many scholars critiqued as "positivists" are actually postpositivists. One scholar has described this debate in terms of the social construction of the "other", with each side defining the other by what it is not rather than what it is, and then proceeding to attribute far greater homogeneity to their opponents than actually exists. Thus, it is better to understand this not as a debate but as two different arguments: the "antipositivist" articulation of a social meta-theory which includes a philosophical critique of scientism, and "positivist" development of a scientific research methodology for sociology with accompanying critiques of the reliability and validity of work that they see as violating such standards. Strategic positivism aims to bridge these two arguments.
Social sciences
While most social scientists today are not explicit about their epistemological commitments, articles in top American sociology and political science journals generally follow a positivist logic of argument. It can be thus argued that "natural science and social science [research articles] can therefore be regarded with a good deal of confidence as members of the same genre".
In contemporary social science, strong accounts of positivism have long since fallen out of favour. Practitioners of positivism today acknowledge in far greater detail observer bias and structural limitations. Modern positivists generally eschew metaphysical concerns in favour of methodological debates concerning clarity, replicability, reliability and validity. This positivism is generally equated with "quantitative research" and thus carries no explicit theoretical or philosophical commitments. The institutionalization of this kind of sociology is often credited to Paul Lazarsfeld, who pioneered large-scale survey studies and developed statistical techniques for analyzing them. This approach lends itself to what Robert K. Merton called middle-range theory: abstract statements that generalize from segregated hypotheses and empirical regularities rather than starting with an abstract idea of a social whole.
In the original Comtean usage, the term "positivism" roughly meant the use of scientific methods to uncover the laws according to which both physical and human events occur, while "sociology" was the overarching science that would synthesize all such knowledge for the betterment of society. "Positivism is a way of understanding based on science"; people rely not on faith in God but on the science behind humanity. "Antipositivism" formally dates back to the start of the twentieth century, and is based on the belief that natural and human sciences are ontologically and epistemologically distinct. Neither of these terms is used any longer in this sense. There are no fewer than twelve distinct epistemologies that are referred to as positivism. Many of these approaches do not self-identify as "positivist", some because they themselves arose in opposition to older forms of positivism, and some because the label has over time become a term of abuse by being mistakenly linked with a theoretical empiricism. The extent of antipositivist criticism has also become broad, with many philosophies broadly rejecting the scientifically based social epistemology and others seeking only to amend it to reflect 20th-century developments in the philosophy of science. However, positivism (understood as the use of scientific methods for studying society) remains the dominant approach to both research and theory construction in contemporary sociology, especially in the United States.
The majority of articles published in leading American sociology and political science journals today are positivist, at least to the extent of being quantitative rather than qualitative (Brett, Paul. 1994. "A genre analysis of the results section of sociology articles." English for Specific Purposes 13 (1): 47–59). This popularity may be because research utilizing positivist quantitative methodologies holds greater prestige in the social sciences than qualitative work. Such research is generally perceived as being more scientific and more trustworthy, and thus has a greater impact on policy and public opinion (though such judgments are frequently contested by scholars doing non-positivist work).
Natural sciences
The key features of positivism as of the 1950s, as defined in the "received view", are:
A focus on science as a product, a linguistic or numerical set of statements;
A concern with axiomatization, that is, with demonstrating the logical structure and coherence of these statements;
An insistence on at least some of these statements being testable; that is, amenable to being verified, confirmed, or shown to be false by the empirical observation of reality. Statements that would, by their nature, be regarded as untestable included the teleological; thus positivism rejects much of classical metaphysics.
The belief that science is markedly cumulative;
The belief that science is predominantly transcultural;
The belief that science rests on specific results that are dissociated from the personality and social position of the investigator;
The belief that science contains theories or research traditions that are largely commensurable;
The belief that science sometimes incorporates new ideas that are discontinuous from old ones;
The belief that science involves the idea of the unity of science: that there is, underlying the various scientific disciplines, basically one science about one real world; and
The belief that science is nature and nature is science, and that out of this duality all theories and postulates are created, interpreted, evolved, and applied.
Stephen Hawking was a high-profile advocate of positivism in the physical sciences. In The Universe in a Nutshell (p. 31) he wrote:
Any sound scientific theory, whether of time or of any other concept, should in my opinion be based on the most workable philosophy of science: the positivist approach put forward by Karl Popper and others. According to this way of thinking, a scientific theory is a mathematical model that describes and codifies the observations we make. A good theory will describe a large range of phenomena on the basis of a few simple postulates and will make definite predictions that can be tested. ... If one takes the positivist position, as I do, one cannot say what time actually is. All one can do is describe what has been found to be a very good mathematical model for time and say what predictions it makes.
See also
Cliodynamics
Científico
Charvaka
Determinism
Gödel's incompleteness theorems
London Positivist Society
Nature versus nurture
Physics envy
Scientific politics
Sociological naturalism
The New Paul and Virginia
Vladimir Solovyov
Notes
References
Armenteros, Carolina. 2017. "The Counterrevolutionary Comte: Theorist of the Two Powers and Enthusiastic Medievalist." In The Anthem Companion to Auguste Comte, edited by Andrew Wernick, 91–116. London: Anthem.
Annan, Noel. 1959. The Curious Strength of Positivism in English Political Thought. London: Oxford University Press.
Ardao, Arturo. 1963. "Assimilation and Transformation of Positivism in Latin America." Journal of the History of Ideas 24 (4):515–22.
Bevir, Mark. 2002. "Sidney Webb: Utilitarianism, Positivism, and Social Democracy." The Journal of Modern History 74 (2):217–252.
Bevir, Mark. 2011. The Making of British Socialism. Princeton. PA: Princeton University Press.
Bourdeau, Michel. 2006. Les trois états: Science, théologie et métaphysique chez Auguste Comte. Paris: Éditions du Cerf.
Bourdeau, Michel, Mary Pickering, and Warren Schmaus, eds. 2018. Love, Order and Progress. Pittsburgh, PA: University of Pittsburgh Press.
Bryant, Christopher G. A. 1985. Positivism in Social Theory and Research. New York: St. Martin's Press.
Claeys, Gregory. 2010. Imperial Sceptics. Cambridge: Cambridge University Press.
Claeys, Gregory. 2018. "Professor Beesly, Positivism and the International: the Patriotism Issue." In "Arise Ye Wretched of the Earth": The First International in a Global Perspective, edited by Fabrice Bensimon, Quinton Deluermoz and Jeanne Moisand. Leiden: Brill.
De Boni, Carlo. 2013. Storia di un'utopia. La religione dell'Umanità di Comte e la sua circolazione nel mondo. Milano: Mimesis.
Dixon, Thomas. 2008. The Invention of Altruism. Oxford: Oxford University Press.
Feichtinger, Johannes, Franz L. Fillafer, and Jan Surman, eds. 2018. The Worlds of Positivism. London: Palgrave Macmillan.
Forbes, Geraldine Handcock. 2003. "The English Positivists and India." In Essays on Indian Renaissance, edited by Raj Kumar, 151–63. Discovery: New Delhi.
Gane, Mike. 2006. Auguste Comte. London: Routledge.
Giddens, Anthony. 1974. Positivism and Sociology. London: Heinemann.
Gilson, Gregory D., and Irving W. Levinson, eds. 2012. Latin American Positivism: New Historical and Philosophic Essays. Lexington Books. Essays on positivism in the intellectual and political life of Brazil, Colombia, and Mexico.
Harp, Gillis J. 1995. Positivist Republic: Auguste Comte and the Reconstruction of American Liberalism, 1865–1920. University Park, PA: Pennsylvania State University Press.
Harrison, Royden. 1965. Before the Socialists. London: Routledge.
Hoecker-Drysdale, Susan. 2001. "Harriet Martineau and the Positivism of Auguste Comte." In Harriet Martineau: Theoretical and Methodological Perspectives, edited by Michael R. Hill and Susan Hoecker-Drysdale, 169–90. London: Routledge.
Kremer-Marietti, Angèle. 1980. L'Anthropologie positiviste d'Auguste Comte. Paris: Librairie Honoré Champion.
Kremer-Marietti, Angèle. 1982. Le positivisme. Collection "Que sais-je?". Paris: PUF.
LeGouis, Catherine. 1997. Positivism and Imagination: Scientism and Its Limits in Emile Hennequin, Wilhelm Scherer and Dmitri Pisarev. London: Bucknell University Press.
Lenzer, Gertrud, ed. 2009. The Essential Writings of Auguste Comte and Positivism. London: Transaction.
"Positivism." Marxists Internet Archive. Web. 23 Feb. 2012.
McGee, John Edwin. 1931. A Crusade for Humanity. London: Watts.
Mill, John Stuart. Auguste Comte and Positivism.
Mises, Richard von. 1951. Positivism: A Study in Human Understanding. Cambridge, MA: Harvard University Press.
Petit, Annie. 2016. Le Système d'Auguste Comte. De la science à la religion par la philosophie. Paris: Vrin.
Pickering, Mary. 1993. Auguste Comte: An Intellectual Biography. Cambridge: Cambridge University Press.
Quin, Malcolm. 1924. Memoirs of a Positivist. London: George Allen & Unwin.
Rorty, Richard. 1982. Consequences of Pragmatism.
Scharff, Robert C. 1995. Comte After Positivism. Cambridge: Cambridge University Press.
Schunk, Dale H. 2008. Learning Theories: An Educational Perspective. 5th ed. Pearson/Merrill Prentice Hall.
Simon, W. M. 1963. European Positivism in the Nineteenth Century. Ithaca, NY: Cornell University Press.
Sutton, Michael. 1982. Nationalism, Positivism and Catholicism. Cambridge: Cambridge University Press.
Trindade, Helgio. 2003. "La république positiviste chez Comte." In Auguste Comte: Trajectoires positivistes 1798–1998, edited by Annie Petit, 363–400. Paris: L'Harmattan.
Turner, Mark. 2000. "Defining Discourses: The "Westminster Review", "Fortnightly Review", and Comte's Positivism." Victorian Periodicals Review 33 (3):273–282.
Wernick, Andrew. 2001. Auguste Comte and the Religion of Humanity. Cambridge: Cambridge University Press.
Whatmore, Richard. 2005. "Comte, Auguste (1798–1857)." In Encyclopaedia of Nineteenth-Century Thought, edited by Gregory Claeys, 123–8. London: Routledge.
Whetsell, Travis, and Patricia M. Shields. "The Dynamics of Positivism in the Study of Public Administration: A Brief Intellectual History and Reappraisal." Administration & Society.
Wils, Kaat. 2005. De omweg van de wetenschap: het positivisme en de Belgische en Nederlandse intellectuele cultuur, 1845–1914. Amsterdam: Amsterdam University Press.
Wilson, Matthew. 2018. "British Comtism and Modernist Design." Modern Intellectual History x (xx):1–32.
Wilson, Matthew. 2018. Moralising Space: the Utopian Urbanism of the British Positivists, 1855–1920. London: Routledge.
Wilson, Matthew. 2020. "Rendering sociology: on the utopian positivism of Harriet Martineau and the 'Mumbo Jumbo' club." Journal of Interdisciplinary History of Ideas 8 (16): 1–42.
Woll, Allen L. 1976. "Positivism and History in Nineteenth-Century Chile." Journal of the History of Ideas 37 (3):493–506.
Woodward, Ralph Lee, ed. 1971. Positivism in Latin America, 1850–1900. Lexington: Heath.
Wright, T. R. 1986. The Religion of Humanity. Cambridge: Cambridge University Press.
Wright, T. R. 1981. "George Eliot and Positivism: A Reassessment." The Modern Language Review 76 (2):257–72.
Wunderlich, Roger. 1992. Low Living and High Thinking at Modern Times, New York. Syracuse, NY: Syracuse University Press.
Zea, Leopoldo. 1974. Positivism in Mexico. Austin: University of Texas Press.
External links
The full text of the 1911 Encyclopædia Britannica article "Positivism" at Wikisource
Parana, Brazil
Porto Alegre, Brazil
Rio de Janeiro, Brazil
Poznań, Poland
Positivists Worldwide
Maison d'Auguste Comte, France
Philosophy of science
Philosophy of social science
Epistemological theories
20th century in philosophy
19th century in philosophy
Philosophy of law
Sociological theories | 0.761173 | 0.998822 | 0.760277 |
Positive psychology in the workplace

Positive psychology is defined as a method of building on what is good and what is already working, rather than attempting to stimulate improvement by focusing on the weak links in an individual, a group, or, in this case, a company. Implementing positive psychology in the workplace means creating an environment that is more enjoyable and productive and that values individual employees. It also means creating a work schedule that does not lead to emotional and physical distress.
Overview
Positive psychology in the workplace focuses on shifting attention away from negative aspects such as workplace violence, stress, burnout, and job insecurity, and toward positive and hopeful attributes, resilience, confidence, and a productive work culture that emphasizes both professional and human success. Through the application of positive psychology, employers can create a working environment that promotes positive affect in employees.
Fun should not be seen as something that cannot be achieved at work, but rather as a motivating factor for staff. However, managers need to consider the type of fun introduced into the workplace: depending on the learning styles and personalities of their employees, it is not always productive. Along these lines, it is important to examine the role of helping behaviors, team-building exercises, job resources, job security, and work support.
The emerging field of positive psychology also helps to creatively manage organizational behaviors and to increase productivity in the workplace through applying positive organizational forces. Recent research on job satisfaction and employee retention has created a great need to focus on implementing positive psychology in the workplace.
Background
According to the United States Department of Labor, “In 2009, employed persons worked an average of 7.5 hours on the days they worked, which were mostly weekdays. [In addition to that], 84 percent of employed persons did some or all of their work at their workplace.” This indicates that employed people spend the majority of their waking hours at work, outside their homes. Employers should therefore do their best to create a low-stress, inspiring work environment to yield greater productivity.
Michelle T. Iaffaldano and Paul M. Muchinsky were among the first people to ignite interest in the connection between job satisfaction and job performance. The meta-analytic research of these individuals impacted the way in which later research on the topic was conducted, especially regarding sample sizes.
Major Theoretical Approaches
Martin E.P. Seligman and Mihaly Csikszentmihalyi are noted frontrunners in the area of positive psychology as a field of study. They state that “psychology has become a science largely about healing. Therefore its concentration on healing largely neglects the fulfilled individual and thriving community”. Seligman and Csikszentmihalyi further stress that, “the aim of positive psychology is to begin to catalyze a change in the focus of psychology from preoccupation only with repairing the worst things in life to also building positive qualities.”
Abraham Maslow and Carl Rogers developed humanistic psychology, which focuses on the positive potential of people and on helping people reach their full potential.
Peter Warr is noted for his early work on workplace well-being. “Proponents of the well-being perspective argue that the presence of positive emotional states and positive appraisals of the worker and his or her relationships within the workplace accentuate worker performance and quality of life”. A common idea in work environment theories is that demands should match or slightly exceed resources. Regarding research on positive outcomes in employment settings, several models have been established, such as the Demand Control, Job Demands-Resources, and Job Characteristics models.
Demand Control Model
Robert A. Karasek is credited with this particular work design model. In Karasek's model, workplace stress is an indicator of how taxing a worker's job is and how much control, authority, discretion, and decision latitude the worker has over his or her tasks and duties. This creates four kinds of jobs: passive, active, low strain, and high strain. The Demand Control Model (DCM) has been used by researchers to design jobs that enhance psychological and physical well-being. This model promotes a work design with high demand and high control, fostering an environment that encourages learning while simultaneously offering autonomy.
This model is based on the assumption that “workers with active jobs are more likely to seek challenging situations that promote mastery, thereby encouraging skill and knowledge acquisition”. It also points out the role of social support, referring to the quality of interactions between colleagues and managers. However, there is some controversy over this model because some researchers believe it lacks evidence for the interaction between demand and control.
The DCM is commonly criticized for its inability to consistently replicate findings that support its basic assumption. However, there is evidence supporting the idea that “high amounts of job control are associated with increases in job satisfaction and decreased depression; however, high demands without adequate control may lead to increased anxiety”.
Job Demands Resources
The job demands-resources model (JD-R) is an expansion of the DCM and is founded on the same principle: that high job demands and high job resources produce employees with more positive work attitudes. The difference between the JD-R and DCM is that the JD-R expands upon the differentiation between demands and resources, and encompasses a broader view of resources. This model refers to demands as “those physical, psychological, social, or organizational aspects of the job that require sustained physical and/or psychological effort”; this may refer, for example, to jobs that require contact with customers. Resources are regarded as “those physical, psychological, social, or organizational aspects of the job that are either/or: (1) functional in achieving work goals; (2) reduce job demands and the associated physiological and psychological costs; and (3) stimulate personal growth, learning, and development”. Another difference between the two theories is that the JD-R postulates that resources can be predictors of motivation and learning-related outcomes. Findings by Bakker and colleagues support their hypothesis that many resources may be linked to job well-being. They also found that “task enjoyment and organizational commitment are the result of combinations of many different job demands and job resources. Enjoyment and commitment were high when employees were confronted with challenging and stimulating tasks, and simultaneously had sufficient resources at their disposal”.
Job Characteristics Model
The job characteristics model (JCM) is “an influential theory of work design developed by Hackman and Oldham. It is based upon five characteristics - skill variety, task identity, task significance, task autonomy, and task feedback - which are used to identify the general content and structure of jobs”. This model argues that employees with a personal need for growth and development, as well as knowledge and skill, will display more positive work outcomes, such as greater job satisfaction, lower absenteeism, and lower turnover. It is based upon the idea that high task control and feedback are two essential elements for maximizing work potential. Stronger experiences of these five traits are said to lead to greater job satisfaction and better performance.
Empirical evidence
Safety
In order to protect the physical and mental health of workers, the demands of the job must be balanced by easily accessible job resources that prevent burnout in employees while encouraging employee engagement. Engagement signifies a positive employee who is committed to safety within the workplace for self and others. In contrast, burnout represents a negative employee possessing elements of anxiety, depression, and work-related stress. Engagement increases as job resources like knowledge of safety are present. On the other hand, burnout increases when more job demands are present without the buffering effects of job resources.
Hazards in the workplace can be seen as a combination of the physical demands of the work and the complexity of the work. Job resources provide a buffering effect that protects the employees from job demands like high work pressure, an unfavorable physical environment, and emotionally demanding interactions. Employees are better equipped to handle changes in their work environment when resources are readily available. The resources a job can provide include autonomy, support, and knowledge of safety. Autonomy allows employees the freedom to decide how to execute their work. Support can originate directly from a supervisor or from other workers in the environment. And lastly, employees must have knowledge about safety procedures and policies. When the employee is able to work in a safe environment, workers are more satisfied with their jobs. A safe environment provides support and resources that promote healthy employees.
Emotion, Attitude and Mood
Emotional intelligence is the ability to recognize and interpret emotions, which can be used to regulate emotions and assist cognitive processes that promote emotional and intellectual growth. Carmelli (2003) researched emotional intelligence in order to see its effect on employees' work performance. Due to the social nature of employees' interactions, emotional intelligence is essential for working well with co-workers. When employees work well together by coordinating their efforts, their task performance improves and as a result the business benefits. With emotional intelligence, employees are better able to perceive others needing help and are more willing to help for intrinsic benefits.
Isen & Reeve (2005) proposed that positive affect leads to positive intrinsic motivation for completing a task. As a result of the intrinsic motivation, the employees enjoyed the task more and were more optimistic when having to complete a less interesting task. The combination of having the freedom to choose tasks and maintaining positive affect results in better task performance. Positive affect promotes self-control to remain focused on any task, along with forward-looking thinking that motivates workers to look forward to more enjoyable tasks.
Concepts of positive psychology like hope and altruism provide a positive work environment that influences the moods and attitudes of workers. Youssef & Luthans (2007) examined the effects hope, optimism, and resilience had in the workplace on employees’ job performance, job satisfaction, work happiness, and organizational commitment. Hope and resilience had a more direct effect on organizational commitment, whereas hope had a greater impact on performance. Hope allows employees to be better at creating more realistic plans for completing tasks, so as not to focus on the failure that accompanies an incomplete task. Optimism strengthens the employee’s resilience to break through barriers and causes the employee to build social support and other strengths to overcome any obstacle he or she may encounter.
Positive psychology also encourages maintaining a positive mood in the work environment to encourage productivity on an individual and organizational level. Organizational citizenship behaviors (OCB) refer to behaviors like altruism and compliance that are not formal tasks, in that they are not a mandatory part of the worker's job description. They are considered extra-role behaviors that help in gauging a worker's commitment to the job and to its rules in the absence of monitoring. OCB have been shown to improve the moods of employees and the mood in the workplace. Helping behavior improves mood because the individual is no longer focused on negative moods; helping others acts as a distraction for the employee. Altruism is effective because it has more impact in a social setting like the workplace and is more extrinsically rewarding. OCB encourage positive interactions among workers and lead to better psychological health for employees.
According to Froman (2010), having a more hopeful perspective about life leads one to being more optimistic about responding to opportunities. Workers are more resilient to adversity and are able to bounce back more quickly. When organizations encourage positive attitudes in their employees, they grow and flourish. As a result, the organization profits and grows from the human capital of productive employees and the monetary capital resulting from productive workers.
Fun
Chan (2010) studied fun activities in the workplace that created a positive work environment that could retain and attract employees and encourage employee well-being. Activities must be enjoyable, encourage responsibility, and help employees become team players. These qualities empower employees to become more engaged with their work, take on more leadership roles, and experience less stress. Making the workplace fun promotes positive, happy moods in employees that in turn increase job satisfaction and organizational commitment. According to Chan’s framework, workplace fun may be staff-oriented, supervisor-oriented, social-oriented, or strategy-oriented. While staff-oriented activities focus on creating fun work for employees, supervisor-oriented activities create a better relationship between employees and supervisors. Social-oriented activities create social events that are organization-based (e.g. a company barbecue or Christmas office party). Strategy-oriented activities allow employees more autonomy in different aspects of their work in hopes of cultivating strengths within the organization’s employees. The framework proposes that a fun work environment promotes employee well-being in addition to fostering creativity, enthusiasm, satisfaction, and communication among the organization’s employees. The study's findings are intended to encourage the implementation of similar workplace fun activities in other industries in order to engage and retain positive employees.
There have also been connections between workplace fun and creativity in the workplace. Studies have found that a fun workplace environment is an antecedent to employee creativity. Fun in the workplace has also been shown to be positively correlated with the creative performance of employees.
Flow
Flow is when a person is in an intensely focused state. Flow is achieved when there is a proper balance between the person's skill level and the challenge of the task they are engaging in. Researchers are also starting to look into the connection between flow in the workplace and positive affect in the workplace. Tobert and Moneta found a significant negative correlation between flow and negative affect, and a significant positive correlation between flow and positive affect. As there is more positive affect in the workplace, more flow can occur; in turn, the more flow there is, the more positive affect there will be. A similar spiral happens with negative affect: the more negative affect there is in a workplace, the less likely flow becomes, and as flow decreases, it can lead to more negative affect in the workplace.
Other researchers have looked into the connection between employee motivation and flow. In order to create this optimal level of flow, there needs to be a balance between challenge, skill, workload, and the capacity to work. When all of these are balanced, employees are more highly motivated and more effective in their duties.
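The challenge-skill balance at the heart of flow can be made concrete with a small sketch. This is purely illustrative: the numeric scale, the `band` tolerance, and the three-way labels are assumptions for demonstration, not part of any published flow instrument.

```python
def flow_state(challenge: float, skill: float, band: float = 1.0) -> str:
    """Classify a task by the balance between its challenge and the
    worker's skill; `band` is an assumed tolerance for how far the
    two may diverge before flow is lost."""
    if abs(challenge - skill) <= band:
        return "flow"       # demands roughly match ability
    if challenge > skill:
        return "anxiety"    # task outstrips current skill
    return "boredom"        # skill outstrips the task

print(flow_state(challenge=7, skill=7))  # flow
print(flow_state(challenge=9, skill=4))  # anxiety
print(flow_state(challenge=2, skill=8))  # boredom
```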
Creativity
Creativity also has a critical role in the workplace. Creativity leads organizations to be able to overcome problems, innovate, and ultimately have success. Workplace creativity is defined as new, useful, and valuable services, ideas, processes, or products that were created by individuals in the workplace. Creativity in the workplace has been linked to increased positive affect in employees. Tavares found that creative workplaces lead to employees feeling that their work was meaningful. As work became meaningful to them, they felt more satisfied and that they had a purpose in life.
Application
There are several examples of popular, real-world applications of positive psychology in the workplace. In workplace contexts, researchers often aim to examine and measure levels of factors such as productivity and organization. One popular model is the aforementioned Job Characteristics Model (JCM), which applies influential theories of work via the five central characteristics of skill variety, task identity, task significance, task autonomy, and task feedback. However, practices such as business teams within a workplace often exhibit varying dynamics of positivity and negativity in business behaviors. Research teams frequently study particular workplaces in order to report to employers on the status of their employees. The three psychological states most often measured and examined are: meaningfulness of performed work, responsibility for outcomes, and knowledge of results. By combining these aspects, a score is generated that reflects a range of job quality; each score also details the differing degrees of autonomy and feedback needed to ensure high-quality work. Most research points to the finding that typical high-performance teams are those that function with high positivity in their workplace behaviors.
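The score generation described above can be illustrated with Hackman and Oldham's Motivating Potential Score (MPS), the summary index conventionally derived from the JCM's five characteristics. A minimal sketch; the example ratings are hypothetical.

```python
def motivating_potential_score(skill_variety, task_identity,
                               task_significance, autonomy, feedback):
    """Hackman & Oldham's Motivating Potential Score (MPS).
    Inputs are survey ratings, conventionally on a 1-7 scale; because
    autonomy and feedback enter multiplicatively, a near-zero rating
    on either one collapses the whole score."""
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

# Hypothetical ratings for one job
print(motivating_potential_score(6, 5, 7, 6, 5))  # 180.0
```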
Controversies
There has been research into whether the practice of measuring positive behaviors is actually beneficial when attempting to measure a variable to ensure a more positive workplace environment. There is debate concerning which components should be valued and measured. Additionally, the act of specifically examining certain productivity factors in the workplace can influence workers negatively, due to the added pressure it may place on those under review. It is suspected that avoiding all negativity can cause contentious interactions to escalate and increase when they might not have been an issue in the first place.
Conclusion
The growing body of research on positive psychology at work often deals with workplace safety, employee engagement, productivity, and overall happiness. Moreover, understanding the significance of a healthy work environment can directly contribute to work mastery and work ethic. Researchers have found that motivation helps to maintain a reinforced sense of discipline and heightened perception, which in turn yields higher levels of efficiency for both employees and employers.
See also
Positive psychology
Happiness at work
Employee engagement
Work engagement
Booster Breaks in the Workplace
References
External links
Happy at Work: How the Science of Positive Psychology Will Revolutionize the Workplace
Positive Psychology in the Workplace
Industrial and organizational psychology
Positive psychology
Occupational safety and health
Nursing process

The nursing process is a modified scientific method which is a fundamental part of nursing practice in many countries around the world. Nursing practice was first described as a four-stage nursing process by Ida Jean Orlando in 1958; the diagnosis phase was added later. It should not be confused with nursing theories or health informatics.
The nursing process uses clinical judgement to strike a balance of epistemology between personal interpretation and research evidence, in which critical thinking may play a part in categorizing the client's issues and course of action. Nursing offers diverse patterns of knowing, and nursing knowledge has embraced pluralism since the 1970s.
Some authors refer to a mind map or abductive reasoning as a potential alternative strategy for organizing care. Intuition plays a part for experienced nurses.
Phases
The nursing process is a goal-oriented method of caring that provides a framework for nursing care. It involves seven major steps:
A - Assess (what data is collected?)
D - Diagnose (what is the problem?)
O - Outcome identification (originally part of the planning phase, but recently added as a distinct step in the complete process)
P - Plan (how to manage the problem)
I - Implement (putting the plan into action)
R - Rationale (the scientific reasoning behind the implementations)
E - Evaluate (did the plan work?)
According to some theorists, this seven-steps description of the nursing process is outdated and misrepresents nursing as linear and atomic.
Assessing phase
The nurse completes a holistic nursing assessment of the needs of the individual/family/community, regardless of the reason for the encounter. The nurse collects subjective data and objective data using a nursing framework, such as Marjory Gordon's functional health patterns.
Models for data collection
Nursing assessments provide the starting point for determining nursing diagnoses. It is vital that a recognized nursing assessment framework is used in practice to identify the patient's problems, risks and outcomes for enhancing health. The use of an evidence-based nursing framework such as Gordon's Functional Health Pattern Assessment should guide assessments that support nurses in determining NANDA-I nursing diagnoses. For accurate determination of nursing diagnoses, a useful, evidence-based assessment framework is best practice.
Methods
Client Interview
Physical Examination
Obtaining a health history (including dietary data)
Family history/report
Diagnosing phase
Nursing diagnoses represent the nurse's clinical judgment about actual or potential health problems/life processes occurring with the individual, family, group or community. The accuracy of the nursing diagnosis is validated when a nurse is able to clearly identify and link it to the defining characteristics, related factors and/or risk factors found within the patient's assessment. Multiple nursing diagnoses may be made for one client.
Planning phase
In agreement with the client, the nurse addresses each of the problems identified in the diagnosing phase. When there are multiple nursing diagnoses to be addressed, the nurse prioritizes which diagnoses will receive the most attention first according to their severity and potential for causing more serious harm. The most common terminology for standardized nursing diagnosis is that of the evidence-based terminology developed and refined by NANDA International, the oldest and one of the most researched of all standardized nursing languages. For each problem a measurable goal/outcome is set. For each goal/outcome, the nurse selects nursing interventions that will help achieve the goal/outcome, which are aimed at the related factors (etiologies) not merely at symptoms (defining characteristics). A common method of formulating the expected outcomes is to use the evidence-based Nursing Outcomes Classification to allow for the use of standardized language which improves consistency of terminology, definition and outcome measures. The interventions used in the Nursing Interventions Classification again allow for the use of standardized language which improves consistency of terminology, definition and ability to identify nursing activities, which can also be linked to nursing workload and staffing indices. The result of this phase is a nursing care plan.
Implementing phase
The nurse implements the nursing care plan, performing the determined interventions that were selected to help meet the goals/outcomes that were established. Delegated tasks and the monitoring of them is included here as well.
Activities
pre-assessment of the client, done immediately before carrying out the intervention to determine whether it is still relevant
determine need for assistance
implementation of nursing orders
delegating and supervising: determining who is to carry out which actions
Evaluating phase
The nurse evaluates the progress toward the goals/outcomes identified in the previous phases. If progress towards the goal is slow, or if regression has occurred, the nurse must change the plan of care accordingly. Conversely, if the goal has been achieved then the care can cease. New problems may be identified at this stage, and thus the process will start all over again.
Characteristics
The nursing process is a cyclical and ongoing process that can end at any stage if the problem is solved. The nursing process exists for every problem that the individual/family/community has. The nursing process not only focuses on ways to improve physical needs, but also on social and emotional needs as well.
The entire process is recorded or documented in order to inform all members of the health care team.
Variations and documentation
The PIE method is a system for documenting actions, especially in the field of nursing. The name comes from the acronym PIE, meaning Problem, Intervention, Evaluation.
See also
Clinical Care Classification System
Decision cycle
Nursing
Nursing theory
Nursing diagnosis
NANDA
OODA loop
References
Nursing
Critical thinking
Scientific method
Cybernetics

Cybernetics is the transdisciplinary study of circular processes such as feedback systems where outputs are also inputs. It is concerned with general principles that are relevant across multiple contexts, including in ecological, technological, biological, cognitive and social systems and also in practical activities such as designing, learning, and managing.
The field is named after an example of circular causal feedback—that of steering a ship (the ancient Greek κυβερνήτης (kybernḗtēs) means "helmsperson"). In steering a ship, the helmsperson adjusts their steering in continual response to the effect it is observed as having, forming a feedback loop through which a steady course can be maintained in a changing environment, responding to disturbances from cross winds and tide.
Cybernetics' transdisciplinary character has meant that it intersects with a number of other fields, leading to it having both wide influence and diverse interpretations.
Definitions
Cybernetics has been defined in a variety of ways, reflecting "the richness of its conceptual base." One of the best known definitions is that of the American scientist Norbert Wiener, who characterised cybernetics as concerned with "control and communication in the animal and the machine." Another early definition is that of the Macy cybernetics conferences, where cybernetics was understood as the study of "circular causal and feedback mechanisms in biological and social systems." Margaret Mead emphasised the role of cybernetics as "a form of cross-disciplinary thought which made it possible for members of many disciplines to communicate with each other easily in a language which all could understand."
Other definitions include: "the art of governing or the science of government" (André-Marie Ampère); "the art of steersmanship" (Ross Ashby); "the study of systems of any nature which are capable of receiving, storing, and processing information so as to use it for control" (Andrey Kolmogorov); and "a branch of mathematics dealing with problems of control, recursiveness, and information, focuses on forms and the patterns that connect" (Gregory Bateson).
Etymology
The Ancient Greek term κυβερνητικός (kubernētikos, '(good at) steering') appears in Plato's Republic and Alcibiades, where the metaphor of a steersman is used to signify the governance of people. The French word cybernétique was also used in 1834 by the physicist André-Marie Ampère to denote the sciences of government in his classification system of human knowledge.
According to Norbert Wiener, the word cybernetics was coined by a research group involving himself and Arturo Rosenblueth in the summer of 1947. It has been attested in print since at least 1948 through Wiener's book Cybernetics: Or Control and Communication in the Animal and the Machine.
Moreover, Wiener explains, the term was chosen to recognize James Clerk Maxwell's 1868 publication on feedback mechanisms involving governors, noting that the term governor is also derived from κυβερνήτης (kubernḗtēs) via a Latin corruption, gubernator. Finally, Wiener motivated the choice by noting that the steering engines of a ship are "one of the earliest and best-developed forms of feedback mechanisms".
History
First wave
The initial focus of cybernetics was on parallels between regulatory feedback processes in biological and technological systems. Two foundational articles were published in 1943: "Behavior, Purpose and Teleology" by Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow (based on the research on living organisms that Rosenblueth did in Mexico), and the paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" by Warren McCulloch and Walter Pitts. The foundations of cybernetics were then developed through a series of transdisciplinary conferences funded by the Josiah Macy, Jr. Foundation between 1946 and 1953. The conferences were chaired by McCulloch, and participants included Ross Ashby, Gregory Bateson, Heinz von Foerster, Margaret Mead, John von Neumann, and Norbert Wiener. In the UK, similar focuses were explored by the Ratio Club, an informal dining club of young psychiatrists, psychologists, physiologists, mathematicians and engineers that met between 1949 and 1958. Wiener introduced the neologism cybernetics to denote the study of "teleological mechanisms" and popularized it through the book Cybernetics: Or Control and Communication in the Animal and the Machine.
During the 1950s, cybernetics was developed as a primarily technical discipline, as in Qian Xuesen's 1954 Engineering Cybernetics. In the Soviet Union, cybernetics was initially regarded with suspicion but became accepted from the mid to late 1950s.
By the 1960s and 1970s, however, cybernetics' transdisciplinarity fragmented, with its technical focuses splitting off into distinct fields. Artificial intelligence (AI) was founded as a distinct discipline at the Dartmouth workshop in 1956, differentiating itself from the broader cybernetics field. After some uneasy coexistence, AI gained funding and prominence. Consequently, cybernetic sciences such as the study of artificial neural networks were downplayed. Similarly, computer science became defined as a distinct academic discipline in the 1950s and early 1960s.
Second wave
The second wave of cybernetics came to prominence from the 1960s onwards, with its focus inflecting away from technology toward social, ecological, and philosophical concerns. It was still grounded in biology, notably Maturana and Varela's autopoiesis, and built on earlier work on self-organising systems and the presence of anthropologists Mead and Bateson in the Macy meetings. The Biological Computer Laboratory, founded in 1958 and active until the mid-1970s under the direction of Heinz von Foerster at the University of Illinois at Urbana–Champaign, was a major incubator of this trend in cybernetics research.
Focuses of the second wave of cybernetics included management cybernetics, such as Stafford Beer's biologically inspired viable system model; work in family therapy, drawing on Bateson; social systems, such as in the work of Niklas Luhmann; epistemology and pedagogy, such as in the development of radical constructivism. Cybernetics' core theme of circular causality was developed beyond goal-oriented processes to concerns with reflexivity and recursion. This was especially so in the development of second-order cybernetics (or the cybernetics of cybernetics), developed and promoted by Heinz von Foerster, which focused on questions of observation, cognition, epistemology, and ethics.
The 1960s onwards also saw cybernetics begin to develop exchanges with the creative arts, design, and architecture, notably with the Cybernetic Serendipity exhibition (ICA, London, 1968), curated by Jasia Reichardt, and the unrealised Fun Palace project (London, unrealised, 1964 onwards), where Gordon Pask was consultant to architect Cedric Price and theatre director Joan Littlewood.
Third wave
From the 1990s onwards, there has been a renewed interest in cybernetics from a number of directions. Early cybernetic work on artificial neural networks has been returned to as a paradigm in machine learning and artificial intelligence. The entanglements of society with emerging technologies has led to exchanges with feminist technoscience and posthumanism. Re-examinations of cybernetics' history have seen science studies scholars emphasising cybernetics' unusual qualities as a science, such as its "performative ontology". Practical design disciplines have drawn on cybernetics for theoretical underpinning and transdisciplinary connections. Emerging topics include how cybernetics' engagements with social, human, and ecological contexts might come together with its earlier technological focus, whether as a critical discourse or a "new branch of engineering".
Key concepts and theories
The central theme in cybernetics is feedback. Feedback is a process where the observed outcomes of actions are taken as inputs for further action in ways that support the pursuit, maintenance, or disruption of particular conditions, forming a circular causal relationship. In steering a ship, the helmsperson maintains a steady course in a changing environment by adjusting their steering in continual response to the effect it is observed as having.
Other examples of circular causal feedback include: technological devices such as the thermostat, where the action of a heater responds to measured changes in temperature regulating the temperature of the room within a set range, and the centrifugal governor of a steam engine, which regulates the engine speed; biological examples such as the coordination of volitional movement through the nervous system and the homeostatic processes that regulate variables such as blood sugar; and processes of social interaction such as conversation.
Negative feedback processes are those that maintain particular conditions by reducing (hence 'negative') the difference from a desired state, such as where a thermostat turns on a heater when it is too cold and turns a heater off when it is too hot. Positive feedback processes increase (hence 'positive') the difference from a desired state. An example of positive feedback is when a microphone picks up the sound that it is producing through a speaker, which is then played through the speaker, and so on.
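A negative-feedback loop of the thermostat kind is easy to simulate. The following is a minimal sketch, assuming simple constant heating and heat-loss rates per time step; both values are illustrative, not drawn from any real device.

```python
def thermostat_step(temperature, setpoint, heater_gain=0.5, leak=0.2):
    """One tick of a negative-feedback loop: the measured output
    (room temperature) is fed back to decide the next input
    (whether the heater runs)."""
    heater_on = temperature < setpoint  # compare observed output to goal
    heating = heater_gain if heater_on else 0.0
    return temperature + heating - leak  # leak models heat lost outside

temp = 15.0
for _ in range(30):
    temp = thermostat_step(temp, setpoint=20.0)
print(round(temp, 1))  # the temperature hovers near the 20.0 setpoint
```

Swapping the comparison so the heater runs when the room is already warm would turn this into a positive-feedback loop: the difference from the setpoint would grow rather than shrink.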
In addition to feedback, cybernetics is concerned with other forms of circular processes including: feedforward, recursion, and reflexivity.
Other key concepts and theories in cybernetics include:
Autopoiesis
Black box
Conversation theory
Double bind theory: Double binds are patterns created in interaction between two or more parties in ongoing relationships where there is a contradiction between messages at different logical levels that creates a situation with emotional threat but no possibility of withdrawal from the situation and no way to articulate the problem. The theory was first described by Gregory Bateson and colleagues in the 1950s with regard to the origins of schizophrenia, but it is also characteristic of many other social contexts.
Experimental epistemology
Good regulator theorem
Perceptual control theory: A model of behavior based on the properties of negative feedback (cybernetic) control loops. A key insight of PCT is that the controlled variable is not the output of the system (the behavioral actions), but its input, "perception". The theory came to be known as "perceptual control theory" to distinguish from those control theorists that assert or assume that it is the system's output that is controlled. Method of levels is an approach to psychotherapy based on perceptual control theory where the therapist aims to help the patient shift their awareness to higher levels of perception in order to resolve conflicts and allow reorganization to take place.
Radical constructivism
Second-order cybernetics: Also known as the cybernetics of cybernetics, second-order cybernetics is the recursive application of cybernetics to itself and the practice of cybernetics according to such a critique.
Self-organisation
Social systems theory
Variety and Requisite Variety
Viable system model
Related fields and applications
Cybernetics' central concept of circular causality is of wide applicability, leading to diverse applications and relations with other fields. Many of the initial applications of cybernetics focused on engineering, biology, and exchanges between the two, such as medical cybernetics and robotics and topics such as neural networks, heterarchy. In the social and behavioral sciences, cybernetics has included and influenced work in anthropology, sociology, economics, family therapy, cognitive science, and psychology.
As cybernetics has developed, it broadened in scope to include work in management, design, pedagogy, and the creative arts, while also developing exchanges with constructivist philosophies, counter-cultural movements, and media studies. The development of management cybernetics has led to a variety of applications, notably to the national economy of Chile under the Allende government in Project Cybersyn. In design, cybernetics has been influential on interactive architecture, human-computer interaction, design research, and the development of systemic design and metadesign practices.
Cybernetics is often understood within the context of systems science, systems theory, and systems thinking. Systems approaches influenced by cybernetics include critical systems thinking, which incorporates the viable system model; systemic design; and system dynamics, which is based on the concept of causal feedback loops.
Many fields trace their origins in whole or part to work carried out in cybernetics, or were partially absorbed into cybernetics when it was developed. These include artificial intelligence, bionics, cognitive science, control theory, complexity science, computer science, information theory and robotics. Some aspects of modern artificial intelligence, particularly the social machine, are often described in cybernetic terms.
Journals and societies
Academic journals with focuses in cybernetics include:
Constructivist Foundations
Cybernetics and Human Knowing
Cybernetics and Systems
Enacting Cybernetics. An open access journal published by the Cybernetics Society and hosted by Ubiquity Press.
Biological Cybernetics
IEEE Transactions on Systems, Man, and Cybernetics: Systems
IEEE Transactions on Human-Machine Systems
IEEE Transactions on Cybernetics
IEEE Transactions on Computational Social Systems
Kybernetes
Academic societies primarily concerned with cybernetics or aspects of it include:
American Society for Cybernetics (ASC), founded in 1964
British Cybernetics Society (CybSoc)
Metaphorum: The Metaphorum group was set up in 2003 to develop Stafford Beer's legacy in organizational cybernetics. The group was born in a syntegration in 2003 and has every year since held a conference on issues related to organizational cybernetics theory and practice.
IEEE Systems, Man, and Cybernetics Society
RC51 Sociocybernetics: RC51 is a research committee of the International Sociological Association promoting the development of (socio)cybernetic theory and research within the social sciences.
SCiO (Systems and Complexity in Organisation) is a community of systems practitioners who believe that traditional approaches to running organisations are no longer capable of dealing with the complexity and turbulence faced by organisations today, and are responsible for many of the problems we now see. SCiO delivers a masters-level apprenticeship and a certification in systems practice.
See also
Further reading
Ascott, Roy (1967). Behaviourist Art and the Cybernetic Vision. Cybernetica, Journal of the International Association for Cybernetics (Namur), 10, pp. 25–56
François, Charles (1999). "Systemics and cybernetics in a historical perspective". In: Systems Research and Behavioral Science. Vol 16, pp. 203–219 (1999)
Hayles, N. Katherine (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics, Chicago: The University of Chicago Press. ISBN 9780226321462
Heylighen, Francis, and Cliff Joslyn (2002). "Cybernetics and Second Order Cybernetics", in: R.A. Meyers (ed.), Encyclopedia of Physical Science & Technology (3rd ed.), Vol. 4, (Academic Press, San Diego), p. 155-169.
Ilgauds, Hans Joachim (1980), Norbert Wiener, Leipzig.
Mariátegui, José-Carlos / Maulen, D. (eds.) Special issue on “Cybernetics in Latin America: Contexts Developments, Perceptions and Impacts”, AI & Society, 37, 2022.
von Foerster, Heinz (1995). Ethics and Second-Order Cybernetics.
Notes
References
External links
General
Norbert Wiener and Stefan Odobleja - A Comparative Analysis
Reading List for Cybernetics
Principia Cybernetica Web
Web Dictionary of Cybernetics and Systems
Glossary Slideshow (136 slides)
Societies and Journals
American Society for Cybernetics
IEEE Systems, Man, & Cybernetics Society
International Society for Cybernetics and Systems Research
The Cybernetics Society
Transhumanism
Science and technology studies
Automation
Bioinformatics

Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The subsequent process of analyzing and interpreting data is often referred to as computational biology, though the distinction between the two terms is often disputed.
Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. They include reusable analysis "pipelines", particularly in the field of genomics, such as for the identification of genes and single nucleotide polymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences.
Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions.
History
The first definition of the term bioinformatics was coined by Paulien Hogeweg and Ben Hesper in 1970, to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems).
Bioinformatics and computational biology involve the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology.
Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics.
Sequences
There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less.
Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. Margaret Oakley Dayhoff, a pioneer in the field, compiled one of the first protein sequence databases, initially published as books as well as methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences released online with Tai Te Wu between 1980 and 1991.
In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well-known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses, and thus provided proof of the concept that bioinformatics would be insightful.
Goals
In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide and amino acid sequences, protein domains, and protein structures.
Important sub-disciplines within bioinformatics and computational biology include:
Development and implementation of computer programs to efficiently access, manage, and use various types of information.
Development of new mathematical algorithms and statistical measures to assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences.
The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include: pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, the modeling of evolution and cell division/mitosis.
Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes.
Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.
Sequence analysis
Since the bacteriophage Phage Φ-X174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billion nucleotides.
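As a sketch of what sequence comparison involves under the hood, here is the classic Needleman-Wunsch dynamic-programming recurrence for a global alignment score. The scoring values are illustrative defaults; production tools such as BLAST use faster local-alignment heuristics rather than this exhaustive approach.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Score of the best global alignment of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    F = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        F[i][0] = i * gap  # aligning a prefix of `a` against gaps
    for j in range(cols):
        F[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            F[i][j] = max(diag, F[i - 1][j] + gap, F[i][j - 1] + gap)
    return F[-1][-1]

print(needleman_wunsch("GATTACA", "GCATGCT"))  # toy sequences
```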
DNA sequencing
Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing.
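A caricature of base calling: pick, at each position, the base whose channel gives the strongest signal. Real base callers model noise, cross-talk and phasing and emit quality scores; the intensities below are hypothetical.

```python
def call_bases(trace):
    """Naive base calling by per-position argmax over channel signals."""
    return "".join(max("ACGT", key=lambda base: position[base])
                   for position in trace)

trace = [  # hypothetical per-position channel intensities
    {"A": 0.90, "C": 0.05, "G": 0.03, "T": 0.02},
    {"A": 0.10, "C": 0.70, "G": 0.10, "T": 0.10},
    {"A": 0.20, "C": 0.10, "G": 0.60, "T": 0.10},
]
print(call_bases(trace))  # ACG
```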
Sequence assembly
Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research.
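The overlap-and-merge idea can be sketched with a toy greedy assembler: repeatedly merge the two reads with the longest suffix-prefix overlap. The reads and the minimum-overlap cutoff below are illustrative; real assemblers use far more sophisticated graph-based methods.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` matching a prefix of `b`."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(fragments):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(fragments)
    while len(reads) > 1:
        k_best, i_best, j_best = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b)
                    if k > k_best:
                        k_best, i_best, j_best = k, i, j
        if k_best == 0:  # no remaining overlaps: assembly has gaps
            return "".join(reads)
        merged = reads[i_best] + reads[j_best][k_best:]
        reads = [r for x, r in enumerate(reads) if x not in (i_best, j_best)]
        reads.append(merged)
    return reads[0]

reads = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]
print(greedy_assemble(reads))  # ATTAGACCTGCCGGAATAC
```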
Genome annotation
In genomics, annotation refers to the process of marking the start and stop regions of genes and other biological features in a sequenced DNA molecule. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics.
Genome annotation can be classified into three levels: the nucleotide, protein, and process levels.
Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome.
The principal aim of protein-level annotation is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function.
Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been the inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem.
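As a concrete illustration of the nucleotide level, the simplest form of gene finding is an open-reading-frame (ORF) scan. The sketch below handles only the naive prokaryote-style case on the forward strand; real ab initio predictors use statistical models of codon bias, splice sites and more. The toy sequence and length cutoff are illustrative.

```python
def find_orfs(seq, min_codons=30):
    """Yield (start, end, coding_codons) for stretches that begin with
    an ATG start codon and run to the first in-frame stop codon."""
    stops = {"TAA", "TAG", "TGA"}
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in stops:
                n = (i - start) // 3  # codons from ATG up to the stop
                if n >= min_codons:
                    yield start, i + 3, n
                start = None

print(list(find_orfs("CCATGAAATTTGGGTAACC", min_codons=1)))  # [(2, 17, 4)]
```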
The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium Haemophilus influenzae. The system identifies the genes encoding all proteins, transfer RNAs, and ribosomal RNAs in order to make initial functional assignments. The GeneMark program, trained to find protein-coding genes in Haemophilus influenzae, is constantly changing and improving.
Following the goals that the Human Genome Project left to achieve after its closure in 2003, the ENCODE project was developed by the National Human Genome Research Institute. This project is a collaborative effort to catalogue the functional elements of the human genome using next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to generate large amounts of data automatically at a dramatically reduced per-base cost while retaining the same accuracy (base call error) and fidelity (assembly error).
Gene function prediction
While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes. In fact, most gene function prediction methods focus on protein sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein-protein interactions.
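The transmembrane example can be sketched with a Kyte-Doolittle sliding-window hydropathy scan. The 19-residue window and the cutoff of 1.6 follow the commonly cited rule of thumb; the test sequence is a hypothetical hydrophobic stretch.

```python
# Kyte-Doolittle hydropathy values (1982)
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def transmembrane_windows(protein, window=19, threshold=1.6):
    """Yield (offset, mean hydropathy) for windows hydrophobic enough
    to be candidate transmembrane segments."""
    scores = [KD[aa] for aa in protein.upper()]
    for i in range(len(scores) - window + 1):
        mean = sum(scores[i:i + window]) / window
        if mean > threshold:
            yield i, round(mean, 2)

toy = "MK" + "LIV" * 8 + "DE"  # hypothetical protein with a hydrophobic core
print(list(transmembrane_windows(toy)))  # flags windows over the LIV stretch
```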
Computational evolutionary biology
Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to:
trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone,
compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation,
build complex computational population genetics models to predict the outcome of the system over time
track and share information on an increasingly large number of species and organisms
Future work endeavours to reconstruct the now more complex tree of life.
Comparative genomics
The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.
Many of these studies are based on the detection of sequence homology to assign sequences to protein families.
Pan genomics
Pan genomics is a concept introduced in 2005 by Tettelin and Medini. The pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context such as a genus or phylum. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the dispensable/flexible genome, a set of genes present in only some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species.
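In set terms, the core/dispensable split is just intersection versus union over the genomes' gene families. A minimal sketch; the strain names and gene assignments are hypothetical.

```python
def pan_genome(gene_sets):
    """Split a pan genome into core and dispensable parts, given a
    mapping from genome name to its set of gene families."""
    genomes = list(gene_sets.values())
    pan = set().union(*genomes)            # every family seen anywhere
    core = set.intersection(*genomes)      # families shared by all strains
    return pan, core, pan - core           # dispensable = pan minus core

strains = {  # hypothetical strain/gene assignments
    "strain_A": {"dnaA", "gyrB", "blaZ"},
    "strain_B": {"dnaA", "gyrB", "mecA"},
    "strain_C": {"dnaA", "gyrB"},
}
pan, core, dispensable = pan_genome(strains)
print(sorted(core))         # ['dnaA', 'gyrB']
print(sorted(dispensable))  # ['blaZ', 'mecA']
```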
Genetics of disease
As of 2013, the existence of efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders catalogued in the Online Mendelian Inheritance in Man database, but complex diseases are more difficult. Association studies have found many individual genetic regions that are each weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment, such as uncertainty about which genes are important and the instability of the choices an algorithm provides.
Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of genetic association of rare variants analysis of whole genome sequencing studies. Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes.
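A minimal sketch of the collapsing idea behind many rare-variant tests follows: per-sample rare-allele counts are weighted by hypothetical functional-annotation scores and compared between cases and controls with a simple Welch t-test. Production methods use more careful statistics, and all data here are simulated.

# Minimal sketch: an annotation-weighted rare-variant burden test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_cases, n_controls, n_variants = 100, 100, 20
# Genotypes: rare-allele counts (0/1/2) per sample for one gene's variants.
controls = rng.binomial(2, 0.005, size=(n_controls, n_variants))
cases = rng.binomial(2, 0.015, size=(n_cases, n_variants))  # enriched

# Hypothetical functional-annotation weights (e.g. predicted deleteriousness).
weights = rng.uniform(0.2, 1.0, size=n_variants)

burden_cases = cases @ weights        # one weighted burden score per sample
burden_controls = controls @ weights

t_stat, p_value = stats.ttest_ind(burden_cases, burden_controls,
                                  equal_var=False)
print(f"burden test: t = {t_stat:.2f}, p = {p_value:.3g}")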
Analysis of mutations in cancer
In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment. The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
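The sketch below gives a bare-bones flavour of change-point analysis: it finds the single split of a simulated copy-number log-ratio signal that minimises the within-segment sum of squared errors. Real methods (hidden Markov models, circular binary segmentation) model noise explicitly and find multiple breakpoints.

# Minimal sketch: single change-point detection in a copy-number log-ratio
# signal, by minimising the residual sum of squares over all split points.
import numpy as np

def one_change_point(signal):
    best_split, best_rss = None, np.inf
    for k in range(1, len(signal)):
        left, right = signal[:k], signal[k:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_split, best_rss = k, rss
    return best_split

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Simulated probes: normal copy number, then a gained segment
    # (+0.58, roughly log2 of a 3:2 copy-number ratio).
    signal = np.concatenate([rng.normal(0.0, 0.2, 60),
                             rng.normal(0.58, 0.2, 40)])
    print("estimated breakpoint at probe", one_change_point(signal))  # near 60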
Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations, which need to be distinguished from passenger mutations.
Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer-driving mutations in the genome. Furthermore, tracking patients as the disease progresses may be possible in the future by sequencing cancer samples over time. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.
Gene and protein expression
Analysis of gene expression
The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
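A toy version of that cancerous-versus-normal comparison is sketched below: per-gene log2 fold changes with Welch t-tests across simulated replicate arrays. The gene names, effect sizes, and thresholds are illustrative; real analyses add normalisation and multiple-testing correction.

# Minimal sketch: flag up-/down-regulated genes between two conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
genes = ["MYC", "TP53", "GAPDH", "EGFR"]
# Simulated log2 expression: 4 genes x 5 replicate arrays per condition.
base = np.array([8.0, 7.0, 10.0, 6.0])
normal = base + rng.normal(0, 0.3, size=(5, 4))
cancer = base + np.array([2.0, -1.5, 0.0, 1.0]) + rng.normal(0, 0.3, size=(5, 4))

for j, gene in enumerate(genes):
    log2_fc = cancer[:, j].mean() - normal[:, j].mean()
    _, p = stats.ttest_ind(cancer[:, j], normal[:, j], equal_var=False)
    call = "up" if log2_fc > 1 else "down" if log2_fc < -1 else "-"
    print(f"{gene:6s} log2FC={log2_fc:+.2f} p={p:.3g} {call}")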
Analysis of protein expression
Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as microarrays targeted at mRNA; the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples where multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays.
Analysis of regulation
Gene regulation is a complex process where a signal, such as an extracellular hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process.
For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments.
Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can be then applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods.
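The sketch below shows the clustering step in miniature, applying scikit-learn's k-means to a simulated expression matrix; the choice of three clusters is an assumption the analyst must make (or tune with criteria such as silhouette scores).

# Minimal sketch: k-means clustering of gene expression profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# 30 genes x 6 conditions; three underlying co-expression patterns.
patterns = np.array([[0, 0, 5, 5, 0, 0],   # induced mid-series
                     [5, 4, 3, 2, 1, 0],   # steadily repressed
                     [0, 1, 2, 3, 4, 5]])  # steadily induced
expression = np.repeat(patterns, 10, axis=0) + rng.normal(0, 0.4, (30, 6))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(expression)
for c in range(3):
    members = np.where(labels == c)[0]
    print(f"cluster {c}: genes {members.tolist()}")
# Co-clustered genes are candidates for sharing regulatory elements, so
# their promoters could next be scanned for over-represented motifs.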
Analysis of cellular organization
Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases.
Microscopy and image analysis
Microscopic pictures allow for locating both organelles and molecules, which may be the source of abnormalities in disease.
Protein localization
Finding the location of proteins allows us to predict what they do. This is called protein function prediction. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well developed protein subcellular localization prediction resources available, including protein subcellular location databases, and prediction tools.
Nuclear organization of chromatin
Data from high-throughput chromosome conformation capture experiments, such as Hi-C and ChIA-PET, can provide information on the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as Topologically Associating Domains (TADs), that are organised together in three-dimensional space.
Structural bioinformatics
Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition in which research groups from around the world submit predicted models for proteins whose experimentally determined structures are withheld, allowing the predictions to be evaluated blindly.
Amino acid sequence
The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfolded protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the secondary, tertiary and quaternary structure. A viable general solution to the prediction of protein structure from sequence remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time.
Homology
In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins.
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes and shared ancestor.
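A toy version of homology-based annotation transfer is sketched below using Biopython's pairwise aligner: if an unannotated sequence aligns to an annotated one above a crude identity threshold, the known function is tentatively transferred. The 40% cutoff and the sequences are illustrative assumptions, not standards.

# Minimal sketch: transfer a functional annotation from a characterised
# protein to an uncharacterised one when crude sequence identity is high.
from Bio.Align import PairwiseAligner

def crude_identity(seq_a, seq_b):
    """Fraction of identically aligned residues (match=1, mismatch/gap=0)."""
    aligner = PairwiseAligner()
    aligner.mode = "global"
    return aligner.score(seq_a, seq_b) / max(len(seq_a), len(seq_b))

def transfer_annotation(known, unknown_seq, threshold=0.4):
    """Return the best annotation guess for unknown_seq, or None."""
    best_seq, best_function = max(
        known.items(), key=lambda item: crude_identity(item[0], unknown_seq))
    identity = crude_identity(best_seq, unknown_seq)
    return best_function if identity >= threshold else None

if __name__ == "__main__":
    # Hypothetical annotated proteins (sequence -> known function).
    known = {"MKVLAAGHKDE": "oxygen transport",
             "MSTNPKPQRKT": "protease"}
    print(transfer_annotation(known, "MKVLSAGHKDE"))  # -> 'oxygen transport'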
Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling.
Another aspect of structural bioinformatics is the use of protein structures for virtual screening models such as quantitative structure-activity relationship (QSAR) models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in simulations of, for example, ligand-binding studies and in silico mutagenesis studies.
AlphaFold, deep-learning-based software developed by Google's DeepMind, greatly outperforms all other prediction methods, and predicted structures for hundreds of millions of proteins have been released in the AlphaFold Protein Structure Database since 2021.
Network and systems biology
Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both.
Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
Molecular interaction networks
Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field.
Other interactions encountered in the field include protein–ligand (including drug) and protein–peptide interactions. Molecular dynamics simulation of the movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions.
Biodiversity informatics
Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools. A growing area is also macro-ecology, i.e. the study of how biodiversity is connected to ecology and human impact, such as climate change.
Others
Literature analysis
The enormous volume of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:
Abbreviation recognition – identify the long-form and abbreviation of biological terms
Named-entity recognition – recognizing biological terms such as gene names
Protein–protein interaction – identify which proteins interact with which proteins from text
The area of research draws from statistics and computational linguistics.
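As a trivial taste of the named-entity recognition task listed above, the sketch below extracts strings shaped like human gene symbols with a regular expression; real biomedical NER relies on curated lexicons and machine-learned models precisely because such patterns over- and under-match badly.

# Minimal sketch: naive gene-symbol spotting with a regular expression.
import re

# Human gene symbols are typically 2-6 uppercase letters/digits (e.g. TP53).
GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{1,5}\b")

# A plausible abstract-like sentence; "DNA" shows a classic false positive.
text = ("We found that BRCA1 and TP53 interact with MDM2, "
        "affecting DNA repair in tumour cells.")

candidates = set(GENE_PATTERN.findall(text))
print(sorted(candidates))  # includes real symbols and false hits like 'DNA'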
High-throughput image analysis
Computational technologies are used to automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems can improve an observer's accuracy, objectivity, or speed. Image analysis is important for both diagnostics and research. Some examples are:
high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, Bioimage informatics)
morphometrics
clinical image analysis and visualization
determining the real-time air-flow patterns in breathing lungs of living animals
quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury
making behavioral observations from extended video recordings of laboratory animals
infrared measurements for metabolic activity determination
inferring clone overlaps in DNA mapping, e.g. the Sulston score
High-throughput single cell data analysis
Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.
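One simple way to find such populations is model-based clustering of the per-cell measurements. The sketch below fits a two-component Gaussian mixture to simulated two-channel data; real cytometry workflows add compensation, transformation, and many more channels.

# Minimal sketch: separating two cell populations in flow cytometry data
# with a Gaussian mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Simulated (forward scatter, side scatter) values for two populations.
lymphocyte_like = rng.normal([2.0, 1.0], 0.2, size=(500, 2))
monocyte_like = rng.normal([3.5, 2.5], 0.3, size=(200, 2))
cells = np.vstack([lymphocyte_like, monocyte_like])

gmm = GaussianMixture(n_components=2, random_state=0).fit(cells)
labels = gmm.predict(cells)
for k in range(2):
    print(f"population {k}: {np.sum(labels == k)} cells, "
          f"mean = {gmm.means_[k].round(2)}")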
Ontologies and data integration
Biological ontologies are directed acyclic graphs of controlled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis.
The OBO Foundry was an effort to standardise certain ontologies. One of the most widespread is the Gene Ontology, which describes gene function. There are also ontologies which describe phenotypes.
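Because such ontologies are directed acyclic graphs, annotating a gene with one term implicitly annotates it with every ancestor term. The sketch below computes that closure over a tiny invented fragment of a cellular-component hierarchy.

# Minimal sketch: propagate an annotation up a toy ontology DAG.
# Each term maps to its parent terms (a term may have several parents).
PARENTS = {
    "mitochondrion": ["organelle"],
    "organelle": ["intracellular part"],
    "intracellular part": ["cellular component"],
    "cellular component": [],
}

def ancestors(term, graph=PARENTS):
    """All terms reachable by following parent links (transitive closure)."""
    found = set()
    stack = list(graph.get(term, []))
    while stack:
        parent = stack.pop()
        if parent not in found:
            found.add(parent)
            stack.extend(graph.get(parent, []))
    return found

# A gene annotated to 'mitochondrion' is implicitly annotated to every
# ancestor, which is what makes ontology-based enrichment analysis work.
print(sorted({"mitochondrion"} | ancestors("mitochondrion")))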
Databases
Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases differ in their format and access mechanism, and can be public or private.
Some of the most commonly used databases are listed below:
Used in biological sequence analysis: GenBank, UniProt
Used in structure analysis: Protein Data Bank (PDB)
Used in finding protein families and motifs: InterPro, Pfam
Used for next-generation sequencing: Sequence Read Archive
Used in network analysis: metabolic pathway databases (KEGG, BioCyc), interaction analysis databases, functional networks
Used in design of synthetic genetic circuits: GenoCAD
Software and tools
Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web-services. They are made by bioinformatics companies or by public institutions.
Open-source bioinformatics software
Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases have created opportunities for research groups to contribute to both bioinformatics and the range of open-source software available, regardless of their funding arrangements. The open-source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration.
Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD.
The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software.
Web services in bioinformatics
SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads.
Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single web-based interface to integrative, distributed and extensible bioinformatics workflow management systems.
Bioinformatics workflow management systems
A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a Bioinformatics application. Such systems are designed to
provide an easy-to-use environment for individual application scientists themselves to create their own workflows,
provide interactive tools for the scientists enabling them to execute their workflows and view their results in real-time,
simplify the process of sharing and reusing workflows between the scientists, and
enable scientists to track the provenance of the workflow execution results and the workflow creation steps.
Some of the platforms giving this service: Galaxy, Kepler, Taverna, UGENE, Anduril, HIVE.
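The sketch below illustrates the core ideas, ordered execution plus a provenance trail, on a toy two-step workflow with invented step names; systems such as Galaxy or Taverna do this at scale with graphical interfaces and persistent provenance stores.

# Minimal sketch: a toy workflow engine with provenance tracking.
import time

def run_workflow(steps):
    """Execute steps in order, threading outputs and logging provenance."""
    data, provenance = None, []
    for name, func in steps:
        started = time.time()
        data = func(data)
        provenance.append({"step": name, "output": data,
                           "seconds": round(time.time() - started, 4)})
    return data, provenance

# Two hypothetical analysis steps: "load" yields sequences, "gc" summarises.
steps = [
    ("load_sequences", lambda _: ["ATGC", "GGGC", "ATAT"]),
    ("mean_gc_content", lambda seqs: sum(
        (s.count("G") + s.count("C")) / len(s) for s in seqs) / len(seqs)),
]

result, log = run_workflow(steps)
print("result:", result)
for entry in log:   # the provenance trail answers "how was this produced?"
    print(entry)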
BioCompute and BioCompute Objects
In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University.
It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse, of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff.
In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was released as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute Object allows for a JSON-formatted record to be shared among employees, collaborators, and regulators.
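The sketch below gives a rough sense of what such a JSON-serialised record might contain; the field names are invented for illustration and do not follow the official BioCompute Object schema.

# Minimal sketch: a JSON 'lab notebook' record for a pipeline run.
# Field names here are invented for illustration, not the official
# BioCompute Object specification.
import json

record = {
    "name": "example variant-calling run",
    "created": "2024-01-15T12:00:00Z",
    "authors": ["A. Researcher"],
    "pipeline_steps": [
        {"step": 1, "tool": "aligner", "version": "1.0",
         "inputs": ["reads.fastq"], "outputs": ["aln.bam"]},
        {"step": 2, "tool": "variant-caller", "version": "2.1",
         "inputs": ["aln.bam"], "outputs": ["variants.vcf"]},
    ],
    "parameters": {"min_mapping_quality": 30},
}

serialised = json.dumps(record, indent=2)
print(serialised)  # shareable with collaborators, reviewers, regulators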
Education platforms
Bioinformatics is not only taught as an in-person master's degree at many universities. The computational nature of bioinformatics lends it to computer-aided and online learning. Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on their website under a Creative Commons license. The 4273π project (also written 4273pi) also offers open source educational materials for free. The course runs on low-cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system.
MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization at the University of California, San Diego, Genomic Data Science Specialization at Johns Hopkins University, and EdX's Data Analysis for Life Sciences XSeries at Harvard University.
Conferences
There are several large conferences that are concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).
See also
References
Further reading
Sehgal et al.: Structural, phylogenetic and docking studies of D-amino acid oxidase activator (DAOA), a candidate schizophrenia gene. Theoretical Biology and Medical Modelling 2013, 10:3.
Achuthsankar S Nair, Computational Biology & Bioinformatics – A Gentle Overview, Communications of Computer Society of India, January 2007.
Aluru, Srinivas, ed. Handbook of Computational Molecular Biology. Chapman & Hall/Crc, 2006. (Chapman & Hall/Crc Computer and Information Science Series)
Baldi, P and Brunak, S, Bioinformatics: The Machine Learning Approach, 2nd edition. MIT Press, 2001.
Barnes, M.R. and Gray, I.C., eds., Bioinformatics for Geneticists, first edition. Wiley, 2003.
Baxevanis, A.D. and Ouellette, B.F.F., eds., Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins, third edition. Wiley, 2005.
Baxevanis, A.D., Petsko, G.A., Stein, L.D., and Stormo, G.D., eds., Current Protocols in Bioinformatics. Wiley, 2007.
Cristianini, N. and Hahn, M. Introduction to Computational Genomics, Cambridge University Press, 2006.
Durbin, R., S. Eddy, A. Krogh and G. Mitchison, Biological sequence analysis. Cambridge University Press, 1998.
Keedwell, E., Intelligent Bioinformatics: The Application of Artificial Intelligence Techniques to Bioinformatics Problems. Wiley, 2005.
Kohane, et al. Microarrays for an Integrative Genomics. The MIT Press, 2002.
Lund, O. et al. Immunological Bioinformatics. The MIT Press, 2005.
Pachter, Lior and Sturmfels, Bernd. "Algebraic Statistics for Computational Biology" Cambridge University Press, 2005.
Pevzner, Pavel A. Computational Molecular Biology: An Algorithmic Approach The MIT Press, 2000.
Soinov, L. Bioinformatics and Pattern Recognition Come Together. Journal of Pattern Recognition Research (JPRR), Vol 1 (1), 2006, pp. 37–41.
Stevens, Hallam, Life Out of Sequence: A Data-Driven History of Bioinformatics, Chicago: The University of Chicago Press, 2013.
Tisdall, James. "Beginning Perl for Bioinformatics" O'Reilly, 2001.
Catalyzing Inquiry at the Interface of Computing and Biology (2005) CSTB report
Calculating the Secrets of Life: Contributions of the Mathematical Sciences and computing to Molecular Biology (1995)
Foundations of Computational and Systems Biology MIT Course
Computational Biology: Genomes, Networks, Evolution Free MIT Course
External links
Bioinformatics Resource Portal (SIB)
Nature therapy
Nature therapy, sometimes referred to as ecotherapy, forest therapy, forest bathing, grounding, earthing, Shinrin-Yoku or Sami Lok, is a practice that describes a broad group of techniques or treatments using nature to improve mental or physical health. Spending time in nature has various physiological benefits such as relaxation and stress reduction. Additionally, it can enhance cardiovascular health and reduce risks of high blood pressure.
History
Scientists in the 1950s looked into the reasons humans chose to spend time in nature. The term Shinrin-yoku, or "forest bathing", has gained momentum as a term and concept within American culture only relatively recently. It was first popularized in Japan in 1982 by Tomohide Akiyama, then head of the Japanese Ministry of Agriculture, Forestry, and Fisheries, to encourage more people to visit the forests.
Health effects
Mood
Nature therapy can reduce stress and improve a person's mood.
Forest therapy has been linked to some physiological benefits as indicated by neuroimaging and the profile of mood states psychological test.
Stress and depression
Interaction with nature can decrease stress and depression. Forest therapy might help stress management for all age groups.
Social horticulture could help with depression and other mental health problems in people affected by PTSD or abuse, lonely elderly people, people with drug or alcohol addiction, blind people, and other people with special needs. Nature therapy could also improve self-management, self-esteem, social relations and skills, socio-political awareness and employability. Nature therapy could reduce aggression and improve relationship skills.
Other possible benefits
Nature therapy could help with general medical recovery, pain reduction, Attention Deficit/Hyperactivity Disorder, dementia, obesity, and vitamin D deficiency. Interactions with nature environments enhance social connections, stewardship, sense of place, and increase environmental participation. Connecting with nature also addresses needs such as intellectual capacity, emotional bonding, creativity, and imagination. Overall, there seems to be benefits to time spent in nature including memory, cognitive flexibility, and attention control.
Research also suggests that childhood experiences in nature are crucial for children's development, contributing to several developmental outcomes and various domains of their well-being. These experiences also foster an intrinsic care for nature.
Criticism
A 2012 systematic review showed inconclusive results related to the methodology used across studies. Spending time in forests demonstrated positive health effects, but not enough to generate clinical practice guidelines or demonstrate causality. Additionally, some researchers have expressed concern that the restorative effect of time spent in nature is highly personal and entirely unpredictable. Nature can also be harmed in the process of human interaction.
Governmental support and professionalization
In Finland, researchers recommend five hours a month in nature to reduce depression, alcoholism, and suicide. South Korea has a nature therapy program for firefighters with post-traumatic stress disorder. Canadian physicians can also "prescribe nature" to patients with mental and physical health problems encouraging them to get into nature more.
References
Therapy
Forestry
Fringe science
Pseudoscience
Escapism
Escapism is mental diversion from unpleasant aspects of daily life, typically through activities involving imagination or entertainment. Escapism also may be used to occupy one's self away from persistent feelings of depression or general sadness.
Perceptions
Entire industries have sprung up to foster a growing tendency of people to remove themselves from the rigors of daily life – especially into the digital world. Many activities that are normal parts of a healthy existence (e.g., eating, sleeping, exercise, sexual activity) can also become avenues of escapism when taken to extremes or out of proper context; and as a result the word "escapism" often carries a negative connotation, suggesting that escapists are unhappy, with an inability or unwillingness to connect meaningfully with the world and to take necessary action. Indeed, the Oxford English Dictionary defined escapism as "The tendency to seek, or the practice of seeking, distraction from what normally has to be endured".
However, many challenge the idea that escapism is fundamentally and exclusively negative. C. S. Lewis was fond of humorously remarking that the usual enemies of escape were jailers and considered that, used in moderation, escapism could serve both to refresh and to expand the imaginative powers. Similarly, J. R. R. Tolkien argued for escapism in fantasy literature as the creative expression of reality within a secondary (imaginative) world (but also emphasised that they required an element of horror in them, if they were not to be 'mere escapism'). Terry Pratchett considered that the twentieth century had seen the development over time of a more positive view of escapist literature. Apart from literature, music and video games have been seen and valued as artistic media of escape, too.
Psychological escapes
Freud considered a quota of escapist fantasy a necessary element in the life of humans: "[T]hey cannot subsist on the scanty satisfaction they can extort from reality. 'We simply cannot do without auxiliary constructions,' Theodor Fontane once said." His followers saw rest and wish fulfilment (in small measures) as useful tools in adjusting to traumatic upset, while later psychologists have highlighted the role of vicarious distractions in shifting unwanted moods, especially anger and sadness.
However, if permanent residence is taken up in some such psychic retreat, the results will often be negative and even pathological. Some forms of escapism can occur with mind-altering drugs, which make the user forget the reality of where they are or what they are meant to be doing.
Escapist societies
Some social critics warn of attempts by the powers that control society to provide means of escapism instead of bettering the condition of the people – what Juvenal called "bread and circuses".
Social philosopher Ernst Bloch wrote that utopias and images of fulfillment, however regressive they might be, also included an impetus for a radical social change. According to Bloch, social justice could not be realized without seeing things fundamentally differently. Something that is mere "daydreaming" or "escapism" from the viewpoint of a technological-rational society might be a seed for a new and more humane social order, as it can be seen as an "immature, but honest substitute for revolution".
Escapist societies appear often in literature. The Time Machine depicts the Eloi, a lackadaisical, insouciant race of the future, and the horror of their blithely escapist way of life. The novel subtly criticizes capitalism, or at least classism, as a means of escape. Escapist societies are common in dystopian novels; for example, in the Fahrenheit 451 society, television and "seashell radios" are used to escape a life with strict regulations and the threat of a forthcoming war. In science fiction media, escapism is often depicted as an extension of social evolution, as society becomes detached from physical reality and progresses into a virtual one; examples include the virtual world of Oz in the 2009 Japanese animated science fiction film Summer Wars and the game "Society" in the 2009 American science fiction film Gamer, a play on the real-life MMO game Second Life. Other escapist societies in literature include The Reality Bug by D. J. MacHale, where an entire civilization leaves their world in ruin while they 'jump' into their perfect realities. The aim of the anti-hero becomes a quest to make these realities seem less perfect so as to regain control over their dying planet.
Escape scale
The Norwegian psychologist Frode Stenseng has presented a dualistic model of escapism in relation to different types of activity engagements. He discusses the paradox that the flow state (Csikszentmihalyi) resembles psychological states obtainable through actions such as drug abuse, sexual masochism, and suicide ideation (Baumeister). Accordingly, he deduces that the state of escape can have both positive and negative meanings and outcomes. Stenseng argues that there exist two forms of escapism with different affective outcomes dependent on the motivational focus that lies behind the immersion in the activity. Escapism in the form of self-suppression stems from motives to run away from unpleasant thoughts, self-perceptions, and emotions, whereas self-expansion stems from motives to gain positive experiences through the activity and to discover new aspects of self. Stenseng has developed the "escape scale" to measure self-suppression and self-expansion in people's favorite activities, such as sports, arts, and gaming. Empirical investigations of the model have shown that:
the two dimensions are distinctively different with regard to affective outcomes
some individuals are more prone to engage through one type of escapism
situational levels of well-being affect the type of escapism that becomes dominant at a specific time
During the Great Depression
Alan Brinkley, author of Culture and Politics in the Great Depression, presents how escapism became the new trend for dealing with the hardships created by the stock market crash in 1929: magazines, radio and movies, all were aimed to help people mentally escape from the mass poverty and economic downturn. Life magazine, which became hugely popular during the 1930s, was said to have pictures that give "no indication that there was such a thing as depression; most of the pictures are of bathing beauties and ship launchings and building projects and sports heroes – of almost anything but poverty and unemployment". Famous director Preston Sturges aimed to validate this notion by creating a film called Sullivan's Travels. The film ends with a group of poor destitute men in jail watching a comedic Mickey Mouse cartoon that ultimately lifts their spirits. Sturges aims to point out how "foolish and vain and self-indulgent" it would be to make a film about suffering. Therefore, movies of the time more often than not focused on comedic plot lines that distanced people emotionally from the horrors that were occurring all around them. These films "consciously, deliberately set out to divert people from their problems", but it also diverted them from the problems of those around them.
See also
Bread and circuses
Daydream
Deindividuation
Sehnsucht
Primitivism
Peter Pan syndrome
Quixotism
Self-deception
Utopianism
Wanderlust
Escapist fiction
References
External links
Ernst Bloch, Utopia and Ideology Critique
Aesthetics
Entertainment
Imagination
Utopian movements
Themes of the Romantic Movement
Defence mechanisms
Idealism
Emotions
Emotion
Acute stress reaction
Acute stress reaction (ASR, also known as psychological shock, mental shock, or simply shock) and acute stress disorder (ASD) is a psychological response to a terrifying, traumatic or surprising experience. Combat stress reaction (CSR) is a similar response to the trauma of war. The reactions may include but are not limited to intrusive or dissociative symptoms, and reactivity symptoms such as avoidance or arousal. It may be exhibited for days or weeks after the traumatic event. If the condition is not correctly addressed, it may develop into post-traumatic stress disorder (PTSD).
Diagnostic criteria
The International Classification of Diseases (ICD) treats this condition differently from the Diagnostic and Statistical Manual of Mental Disorders (DSM).
According to the ICD-11, acute stress reaction refers to the symptoms experienced a few hours to a few days after exposure to a traumatic event. In contrast, the DSM-5 defines acute stress disorder by symptoms experienced from three days to one month following the event. Symptoms experienced for longer than one month are consistent with a diagnosis of PTSD under both classifications.
Acute stress reaction per ICD
The ICD-11 MMS gives the following description:
Acute stress disorder per DSM
According to the DSM-5, acute stress disorder requires exposure to actual or threatened death, serious injury, or sexual violation, whether by directly experiencing it, witnessing it in person, learning it occurred to a close family member or friend, or experiencing repeated exposure to aversive details of a traumatic event. In addition to the initial exposure, individuals may also present with a variety of symptoms that fall within several clusters: intrusion, negative mood, dissociation, avoidance of distressing memories, and emotional arousal. Intrusion symptoms include recurring and distressing dreams, flashbacks, or memories related to the traumatic event and related somatic symptoms. Negative mood refers to one's inability to experience positive emotions such as happiness or satisfaction. Dissociative symptoms include a sense of numbing or detachment from emotional reactions, a sense of physical detachment, decreased awareness of one's surroundings, the perception that one's environment is unreal or dreamlike, and the inability to recall critical aspects of the traumatic event (dissociative amnesia). Emotional arousal symptoms include sleep disturbances, hypervigilance, difficulties with concentration, an exaggerated startle response, and irritability. Symptoms must last for at least three consecutive days after trauma exposure for acute stress disorder to be diagnosed; if symptoms persist past one month, a diagnosis of PTSD should be considered. The presenting symptoms must also cause significant impairment in multiple domains of one's life.
Additional diagnoses that may develop from acute stress disorder include depression, anxiety, mood disorders, and substance abuse problems. Untreated acute stress disorder can also lead to the development of post-traumatic stress disorder.
Diagnostic assessment
Evaluation of patients is done through close examination of emotional response. Using self-report from patients is a large part of diagnosing acute stress disorder, as acute stress is the result of reactions to stressful situations.
Development and course
There are several theoretical perspectives on trauma response, including cognitive, biological, and psycho-biological. While PTSD-specific, these theories are still useful in understanding acute stress disorder, as the two disorders share many symptoms. A recent study found that even a single stressful event may have long-term consequences on cognitive function. This result calls the traditional distinction between the effects of acute and chronic stress into question.
Risk factors
Risk factors for developing acute stress disorder include a previously existing mental health diagnosis, avoidant coping mechanisms, and exaggerated appraisals of events. Additional factors include prior trauma history and heightened emotional reactivity. The DSM-5 specifies a higher prevalence of acute stress disorder among females than males, attributing this to neurobiological gender differences in stress response and to a higher risk of experiencing traumatic events. Critics contend that the latter assumption reflects the influence of the Duluth Model on the legal cultures of the relevant demographics and has not been revisited in the DSM, and that, combined with pervasive social stigmas and double standards surrounding male mental health in many areas of the world, acute stress disorder and PTSD remain under-reported and under-diagnosed in male populations.
Types
Sympathetic
Sympathetic acute stress disorder is caused by the release of excessive adrenaline and norepinephrine into the nervous system. These hormones may speed up a person's pulse and respiratory rate, dilate pupils, or temporarily mask pain. This type of ASR developed as an evolutionary advantage to help humans survive dangerous situations. The "fight or flight" response may allow for temporarily-enhanced physical output, even in the face of severe injury. However, other physical illnesses become more difficult to diagnose, as ASR masks the pain and other vital signs that would otherwise be symptomatic.
Parasympathetic
Parasympathetic acute stress disorder is characterised by feeling faint and nauseated. This response is fairly often triggered by the sight of blood. In this stress response, the body releases acetylcholine. In many ways, this reaction is the opposite of the sympathetic response, in that it slows the heart rate and can cause the patient to either regurgitate or temporarily lose consciousness. The evolutionary value of this is unclear, although it may have allowed for prey to appear dead to avoid being eaten.
Pathophysiology
Stress is characterised by specific physiological responses to adverse or noxious stimuli.
Hans Selye was the first to coin the term "general adaptation syndrome" to suggest that stress-induced physiological responses proceed through the stages of alarm, resistance, and exhaustion.
The sympathetic branch of the autonomic nervous system gives rise to a specific set of physiological responses to physical or psychological stress. The body's response to stress is also termed a "fight or flight" response, and it is characterised by an increase in blood flow to the skeletal muscles, heart, and brain, a rise in heart rate and blood pressure, dilation of pupils, and an increase in the amount of glucose released by the liver.
The onset of an acute stress response is associated with specific physiological actions in the sympathetic nervous system, both directly and indirectly through the release of adrenaline and, to a lesser extent, noradrenaline from the medulla of the adrenal glands. These catecholamine hormones facilitate immediate physical reactions by triggering increases in heart rate and breathing, constricting blood vessels. An abundance of catecholamines at neuroreceptor sites facilitates reliance on spontaneous or intuitive behaviours often related to combat or escape.
Normally, when a person is in a serene, non-stimulated state, the firing of neurons in the locus ceruleus is minimal. A novel stimulus, once perceived, is relayed from the sensory cortex of the brain through the thalamus to the brain stem. That route of signalling increases the rate of noradrenergic activity in the locus ceruleus, and the person becomes more alert and attentive to their environment.
If a stimulus is perceived as a threat, a more intense and prolonged discharge of the locus ceruleus activates the sympathetic division of the autonomic nervous system. The activation of the sympathetic nervous system leads to the release of norepinephrine from nerve endings acting on the heart, blood vessels, respiratory centers, and other sites. The ensuing physiological changes constitute a major part of the acute stress response. The other major player in the acute stress response is the hypothalamic-pituitary-adrenal axis. Stress activates this axis and produces neuro-biological changes. These chemical changes increase the chances of survival by bringing the physiological system back to homeostasis.
The autonomic nervous system controls all automatic functions in the body and contains two subsections that shape the response to an acute stress reaction: the sympathetic nervous system and the parasympathetic nervous system. The sympathetic response is colloquially known as the "fight or flight" response, indicated by accelerated pulse and respiratory rates, pupil dilation, and a general feeling of anxiety and hyper-awareness. This is caused by the release of epinephrine and norepinephrine from the adrenal glands. The epinephrine and norepinephrine strike the beta receptors of the heart, which feeds the heart's sympathetic nerve fibres to increase the strength of heart muscle contraction; as a result, more blood gets circulated, increasing the heart rate and respiratory rate. The sympathetic nervous system also stimulates the skeletal and muscular systems to pump more blood to those areas to handle the acute stress. Simultaneously, the sympathetic nervous system inhibits the digestive system and the urinary system to optimise blood flow to the heart, lungs, and skeletal muscles. This plays a role in the alarm reaction stage.
The parasympathetic response is colloquially known as the "rest and digest" response, indicated by reduced heart and respiration rates and, more obviously, by a temporary loss of consciousness if the system fires at a rapid rate. The parasympathetic nervous system stimulates the digestive system and urinary system to send more blood to those systems to increase the process of digestion. To do this, it must inhibit the cardiovascular system and respiratory system to optimise blood flow to the digestive tract, causing low heart and respiratory rates. The parasympathetic nervous system plays no direct role in the "fight or flight" response itself.
Studies have shown that patients with acute stress disorder have overactive right amygdalae and prefrontal cortices; both structures are involved in the fear-processing pathway.
Treatment
This disorder may resolve itself with time or may develop into a more severe disorder, such as PTSD. However, results of Creamer, O'Donnell, and Pattison's (2004) study of 363 patients suggest that a diagnosis of acute stress disorder had only limited predictive validity for PTSD. Creamer et al. found that re-experiences of the traumatic event and arousal were better predictors of PTSD. Early pharmacotherapy may prevent the development of post-traumatic symptoms. Additionally, early trauma-focused cognitive behavioural therapy (TF-CBT) for those with a diagnosis of ASD can protect an individual from developing chronic PTSD.
Studies have been conducted to assess the efficacy of counselling and psychotherapy for people with acute stress disorder. Cognitive behavioural therapy, which includes exposure and cognitive restructuring, was found to be effective in preventing PTSD in patients diagnosed with acute stress disorder with clinically significant results at six-month follow-up appointments. A combination of relaxation, cognitive restructuring, imaginal exposure, and in-vivo exposure was superior to supportive counselling. Mindfulness-based stress reduction programmes also appear to be effective for stress management.
The pharmacological approach has made some progress in lessening the effects of ASD. To relax patients and allow for better sleep, Prazosin can be given to patients, which regulates their sympathetic response. Hydrocortisone has shown some success as an early preventative measure following a traumatic event, typically in the treatment of PTSD.
In a wilderness context where counselling, psychotherapy, and cognitive behavioural therapy is unlikely to be available, the treatment for acute stress reaction is very similar to the treatment of cardiogenic shock, vascular shock, and hypovolemic shock; that is, allowing the patient to lie down, providing reassurance, and removing the stimulus that prompted the reaction. In traditional shock cases, this generally means relieving injury pain or stopping blood loss. In an acute stress reaction, this may mean pulling a rescuer away from the emergency to calm down or blocking the sight of an injured friend from a patient.
History
The term "acute stress disorder" (ASD) was first used to describe the symptoms of soldiers during World War I and II, and it was therefore also termed "combat stress reaction" (CSR). Approximately 20% of U.S. troops displayed symptoms of CSR during WWII. It was assumed to be a temporary response of healthy individuals to witnessing or experiencing traumatic events. Symptoms include depression, anxiety, withdrawal, confusion, paranoia, and sympathetic hyperactivity.
The American Psychiatric Association officially included ASD in the DSM-IV in 1994. Before that, symptomatic individuals within the first month of trauma were diagnosed with adjustment disorder.
Initially, one of the goals of introducing ASD was to be able to describe the range of acute stress reactions. One criticism of ASD's focus is that it fails to recognise other distressing emotional reactions, such as depression and shame; under the current system, such reactions may instead be diagnosed as adjustment disorder.
Since its addition to the DSM-IV, questions about the efficacy and purpose of the ASD diagnosis have been raised. The diagnosis of ASD was criticized as an unnecessary addition to the progress of diagnosing PTSD, as some considered it more akin to a sign of PTSD than an independent issue requiring diagnosis. Also, the terms ASD and ASR have been criticized for not fully covering the range of stress reactions.
In animals
Notes
References
Aftermath of war
Shock
Stress-related disorders
Philosophy, politics and economics
Philosophy, politics and economics, or politics, philosophy and economics (PPE), is an interdisciplinary undergraduate or postgraduate degree which combines study from three disciplines. The first institution to offer degrees in PPE was the University of Oxford in the 1920s.
This particular course has produced a significant number of notable graduates such as Aung San Suu Kyi, Burmese politician and State Counsellor of Myanmar, Nobel Peace Prize winner; Princess Haya bint Hussein, daughter of the late King Hussein of Jordan; Christopher Hitchens, the British–American author and journalist; Will Self, British author and journalist; Oscar-winning writer and director Florian Henckel von Donnersmarck; Michael Dummett, Gareth Evans, Philippa Foot, Christopher Peacocke, Gilbert Ryle, and Peter Strawson, philosophers; Harold Wilson, Edward Heath, David Cameron, Liz Truss and Rishi Sunak, Prime Ministers of the United Kingdom; Hugh Gaitskell, Michael Foot, William Hague and Ed Miliband, former Leaders of the Opposition; former Prime Ministers of Pakistan Benazir Bhutto and Imran Khan; and Malcolm Fraser, Bob Hawke and Tony Abbott, former Prime Ministers of Australia. The course received fresh attention in 2017, when Nobel Peace Prize winner Malala Yousafzai earned a place.
In the 1980s, the University of York went on to establish its own PPE degree based upon the Oxford model; King's College London, the University of Warwick, the University of Manchester, and other British universities later followed. According to the BBC, the Oxford PPE "dominate[s] public life" (in the UK). It is now offered at several other leading colleges and universities around the world. More recently, Warwick University and King’s College London added a new degree under the name of PPL (Politics, Philosophy and Law), with the aim of offering an alternative to the more classical PPE degrees.
In the United States, it is offered by over 50 colleges and universities, including three Ivy League schools and a large number of public universities. Harvard University began offering a similar degree in Social Studies in 1960, which combines politics, philosophy, and economics with history and sociology. In 2020, in addition to its undergraduate degree programs in PPE, Virginia Tech joined Chapman University's Smith Institute among the first research centers in the world dedicated to interdisciplinary research in PPE. Several PPE programs exist in Canada, most notably the first endowed school in the nation – the Frank McKenna School of Philosophy, Politics and Economics at Mount Allison University. In Asia, Tsinghua University, Waseda University, NUS, Tel-Aviv University and Ashoka University are among those that have PPE or similar programs.
History
Philosophy, politics and economics was established as a degree course at the University of Oxford in the 1920s as a modern alternative to classics (known as "literae humaniores" or "greats" at Oxford), because classics was no longer thought sufficient preparation for those entering the civil service. It was thus initially known as "modern greats". The first PPE students commenced their course in the autumn of 1921. The regulation by which it was established is Statt. Tit. VI. Sect. 1 C; "the subject of the Honour School of Philosophy, Politics, and Economics shall be the study of the structure, and the philosophical and economic principles, of Modern Society." Initially it was compulsory to study all three subjects for all three years of the course, but in 1970 this requirement was relaxed, and since then students have been able to drop one subject after the first year – most do this, but a minority continue with all three.
During the 1960s some students started to critique the course from a left-wing perspective, culminating in the publication of a pamphlet, The Poverty of PPE, in 1968, written by Trevor Pateman, who argued that it "gives no training in scholarship, only refining to a high degree of perfection the ability to write short dilettantish essays on the basis of very little knowledge: ideal training for the social engineer". The pamphlet advocated incorporating the study of sociology, anthropology and art, and to take on the aim of "assist(ing) the radicalisation and mobilisation of political opinion outside the university". In response, some minor changes were made, with influential leftist writers such as Frantz Fanon and Régis Debray being added to politics reading lists, but the core of the programme remained the same.
Christopher Stray has pointed to the course as one reason for the gradual decline of the study of classics, as classicists in political life began to be edged out by those who had studied the modern greats.
Political theorists Dario Castiglione and Iain Hampsher-Monk have described the course as being fundamental to the development of political thought in the UK since it established a connection between politics and philosophy. Previously at Oxford, and for some time subsequently at Cambridge, politics had been taught only as a branch of modern history.
Course material
The programme is rooted in the view that to understand social phenomena one must approach them from several complementary disciplinary directions and analytical frameworks. In this regard, the study of philosophy is considered important because it both equips students with meta-tools such as the ability to reason rigorously and logically, and facilitates ethical reflection. The study of politics is considered necessary because it acquaints students with the institutions that govern society and help solve collective action problems. Finally, studying economics is seen as vital in the modern world because political decisions often concern economic matters, and government decisions are often influenced by economic events. The vast majority of students at Oxford drop one of the three subjects for the second and third years of their course. Oxford now has more than 600 undergraduates studying the subject, admitting over 200 each year.
Academic opinions
Oxford PPE graduate Nick Cohen and former tutor Iain McLean consider the course's breadth important to its appeal, especially "because British society values generalists over specialists". Academic and Labour peer Maurice Glasman noted that "PPE combines the status of an elite university degree – PPE is the ultimate form of being good at school – with the stamp of a vocational course. It is perfect training for cabinet membership, and it gives you a view of life". However, he also noted that it had an orientation towards consensus politics and technocracy.
Geoffrey Evans, an Oxford fellow in politics and a senior tutor, argues that the course's success, and the excess demand that success generates, is self-perpetuating among those in front of and behind the scenes of national administration, stating: "all in all, it's how the class system works". He laments the inequalities that beset admissions under the current economic system, and thereby the enviable recruitment prospects of successful graduates; the argument is intended as an ethical reflection on how governments and peoples can perpetuate social stratification.
Stewart Wood, a former adviser to Ed Miliband who studied PPE at Oxford in the 1980s and taught politics there in the 1990s and 2000s, acknowledged that the programme has been slow to catch up with contemporary political developments, saying that "it does still feel like a course for people who are going to run the Raj in 1936... In the politics part of PPE, you can go three years without discussing a single contemporary public policy issue". He also stated that the structure of the course gave it a centrist bias, due to the range of material covered: "...most students think, mistakenly, that the only way to do it justice is to take a centre position".
List of offering universities
United Kingdom
England
Birkbeck, University of London
Durham University
Goldsmiths, University of London
Keele University
King's College London
Kingston University
Lancaster University
London School of Economics
The Open University
Royal Holloway, University of London
SOAS University of London
University College London
University of Buckingham
New College of the Humanities at Northeastern
University of East Anglia
University of Essex
University of Exeter
University of Hull
University of Leeds
University of Liverpool
University of Loughborough
University of Manchester
University of Nottingham
University of Oxford
University of Reading
University of Sheffield
University of Southampton
University of Sussex
University of Warwick
University of Winchester
University of York
Scotland
University of Aberdeen
University of Edinburgh
University of the Highlands and Islands
University of Stirling
Wales
Swansea University
Northern Ireland
Queen's University Belfast
North America
Canada
Mount Allison University (within the Frank McKenna School of Philosophy, Politics, & Economics)
Queen's University
The King's University
University of British Columbia (Okanagan Campus)
University of Regina
University of Western Ontario
Wilfrid Laurier University
United States
Arizona State University (certificate)
Austin College
Belmont Abbey College
Binghamton University (under the designation of "PPL" - replacing economics with law)
Bowie State University
Bowling Green State University (under the designation of "PPEL" - with law)
Boyce College
Bridgewater State University (minor)
Calvin University
Carnegie Mellon University (under the designation "Ethics, History, and Public Policy", abbreviated "EHPP")
Carroll University
Claremont McKenna College
Cornell University (offers academic year abroad at Oxford University to study PPE)
Criswell College
Dallas Baptist University
Dartmouth College (under the modified major of "Politics, Philosophy, and the Economy")
Denison University
Drexel University
Duke University (certificate)
Eastern Oregon University
Elon University (minor)
Emory & Henry College
George Mason University
Georgia State University
Indiana University of Pennsylvania
Juniata College
The King's College
La Salle University
Liberty University (online)
Mercer University
Minnesota State University, Mankato
Mount St. Mary's University
Murphy Institute (Tulane University, under the designation "Political Economy")
Northeastern University
Northwest Nazarene University
Ohio Northern University
Ohio State University
Ottawa University
Palm Beach Atlantic University
Pomona College
Rhodes College
Rutgers University–New Brunswick (certificate)
Seattle Pacific University
Siena Heights University (certificate)
Spring Hill College
St. John's University (master's degree)
State University of New York at Oswego
Suffolk University
Swarthmore College
Taylor University
Texas Tech University (as a concentration of an Honors Sciences and the Humanities degree)
Transylvania University
University of Akron
University of Alabama at Birmingham (as a concentration of an Economics degree)
University of Arizona (under the designation "PPEL" - with law)
University at Buffalo
University of California, Berkeley (minor, under the designation "PPL" - replacing economics with law)
University of California, Irvine
The University of Idaho (minor)
University of Iowa (under the designation "Ethics & Public Policy")
The University of Louisville (minor)
University of Maryland
University of Michigan (honors program)
University of Minnesota Morris (as a concentration of a Philosophy degree)
University of North Carolina at Chapel Hill (minor)
University of Notre Dame (minor)
University of Pennsylvania
University of Pittsburgh
University of Richmond (under the designation "PPEL" - with law)
University of Rochester
University of San Diego (minor)
University of Sioux Falls (as "Philosophy, Economics, and Political Theory")
University of Southern California
University of Virginia (under the designation "PPL" - replacing economics with law)
University of Washington Bothell (under the designation "Law, Economics & Public Policy", abbreviated "LEPP")
University of Washington Tacoma
University of Wisconsin (certificate program in Political Economy, Politics and Philosophy)
Utah State University (certificate)
Villanova University (honors program and honors minor)
Virginia Tech (offers both a major and a minor in PPE)
Wabash College
Wesleyan University (under the designation "College of Social Studies")
Western Washington University
Wheaton College (certificate)
Wheeling University (under the designation "political and economic philosophy")
Xavier University (under the designation "Philosophy, Politics, and the Public", abbreviated "PPP")
Yale University (under the designation "ethics, politics and economics", abbreviated "EP&E")
Africa
South Africa
Stellenbosch University
University of Cape Town
University of KwaZulu-Natal
University of South Africa
University of Johannesburg
University of Witwatersrand
University of Pretoria
Nigeria
Obafemi Awolowo University
Afe Babalola University
Oceania
Australia
Australian National University
Deakin University
La Trobe University
University of Adelaide
University of New South Wales
University of Sydney
University of Queensland
University of Technology, Sydney
University of Western Australia
University of Wollongong
Murdoch University (appears as a unit in Philosophy (BA) or Ethics minor)
Monash University
New Zealand
University of Canterbury
University of Otago
Victoria University of Wellington
Europe
Northern Europe
Iceland
Bifröst University, Iceland
Sweden
Lund University, Sweden
Stockholm University, Sweden
Södertörn University, Sweden
Norway
University of Oslo, Norway
Southern Europe
Italy
Ca' Foscari University of Venice, Italy (under the designation of "Philosophy, International Studies and Economics" abbreviated "PISE", more recently “Philosophy, International and Economic Studies”)
Free University of Bolzano, Italy
Libera Università Internazionale degli Studi Sociali Guido Carli, Rome (Italy)
University of Milan, Italy (BA International Politics, Law, and Economics; MA Politics, Philosophy, and Public Affairs)
University of Bari, Italy (University Master Programme, 'Philosophy, Politics and Economics in Med')
Spain
Charles III University of Madrid, Autonomous University of Madrid, Autonomous University of Barcelona and Pompeu Fabra University (alliance of four universities), Spain
Ramon Llull University, Barcelona, Spain
Comillas Pontifical University, Madrid (joint degree), Spain
Francisco de Vitoria University, Spain
University of Navarra, Spain
University of Deusto, Basque Country, Spain
IE University, Madrid, Spain (offered together with Law)
Portugal
ISCTE - University Institute of Lisbon (Politics, Economy and Society)
Catholic University of Lisbon - Human Sciences university
Turkey
Ankara University, Turkey (Politics and Economics, still abbreviated as PPE)
Western Europe
Austria
Central European University, Vienna, Austria
University of Vienna, Austria (MA Philosophy and Economics, P&E)
University of Salzburg, Austria
University of Graz, Austria (under the designation of MA "political, economic and legal philosophy" abbreviated "PELP")
Belgium
UCLouvain, Belgium
France
American University of Paris, France
Sciences Po, France
European School of Political and Social Sciences, France
Germany
Karlshochschule International University, Germany (BA in Politics, Philosophy & Economics "PPE" and MA in "Social Transformation: PPE")
Bard College Berlin, Germany (under the designation of BA "Economics, Politics and Social Thought" abbreviated "EPST")
Frankfurt School of Finance & Management, Germany (under the designation of B.Sc. "Management, Philosophy & Economics" abbreviated "MPE")
University of Bayreuth, Germany (Philosophy and Economics, P&E)
University of Hamburg, Germany (under the designation of M.Sc. "politics, economics and philosophy" abbreviated "PEP")
Heinrich-Heine-University Düsseldorf, Germany (Bachelor since 2018/ Master starting in 2023)
Ludwig Maximilian University of Munich, Germany
Witten/Herdecke University (bachelor and master), Germany
Ireland
National University of Ireland, Maynooth
UCD, National University of Ireland
Trinity College, The University of Dublin
The Netherlands
Utrecht University (Honours College)
Leiden University, Netherlands
University of Amsterdam, Netherlands (under the designation "PPLE" - Politics, Psychology, Law and Economics)
VU Amsterdam, Netherlands (bachelor's in Philosophy, Politics and Economics at the John Stuart Mill College)
Erasmus University College, Netherlands (Bachelor of Liberal Arts and Science with a major in Philosophy, Politics and Economics (PPE), and a Research Master in Philosophy and Economics)
University of Groningen, Netherlands (master's in Philosophy, Politics and Economics)
Switzerland
University of Zurich, Switzerland (under the designation of MA "economic and political philosophy")
University of Bern, Switzerland (under the designation of MA "political, legal and economic philosophy" abbreviated "PLEP")
University of Lucerne, Switzerland
Eastern Europe
Czech Republic
CEVRO Institute, Prague, Czech Republic
Charles University, Prague, Czech Republic, BA & MA programs
Hungary
Corvinus University of Budapest, Budapest, Hungary
University of Pécs, Pécs, Hungary
Others
University of Bucharest, Romania (Master's degree in Philosophy, Politics and Economics)
National Research University – Higher School of Economics (Masters in Politics, Economics, Philosophy), Moscow, Russia
Ukrainian Catholic University, Lviv, Ukraine (under the designation "Ethics. Politics. Economics", abbreviated "EPE")
American University of Armenia, (Minor in Philosophy, Politics, & Economics, abbreviated as PPE), Yerevan, Armenia
Asia
East Asia
People's Republic of China
Mainland China
Tsinghua University, Beijing, China
Peking University, Beijing, China
Renmin University of China, Beijing, China
Beijing Normal University, Beijing, China
Fudan University, Shanghai, China
Nankai University, Tianjin, China
Wuhan University, Wuhan, China
Zhongnan University of Economics and Law, Wuhan, China
Shandong University, Jinan, China
Inner Mongolia University, Hohhot, China
Hong Kong Special Administrative Region
Hang Seng University of Hong Kong
Korea
Seoul National University, S. Korea
Korea University, S. Korea
Kyungpook National University, S. Korea
Sungkyunkwan University, S. Korea
Sogang University, S. Korea
Hanyang University (under the designation "PPEL" - with law), S. Korea
Yonsei University, S. Korea
Japan
Keio University, Japan
University of Kyoto, Japan
University of Tokyo, Japan
Waseda University, Japan
Southeast Asia
Singapore
Yale-NUS, Singapore
National University of Singapore, Singapore
Thailand
Rangsit University, Thailand
Thammasat University, Thailand
South Asia
India
Lucknow University, Lucknow, India
Amity University, Noida, India
Bangalore University, Bangalore, India
Birla Institute of Technology and Science, India
Azim Premji University, Bangalore, India
Ashoka University, India
Pakistan
Aga Khan University, Karachi, Pakistan
Bangladesh
Asian University for Women, Bangladesh
Middle East
Israel
Hebrew University of Jerusalem, Israel
Tel Aviv University (under the designation "PPEL" - with law), Israel
Open University of Israel, Israel
Latin America
Universidad Torcuato Di Tella (under the designation "Ciencia Sociales, Orientación en Política y Economía"), Argentina
Universidad Metropolitana (under the designation "Estudios Liberales"), Venezuela
Universidad de las Americas (under the designation "Filosofia, Politica, y Economia"), Ecuador
Universidad del Desarrollo (under the master's program "Filosofia, Politica, y Economia PPE"), Santiago de Chile
See also
Literae Humaniores
Philosophy and economics
List of University of Oxford people with PPE degrees
External links
PPE, Oxford University – Official Website
100 years of PPE at Oxford University
International PPE Conference
Anomaly, Jonny (29 January 2016). "Why PPE?". American Philosophical Association.
Beckett, Andy (23 February 2017). "PPE: the Oxford degree that runs Britain". The Guardian.
Academic courses at the University of Oxford
Political economy
Economics education
Philosophy education
Political science education
Subfields of political science
Neuroscience

Neuroscience is the scientific study of the nervous system (the brain, spinal cord, and peripheral nervous system), its functions, and its disorders. It is a multidisciplinary science that combines physiology, anatomy, molecular biology, developmental biology, cytology, psychology, physics, computer science, chemistry, medicine, statistics, and mathematical modeling to understand the fundamental and emergent properties of neurons, glia and neural circuits. The understanding of the biological basis of learning, memory, behavior, perception, and consciousness has been described by Eric Kandel as the "epic challenge" of the biological sciences.
The scope of neuroscience has broadened over time to include different approaches used to study the nervous system at different scales. The techniques used by neuroscientists have expanded enormously, from molecular and cellular studies of individual neurons to imaging of sensory, motor and cognitive tasks in the brain.
History
The earliest study of the nervous system dates to ancient Egypt. Trepanation, the surgical practice of either drilling or scraping a hole into the skull for the purpose of curing head injuries or mental disorders, or relieving cranial pressure, was first recorded during the Neolithic period. Manuscripts dating to 1700 BC indicate that the Egyptians had some knowledge about symptoms of brain damage.
Early views on the function of the brain regarded it to be a "cranial stuffing" of sorts. In Egypt, from the late Middle Kingdom onwards, the brain was regularly removed in preparation for mummification. It was believed at the time that the heart was the seat of intelligence. According to Herodotus, the first step of mummification was to "take a crooked piece of iron, and with it draw out the brain through the nostrils, thus getting rid of a portion, while the skull is cleared of the rest by rinsing with drugs."
The view that the heart was the source of consciousness was not challenged until the time of the Greek physician Hippocrates. He believed that the brain was not only involved with sensation—since most specialized organs (e.g., eyes, ears, tongue) are located in the head near the brain—but was also the seat of intelligence. Plato also speculated that the brain was the seat of the rational part of the soul. Aristotle, however, believed the heart was the center of intelligence and that the brain regulated the amount of heat from the heart. This view was generally accepted until the Roman physician Galen, a follower of Hippocrates and physician to Roman gladiators, observed that his patients lost their mental faculties when they had sustained damage to their brains.
Abulcasis, Averroes, Avicenna, Avenzoar, and Maimonides, active in the Medieval Muslim world, described a number of medical problems related to the brain. In Renaissance Europe, Vesalius (1514–1564), René Descartes (1596–1650), Thomas Willis (1621–1675) and Jan Swammerdam (1637–1680) also made several contributions to neuroscience.
Luigi Galvani's pioneering work in the late 1700s set the stage for studying the electrical excitability of muscles and neurons. In 1843 Emil du Bois-Reymond demonstrated the electrical nature of the nerve signal, whose speed Hermann von Helmholtz proceeded to measure, and in 1875 Richard Caton found electrical phenomena in the cerebral hemispheres of rabbits and monkeys. Adolf Beck published in 1890 similar observations of spontaneous electrical activity of the brain of rabbits and dogs. Studies of the brain became more sophisticated after the invention of the microscope and the development of a staining procedure by Camillo Golgi in 1873. The procedure used a silver chromate salt to reveal the intricate structures of individual neurons. His technique was used by Santiago Ramón y Cajal and led to the formation of the neuron doctrine, the hypothesis that the functional unit of the brain is the neuron. Golgi and Ramón y Cajal shared the Nobel Prize in Physiology or Medicine in 1906 for their extensive observations, descriptions, and categorizations of neurons throughout the brain.
In parallel with this research, in 1815 Jean Pierre Flourens induced localized lesions of the brain in living animals to observe their effects on motricity, sensibility and behavior. Work with brain-damaged patients by Marc Dax in 1836 and Paul Broca in 1865 suggested that certain regions of the brain were responsible for certain functions. At the time, these findings were seen as a confirmation of Franz Joseph Gall's theory that language was localized and that certain psychological functions were localized in specific areas of the cerebral cortex. The localization of function hypothesis was supported by observations of epileptic patients conducted by John Hughlings Jackson, who correctly inferred the organization of the motor cortex by watching the progression of seizures through the body. Carl Wernicke further developed the theory of the specialization of specific brain structures in language comprehension and production. Modern research using neuroimaging techniques still relies on the Brodmann cytoarchitectonic map of the cerebral cortex (cytoarchitecture refers to the study of cell structure) dating from this era, continuing to show that distinct areas of the cortex are activated in the execution of specific tasks.
During the 20th century, neuroscience began to be recognized as a distinct academic discipline in its own right, rather than as studies of the nervous system within other disciplines. Eric Kandel and collaborators have cited David Rioch, Francis O. Schmitt, and Stephen Kuffler as having played critical roles in establishing the field. Rioch originated the integration of basic anatomical and physiological research with clinical psychiatry at the Walter Reed Army Institute of Research, starting in the 1950s. During the same period, Schmitt established a neuroscience research program within the Biology Department at the Massachusetts Institute of Technology, bringing together biology, chemistry, physics, and mathematics. The first freestanding neuroscience department (then called Psychobiology) was founded in 1964 at the University of California, Irvine by James L. McGaugh. This was followed by the Department of Neurobiology at Harvard Medical School, which was founded in 1966 by Stephen Kuffler.
In the process of treating epilepsy, Wilder Penfield produced maps of the location of various functions (motor, sensory, memory, vision) in the brain. He summarized his findings in a 1950 book called The Cerebral Cortex of Man. Wilder Penfield and his co-investigators Edwin Boldrey and Theodore Rasmussen are considered to be the originators of the cortical homunculus.
The understanding of neurons and of nervous system function became increasingly precise and molecular during the 20th century. For example, in 1952, Alan Lloyd Hodgkin and Andrew Huxley presented a mathematical model, now known as the Hodgkin–Huxley model, for the transmission of the electrical signals called "action potentials" in the squid giant axon, describing how they are initiated and propagated. In 1961–1962, Richard FitzHugh and Jinichi Nagumo simplified Hodgkin–Huxley in what is called the FitzHugh–Nagumo model. In 1962, Bernard Katz modeled neurotransmission across the space between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in Aplysia. In 1981 Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation.
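The FitzHugh–Nagumo model reduces the four variables of Hodgkin–Huxley to two, a fast voltage-like variable and a slow recovery variable, which makes it small enough to integrate in a few lines. The sketch below is purely illustrative rather than drawn from any source cited here: it steps the two equations with the forward Euler method, the parameter values (a = 0.7, b = 0.8, eps = 0.08) are conventional textbook choices, and the function name and constant drive current are assumptions made for the example.

```python
import numpy as np

# Forward-Euler integration of the FitzHugh-Nagumo equations:
#   dv/dt = v - v**3/3 - w + I      (fast, voltage-like variable)
#   dw/dt = eps * (v + a - b*w)     (slow recovery variable)
def fitzhugh_nagumo(i_ext=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, t_max=200.0):
    steps = int(t_max / dt)
    v, w = -1.0, 1.0                # arbitrary initial state
    trace = np.empty(steps)
    for k in range(steps):
        dv = v - v ** 3 / 3.0 - w + i_ext
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        trace[k] = v
    return trace

voltage = fitzhugh_nagumo()         # sustained spiking for this drive
```

For this level of constant drive the fast variable repeatedly fires and recovers; such qualitative reproduction of the action potential, rather than biophysical detail, is what these reduced models aim for.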
As a result of increasing interest in the nervous system, several prominent neuroscience organizations were formed during the 20th century to provide a forum for all neuroscientists. For example, the International Brain Research Organization was founded in 1961, the International Society for Neurochemistry in 1963, the European Brain and Behaviour Society in 1968, and the Society for Neuroscience in 1969. More recently, the application of neuroscience research results has given rise to applied disciplines such as neuroeconomics, neuroeducation, neuroethics, and neurolaw.
Over time, brain research has gone through philosophical, experimental, and theoretical phases, with work on neural implants and brain simulation predicted to be important in the future.
Modern neuroscience
The scientific study of the nervous system increased significantly during the second half of the twentieth century, principally due to advances in molecular biology, electrophysiology, and computational neuroscience. This has allowed neuroscientists to study the nervous system in all its aspects: how it is structured, how it works, how it develops, how it malfunctions, and how it can be changed.
For example, it has become possible to understand, in much detail, the complex processes occurring within a single neuron. Neurons are cells specialized for communication. They are able to communicate with neurons and other cell types through specialized junctions called synapses, at which electrical or electrochemical signals can be transmitted from one cell to another. Many neurons extrude a long thin filament of axoplasm called an axon, which may extend to distant parts of the body and is capable of rapidly carrying electrical signals, influencing the activity of other neurons, muscles, or glands at its termination points. A nervous system emerges from the assemblage of neurons that are connected to each other in neural circuits and networks.
The vertebrate nervous system can be split into two parts: the central nervous system (defined as the brain and spinal cord), and the peripheral nervous system. In many species—including all vertebrates—the nervous system is the most complex organ system in the body, with most of the complexity residing in the brain. The human brain alone contains around one hundred billion neurons and one hundred trillion synapses; it consists of thousands of distinguishable substructures, connected to each other in synaptic networks whose intricacies have only begun to be unraveled. At least one out of three of the approximately 20,000 genes belonging to the human genome is expressed mainly in the brain.
Due to the high degree of plasticity of the human brain, the structure of its synapses and their resulting functions change throughout life.
Making sense of the nervous system's dynamic complexity is a formidable research challenge. Ultimately, neuroscientists would like to understand every aspect of the nervous system, including how it works, how it develops, how it malfunctions, and how it can be altered or repaired. Analysis of the nervous system is therefore performed at multiple levels, ranging from the molecular and cellular levels to the systems and cognitive levels. The specific topics that form the main focus of research change over time, driven by an ever-expanding base of knowledge and the availability of increasingly sophisticated technical methods: improvements in electron microscopy, computer science, electronics, functional neuroimaging, and genetics and genomics have all been major drivers of progress.
Advances in the classification of brain cells have been enabled by electrophysiological recording, single-cell genetic sequencing, and high-quality microscopy, which have been combined into a single method pipeline called patch-sequencing, in which all three methods are applied simultaneously using miniature tools. The efficiency of this method and the large amounts of data it generates have allowed researchers to draw some general conclusions about cell types; for example, that the human and mouse brain have different versions of fundamentally the same cell types.
Molecular and cellular neuroscience
Basic questions addressed in molecular neuroscience include the mechanisms by which neurons express and respond to molecular signals and how axons form complex connectivity patterns. At this level, tools from molecular biology and genetics are used to understand how neurons develop and how genetic changes affect biological functions. The morphology, molecular identity, and physiological characteristics of neurons and how they relate to different types of behavior are also of considerable interest.
Questions addressed in cellular neuroscience include the mechanisms of how neurons process signals physiologically and electrochemically. These questions include how signals are processed by neurites and somas and how neurotransmitters and electrical signals are used to process information in a neuron. Neurites are thin extensions from a neuronal cell body, consisting of dendrites (specialized to receive synaptic inputs from other neurons) and axons (specialized to conduct nerve impulses called action potentials). Somas are the cell bodies of the neurons and contain the nucleus.
Another major area of cellular neuroscience is the investigation of the development of the nervous system. Questions include the patterning and regionalization of the nervous system, axonal and dendritic development, trophic interactions, synapse formation and the implication of fractones in neural stem cells, differentiation of neurons and glia (neurogenesis and gliogenesis), and neuronal migration.
Computational neurogenetic modeling (CNGM) is concerned with the development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes at the cellular level; it can also be used to model larger neural systems.
Neural circuits and systems
Systems neuroscience research centers on the structural and functional architecture of the developing human brain, and the functions of large-scale brain networks, or functionally-connected systems within the brain. Alongside brain development, systems neuroscience also focuses on how the structure and function of the brain enables or restricts the processing of sensory information, using learned mental models of the world, to motivate behavior.
Questions in systems neuroscience include how neural circuits are formed and used anatomically and physiologically to produce functions such as reflexes, multisensory integration, motor coordination, circadian rhythms, emotional responses, learning, and memory. In other words, this area of research studies how connections are made and morphed in the brain, and the effect it has on human sensation, movement, attention, inhibitory control, decision-making, reasoning, memory formation, reward, and emotion regulation.
Specific areas of interest for the field include observations of how the structure of neural circuits affects skill acquisition, how specialized regions of the brain develop and change (neuroplasticity), and the development of brain atlases, or wiring diagrams of individual developing brains.
The related fields of neuroethology and neuropsychology address the question of how neural substrates underlie specific animal and human behaviors. Neuroendocrinology and psychoneuroimmunology examine interactions between the nervous system and the endocrine and immune systems, respectively. Despite many advancements, the way that networks of neurons perform complex cognitive processes and behaviors is still poorly understood.
Cognitive and behavioral neuroscience
Cognitive neuroscience addresses the questions of how psychological functions are produced by neural circuitry. The emergence of powerful new measurement techniques such as neuroimaging (e.g., fMRI, PET, SPECT), EEG, MEG, electrophysiology, optogenetics and human genetic analysis combined with sophisticated experimental techniques from cognitive psychology allows neuroscientists and psychologists to address abstract questions such as how cognition and emotion are mapped to specific neural substrates. Although many studies still hold a reductionist stance looking for the neurobiological basis of cognitive phenomena, recent research shows that there is an interesting interplay between neuroscientific findings and conceptual research, soliciting and integrating both perspectives. For example, neuroscience research on empathy solicited an interesting interdisciplinary debate involving philosophy, psychology and psychopathology. Moreover, the neuroscientific identification of multiple memory systems related to different brain areas has challenged the idea of memory as a literal reproduction of the past, supporting a view of memory as a generative, constructive and dynamic process.
Neuroscience is also allied with the social and behavioral sciences, as well as with nascent interdisciplinary fields. Examples of such alliances include neuroeconomics, decision theory, social neuroscience, and neuromarketing, which address complex questions about the interactions of the brain with its environment. A study of consumer responses, for example, used EEG to investigate neural correlates associated with narrative transportation into stories about energy efficiency.
Computational neuroscience
Questions in computational neuroscience can span a wide range of levels of traditional analysis, such as development, structure, and cognitive functions of the brain. Research in this field utilizes mathematical models, theoretical analysis, and computer simulation to describe and verify biologically plausible neurons and nervous systems. For example, biological neuron models are mathematical descriptions of spiking neurons which can be used to describe both the behavior of single neurons as well as the dynamics of neural networks. Computational neuroscience is often referred to as theoretical neuroscience.
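To make the phrase "biological neuron model" concrete, here is a minimal sketch of one of the simplest such descriptions, the leaky integrate-and-fire neuron, in which the membrane potential decays toward rest, integrates injected current, and is reset after crossing a firing threshold. It is not taken from any particular study; the function name, membrane constants, and input current are illustrative round numbers.

```python
import numpy as np

def leaky_integrate_and_fire(current, dt=1e-4, tau=0.02, r=1e7,
                             v_rest=-0.070, v_thresh=-0.050):
    """tau * dV/dt = -(V - v_rest) + r*I(t); reset to rest on each spike."""
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(current):
        v += dt * (-(v - v_rest) + r * i_t) / tau   # integrate the membrane
        if v >= v_thresh:                           # threshold crossing
            spike_times.append(step * dt)           # record the spike time
            v = v_rest                              # reset the potential
    return spike_times

drive = np.full(10_000, 2.5e-9)     # one second of constant 2.5 nA input
print(len(leaky_integrate_and_fire(drive)), "spikes in one second")
```

Despite its simplicity, a model like this can be wired into networks whose collective spiking dynamics are then studied mathematically or in simulation, which is the everyday business of computational neuroscience.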
Neuroscience and medicine
Clinical neuroscience
Neurology, psychiatry, neurosurgery, psychosurgery, anesthesiology and pain medicine, neuropathology, neuroradiology, ophthalmology, otolaryngology, clinical neurophysiology, addiction medicine, and sleep medicine are some medical specialties that specifically address the diseases of the nervous system. These terms also refer to clinical disciplines involving diagnosis and treatment of these diseases.
Neurology works with diseases of the central and peripheral nervous systems, such as amyotrophic lateral sclerosis (ALS) and stroke, and their medical treatment. Psychiatry focuses on affective, behavioral, cognitive, and perceptual disorders. Anesthesiology focuses on perception of pain, and pharmacologic alteration of consciousness. Neuropathology focuses upon the classification and underlying pathogenic mechanisms of central and peripheral nervous system and muscle diseases, with an emphasis on morphologic, microscopic, and chemically observable alterations. Neurosurgery and psychosurgery work primarily with surgical treatment of diseases of the central and peripheral nervous systems.
Translational research
Recently, the boundaries between various specialties have blurred, as they are all influenced by basic research in neuroscience. For example, brain imaging enables objective biological insight into mental illnesses, which can lead to faster diagnosis, more accurate prognosis, and improved monitoring of patient progress over time.
Integrative neuroscience describes the effort to combine models and information from multiple levels of research to develop a coherent model of the nervous system. For example, brain imaging coupled with physiological numerical models and theories of fundamental mechanisms may shed light on psychiatric disorders.
Another important area of translational research is brain–computer interfaces (BCIs), or machines that are able to communicate and influence the brain. They are currently being researched for their potential to repair neural systems and restore certain cognitive functions. However, some ethical considerations have to be dealt with before they are accepted.
Major branches
Modern neuroscience education and research activities can be very roughly categorized into major branches based on the subject and scale of the system under examination, as well as on distinct experimental or curricular approaches. Individual neuroscientists, however, often work on questions that span several distinct subfields.
Neuroscience organizations
The largest professional neuroscience organization is the Society for Neuroscience (SFN), which is based in the United States but includes many members from other countries. Since its founding in 1969 the SFN has grown steadily: as of 2010 it recorded 40,290 members from 83 countries. Annual meetings, held each year in a different American city, draw attendance from researchers, postdoctoral fellows, graduate students, and undergraduates, as well as educational institutions, funding agencies, publishers, and hundreds of businesses that supply products used in research.
Other major organizations devoted to neuroscience include the International Brain Research Organization (IBRO), which holds its meetings in a country from a different part of the world each year, and the Federation of European Neuroscience Societies (FENS), which holds a meeting in a different European city every two years. FENS comprises a set of 32 national-level organizations, including the British Neuroscience Association, the German Neuroscience Society, and the French Société des Neurosciences. The first National Honor Society in Neuroscience, Nu Rho Psi, was founded in 2006. Numerous youth neuroscience societies that support undergraduates, graduates and early career researchers also exist, such as Simply Neuroscience and Project Encephalon.
In 2013, the BRAIN Initiative was announced in the US. The International Brain Initiative was created in 2017, and currently comprises more than seven national-level brain research initiatives (US, Europe, the Allen Institute, Japan, China, Australia, Canada, Korea, and Israel) spanning four continents.
Public education and outreach
In addition to conducting traditional research in laboratory settings, neuroscientists have also been involved in the promotion of awareness and knowledge about the nervous system among the general public and government officials. Such promotions have been done by both individual neuroscientists and large organizations. For example, individual neuroscientists have promoted neuroscience education among young students by organizing the International Brain Bee, which is an academic competition for high school or secondary school students worldwide. In the United States, large organizations such as the Society for Neuroscience have promoted neuroscience education by developing a primer called Brain Facts, collaborating with public school teachers to develop Neuroscience Core Concepts for K-12 teachers and students, and cosponsoring a campaign with the Dana Foundation called Brain Awareness Week to increase public awareness about the progress and benefits of brain research. In Canada, the CIHR Canadian National Brain Bee is held annually at McMaster University.
Neuroscience educators formed Faculty for Undergraduate Neuroscience (FUN) in 1992 to share best practices and provide travel awards for undergraduates presenting at Society for Neuroscience meetings.
Neuroscientists have also collaborated with other education experts to study and refine educational techniques to optimize learning among students, an emerging field called educational neuroscience. Federal agencies in the United States, such as the National Institutes of Health (NIH) and National Science Foundation (NSF), have also funded research that pertains to best practices in teaching and learning of neuroscience concepts.
Engineering applications of neuroscience
Neuromorphic computer chips
Neuromorphic engineering is a branch of neuroscience that deals with creating functional physical models of neurons for the purposes of useful computation. The emergent computational properties of neuromorphic computers are fundamentally different from those of conventional computers in the sense that they are complex systems whose computational components are interrelated, with no central processor.
One example of such a computer is the SpiNNaker supercomputer.
Sensors can also be made smart with neuromorphic technology; an example is the event camera, which reports asynchronous per-pixel brightness changes rather than full frames. Another major neuromorphic system is BrainScaleS (brain-inspired multiscale computation in neuromorphic hybrid systems), a hybrid analog neuromorphic supercomputer located at Heidelberg University in Germany. It was developed as part of the Human Brain Project's neuromorphic computing platform and is the complement to the SpiNNaker supercomputer, which is based on digital technology. The architecture used in BrainScaleS mimics biological neurons and their connections on a physical level; additionally, since the components are made of silicon, these model neurons operate on average 864 times faster than their biological counterparts (24 hours of real time corresponds to 100 seconds of machine simulation).
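To give a feel for this event-driven style of sensing, the sketch below emulates an event camera in software by thresholding per-pixel changes in log intensity between ordinary frames and emitting an "event" for each change. Real event cameras do this asynchronously in hardware; the function name and threshold value here are assumptions made for the example.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    # Per-pixel reference log-intensity, updated whenever an event fires.
    ref = np.log1p(frames[0].astype(float))
    events = []                      # entries: (frame_index, x, y, polarity)
    for t, frame in enumerate(frames[1:], start=1):
        logf = np.log1p(frame.astype(float))
        diff = logf - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = logf[y, x]   # re-arm this pixel at its new level
    return events

fake_video = (np.random.rand(5, 32, 32) * 255).astype(np.uint8)
print(len(frames_to_events(fake_video)), "events emitted")
```

Because only changed pixels produce output, static scenes generate almost no data, which is the efficiency argument usually made for neuromorphic sensors.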
Recent advances in neuromorphic microchip technology have led a group of scientists to create an artificial neuron that could potentially replace damaged neurons in disease.
See also
List of neuroscience databases
List of neuroscience journals
List of neuroscientists
Outline of brain mapping
Outline of the human brain
List of regions in the human brain
Gut–brain axis
Connectomics
Further reading
Squire, L. et al. (2012). Fundamental Neuroscience, 4th edition. Academic Press.
Byrne and Roberts (2004). From Molecules to Networks. Academic Press.
Sanes, Reh, Harris (2005). Development of the Nervous System, 2nd edition. Academic Press.
Siegel et al. (2005). Basic Neurochemistry, 7th edition. Academic Press.
Rieke, F. et al. (1999). Spikes: Exploring the Neural Code. The MIT Press; reprint edition.
Purves, Dale, George J. Augustine, David Fitzpatrick, Lawrence C. Katz, Anthony-Samuel LaMantia, James O. McNamara, S. Mark Williams (2001). Neuroscience, 2nd ed. Sinauer Associates.
Siegel, George J., Bernard W. Agranoff, R. Wayne Albers, Stephen K. Fisher, Michael D. Uhler, eds. (1999). Basic Neurochemistry: Molecular, Cellular, and Medical Aspects, 6th ed. Lippincott, Williams & Wilkins.
Damasio, A. R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York, Avon Books. (Hardcover) (Paperback)
Gardner, H. (1976). The Shattered Mind: The Person After Brain Damage. New York, Vintage Books, 1976
Goldstein, K. (2000). The Organism. New York, Zone Books. (Hardcover) (Paperback)
Subhash Kak (2004). The Architecture of Knowledge: Quantum Mechanics, Neuroscience, Computers and Consciousness. Motilal Banarsidass.
Llinas R. (2001). I of the vortex: from neurons to self MIT Press. (Hardcover) (Paperback)
Luria, A. R. (1997). The Man with a Shattered World: The History of a Brain Wound. Cambridge, Massachusetts, Harvard University Press. (Hardcover) (Paperback)
Luria, A. R. (1998). The Mind of a Mnemonist: A Little Book About A Vast Memory. New York, Basic Books, Inc.
Medina, J. (2008). Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School. Seattle, Pear Press. (Hardcover with DVD)
Pinker, S. (1999). How the Mind Works. W. W. Norton & Company.
Pinker, S. (2002). The Blank Slate: The Modern Denial of Human Nature. Viking Adult.
Penrose, R., Hameroff, S. R., Kak, S., & Tao, L. (2011). Consciousness and the universe: Quantum physics, evolution, brain & mind. Cambridge, MA: Cosmology Science Publishers.
Ramachandran, V. S. (1998). Phantoms in the Brain. New York, HarperCollins. (Paperback)
Rose, S. (2006). 21st Century Brain: Explaining, Mending & Manipulating the Mind (Paperback)
Sacks, O. The Man Who Mistook His Wife for a Hat. Summit Books (Hardcover) (Paperback)
Sacks, O. (1990). Awakenings. New York, Vintage Books. (See also Oliver Sacks) (Hardcover) (Paperback)
Encyclopedia of Neuroscience, Scholarpedia (expert articles)
Sternberg, E. (2007) Are You a Machine? The Brain, the Mind and What it Means to be Human. Amherst, New York: Prometheus Books.
Churchland, P. S. (2011). Braintrust: What Neuroscience Tells Us about Morality. Princeton University Press.
External links
Neuroscience Information Framework (NIF)
American Society for Neurochemistry
British Neuroscience Association (BNA)
Federation of European Neuroscience Societies
Neuroscience Online (electronic neuroscience textbook)
HHMI Neuroscience lecture series - Making Your Mind: Molecules, Motion, and Memory
Société des Neurosciences
Neuroscience For Kids
Neurology
Nervous system
Neurophysiology
Resocialization

Resocialization or resocialisation (British English) is the process by which one's sense of social values, beliefs, and norms is re-engineered. The process is deliberately carried out in settings such as military boot camps through an intense social process, and may take place in any total institution. An important thing to note about socialization is that what can be learned can be unlearned; that forms the basis of resocialization: to unlearn and to relearn.
Resocialization can also be defined as a process by which individuals, defined as inadequate according to the norms of a dominant institution, are subjected to a dynamic redistribution of values, attitudes and abilities, allowing them to function according to the norms of said dominant institutions. This definition applies more to settings such as a jail sentence: if individuals exhibit deviance, society delivers the offenders to a total institution, where they can be rehabilitated.
Resocialization varies in its severity. A mild resocialization might be involved in moving to a different country. One who does so may need to learn new social customs and norms such as language, eating, dress, and talking customs. A more drastic example of resocialization is joining a military or a cult, and the most severe example would be if one suffers from a loss of all memories and so would have to relearn all of society's norms.
The first stage of resocialization is the destruction of an individual's former beliefs and confidence.
Institutions
The goal of total institutions is resocialization, which radically alters residents' personalities through deliberate manipulation of their environment. A total institution is one in which residents are totally immersed and which controls all of their day-to-day life: all activity occurs in a single place under a single authority. Examples of total institutions include prisons, fraternity houses, and the military.
Resocialization is a two-part process. First, the institutional staff try to erode the residents' identities and independence. Strategies to erode identities include forcing individuals to surrender all personal possessions, get uniform haircuts and wear standardized clothing. Independence is eroded by subjecting residents to humiliating and degrading procedures. Examples are strip searches, fingerprinting, and assigning serial numbers or code names to replace the residents' given names.
The second part of resocialization process involves the systematic attempt to build a different personality or self. That is generally done through a system of rewards and punishments. The privilege of being allowed to read a book, watch television, or make a phone call can be a powerful motivator for conformity. Conformity occurs when individuals change their behavior to fit in with the expectations of an authority figure or the expectations of the larger group.
No two people respond to resocialization programs in the same manner. Some residents are found to be "rehabilitated", while others might become bitter and hostile. As well, over a long period of time, a strictly controlled environment can destroy a person's ability to make decisions and live independently, which is known as institutionalisation, a negative outcome of the total institution that can prevent an individual from ever functioning effectively in the outside world again (Sproule, 154–155).
Resocialization is also evident in individuals who have never been "socialized" in the first place or have not been required to behave socially for an extended period of time. Examples include feral children (never socialized) or inmates who have been in solitary confinement.
Socialization is a lifelong process. Adult socialization often includes learning new norms and values that are very different from those associated with the culture in which the person was raised. The process can be voluntary. Currently, joining a volunteer military qualifies as an example of voluntary resocialization. The norms and values associated with military life are different from those associated with civilian life (Riehm, 2000).
The sociologist Erving Goffman studied resocialization in mental institutions. He characterized the mental institution as a total institution, one in which virtually every aspect of the inmates' lives is controlled by the institution and calculated to serve the institution's goals. For example, the institution requires patients to comply with certain regulations, even when that is not necessarily in the best interest of individuals.
In the military
Those who join the military enter a new social realm in which they become socialized as military members. Resocialization is defined as a "process wherein an individual, defined as inadequate according to the norms of a dominant institution(s), is subjected to a dynamic program of behavior intervention aimed at instilling and/or rejuvenating those values, attitudes, and abilities which would allow... to function according to the norms of said dominant institution(s)."
Boot camp serves as an example for understanding how military members are resocialized within the total institution of the military. According to Fox and Pease (2012), the purpose of military training, like boot camp, is to "promote the willing and systematic subordination of one’s own individual desires and interests to those of one’s unit and, ultimately, country." To accomplish this, all aspects of military members' lives exist within the same military institution and are controlled by the same "institutional authorities" (drill instructors), all in service of the goals of the total institution. The individual's "civil[ian] identity, with its built-in restraints is eradicated, or at least undermined and set aside in favor of the warrior identity and its central focus upon killing." This warrior identity, or ethos, is the mindset and group of values that all United States armed forces aim to instill in their members. Leonard Wong, in “Leave No Man Behind: Recovering America’s Fallen Warriors,” describes the warrior ethos as placing the mission above all else, never accepting defeat, never quitting, and never leaving another American behind.
Military training prepares individuals for combat by promoting traditional ideas of masculinity, like training individuals to disregard their bodies' natural reactions to run from fear, have pain or show emotions. Although resocialization through military training can create a sense of purpose in military members, it can also create mental and emotional distress when members are unable to achieve set standards and expectations.
Military members, in part, find purpose and meaning through resocialization because the institution provides access to symbolic and material resources, helping military members construct meaningful identities. Fox and Pease state, "like any social identity, military identity is always an achievement, something dependent upon conformity to others' expectations and their acknowledgment. The centrality of performance testing in the military, and the need to 'measure up,' heightens this dependence."
In the first couple of days, the most important aspect of basic training is the surrender of identity. Recruits in basic training are exposed to a degrading process in which leaders break down the recruits’ civilian selves and essentially give them a new identity. The recruits go through a brutal, humbling, and physically and emotionally exhausting process. They are subjected to new norms, language, rules, and identity. Recruits shed their clothes and hair, the physical representations of their old identities. The process happens very quickly and allows no time for recruits to dwell on the loss of their identity, so they have no chance to regret their decisions.
Drill sergeants then give the young men and women a romanticized view of what it is to be a soldier and how manly it is. When the training starts, it is physically demanding and gets harder every week. The recruits are constantly insulted and put down to break down their pride and destroy their ability to resist the change they are undergoing. Drill sergeants put up a facade telling recruits that finishing basic training sets them apart from all those who fail; in reality, almost all recruits succeed and graduate.
The training is also set up with roles. There are three younger drill sergeants closer to the recruits in age and one senior drill sergeant, who becomes a father figure to the new recruits. The company commander plays a god-like role, which the recruits look up to. The people in the roles will become role models and authority figures but also help to create a sense of loyalty to the entire organization.
Recruits are made to march in formation, in which every person moves the same way at the same time, creating a sense of unity: it makes the recruits feel less like individuals and more like parts of a group. They sing in cadence to boost morale and to make the group feel important. Drill sergeants also feed the group small doses of triumph to keep the soldiers proud and feeling accomplished. According to Jeff Parker Knight, the ostensible function of these songs is “marching precision,” but Knight argues that these jodies have a secondary socialization purpose, creating a type of “rite of passage” for the recruits. These jody performances “reflect martial attitudes, and, as symbolic action, help to induce attitude changes in initiates.”
The troop also undergoes group punishment, which unifies the unit. Generally, a shared hatred of something will bring everyone together; in this case, group punishment allows all the recruits to hate the drill sergeants and the punishment but to find unity within their unit. They encourage one another to push themselves, creating shared hardships.
In prisons
Prisons involve two different types of resocialization. First, prisoners must learn the normal behaviors that apply to their new environment. Second, prisoners must partake in rehabilitation measures intended to correct their deviant ways. When an individual violates the dominant society's norms, the criminal system subjects them to the form of resocialization called criminal rehabilitation.
Rehabilitation aims to bring an inmate's behavior closer to that of the individuals who make up the dominant society. Such ideal societal behavior is highly valued in many societies, mainly because it serves to protect and promote the well-being of most of the society's members. In rehabilitation, the system strips the criminal of his prior socialization into criminal behavior, including the techniques of committing crime and the specific motives, drives, rationalizations, and attitudes. Criminal behavior is learned behavior and so can be unlearned.
The first step towards rehabilitation is the choice of milieu, that is, the type of interactions the deviant has with the people around him in custody; this is usually determined after psychological and sociological screenings are performed on the criminal. The second step is diagnosis, a continual process influenced by feedback from the individual's behavior. The next stage is treatment, which depends on the diagnosis: whether it involves treating an addiction or redefining a person's values, the treatment is what socializes the criminal back to societal norms.
References
Conley, Dalton. You May Ask Yourself: An Introduction to Thinking like a Sociologist. New York: W.W. Norton, 2011. Print.
Ferguson, Susan J., ed. Mapping the Social Landscape: Readings in Sociology. Boston: McGraw-Hill, 2002. Print.
Kennedy, Daniel B., and August Kerber. Resocialization, an American Experiment. New York: Behavioral Publications, 1973. Print.
Sociological terminology
Social influence
Majority–minority relations
Euthymia (medicine)

In psychiatry and psychology, euthymia is a normal, tranquil mental state or mood.
In those with bipolar disorder, euthymia is a stable mental state or mood that is neither manic nor depressive; achieving euthymia is the goal of treatment for bipolar patients. Euthymia is also the “baseline” of other cyclical mood disorders such as major depressive disorder (MDD), as well as of borderline personality disorder (BPD) and narcissistic personality disorder (NPD), and this state is the goal of psychiatric and psychological interventions.
Etymology
The term euthymia is derived from the Greek words eu ("well") and thymos ("spirit"). The word “thymos” also had four additional meanings: life energy; feelings and passions; desires and inclinations; and thought or intelligence. Euthymia is also derived from the verb “euthymeo”, which means both “I am happy, in good spirits” and “I make others happy, I reassure and encourage”. This is the basis on which the first formal definition of euthymia was built.
History
Democritus, who coined the philosophical concept of euthymia, said that it is achieved when "one is satisfied with what is present and available, taking little heed of people who are envied and admired and observing the lives of those who suffer and yet endure". This was later amended in the translation given by the Roman philosopher Seneca the Younger, for whom euthymia meant a state of internal calm and contentment. Seneca was also the first to link euthymia to a learning process: in order to achieve it, one must be aware of psychological well-being. Seneca’s definition included a caveat about detachment from current events; later, the Greek biographer Plutarch removed this caveat with a definition that focused more on learning from adverse events.
The traditional clinical concept of euthymia is the absence of disorder. This turns out to be insufficient: patients considered to be in remission no longer display symptoms meeting the threshold for diagnosis, but still show impairments in psychological well-being compared to healthy subjects.
Expansion of clinical concept
In 1958, Marie Jahoda gave a modern clinical definition of mental health in positive terms by outlining its criteria: "autonomy (regulation of behavior from within), environmental mastery, satisfactory interactions with other people and the milieu, the individual’s style and degree of growth, development or self-actualization, the attitudes of an individual toward his/her own self". In her definition she acknowledged the absence of disease as being necessary, but not sufficient, to constitute positive mental health, or euthymia.
Carol Ryff (1989) was the first to develop a comprehensive scale that could assess euthymia: the six-factor model of psychological well-being. The 84-item scale covers the facets of self-acceptance, positive relations with others, autonomy, environmental mastery, purpose in life, and personal growth. Garamoni et al. (1991) described euthymia as a balance between the positive and the negative across six dimensions of cognition and affect similar to the Ryff factors. Too much positivity in a single factor is not euthymia: for example, a person with too little "purpose in life" would lack a sense of meaning, while one with too much would hold unrealistic expectations and hopes.
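The arithmetic of such an instrument is straightforward. The sketch below scores an 84-item, six-factor scale of this kind; the even split of 14 items per factor, the 1–6 response format, and the reverse-keying scheme are assumptions made for illustration, not the published scoring key.

```python
# Illustrative scoring for an 84-item, six-factor well-being scale.
# The item-to-factor layout, 1-6 Likert format, and reverse-keyed
# item set are assumptions for this sketch, not Ryff's actual key.

FACTORS = [
    "self_acceptance", "positive_relations", "autonomy",
    "environmental_mastery", "purpose_in_life", "personal_growth",
]
ITEMS_PER_FACTOR = 14   # 6 factors x 14 items = 84 items
SCALE_MAX = 6           # assumed 1-6 Likert response format

def score_profile(responses, reverse_keyed=frozenset()):
    """Return the mean score per factor from an 84-item response list."""
    if len(responses) != len(FACTORS) * ITEMS_PER_FACTOR:
        raise ValueError("expected 84 responses")
    profile = {}
    for f, factor in enumerate(FACTORS):
        start = f * ITEMS_PER_FACTOR
        total = 0
        for i in range(start, start + ITEMS_PER_FACTOR):
            r = responses[i]
            # Flip reverse-keyed items so that higher always means better.
            total += (SCALE_MAX + 1 - r) if i in reverse_keyed else r
        profile[factor] = total / ITEMS_PER_FACTOR
    return profile
```

A profile scored this way makes the balance point visible: euthymia corresponds to moderate scores across all six factors rather than an extreme score on any one of them.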
The concept of resilience (resistance to stress) was added in the 2000s by authors in the field. Fava and Bech's (2016) definition can be seen as a modern example:
Lack of mood disturbances. As with the older clinical sense, full remission from past mood disorder. Any sadness, anxiety, or irritable mood should be short-lived and readily interrupted.
Positive affects. Cheerfulness, relaxation, interest in things, plus restorative sleep.
Psychological well-being. Flexibility (balance of psychic forces, similar to Garamoni), consistency (a unifying outlook on life), resilience (resistance to stress), and tolerance of anxiety and frustration.
Medical applications of the expanded concept
In 1987, Kellner published the Symptom Questionnaire (SQ), containing 24 items referring to positive feelings and 68 referring to negative ones. With the inclusion of positive feelings such as relaxation and friendliness, the SQ was found to be more sensitive to the effects of psychotropic medication. A number of other scales, such as the WHO-5, PWB, AAQ-II, and CIE, have also been developed to measure the positive side of euthymia.
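What distinguishes an instrument like the SQ from symptom-only inventories is simply that well-being and distress are scored as separate subscales. A minimal sketch of that idea follows; the yes/no response format and the item index sets are illustrative assumptions, not Kellner's published key.

```python
# Sketch: scoring a questionnaire that mixes positively and negatively
# worded items as two separate subscales. The yes/no format and the
# item index sets are illustrative assumptions.

def score_sq(answers, positive_items, negative_items):
    """answers maps item index -> True if the item was endorsed.

    Returns (well_being, distress) as separate counts, so that gains
    on the positive side remain visible even when symptom counts
    barely move.
    """
    well_being = sum(bool(answers.get(i)) for i in positive_items)
    distress = sum(bool(answers.get(i)) for i in negative_items)
    return well_being, distress

# Example: items 0-2 positively worded, items 3-5 negatively worded.
print(score_sq({0: True, 1: True, 4: True}, {0, 1, 2}, {3, 4, 5}))  # (2, 1)
```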
Macro-analysis and micro-analysis are techniques clinicians use to combine assessments of psychological well-being and distress. Using both may offer more insight into treatment planning: for example, well-being therapy (WBT) can help a patient self-observe and extend periods of well-being, while cognitive behavioral therapy (CBT) can target distress. Other therapies that address aspects of well-being include mindfulness-based cognitive therapy (MBCT) and acceptance and commitment therapy (ACT), which focus on flexibility, as well as the less-proven strengths-based CBT of Padesky and Mooney and forgiveness therapy.
A few clinical trials have used a sequential model, in which patients who have responded to antidepressants are tapered off the drug and then given a combined cognitive and well-being therapy. Although the results have been impressive with regard to relapse rates, it is unclear how much is due to the added well-being component. In a different trial setup, anxiety patients who had responded to behavioral therapy and mood disorder patients who had responded to medication were assigned to either CBT or WBT for residual symptoms. While both achieved a significant reduction of symptoms, WBT provided more benefit in terms of observer ratings and PWB scores. WBT may also be applicable to cyclothymic disorder. MBCT seems to be an effective add-on to treatment-as-usual in treatment-resistant depression.
Related terms
Parathymia, by contrast, is related to pathological laughter (called "Witzelsucht").
See also
Cyclothymia
Hyperthymia
Dysphoria
Euphoria
Euthymia (philosophy)
Hypomania
Major depressive disorder
Mania
Quality of life
Emotional dysregulation
Emotional dysregulation is characterized by an inability to flexibly respond to and manage emotional states, resulting in intense and prolonged emotional reactions that deviate from accepted social norms and exceed what is informally deemed appropriate or proportional to the stimuli encountered.
It is often linked to physical factors such as brain injury, or psychological factors such as adverse childhood experiences, and ongoing maltreatment, including child abuse, neglect, or institutional abuse.
Emotional dysregulation may be present in people with psychiatric and neurodevelopmental disorders such as attention deficit hyperactivity disorder, autism spectrum disorder, bipolar disorder, borderline personality disorder, complex post-traumatic stress disorder, and fetal alcohol spectrum disorders. In such cases as borderline personality disorder and complex post-traumatic stress disorder, hypersensitivity to emotional stimuli causes a slower return to a normal emotional state. This is manifested biologically by deficits in the frontal cortices of the brain. As such, the period after a traumatic brain injury, such as one producing a frontal lobe disorder, can be marked by emotional dysregulation. This is also true of neurodegenerative diseases.
Possible manifestations of emotion dysregulation include extreme tearfulness, angry outbursts or behavioral outbursts such as destroying or throwing objects, aggression towards self or others, and threats to kill oneself. Emotion dysregulation can lead to behavioral problems and can interfere with a person's social interactions and relationships at home, in school, or at their place of employment.
Etymology
The word dysregulation is a neologism created by attaching the prefix dys- to regulation. According to Webster's Dictionary, dys- is of Greek origin with various roots, akin to an Old English word meaning 'apart' and a Sanskrit word meaning 'bad, difficult'. It is frequently confused with the spelling disregulation, in which the prefix dis- means 'the opposite of' or 'absence of'; while disregulation refers to the removal or absence of regulation, dysregulation refers to ways of regulating that are inappropriate or ineffective.
Child psychopathology
There are links between child emotional dysregulation and later psychopathology. For instance, ADHD symptoms are associated with problems with emotional regulation, motivation, and arousal. One study found a connection between emotional dysregulation at 5 and 10 months, and parent-reported problems with anger and distress at 18 months. Low levels of emotional regulation behaviors at 5 months were also related to non-compliant behaviors at 30 months. While links have been found between emotional dysregulation and child psychopathology, the mechanisms behind how early emotional dysregulation and later psychopathology are related are not yet clear.
Symptoms
Smoking, self-harm, eating disorders, and addiction have all been associated with emotional dysregulation. Somatoform disorders may be caused by a decreased ability to regulate and experience emotions or an inability to express emotions in a positive way. Individuals who have difficulty regulating emotions are at risk for eating disorders and substance abuse as they use food or substances as a way to regulate their emotions. Emotional dysregulation is also found in people who have an increased risk of developing a mental disorder, particularly an affective disorder such as depression or bipolar disorder.
Childhood
Dysregulation is more prevalent in this age group, and is generally seen to decrease as children develop. During early childhood, emotional dysregulation or reactivity is considered to be situational rather than indicative of emotional disorders. It is important to consider parental mood disorders as genetic and environmental determinants. Children of parents with symptoms of depression are less likely to learn strategies for regulating their emotions and are at risk of inheriting a mood disorder. When parents have difficulty with regulating their emotions, they often cannot teach their children to regulate properly. The role of parents in a child's development is acknowledged by attachment theory, which argues that the characteristics of the caregiver-child relationship impact future relationships. Current research indicates that parent-child relationships characterized by less affection and greater hostility may result in children developing emotional regulation problems. If the child's emotional needs are ignored or rejected, they may experience greater difficulty dealing with emotions in the future. Moreover, conflict between parents is linked to increased emotional reactivity or dysregulation in children. Other factors involved include the quality of relationship with peers, the child's temperament, and social or cognitive understanding. Additionally, loss or grief can contribute to emotional dysregulation.
Research has shown that failures in emotional regulation may be related to the display of acting out, externalizing disorders, or behavior problems. When presented with challenging tasks, children who were found to have defects in emotional regulation (high-risk) spent less time attending to tasks and more time throwing tantrums or fretting than children without emotional regulation problems (low-risk). High-risk children had difficulty with self-regulation and had difficulty complying with requests from caregivers and were more defiant. Emotional dysregulation has also been associated with childhood social withdrawal.
Internalizing behaviors
Emotional dysregulation in children can be associated with internalizing behaviors including:
exhibiting emotions too intense for a situation;
difficulty calming down when upset;
difficulty decreasing negative emotions;
being less able to calm themselves;
difficulty understanding emotional experiences;
becoming avoidant or aggressive when dealing with negative emotions;
experiencing more negative emotions.
Externalizing behaviors
Emotional dysregulation in children can be associated with externalizing behaviors including:
exhibiting more extreme emotions;
difficulty identifying emotional cues;
difficulty recognizing their own emotions;
focusing on the negative;
difficulty controlling their attention;
being impulsive;
difficulty decreasing their negative emotions;
difficulty calming down when upset.
Adolescence
In adolescents, emotional dysregulation is a risk factor for many mental health disorders including depressive disorders, anxiety disorders, post-traumatic stress disorder, bipolar disorder, borderline personality disorder, substance use disorder, alcohol use disorder, eating disorders, oppositional defiant disorder, and disruptive mood dysregulation disorder. Dysregulation is also associated with self-injury, suicidal ideation, suicide attempts, and risky sexual behavior. Emotional dysregulation is not a diagnosis, but an indicator of an emotional or behavioral problem that may need intervention.
Attachment theory, and the idea of insecure attachment, is implicated in emotional dysregulation. Greater attachment security correlates with less emotional dysregulation in daughters, and more female teens have been observed to struggle with emotional dysregulation than males. Professional treatment, such as therapy or admission to a psychiatric facility, is recommended.
Adulthood
Emotional dysregulation tends to present as emotional responses that may seem excessive compared to the situation. Individuals with emotional dysregulation may have difficulty calming down, avoid difficult feelings, or focus on the negative. On average, women tend to score higher on scales of emotional reactivity than men. A study at University College in Ireland found that, in adults, dysregulation correlates with rumination and with negative feelings about one's ability to cope with emotions. The study also found dysregulation to be common in a sample of individuals not affected by mental disorders.
Part of emotional dysregulation, which is a core characteristic in borderline personality disorder, is affective instability, which manifests as rapid and frequent shifts in mood of high affect intensity and rapid onset of emotions, often triggered by environmental stimuli. The return to a stable emotional state is notably delayed, exacerbating the challenge of achieving emotional equilibrium. This instability is further intensified by an acute sensitivity to psychosocial cues, leading to significant challenges in managing emotions effectively.
Impact on relationships
Established relationships
Relationships are generally linked to better well-being, but dissatisfaction in relationships can lead to increased divorce, worsened health, and potential violence. Emotional dysregulation plays a role in relationship quality and overall satisfaction. It can be difficult for emotionally dysregulated individuals to maintain healthy relationships. People who struggle with emotional dysregulation often externalize, internalize, or dissociate when exposed to stressors. These behaviors are attempts to regulate emotions but often are ineffective in addressing stress in relationships. This commonly presents itself as intense anxiety around relationships, poor ability to set and sustain boundaries, frequent and damaging arguments, preoccupation with loneliness, worries about losing a relationship, and jealous or idealizing feelings towards others. These feelings may be accompanied by support-seeking behaviors such as clinging, smothering, or seeking to control.
The counterpart of emotional dysregulation, emotional regulation, strengthens relationships. The ability to regulate negative emotions in particular is linked to positive coping and thus higher relationship satisfaction. Emotional regulation and communication skills are linked to secure attachment, which has been related to higher partner support as well as openness in discussing negative experiences and resolving conflict. On the other hand, emotional dysregulation has a negative impact on relationships. Multiple studies note the effects of emotion dysregulation on relationship quality. One study found that relationship satisfaction is lower in couples that lack impulse control or regulatory strategies. Another study found that both husbands' and wives' emotional reactivity was negatively linked with marriage quality as well as perceptions of partner responsiveness. The literature concludes that dysregulation increases instances of perceived criticism, contributes to physical and psychological violence, and worsens depression, anxiety, and sexual difficulties. Dysregulation has also been observed to lower empathy and decrease relationship satisfaction, quality, and intimacy.
Sexual health
Research conflicts on whether higher levels of emotional reactivity are linked to increases or decreases in sexual desire. Moreover, this effect could differ between men and women, given observed gender differences in emotional reactivity. Some research posits that higher emotional reactivity in women is linked to greater sexual attraction in their male partners. However, difficulties in regulating emotions have been linked to poorer sexual health, in terms of both ability and overall satisfaction.
Emotional dysregulation plays a role in nonconsensual and violent sexual encounters. Emotional regulation skills prevent verbal coercion by regulating feelings of sexual attraction in men. Consequently, a lack of emotional regulation skills can cause both internalizing and externalizing behaviors in a sexual context. This may mean violence, which can serve as a strategy for regulating emotion. In a non-violent context, insecurely attached individuals may seek to satisfy their need for connection or to resolve relational issues with sex. Communication can also be hindered, as emotional dysregulation has been linked to an inability to express oneself in sexual situations. This can lead to victimization as well as further sexual difficulties. Thus, the ability to both recognize emotions and express negative emotions are important for communication and social adjustment, including within sexual contexts.
Mediating effects
While personal characteristics and experiences can contribute to externalizing and internalizing behaviors as listed above, emotional regulation has an interpersonal aspect. Couples who effectively co-regulate have higher emotional satisfaction and stability. Openly discussing emotions in the relationship can help to validate feelings of insecurity and encourage closeness. For partners who struggle with emotional dysregulation, there are available treatments. Couple's therapy has shown itself to be an effective method of improving relationship satisfaction and quality by positively impacting the process of emotional regulation in relationships.
Protective factors
Early experiences with caregivers can lead to differences in emotional regulation. The responsiveness of a caregiver to an infant's signals can help an infant regulate their emotional systems. Caregiver interaction styles that overwhelm a child or that are unpredictable may undermine emotional regulation development. Effective strategies involve working with a child to support developing self-control such as modeling a desired behavior rather than demanding it.
The richness of the environment a child is exposed to also supports the development of emotional regulation. Such an environment must provide appropriate levels of freedom and constraint and allow opportunities for the child to practice self-regulation; opportunities to practice social skills without overstimulation or excessive frustration help a child develop self-regulation skills.
Substance use
Several variables have been explored to explain the connection between emotional dysregulation and substance use in young adults, such as child maltreatment, cortisol levels, family environment, and symptoms of depression and anxiety. Vilhena-Churchill and Goldstein (2014) explored the association between childhood maltreatment and emotional dysregulation: more severe childhood maltreatment was associated with greater difficulty regulating emotion, which in turn was associated with a greater likelihood of coping by using marijuana. Kliewer et al. (2016) studied the relationship between negative family emotional climate, emotional dysregulation, blunted anticipatory cortisol, and substance use in adolescents. A more negative family emotional climate was associated with high levels of emotional dysregulation, which was in turn associated with increased substance use; girls showed blunted anticipatory cortisol levels, which were also associated with increased substance use. Both childhood experiences and family emotional climate thus appear to be linked to substance use through emotional dysregulation. Prosek, Giordano, Woehler, Price, and McCullough (2018) explored the relationship between mental health and emotional regulation in collegiate illicit substance users. Illicit drug users reported higher levels of depression and anxiety symptoms, and their emotional dysregulation was more prominent in the sense that they had less clarity and were less aware of their emotions as the emotions were occurring.
Treatment
Many people experience dysregulation and can struggle at times with uncontrollable emotions. Thus, potential underlying issues are important to consider in determining severity. As the ability to appropriately express and regulate emotions is related to better relationships and mental health, parental support can help regulate the emotions of children struggling with emotional dysregulation. Training to help parents address this issue focuses on predictability and consistency. These tenets are thought to provide comfort by creating a sense of familiarity and thus safety.
While cognitive behavioral therapy is the most widely prescribed treatment for such psychiatric disorders, the psychotherapeutic treatment most commonly prescribed for emotional dysregulation is dialectical behavior therapy, which promotes the use of mindfulness and dialectics and emphasizes the importance of validation and maintaining healthy behavioral habits.
When diagnosed as part of ADHD, norepinephrine and dopamine reuptake inhibitors such as methylphenidate (Ritalin) and atomoxetine are often used. A few studies have also shown promise for non-pharmacological treatments in people with ADHD and emotional problems, although the research is limited and requires further inquiry.
Eye movement desensitization and reprocessing (EMDR) can help recovery from emotional dysregulation when the dysregulation is a symptom of prior trauma. Outside of therapy, strategies such as mindfulness, affirmations, and gratitude journaling can help individuals recognize how they are feeling and put space between an event and their response. Hypnosis may also help to improve emotional regulation, and movement such as yoga and aerobic exercise can be therapeutic by aiding regulation and the understanding of how one's mind influences behavior.
See also
Adrenal insufficiency
Alexithymia
Anxiety
Conduct disorder
Emotional self-regulation
Epigenetics of anxiety and stress–related disorders
Pseudobulbar affect
Reduced affect display
Spiritual crisis
WAVE Trust
Learning through play
Learning through play is a term used in education and psychology to describe how a child can learn to make sense of the world around them. Through play children can develop social and cognitive skills, mature emotionally, and gain the self-confidence required to engage in new experiences and environments.
Key ways that young children learn include playing, being with other people, being active, exploring and new experiences, talking to themselves, communication with others, meeting physical and mental challenges, being shown how to do new things, practicing and repeating skills and having fun.
Definitions of work and play
Play enables children to make sense of their world: children possess a natural curiosity to explore, and play acts as a medium for doing so.
Definitions of play
In Einstein Never Used Flash Cards, five elements of children's play are outlined:
Play must be pleasurable and enjoyable.
Play must have no extrinsic goals.
Play is spontaneous and voluntary.
Play involves active engagement.
Play involves an element of make-believe.
Additionally, play is characterized by creativity and imagination. Creativity is evident in role play, construction activities, and other forms of imaginative play. Imagination allows children to create mental images related to their feelings, thoughts, and ideas, which they then incorporate into their play.
In Playing and Learning, researchers Beverlie Dietze and Diane Kashin identify seven common characteristics of play:
Play is active.
Play is child-initiated.
Play is process-oriented.
Play is intrinsic.
Play is episodic.
Play is rule-governed.
Play is symbolic.
Contrasted with Work
There are important distinctions between play and work in the context of children's activities. Play is generally a self-directed activity chosen by the child and is centered around exploration and enjoyment. In contrast, work typically involves structured tasks with specific goals and outcomes.
According to researchers, Dietze and Kashin, play is characterized by internal control, the ability to adapt or create new realities, and intrinsic motivation. When adults impose specific objectives on an activity and label it as play, it can blur the line between play and work. For example, using flash cards to help a child memorize information may be more closely associated with work due to its structured nature and goal orientation.
Understanding the distinction between play and work can have implications for child development. While structured activities can provide learning opportunities, play fosters creativity, problem-solving, and autonomy. Educators and parents mindful of these differences can create environments that support children's holistic development.
Play offers an opportunity for children to build new knowledge from previous experiences. Researchers have differentiated between work and play in various ways:
Primary Activities: a child's action is work if it adds immediate value to the family unit, even if the culture perceives the action as play.
Parental Perspective: Parents from different cultures have varying views on what constitutes work or play in children's actions. For instance, a Mayan mother may see her daughter setting up a fruit stand as play, while many Western cultures might view it as work if the child successfully sells the fruit.
Child's Perspective: Children may have different perceptions of play and work compared to adults, which can influence how they engage in and interpret various activities.
Classical, modern and contemporary perspectives
There are three main groups of play theories:
Classical Theories
Classical theorists such as Jean-Jacques Rousseau, Friedrich Froebel, and John Dewey had a significant impact on changing societal views of childhood. They emphasized the importance of play in children's learning and development. These theorists promoted children's learning experiences through direct interaction with nature and life.
Classical theories of play also include concepts such as burning off excess energy, recreation and relaxation, replenishing energy after hard work, practicing future roles, and recapitulation theory. Herbert Spencer proposed that play allows humans to expend excess energy not required for survival.
Modern Theories
Modern theories focus on play's role in cognitive development. Jean Piaget emphasized how children construct knowledge through play-based stages of development, which has influenced many early childhood education programs. Friedrich Froebel's idea of play as 'serious work' aligns with modern perspectives on play's educational value.
Modern perspectives also examine play's impact on a child's development. For example, Dietze and Kashin view the learner as an active constructor of meaning.
Contemporary Theories
Contemporary theories emphasize the role of social and cultural contexts in children's learning and development. Rousseau's work on children's rights and the need for protection due to their innocence is an aspect of contemporary perspectives. Dewey's view of the child as an active agent in learning also aligns with contemporary theories that focus on empowering children through play.
Contemporary theories address the relationship between play, diversity, and social justice in daily life and learning. Children learn through their daily living experiences and are influenced by various contexts such as family, community, culture, and broader society. Lev Vygotsky's concept of the Zone of Proximal Development suggests that children need activities that support past learning while encouraging new challenges. Social engagement and collaboration with others can transform children's thinking. Urie Bronfenbrenner highlights the impact of the person-environment relationship on child development (Khuluqo, 2016; Bodrova & Leong, 2015).
Cultural Perspectives
Cross-Cultural Perspectives
While play has been studied extensively in Western cultures, including by Susan Isaacs in the first half of the 20th century, experts like Gunilla Dahlberg and Marilyn Fleer challenge the universality of Western perspectives on play. Fleer's work with Australian Aboriginal children suggests that not all cultures emphasize play in the same way. Different cultures and communities have distinct ways of encouraging play: for instance, some may discourage adult involvement in play or expect children to play in mixed age groups away from adults, and some may expect children to outgrow play by a certain age.
The Yucatec Maya culture offers a unique approach to play and learning, emphasizing reality-based activities and observation.
Learning through Play
Yucatec Maya children engage in play that is closely tied to real-life activities such as making tortillas, weaving, and cleaning clothing. They often learn through "Intent Community Participation," which involves observation and participation in community activities. Unlike children in many Western cultures, Yucatec Maya children do not engage in extensive pretend play, as it is considered akin to lying because it involves representing something that isn't real. For example, a Mayan mother told an ethnographer that she would "tolerate" her child pretending that the leaves in a bowl were a form of food. Instead, their play mirrors everyday life.
Age Groups and Interaction
Yucatec Maya children play and interact with individuals of all ages, rather than focusing on age-segregated play typical in some Western cultures. This approach helps them model adult behaviors and explore realistic representations of their culture.
Observational Learning
Observation plays a crucial role in Yucatec Maya children's learning process. They actively participate by observing and modeling useful activities within the community; accordingly, learning "is inherently integrated into the daily activities of the compound".
Importance
Play is sufficiently important that the United Nations has recognized it as a specific right for all children. Children need the freedom to explore and play. Play also contributes to brain development, enabling development in the prefrontal cortex of mammals, including humans. Evidence from neuroscience shows that the early years of a child's development (from birth to age six) set the basis for learning, behavior, and health throughout life. A child's neural pathways are shaped by the exploration, thinking, problem-solving, and language expression that occur during play episodes. According to the Canadian Council on Learning, "Play nourishes every aspect of children's development – it forms the foundation of intellectual, social, physical, and emotional skills necessary for success in school and in life. Play 'paves the way for learning'".
Learning occurs when children play with blocks, paint a picture or play make-believe. During play children try new things, solve problems, invent, create, test ideas and explore. Children need unstructured, creative playtime; in other words, children need time to learn through their play. The level of emotional arousal during play is ideal for the consolidation and integration of neural pathways. Letting the child direct the play lets them find the place where they are most comfortable, which promotes neuroplasticity. Children engaged in self-directed play can create their own schemas, integrating affect and cognition, and can co-construct wordless narratives of self-awareness and transformation.
According to Pascal, "Play is serious business for the development of young learners. This is such an important understanding. A deliberate and effective play-based approach supports young children's cognitive development. When well designed, such an approach taps into children's individual interests, draws out their emerging capacities, and responds to their sense of inquiry and exploration of the world around them. It generates highly motivated children enjoying an environment where the learning outcomes of a curriculum are more likely to be achieved".
In childhood
Play is strongly linked to learning in young children, especially in areas such as problem-solving, language acquisition, literacy, numeracy, and social, physical, and emotional skills. Through learning-based play, children actively explore their environment and the world around them. Play is essential for a child's optimal social, cognitive, physical, and emotional development. Researchers agree that play establishes a foundation for intellectual growth, creativity, and basic academic knowledge.
According to Dorothy Singer, make-believe games allow children to imagine different roles and scenarios. Through sociodramatic play, children learn to manage emotions, understand the world, and navigate social interactions such as sharing and cooperation.
Purposeful, quality play experiences build critical skills for cognitive development and academic achievement. These include verbalization, language comprehension, vocabulary, imagination, questioning, problem-solving, observation, empathy, cooperation, and understanding others' perspectives.
Play also helps children develop social skills, creativity, hand-eye coordination, problem solving, and imagination. These skills are often more effectively learned through play than through flashcards or academic drills. Additionally, Slovak researchers Gmitrova and Gmitrov emphasize the importance of pretend play as a medium for children to progress beyond the educational curriculum.
Social play boosts children's confidence when trying new activities and enhances their ability to work with different symbols creatively. The benefits of play are so extensive that it is considered an evolutionary and developmentally important activity, helping children engage in socially appropriate behaviors that benefit them into adulthood.
Beliefs about the play-learning relationship
Linda Longley and colleagues found differing beliefs about the relationship between play and learning. While parents often see structured play activities (e.g., educational videos) as more valuable for learning, experts regard unstructured activities (such as pretend play) as more beneficial.
Even though teachers may recognize the value of play-based learning, research across several countries (such as China, India, and Ireland) suggests a gap between their beliefs and classroom practices. For example, in some settings, teachers who value play-based learning still rely more on traditional instruction methods. This may be due to factors such as accountability pressures or a lack of resources.
These challenges demonstrate a notable gap between teachers' beliefs about play-based learning and their classroom practices. This discrepancy can affect students' opportunities for growth and development through play-based activities, which support early literacy, language, mathematics, and socio-emotional skills (Lynch, 2015).
Play-based learning
Play-based learning is an educational approach that supports children's development and learning. Through play, children can develop content knowledge, social skills, competences, and a positive disposition to learn.
This approach is rooted in Lev Vygotsky's model of scaffolding, where teachers focus on specific aspects of play activities and offer encouragement and feedback on children's learning. Play can challenge children's thinking, especially when they engage in real-life and imaginary activities. Sensitive intervention and adult support can be provided during play-based learning when necessary.
Children learn best through first-hand experiences in play-based learning. This approach motivates and stimulates children while supporting the development of skills, concepts, language acquisition, communication skills, and concentration. It also offers opportunities for children to develop positive attitudes and consolidate recent learning, skills, and competencies.
The DCSG outlined benefits of play-based learning in early childhood education. Playful children use and apply their knowledge, skills, and understanding in different ways and contexts. Practitioners also engage children in activities that help them learn and develop positive dispositions for learning. Practitioners should not plan children's play directly, as this can interfere with the choice and control central to play. Instead, they should plan for play by creating high-quality learning environments and ensuring uninterrupted periods for children to engage in play.
According to researchers Kathy Hirsh-Pasek and Roberta Michnick Golinkoff, adults playing with children can positively impact the quality and variety of play. When adults join in, they guide and extend play without controlling it, which allows children to follow their own interests and engage in cognitive development more effectively. Play is the language and currency of children.
Educators, parents, and guardians can facilitate children's learning during play in several ways:
Model Positive Attitudes: Adults can encourage play by providing a balance of indoor and outdoor activities throughout the year. By participating in play, adults guide and shape the experience without dominating it.
Create an Engaging Environment: Select a variety of toys, materials, and equipment to suit different skill levels and interests. This approach motivates children's exploration and discovery.
Observe and Respond: Ongoing observation of how children interact with toys and materials can provide insights into their interests and abilities, guiding further learning and development.
Engage Thoughtfully: Adults should carefully join in play activities without overshadowing children's initiatives, allowing children to take the lead.
Extend Play: Listening, repeating, extending, and asking questions at the right moments can help expand and enhance play. Adults can provide the language needed for children to articulate their observations.
Encourage Social and Cognitive Skills: By providing social knowledge and opportunities for children to explore physical and logico-mathematical knowledge, adults help children understand the world around them and solve problems.
By using these approaches, adults can create a supportive environment that nurtures children's natural curiosity and cognitive growth during play.
Criticism of play-based learning
Knowledge acquisition
Research over the past forty years has shown a positive correlation between play and children's learning, indicating that play can benefit children's education. However, some findings suggest that play may be more closely associated with procedural knowledge (skills and strategies) than with declarative knowledge (facts and information). Correlational research alone cannot definitively determine the extent to which play influences learning outcomes. While play can help children develop important procedural knowledge, which can later support the acquisition of declarative knowledge, the relationship between play and declarative learning is not yet fully established.
Pretend Play: Creativity, Intelligence, and Problem-Solving
Regarding creativity, evidence from meta-analyses on pretend play is mixed, with some studies suggesting a relationship with creativity and others finding little impact. The connection between play and intelligence remains unclear, as research cannot conclusively determine whether play promotes intelligence or if intelligence encourages play. In terms of problem-solving, construction play is correlated with solving puzzles and other similar tasks.
Recent studies indicate that engaging in playful interactions with peers helps children develop essential life skills such as problem-solving abilities and conflict resolution. Play also fosters self-confidence and emotional regulation, promoting collaboration, communication, and the expression of ideas and feelings. Additionally, play provides caregivers with opportunities to observe children's behavior and intervene if necessary, offering support for developmental delays or trauma.
Pretend Play
Pretend play, or "make-believe play," involves acting out scenarios and exploring different perspectives. While some studies question the impact of pretend play on child development, others suggest it can enhance language usage, awareness of others' perspectives, and self-regulation in areas such as empathy and delayed gratification. Pretend play may also improve social skills, such as problem-solving and communication. Play-based learning experiences provide caregivers with valuable insights into children's behavior, enabling early interventions when necessary.
Play-based learning programs
Play-based learning programs encompass a variety of educational approaches that enhance children's learning experiences through engaging play activities. These programs emphasize the development of skills such as listening, concentration, communication, and self-direction.
Enriched Curriculum
The Enriched Curriculum is designed to enhance children's learning experiences by incorporating play-based learning. This curriculum combines outdoor physical activities with indoor play in smaller group settings to promote children's development. Critics have expressed concerns about the Enriched Curriculum, particularly its potential to delay reading and writing lessons, its need for extra resources, and its ability to cater to different types of learners.
Notable Play-Based Learning Programs
High/Scope is a cognitive approach that involves children actively in their own learning. It offers 58 key experiences and uses a plan-do-review approach during learning center time. This method helps children take responsibility for their own learning while adults serve as facilitators of play.
Creative Curriculum is an early childhood teaching approach that emphasizes social and emotional development. It uses project-based investigations to allow children to apply skills and addresses four areas of development: social/emotional, physical, cognitive, and language.
The Montessori Method promotes self-directed activity and clinical observation on the part of the teacher. This approach adapts the learning environment to the child's developmental level, encouraging children to learn through play.
The Ontario Full Day Early Learning Kindergarten Program, for 4- and 5-year-olds, consists of exploration, investigation, and guided and explicit instruction.
Ontario Early Years Centres focus on play-based learning through parent-child interaction. Parents and caregivers can stay with the child and access information about available programs and services.
The Reggio Emilia approach is a child-directed curriculum model that follows the children's interests. It emphasizes purposeful progression and emergent curriculum without a predetermined teacher-directed sequence.
The Project Approach involves preschoolers in studies of nearby topics that interest them. This teacher-instructed approach introduces new vocabulary and provides opportunities for informal conversation (Dfuoss, 2019).
Benefits of Different Types of Play in Child Development
Free Play
Free play is observed when children engage in activities based on their preferences, making their own choices regarding what they do and how they do it. It often occurs spontaneously, is enjoyable, and encourages imaginative thinking. This type of play typically unfolds without specific rules imposed by adults, allowing children the freedom to explore, express creativity, and experiment with different approaches.
This type of play is believed to allow children to tap into their creativity and problem-solving abilities as they tackle different tasks and obstacles independently. It also offers them opportunities to express themselves and participate in imaginative scenarios, potentially boosting cognitive development and fostering positive social interactions with peers. Some research suggests that free play may nurture imagination and social skills, which are seen as important for overall growth (Weisberg, Hirsh-Pasek, and Golinkoff, 2013).
Examples of how free play might foster imagination include:
Playing to Learn Words
Some studies indicate that children from less privileged backgrounds may benefit from playful learning in vocabulary acquisition (Han, Moore, Vukelich, & Buell, 2010).
Learning by Exploring
Research suggests that children may perform better academically when they receive some guidance while exploring independently, compared to being left entirely on their own (Alfieri, Brooks, Aldrich, & Tenenbaum, 2010).
Shapes and Play
Studies have found that children may grasp concepts like shapes more effectively when they engage in playful activities (Fisher, Hirsh-Pasek, Newcombe, & Golinkoff).
Teacher-directed play
This type of play allows teachers to lead structured activities to teach new concepts and skills. It promotes valuable learning opportunities, teamwork, following instructions, and cooperative learning among children.
Mutually directed play
Hope-Southcott (2013) and McLennan (2012) describe a type of play that entails collaboration between children and teachers, fostering learning through shared experiences and interaction. Mutually directed play also encourages communication, negotiation, and decision-making skills while promoting positive teacher-student relationships and peer interactions.
Examples of how mutually directed play benefits both children and adults include:
Environmental Preparation
Adults set up the play environment with specific toys or materials to support learning. For example, a teacher might choose toys for a classroom activity, or a museum might design exhibits for children to explore.
Scaffolding Children’s Actions
Adults can help children during play by asking questions like "What do you think would happen if...". These questions gently guide children towards learning without rushing them.
Incorporating Objects
Adults introduce new objects during play to spark children's curiosity. For instance, they might say, "I wonder what would happen if you try using this one?" This lets children explore while still focusing on learning.
Diverse Play Adaptations: Enhancing Learning and Development
Adapting Play to Meet the Needs of Children with Disabilities
Teachers can adapt play to meet the needs of children with disabilities or special needs in various ways. According to Sharifah and Aliza (2013), effective lesson planning tailored to students' specific needs and abilities can enhance the educational experience for all students. Selecting suitable techniques and strategies for each lesson topic and learning objective supports the diverse needs of students, and utilizing appropriate learning aids, such as visual or tactile resources, can improve accessibility and engagement. Nor Azlinah (2010) found that encouraging collaborative learning allows students to work in groups and benefit from social interaction.
Benefits of Play-Based Learning for Children with Disabilities
Play-based learning offers numerous benefits for children with various types of disabilities. It supports cognitive and language development, particularly for children with autism spectrum disorders. Play-based learning also promotes emotional and social development by fostering positive interactions and cooperation among students. By considering different approaches and techniques, teachers can create inclusive learning environments that support the diverse needs of their students.
These insights provide an overview of how play can be adapted to meet the needs of children with disabilities and how play-based learning benefits children with various types of disabilities.
See also
Children's street culture
Educational entertainment
Home zone / Play street
Further reading
The Creative Curriculum for Preschool. (2013, March).
Dfuoss. (2019). Project Approach for Preschoolers.
Self-image
Self-image is the mental picture a person has of themselves, generally of a kind that is quite resistant to change. It depicts not only details that are potentially available to objective investigation by others (height, weight, hair color, etc.), but also items that people have learned about themselves, either from personal experience or by internalizing the judgments of others. In some formulations, it is a component of self-concept.
Self-image may consist of six types:
Self-image resulting from how an individual sees oneself.
Self-image resulting from how others see the individual.
Self-image resulting from how the individual perceives the individual seeing oneself.
Self-image resulting from how the individual perceives how others see the individual.
Self-image resulting from how others perceive how the individual sees oneself.
Self-image resulting from how others perceive how others see the individual.
These six types may or may not be an accurate representation of the person. All, some, or none of them may be true.
A more technical term for self-image, commonly used by social and cognitive psychologists, is self-schema. Like any schema, self-schemas store information and influence the way we think and remember. For example, research indicates that information referring to the self is preferentially encoded and recalled in memory tests, a phenomenon known as "self-referential encoding". Self-schemas are also considered the traits people use to define themselves; they draw information about the self into a coherent scheme.
Poor self-image
Poor self-image may be the result of accumulated criticisms collected in childhood that have damaged a person's view of themselves. Children in particular are vulnerable to accepting negative judgments from authority figures because they have yet to develop competency in evaluating such comments. Adolescents are especially prone to poor body image, and individuals who already exhibit a low sense of self-worth may be vulnerable to developing social disorders.
Negative self-images can arise from a variety of factors. A prominent factor is personality type: perfectionists, high achievers, and those with "type A" personalities seem to be prone to negative self-images. This is because such people constantly set the standard for success high above a reasonable, attainable level, and are thus constantly disappointed by this "failure."
Another factor that contributes to a negative self-image is the beauty values of the society in which a person lives. In American society, a popular beauty ideal is slimness. Girls often believe that they do not measure up to society's "thin" standards, which leads to a negative self-image.
Maintenance
When people are in the position of evaluating others, self-image maintenance processes can lead to a more negative evaluation, depending on the self-image of the evaluator. That is to say, stereotyping and prejudice may be ways individuals maintain their self-image. When individuals evaluate a member of a stereotyped group, they are less likely to evaluate that person negatively if their self-image has been bolstered through a self-affirmation procedure, and more likely to evaluate that person stereotypically if their self-image has been threatened by negative feedback. Individuals may restore their self-esteem by derogating the member of a stereotyped group.
Fein and Spencer (1997) conducted a study on Self-image Maintenance and Discriminatory Behavior. This study showed evidence that increased prejudice can result from a person's need to redeem a threatened positive perception of the self. The aim of the study was to test whether a particular threat to the self would instigate increased stereotyping and lead to actual discriminatory behavior or tendencies towards a member of a "negatively" stereotyped group.
The study began when Fein and Spencer gave participants an ostensible test of intelligence. Some of them received negative feedback, and others, positive and supportive feedback. In the second half of the experiment, the participants were asked to evaluate another person who either belonged to a negatively stereotyped group, or one who did not.
The results of the experiment showed that the participants who had previously received unfavorable comments on their test evaluated the target from the negatively stereotyped group in a more antagonistic or opposing way than the participants who were given excellent reports on their intelligence test. The authors suggested that the negative feedback threatened the participants' self-image, and they evaluated the target more negatively in an effort to restore their own self-esteem.
A later study extended the work of Fein and Spencer by examining avoidance behavior. Macrae et al. (2004) found that participants for whom a negative stereotype of "skinheads" was salient physically placed themselves further from a skinhead target than those for whom the stereotype was less apparent. Greater salience of a negative stereotype thus led participants to show more stereotype-consistent behavior towards the target.
Residual
Residual self-image is the concept that individuals tend to think of themselves as projecting a certain physical appearance, or certain position of social entitlement, or lack thereof. The term was used at least as early as 1968, but was popularized in fiction by the Matrix series, where persons who existed in a digitally created world would subconsciously maintain the physical appearance that they had become accustomed to projecting.
Victimisation
Victims of abuse and manipulation often get trapped into a self-image of victimisation. The psychological profile of victimisation includes a pervasive sense of helplessness, passivity, loss of control, pessimism, negative thinking, strong feelings of self-guilt, shame, self-blame and depression. This way of thinking can lead to hopelessness and despair.
Children's disparity
Self-image disparity was found to be positively related to chronological age (CA) and intelligence. Two factors thought to increase concomitantly with maturity were capacity for guilt and ability for cognitive differentiation. However, males had larger self-image disparities than females, Caucasians had larger disparities and higher ideal self-images than African Americans, and socioeconomic status (SES) affected self-images differentially for the 2nd and 5th graders.
Strengtheners
A child's self-awareness of who they are differentiates into three categories around the age of five: their social self, academic persona, and physical attributes. Several ways to strengthen a child's self-image include communication, reassurance, support of hobbies, and finding good role models.
Evolved awareness in mirror
In the earliest stages of development, infants are not aware that images in mirrors are themselves. Research was done on 88 children between 3 and 24 months. Their behaviors were observed before a mirror. The results indicated that children's awareness of self-image followed three major age-related sequences:
From about 6 through 12 months of age, the first prolonged and repeated reaction of an infant to their mirror image is that of a sociable “playmate”.
In the second year of life, wariness and withdrawal appeared; self-admiring and embarrassed behavior accompanied those avoidance behaviors starting at 14 months, and was shown by 75% of the subjects after 20 months of age.
During the last part of the second year of life, from 20 to 24 months of age, 65% of the subjects demonstrated recognition of their mirror images.
Women's sexual behavior
A magazine survey that included items about body image, self-image, and sexual behaviors was completed by 3,627 women. The study found that overall self-image and body image are significant predictors of sexual activity. Women who were more satisfied with body image reported more sexual activity, orgasm, and initiating sex, greater comfort undressing in front of their partner, having sex with the lights on, trying new sexual behaviors (e.g. anal sex), and pleasing their partner sexually than those dissatisfied. Positive body image was inversely related to self-consciousness and importance of physical attractiveness, and directly related to relationships with others and overall satisfaction.
Men's sexual behavior
An article published in the journal Psychology of Men & Masculinity analyzed how perceived penile size affected body satisfaction in males. Based on the responses of 110 heterosexual individuals (67 men; 43 women) to questions on the matter, the article concluded: "Men showed significant dissatisfaction with penile size, despite perceiving themselves to be of average size. Importantly, there were significant relationships between penile dissatisfaction and comfort with others seeing their penis, and with likelihood of seeking medical advice with regard to penile and/or sexual function. Given the negative consequences of low body satisfaction and the importance of early intervention in sexually related illnesses (e.g., testicular cancer), it is imperative that attention be paid to male body dissatisfaction."
See also
Acquiescence
Body image
Body schema
Culture of poverty
Dunning–Kruger effect
End-of-history illusion
Face (self image)
Fear of negative evaluation
Figure rating scale
Ideas of fairness
Identity formation
Identity performance
Inferiority complex
The Honest Body Project
Positive mental attitude
Psychological projection
Self-concealment
Self-concept
Self-efficacy
Self-esteem
Self (psychology)
Self-schema
Style of life
Victimology
References
Conceptions of self
Perception
Psychological theories
Job demands-resources model
The job demands-resources model (JD-R model) is an occupational stress model that suggests strain is a response to imbalance between demands on the individual and the resources he or she has to deal with those demands. The JD-R was introduced as an alternative to other models of employee well-being, such as the demand-control model and the effort-reward imbalance model.
The authors of the JD-R model argue that these models "have been restricted to a given and limited set of predictor variables that may not be relevant for all job positions" (p. 309). Therefore, the JD-R incorporates a wide range of working conditions into the analyses of organizations and employees. Furthermore, instead of focusing solely on negative outcome variables (e.g., burnout, ill health, and repetitive strain) the JD-R model includes both negative and positive indicators and outcomes of employee well-being.
Basic assumptions
The JD-R model can be summarized with a short list of assumptions/premises:
Whereas every occupation may have its own specific risk factors associated with job stress, these factors can be classified in two general categories: job demands and job resources.
Job demands: physical, psychological, social, or organizational aspects of the job that require sustained physical and/or psychological effort or skills. They are therefore associated with certain physiological and/or psychological costs. Examples are work pressure and emotional demands.
Job resources: physical, psychological, social, or organizational aspects of the job that are functional in achieving work goals, reduce job demands and their associated physiological and psychological costs, or stimulate personal growth, learning, and development. Examples are career opportunities, supervisor coaching, role clarity, and autonomy.
Workplace resources vs. personal resources: the authors of the JD-R make a distinction between workplace resources and personal resources. Workplace resources are the physical and social resources available in the work setting, whereas personal resources are those the employee brings with them; these consist of specific personality traits such as self-efficacy and optimism. Both types of resources are powerful mediators of employee well-being (e.g., engagement).
Two different underlying psychological processes play a role in the development of job strain and motivation.
Outcomes of continued job strain
Health impairment process: through this process, poorly designed jobs or chronic job demands exhaust employees' mental and physical resources. In turn, this might lead to the depletion of energy and to health problems.
Outcomes of abundant job and personal resources
Motivational process: through this process, job resources exert their motivating potential and lead to high work engagement, low cynicism, and excellent performance. Job resources may play either an intrinsic or an extrinsic motivational role.
The interaction between job demands and job resources is important for the development of job strain and motivation as well. According to the JD-R model, job resources may buffer the effect of job demands on job strain, including burnout. Which specific job resources buffer the effect of which job demands depends on the particular work environment. Thus, different types of job demands and job resources may interact in predicting job strain. Good examples of job resources that have the potential to buffer job demands are performance feedback and social support.
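In empirical work, this buffering hypothesis is typically tested as a statistical interaction term in a moderated regression. The equation below is a minimal illustrative sketch of that standard analytic form under assumed variable names, not a formula taken from the JD-R papers themselves:

\[
\text{Strain} = \beta_0 + \beta_1\,\text{Demands} + \beta_2\,\text{Resources} + \beta_3\,(\text{Demands} \times \text{Resources}) + \varepsilon
\]

On this reading, a positive \(\beta_1\) and a negative \(\beta_2\) capture the main effects of demands and resources, while a negative and statistically significant \(\beta_3\) indicates buffering: the association between demands and strain weakens as resources increase. Interaction effect sizes, such as the 0.5% of variance reported in the Evidence section below, correspond to the additional variance explained when the interaction term is added to the model.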
Job resources particularly influence motivation or work engagement when job demands are high. This assumption is based on the premises of the conservation of resources (COR) theory. According to this theory, people are motivated to obtain, retain and protect their resources, because they are valuable. Hobfoll argues that resource gain acquires its saliency in the context of resource loss. This implies that job resources gain their motivational potential particularly when employees are confronted with high job demands. For example, when employees are faced with high emotional demands, the social support of colleagues might become more visible and more instrumental.
Evidence
Evidence for the dual process: a number of studies have supported the dual pathways to employee well-being proposed by the JD-R model, and it has been shown that the model can predict important organizational outcomes. Taken together, research findings support the JD-R model's claim that job demands and job resources initiate two different psychological processes, which eventually affect important organizational outcomes. When both job demands and resources are high, high strain and high motivation are to be expected; when both are low, an absence of strain and motivation is to be expected. Consequently, the high demands-low resources condition should result in high strain and low motivation, while the low demands-high resources condition should result in low strain and high motivation.
Evidence for the buffer effect of job resources: some support has been obtained for the proposed interaction between job demands and job resources in their relationship with employee well-being. However, most published studies on the model either did not examine or did not report such interactions, and the practical relevance of this interaction – if present – is usually small. In a large-scale study, it was found that this interaction accounted for on average only 0.5% of the differences among workers in task enjoyment and work commitment.
Evidence for the salience of job resources in the context of high job demands: one previous study outside the framework of the JD-R model has supported the hypothesis that resources gain their salience in the context of high demands. Studies using the JD-R model have shown that job resources particularly affect work engagement when job demands are high.
Practical implications
The JD-R model assumes that whereas every occupation may have its own specific working characteristics, these characteristics can be classified in two general categories (i.e. job demands and job resources), thus constituting an overarching model that may be applied to various occupational settings, irrespective of the particular demands and resources involved. The central assumption of the JD-R model is that job strain develops – irrespective of the type of job or occupation – when (certain) job demands are high and when (certain) job resources are limited. In contrast, work engagement is most likely when job resources are high (also in the face of high job demands). This implies that the JD-R model can be used as a tool for human resource management.
Continuing research
The most recent article by the authors of the original JD-R paper proposes that the interactions of demands and resources are nuanced and not clearly understood. Here Bakker and Demerouti suggest that demands may sometimes have a positive influence on the employee, by providing a challenge to be overcome rather than an insurmountable obstacle. In the same article, the authors describe a cumulative effect of demands and resources in their suggestion of gain and loss spirals. They conclude that these issues, along with that of workplace aggression, may all be part of the JD-R framework.
See also
European Academy of Occupational Health Psychology
Occupational health psychology
Occupational stress
Society for Occupational Health Psychology
Stress management
References
Occupational health psychology
Economics models
Human resource management
Health
Health has a variety of definitions, which have been used for different purposes over time. In general, it refers to physical and emotional well-being, especially that associated with normal functioning of the human body, in the absence of disease, pain (including mental pain), or injury.
Health can be promoted by encouraging healthful activities, such as regular physical exercise and adequate sleep, and by reducing or avoiding unhealthful activities or situations, such as smoking or excessive stress. Some factors affecting health are due to individual choices, such as whether to engage in a high-risk behavior, while others are due to structural causes, such as whether the society is arranged in a way that makes it easier or harder for people to get necessary healthcare services. Still, other factors are beyond both individual and group choices, such as genetic disorders.
History
The meaning of health has evolved over time. In keeping with the biomedical perspective, early definitions of health focused on the theme of the body's ability to function; health was seen as a state of normal function that could be disrupted from time to time by disease. An example of such a definition of health is: "a state characterized by anatomic, physiologic, and psychological integrity; ability to perform personally valued family, work, and community roles; ability to deal with physical, biological, psychological, and social stress". Then, in 1948, in a radical departure from previous definitions, the World Health Organization (WHO) proposed a definition that aimed higher, linking health to well-being, in terms of "physical, mental, and social well-being, and not merely the absence of disease and infirmity". Although this definition was welcomed by some as being innovative, it was also criticized for being vague and excessively broad and was not construed as measurable. For a long time, it was set aside as an impractical ideal, with most discussions of health returning to the practicality of the biomedical model.
Just as there was a shift from viewing disease as a state to thinking of it as a process, the same shift happened in definitions of health. Again, the WHO played a leading role when it fostered the development of the health promotion movement in the 1980s. This brought in a new conception of health, not as a state, but in dynamic terms of resiliency, in other words, as "a resource for living". In 1984, the WHO revised the definition of health, defining it as "the extent to which an individual or group is able to realize aspirations and satisfy needs and to change or cope with the environment. Health is a resource for everyday life, not the objective of living; it is a positive concept, emphasizing social and personal resources, as well as physical capacities." Thus, health referred to the ability to maintain homeostasis and recover from adverse events. Mental, intellectual, emotional and social health referred to a person's ability to handle stress, to acquire skills, and to maintain relationships, all of which form resources for resiliency and independent living. This opens up many possibilities for health to be taught, strengthened and learned.
Since the late 1970s, the federal Healthy People Program has been a visible component of the United States' approach to improving population health. In each decade, a new version of Healthy People is issued, featuring updated goals and identifying topic areas and quantifiable objectives for health improvement during the succeeding ten years, with assessment at that point of progress or lack thereof. Progress has been limited for many objectives, leading to concerns about the effectiveness of Healthy People in shaping outcomes in the context of a decentralized and uncoordinated US health system. Healthy People 2020 gives more prominence to health promotion and preventive approaches and adds a substantive focus on the importance of addressing social determinants of health. A new expanded digital interface facilitates use and dissemination, replacing the bulky printed books produced in the past. The impact of these changes to Healthy People will be determined in the coming years.
Systematic activities to prevent or cure health problems and promote good health in humans are undertaken by health care providers. Applications with regard to animal health are covered by the veterinary sciences. The term "healthy" is also widely used in the context of many types of non-living organizations and their impacts for the benefit of humans, such as in the sense of healthy communities, healthy cities or healthy environments. In addition to health care interventions and a person's surroundings, a number of other factors are known to influence the health status of individuals. These are referred to as the "determinants of health", which include the individual's background, lifestyle, economic status, social conditions and spirituality. Studies have shown that high levels of stress can affect human health.
In the first decade of the 21st century, the conceptualization of health as an ability opened the door for self-assessments to become the main indicators to judge the performance of efforts aimed at improving human health. It also created the opportunity for every person to feel healthy, even in the presence of multiple chronic diseases or a terminal condition, and for the re-examination of determinants of health (away from the traditional approach that focuses on the reduction of the prevalence of diseases).
Determinants
In general, the context in which an individual lives is of great importance for both health status and quality of life. It is increasingly recognized that health is maintained and improved not only through the advancement and application of health science, but also through the efforts and intelligent lifestyle choices of the individual and society. According to the World Health Organization, the main determinants of health include the social and economic environment, the physical environment, and the person's individual characteristics and behaviors.
More specifically, key factors that have been found to influence whether people are healthy or unhealthy include the following:
Education and literacy
Employment/working conditions
Income and social status
Physical environments
Social environments
Social support networks
Biology and genetics
Culture
Gender
Health care services
Healthy child development
Personal health practices and coping skills
An increasing number of studies and reports from different organizations and contexts examine the linkages between health and different factors, including lifestyles, environments, health care organization, and health policy—such as the 1974 Lalonde report from Canada, the Alameda County Study in California, and the series of World Health Reports of the World Health Organization, which focuses on global health issues including access to health care and improving public health outcomes, especially in developing countries. One specific health policy introduced in many countries in recent years is the sugar tax. Beverage taxes came to prominence with increasing concerns about obesity, particularly among youth, and sugar-sweetened beverages have become a target of anti-obesity initiatives with increasing evidence of their link to obesity.
The concept of the "health field," as distinct from medical care, emerged from the Lalonde report from Canada. The report identified three interdependent fields as key determinants of an individual's health. These are:
Biomedical: all aspects of health, physical and mental, developed within the human body as influenced by genetic make-up;
Environmental: all matters related to health external to the human body and over which the individual has little or no control;
Lifestyle: the aggregation of personal decisions (i.e., over which the individual has control) that can be said to contribute to, or cause, illness or death.
The maintenance and promotion of health is achieved through different combinations of physical, mental, and social well-being—a combination sometimes referred to as the "health triangle." The WHO's 1986 Ottawa Charter for Health Promotion further stated that health is not just a state, but also "a resource for everyday life, not the objective of living. Health is a positive concept emphasizing social and personal resources, as well as physical capacities."
Focusing more on lifestyle issues and their relationships with functional health, data from the Alameda County Study suggested that people can improve their health via exercise, enough sleep, spending time in nature, maintaining a healthy body weight, limiting alcohol use, and avoiding smoking. Health and illness can co-exist, as even people with multiple chronic diseases or terminal illnesses can consider themselves healthy.
The environment is often cited as an important factor influencing the health status of individuals. This includes characteristics of the natural environment, the built environment and the social environment. Factors such as clean water and air, adequate housing, and safe communities and roads all have been found to contribute to good health, especially to the health of infants and children. Some studies have shown that a lack of neighborhood recreational spaces including natural environment leads to lower levels of personal satisfaction and higher levels of obesity, linked to lower overall health and well-being. It has been demonstrated that increased time spent in natural environments is associated with improved self-reported health, suggesting that the positive health benefits of natural space in urban neighborhoods should be taken into account in public policy and land use.
Genetics, or inherited traits from parents, also play a role in determining the health status of individuals and populations. This can encompass both the predisposition to certain diseases and health conditions, as well as the habits and behaviors individuals develop through the lifestyle of their families. For example, genetics may play a role in the manner in which people cope with stress, whether mental, emotional, or physical. Obesity, for instance, is a significant problem in the United States that contributes to poor mental health and causes stress in the lives of many people. One difficulty is the issue raised by the debate over the relative strengths of genetics and other factors; interactions between genetics and environment may be of particular importance.
Potential issues
A number of health issues are common around the globe. Disease is one of the most common. According to GlobalIssues.org, approximately 36 million people die each year from non-communicable (i.e., not contagious) diseases, including cardiovascular disease, cancer, diabetes and chronic lung disease.
Among communicable diseases, both viral and bacterial, AIDS/HIV, tuberculosis, and malaria are the most common, causing millions of deaths every year.
Another health issue that causes death or contributes to other health problems is malnutrition, and young children are among the groups it affects most. Approximately 7.5 million children under the age of 5 die from malnutrition, usually brought on by not having the money to find or make food.
Bodily injuries are also a common health issue worldwide. These injuries, including bone fractures and burns, can reduce a person's quality of life or can cause fatalities, including through infections resulting from the injury (or from the severity of the injury in general).
Lifestyle choices are contributing factors to poor health in many cases. These include smoking cigarettes and a poor diet, whether overeating or following an overly restrictive diet. Physical inactivity, lack of sleep, excessive alcohol consumption, and neglect of oral hygiene can also contribute to health issues. There are also genetic disorders that are inherited by the person and can vary in how much they affect the person (and when they surface).
Although the majority of these health issues are preventable, a major contributor to global ill health is the fact that approximately 1 billion people lack access to health care systems. Arguably, the most common and harmful health issue is that a great many people do not have access to quality remedies.
Mental health
The World Health Organization describes mental health as "a state of well-being in which the individual realizes his or her own abilities, can cope with the normal stresses of life, can work productively and fruitfully, and is able to make a contribution to his or her community". Mental health is not just the absence of mental illness.
Mental illness is described as "the spectrum of cognitive, emotional, and behavioral conditions that interfere with social and emotional well-being and the lives and productivity of people". Having a mental illness can seriously impair, temporarily or permanently, the mental functioning of a person. Other terms include: 'mental health problem', 'illness', 'disorder', 'dysfunction'.
Approximately twenty percent of all adults in the US are considered diagnosable with a mental disorder. Mental disorders are the leading cause of disability in the United States and Canada. Examples of these disorders include schizophrenia, ADHD, major depressive disorder, bipolar disorder, anxiety disorder, post-traumatic stress disorder and autism.
Many factors contribute to mental health problems, including:
Biological factors, such as genes or brain chemistry
Family history of mental health problems
Life experiences, such as trauma or abuse
Maintaining
Achieving and maintaining health is an ongoing process, shaped by both the evolution of health care knowledge and practices as well as personal strategies and organized interventions for staying healthy.
Diet
An important way to maintain one's personal health is to have a healthy diet. A healthy diet includes a variety of plant-based and animal-based foods that provide nutrients to the body. Such nutrients provide the body with energy and keep it running. Nutrients help build and strengthen bones, muscles, and tendons and also regulate body processes (e.g., blood pressure). Water is essential for growth, reproduction and good health. Macronutrients are consumed in relatively large quantities and include proteins, carbohydrates, and fats and fatty acids. Micronutrients – vitamins and minerals – are consumed in relatively smaller quantities, but are essential to body processes. The food guide pyramid is a pyramid-shaped guide of healthy foods divided into sections. Each section shows the recommended intake for each food group (e.g., protein, fat, carbohydrates, and sugars). Making healthy food choices can lower one's risk of heart disease and of developing some types of cancer, and can help one maintain their weight within a healthy range.
The Mediterranean diet is commonly associated with health-promoting effects. This is sometimes attributed to the inclusion of bioactive compounds such as phenolic compounds, isoprenoids and alkaloids.
Exercise
Physical exercise enhances or maintains physical fitness and overall health and wellness. It strengthens one's bones and muscles and improves the cardiovascular system. According to the National Institutes of Health, there are four types of exercise: endurance, strength, flexibility, and balance. The CDC states that physical exercise can reduce the risks of heart disease, cancer, type 2 diabetes, high blood pressure, obesity, depression, and anxiety. To counteract possible risks, it is often recommended to increase the amount and intensity of physical exercise gradually. Participating in any amount of physical activity, whether housework, yardwork, walking, or standing up when talking on the phone, is often thought to be better than none when it comes to health.
Sleep
Sleep is an essential component to maintaining health. In children, sleep is also vital for growth and development. Ongoing sleep deprivation has been linked to an increased risk for some chronic health problems. In addition, sleep deprivation has been shown to correlate with both increased susceptibility to illness and slower recovery times from illness. In one study, people with chronic insufficient sleep, defined as six hours of sleep a night or less, were found to be four times more likely to catch a cold compared to those who reported sleeping for seven hours or more a night. Due to the role of sleep in regulating metabolism, insufficient sleep may also play a role in weight gain or, conversely, in impeding weight loss. Additionally, in 2007, the International Agency for Research on Cancer, which is the cancer research agency for the World Health Organization, declared that "shiftwork that involves circadian disruption is probably carcinogenic to humans", speaking to the dangers of long-term nighttime work due to its intrusion on sleep. In 2015, the National Sleep Foundation released updated recommendations for sleep duration requirements based on age, and concluded that "Individuals who habitually sleep outside the normal range may be exhibiting signs or symptoms of serious health problems or, if done volitionally, may be compromising their health and well-being."
Role of science
Health science is the branch of science focused on health. There are two main approaches to health science: the study and research of the body and health-related issues to understand how humans (and animals) function, and the application of that knowledge to improve health and to prevent and cure diseases and other physical and mental impairments. The science builds on many sub-fields, including biology, biochemistry, physics, epidemiology, pharmacology, and medical sociology. Applied health sciences endeavor to better understand and improve human health through applications in areas such as health education, biomedical engineering, biotechnology and public health.
Organized interventions to improve health based on the principles and procedures developed through the health sciences are provided by practitioners trained in medicine, nursing, nutrition, pharmacy, social work, psychology, occupational therapy, physical therapy and other health care professions. Clinical practitioners focus mainly on the health of individuals, while public health practitioners consider the overall health of communities and populations. Workplace wellness programs are increasingly being adopted by companies for their value in improving the health and well-being of their employees, as are school health services to improve the health and well-being of children.
Role of medicine and medical science
Contemporary medicine is in general conducted within health care systems. Legal, credentialing and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. The characteristics of any given health care system have significant impact on the way medical care is provided.
From ancient times, Christian emphasis on practical charity gave rise to the development of systematic nursing and hospitals, and the Catholic Church today remains the largest non-government provider of medical services in the world. Advanced industrial countries (with the exception of the United States) and many developing countries provide medical services through a system of universal health care that aims to guarantee care for all through a single-payer health care system, or compulsory private or co-operative health insurance. This is intended to ensure that the entire population has access to medical care on the basis of need rather than ability to pay. Delivery may be via private medical practices or by state-owned hospitals and clinics, or by charities, most commonly by a combination of all three.
Most tribal societies provide no guarantee of healthcare for the population as a whole. In such societies, healthcare is available to those that can afford to pay for it or have self-insured it (either directly or as part of an employment contract) or who may be covered by care financed by the government or tribe directly.
Transparency of information is another factor defining a delivery system. Access to information on conditions, treatments, quality, and pricing greatly affects the choice by patients/consumers and, therefore, the incentives of medical professionals. While the US healthcare system has come under fire for lack of openness, new legislation may encourage greater openness. There is a perceived tension between the need for transparency on the one hand and such issues as patient confidentiality and the possible exploitation of information for commercial gain on the other.
Delivery
Provision of medical care is classified into primary, secondary, and tertiary care categories.
Primary care medical services are provided by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. These occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. About 90% of medical visits can be treated by the primary care provider. These include treatment of acute and chronic illnesses, preventive care and health education for all ages and both sexes.
Secondary care medical services are provided by medical specialists in their offices or clinics or at local community hospitals for a patient referred by a primary care provider who first diagnosed or treated the patient. Referrals are made for those patients who require the expertise or procedures performed by specialists. These include both ambulatory care and inpatient services, emergency departments, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. Some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting.
Tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. These include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high-risk pregnancy, radiation oncology, etc.
Modern medical care also depends on information – still delivered in many health care settings on paper records, but increasingly nowadays by electronic means.
In low-income countries, modern healthcare is often too expensive for the average person. International healthcare policy researchers have advocated that "user fees" be removed in these areas to ensure access, although even after removal, significant costs and barriers remain.
Separation of prescribing and dispensing is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries, it is traditional for physicians to also provide drugs.
Role of public health
Public health has been described as "the science and art of preventing disease, prolonging life and promoting health through the organized efforts and informed choices of society, organizations, public and private, communities and individuals." It is concerned with threats to the overall health of a community based on population health analysis. The population in question can be as small as a handful of people or as large as all the inhabitants of several continents (for instance, in the case of a pandemic). Public health has many sub-fields, but typically includes the interdisciplinary categories of epidemiology, biostatistics, and health services. Environmental health, community health, behavioral health, and occupational health are also important areas of public health.
The focus of public health interventions is to prevent and manage diseases, injuries and other health conditions through surveillance of cases and the promotion of healthy behavior, communities, and (in aspects relevant to human health) environments. Its aim is to prevent health problems from happening or re-occurring by implementing educational programs, developing policies, administering services and conducting research. In many cases, treating a disease or controlling a pathogen can be vital to preventing it in others, such as during an outbreak. Vaccination programs and distribution of condoms to prevent the spread of communicable diseases are examples of common preventive public health measures, as are educational campaigns to promote vaccination and the use of condoms (including overcoming resistance to such).
Public health also takes various actions to limit the health disparities between different areas of the country and, in some cases, the continent or world. One issue is the access of individuals and communities to health care in terms of financial, geographical or socio-cultural constraints. Applications of the public health system include the areas of maternal and child health, health services administration, emergency response, and prevention and control of infectious and chronic diseases.
The great positive impact of public health programs is widely acknowledged. Due in part to the policies and actions developed through public health, the 20th century registered a decrease in the mortality rates for infants and children and a continual increase in life expectancy in most parts of the world. For example, it is estimated that life expectancy has increased for Americans by thirty years since 1900, and worldwide by six years since 1990.
Self-care strategies
Personal health depends partially on the active, passive, and assisted cues people observe and adopt about their own health. These include personal actions for preventing or minimizing the effects of a disease, usually a chronic condition, through integrative care. They also include personal hygiene practices to prevent infection and illness, such as bathing and washing hands with soap; brushing and flossing teeth; storing, preparing and handling food safely; and many others. The information gleaned from personal observations of daily living – such as about sleep patterns, exercise behavior, nutritional intake and environmental features – may be used to inform personal decisions and actions (e.g., "I feel tired in the morning so I am going to try sleeping on a different pillow"), as well as clinical decisions and treatment plans (e.g., a patient who notices his or her shoes are tighter than usual may be having exacerbation of left-sided heart failure, and may require diuretic medication to reduce fluid overload).
Personal health also depends partially on the social structure of a person's life. The maintenance of strong social relationships, volunteering, and other social activities have been linked to positive mental health and increased longevity. One American study of seniors over age 70 found that frequent volunteering was associated with reduced risk of dying compared with older persons who did not volunteer, regardless of physical health status. Another study, from Singapore, reported that volunteering retirees had significantly better cognitive performance scores, fewer depressive symptoms, and better mental well-being and life satisfaction than non-volunteering retirees.
Prolonged psychological stress may negatively impact health, and has been cited as a factor in cognitive impairment with aging, depressive illness, and expression of disease. Stress management is the application of methods to either reduce stress or increase tolerance to stress. Relaxation techniques are physical methods used to relieve stress. Psychological methods include cognitive therapy, meditation, and positive thinking, which work by reducing response to stress. Improving relevant skills, such as problem solving and time management skills, reduces uncertainty and builds confidence, which also reduces the reaction to stress-causing situations where those skills are applicable.
Occupational
In addition to safety risks, many jobs also present risks of disease, illness and other long-term health problems. Among the most common occupational diseases are various forms of pneumoconiosis, including silicosis and coal worker's pneumoconiosis (black lung disease). Asthma is another respiratory illness that many workers are vulnerable to. Workers may also be vulnerable to skin diseases, including eczema, dermatitis, urticaria, sunburn, and skin cancer. Other occupational diseases of concern include carpal tunnel syndrome and lead poisoning.
As the number of service sector jobs has risen in developed countries, more and more jobs have become sedentary, presenting a different array of health problems than those associated with manufacturing and the primary sector. Contemporary problems, such as the growing rate of obesity and issues relating to stress and overwork in many countries, have further complicated the interaction between work and health.
Many governments view occupational health as a social challenge and have formed public organizations to ensure the health and safety of workers. Examples of these include the British Health and Safety Executive and in the United States, the National Institute for Occupational Safety and Health, which conducts research on occupational health and safety, and the Occupational Safety and Health Administration, which handles regulation and policy relating to worker safety and health.
See also
References
External links
Personal life
Articles containing video clips
Main topic articles
Structuralism (psychology)
Structuralism in psychology (also structural psychology) is a theory of consciousness developed by Edward Bradford Titchener. This theory was challenged in the 20th century.
Structuralists seek to analyze the adult mind (the total sum of experience from birth to the present) in terms of the simplest definable components of experience and then to find how these components fit together to form more complex experiences as well as how they correlate to physical events. To do this, structuralists employ introspection: self-reports of sensations, views, feelings, and emotions.
Titchener
Edward B. Titchener is credited with the theory of structuralism. It is considered to be the first "school" of psychology. Because he was a student of Wilhelm Wundt at the University of Leipzig, Titchener's ideas on how the mind worked were heavily influenced by Wundt's theory of voluntarism and his ideas of association and apperception (the passive and active combinations of elements of consciousness, respectively). Titchener attempted to classify the structures of the mind in a similar way to how chemists classify the elements of nature.
Titchener said that only observable events constituted the science of psychology and that any speculation concerning unobservable events had no place in it (this view was similar to the one expressed by Ernst Mach).
Mind and consciousness
Titchener believed the mind was the accumulated experience of a lifetime. He believed that he could understand reasoning and the structure of the mind if he could define and categorize the basic components of mind and the rules by which the components interacted.
Introspection
The main tool Titchener used to try to determine the different components of consciousness was introspection. Titchener writes in his Systematic Psychology:
The state of consciousness which is to be the matter of psychology ... can become an object of immediate knowledge only by way of introspection or self-awareness.
and in his book An Outline of Psychology:
...within the sphere of psychology, introspection is the final and only court of appeal, that psychological evidence cannot be other than introspective evidence.
Titchener had very strict guidelines for the reporting of an introspective analysis. The subject would be presented with an object, such as a pencil. The subject would then report the characteristics of that pencil (e.g., color and length). The subject would be instructed not to report the name of the object (pencil) because that did not describe the raw data of what the subject was experiencing. Titchener referred to this as stimulus error.
In his translation of Wundt's work, Titchener illustrates Wundt as a supporter of introspection as a method through which to observe consciousness. However, introspection fits Wundt's theories only if the term is taken to refer to psychophysical methods.
Introspection literally means 'looking within', to try to describe a person's memory, perceptions, cognitive processes, and/or motivations.
Elements of the mind
Structuralists believe that our consciousness is composed of individual parts which contribute to overall structure and function of the mind.
Titchener's theory began with the question of what each element of the mind is. He concluded from his research that there were three types of mental elements constituting conscious experience: sensations (elements of perceptions), images (elements of ideas), and affections (elements of emotions). These elements could be broken down into their respective properties, which he determined were quality, intensity, duration, clearness, and extensity. Both sensations and images contained all of these qualities; however, affections lacked both clearness and extensity, and images and affections could be broken down further into clusters of sensations. Therefore, by following this train of thinking, all thoughts were images, which, being constructed from elementary sensations, meant that all complex reasoning and thought could eventually be broken down into the sensations he could access through introspection.
Interaction of elements
The second issue in Titchener's theory of structuralism was the question of how the mental elements combined and interacted with each other to form conscious experience. His conclusions were largely based on ideas of associationism. In particular, Titchener focuses on the law of contiguity, which is the idea that the thought of something will tend to cause thoughts of things that are usually experienced along with it.
Titchener rejected Wundt's notions of apperception and creative synthesis (voluntary action), which were the basis of Wundt's voluntarism. Titchener argued that attention was simply a manifestation of the "clearness" property within sensation.
Physical and mental relationship
Once Titchener identified the elements of mind and their interaction, his theory then asked the question of why the elements interact in the way they do. In particular, Titchener was interested in the relationship between the conscious experience and the physical processes. Titchener believed that the physical processes provide a continuous substratum that gives psychological processes a continuity they otherwise would not have. Therefore, the nervous system does not cause conscious experience, but can be used to explain some characteristics of mental events.
Wundt and structuralism
Wilhelm Wundt instructed Titchener, the founder of structuralism, at the University of Leipzig. Wundt described psychology as the "science of immediate experience", meaning that complex perceptions can be built up from basic sensory information. Wundt is often associated in past literature with structuralism and the use of similar introspective methods. However, Wundt made a clear distinction between pure introspection, which is the relatively unstructured self-observation used by earlier philosophers, and experimental introspection. Wundt believed this type of introspection to be acceptable since it used laboratory instruments to vary conditions and make the results of internal perceptions more precise.
The reason for this confusion lies in the translation of Wundt's writings. When Titchener brought his theory to America, he also brought with him Wundt's work. Titchener translated these works for the American audience, and in so doing misinterpreted Wundt's meaning. He then used this translation to show that Wundt supported his own theories. In fact, Wundt's main theory was that of psychological voluntarism (psychologischer Voluntarismus), the doctrine that the power of the will organizes the mind's content into higher-level thought processes.
Criticisms
Structuralism has faced a large amount of criticism, particularly from functionalism, the school of psychology which later evolved into the psychology of pragmatism (reconceiving introspection as an acceptable practice of observation). The main critique of structuralism was its focus on introspection as the method by which to gain an understanding of conscious experience. Critics argued that self-analysis was not feasible, since introspecting subjects cannot appreciate the processes or mechanisms of their own mental processes. Introspection, therefore, yielded different results depending on who was using it and what they were seeking. Some critics also pointed out that introspective techniques actually resulted in retrospection – the memory of a sensation rather than the sensation itself.
Behaviorists, specifically methodological behaviorists, fully rejected even the idea of the conscious experience as a worthy topic in psychology, since they believed that the subject matter of scientific psychology should be strictly operationalized in an objective and measurable way. Because the notion of a mind could not be objectively measured, it was not worth further inquiry. However, radical behaviorism includes thinking, feeling, and private events in its theory and analysis of psychology. Structuralism also believes that the mind could be dissected into its individual parts, which then formed conscious experience. This also received criticism from the Gestalt school of psychology, which argues that the mind cannot be broken down into individual elements.
Besides theoretical attacks, structuralism was criticized for excluding and ignoring important developments happening outside of structuralism. For instance, structuralism did not concern itself with the study of animal behavior or personality.
Titchener himself was criticized for not using his psychology to help answer practical problems. Instead, Titchener was interested in seeking pure knowledge that to him was more important than commonplace issues.
Alternatives
One alternative theory to structuralism, to which Titchener took offense, was functionalism (functional psychology). Functionalism was developed by William James in contrast to structuralism. It stressed the importance of empirical, rational thought over an experimental, trial-and-error philosophy. James included introspection in his theory (i.e., the psychologist's study of his own states of mind), but he also included things like analysis (i.e., the logical criticism of precursor and contemporary views of the mind), experiment (e.g., in hypnosis or neurology), and comparison (i.e., the use of statistical means to distinguish norms from anomalies), which gave it somewhat of an edge. Functionalism also differed in that it focused on how useful mental processes were in relation to the environment, rather than on the processes themselves and their details, as structuralism did.
Contemporary structuralism
Researchers are still working to offer objective experimental approaches to measuring conscious experience, in particular within the field of cognitive psychology, which is in some ways carrying the torch of Titchener's ideas. It works on the same types of issues, such as sensations and perceptions. Today, any introspective methodologies are conducted under highly controlled conditions and are understood to be subjective and retrospective. Proponents argue that psychology can still gain useful information from using introspection in this case.
See also
Association of ideas
Associationism
Mentalism (psychology)
History of psychology
Nature
Notes
References
Psychological theories
Mental time travel
In psychology, mental time travel is the capacity to mentally reconstruct personal events from the past (episodic memory) as well as to imagine possible scenarios in the future (episodic foresight/episodic future thinking). The term was coined by Thomas Suddendorf and Michael Corballis, building on Endel Tulving's work on episodic memory. (Tulving proposed the alternative term chronesthesia.)
Mental time travel has been studied by psychologists, cognitive neuroscientists, philosophers and in a variety of other academic disciplines. Major areas of interest include the nature of the relationship between memory and foresight, the evolution of the ability (including whether it is uniquely human or shared with other animals), its development in young children, its underlying brain mechanisms, as well as its potential links to consciousness, the self, and free will.
Overview, terminology, and relationship to other cognitive capacities
Declarative memory refers to the capacity to store and retrieve information that can be explicitly expressed, and consists of both facts or knowledge about the world (semantic memory) and autobiographical details about one's own experiences (episodic memory). Tulving (1985) originally suggested that episodic memory involved a kind of 'autonoetic' ('self-knowing') consciousness that required the first-person subjective experience of previously lived events, whereas semantic memory is associated with 'noetic' (knowing) consciousness but does not require such mental simulation.
It has become increasingly clear that both semantic and episodic memory are integral for thinking about the future. Mental time travel, however, specifically refers to the 'autonoetic' systems, and thus selectively comprises episodic memory and episodic foresight.
The close link between episodic memory and episodic foresight has been established with evidence of their shared developmental trajectory, similar impairment profiles in neuropsychiatric disease and in brain damage, phenomenological analyses, and with neuroimaging. Mental time travel may be one of several processes enabled by a general scenario building or construction system in the brain. This general capacity to generate and reflect on mental scenarios has been compared to a theatre in the mind that depends on the working together of a host of components.
Investigations have been conducted into diverse aspects of mental time travel, including individual differences relating to personality, its instantiation in artificial intelligence systems, and its relationship with theory of mind and mind-wandering. The study of mental time travel in general terms is also related to – but distinct from – the study of the way individuals differ in terms of their future orientation, time perspective, and temporal self-continuity.
Brain regions involved
Various neuroimaging studies have elucidated the brain systems underlying the capacity for mental time travel in adults. Early fMRI studies on the topic revealed a number of close correspondences in brain activity between remembering past experiences and imagining future experiences.
fMRI mapping of brain regions
Addis et al. conducted an fMRI study to examine neural regions mediating construction and elaboration of past and future events.
The elaboration phase, unlike the construction phase, showed overlap with the cortical areas comprising the autobiographical memory retrieval network. In this study, it was also found that the left hippocampus and the right middle occipital gyrus were significantly activated during both past and future event construction, while the right hippocampus was significantly deactivated during past event construction and activated only during the construction of future events.
Episodic future thinking involves multiple component processes: retrieval and integration of relevant information from memory, processing of subjective time, and self-referential processing. D'Argembeau et al.'s study found that the ventral medial prefrontal cortex and posterior cingulate cortex are more activated when imagining future events that are relevant to one's personal goals than when imagining unrelated ones. This shows that these brain regions play a role in personal goal processing, which is a critical feature of episodic future thinking.
Brain regions involved in the 'what' and 'where' of an event
Cabeza et al. conducted a positron emission tomography (PET) study on a group of human test subjects to identify the brain regions involved in temporal memory, which is based on a linear progression of events. Since "recollecting a past episode involves remembering not only what happened but also when it happened", PET scans were used to find the areas of the brain that were activated when trying to remember a certain word in a sequence. The results show that temporal-order memory of past events involves the frontal and posterior brain regions, while item retrieval shows neural activity in the medial temporal and basal forebrain regions.
Evolution and human uniqueness
The ability to travel mentally in time – especially into the future – has been highlighted as a potential prime mover in human evolution, enabling humans to prepare, plan and shape the future to their advantage. However, the question of whether or to what extent animals other than human beings can engage in mental time travel has remained controversial.
One proposal, the Bischof-Köhler hypothesis, posits that non-human animals cannot act upon drive states they do not currently possess, for example seeking out water while currently fully quenched. Other proposals suggest that different species may have some capacities, but are limited because of shortcomings in a range of component capacities of mental scenario building and imagination. A number of studies have claimed to have demonstrated mental time travel in animals, including, most notably, various great apes, crows, ravens, and western scrub-jays, but these have been subjected to a number of criticisms, and simpler alternative explanations have been proposed for the results. This debate is ongoing.
If mental time travel is unique to humans, then it must have emerged over the last 6 million years since the line leading to modern humans split from the line leading to modern chimpanzees. Perhaps the first hard evidence for the evolution of mental time travel in humans comes in the form of Acheulean bifacial handaxes associated with Homo erectus. Acheulean tools are complex and appear to have required advanced planning to create. There is also evidence that they were often crafted in one location and then taken elsewhere for repeated use.
A number of important adaptive functions have been identified that rely to some degree on the capacity to remember the past and imagine the future. These functions include predicting future emotional reactions (affective forecasting), deliberate practice, intertemporal choice, navigation, prospective memory, counterfactual thinking, and planning.
Episodic-like memory and planning for the future in great apes
Osvath et al. conducted a study on apes to show that they have the capacity for foresight. The study consisted of testing for self-control, associative learning, and envisioning in chimpanzees and orangutans through a series of experiments. Critics questioned whether these animals truly exhibited mental time travel, or whether associative learning caused them to behave as they did. The Bischof-Köhler hypothesis says that animals cannot anticipate future needs, and this study by Osvath tried to disprove the hypothesis.
The scientists showed that when the apes were presented with a food item in conjunction with a utensil that could be used to eat that particular food, these animals chose the utensil instead of the food. They anticipated a future need for the utensil that overcame the current desire for a food reward. This is an example of mental time travel in animals. Their choice of the utensil over the immediate food reward was not a result of associative learning, since the scientists ran another experiment to account for that possibility. Other examples, such as food caching by birds, may be examples of mental time travel in non-humans. Even the survival responses of certain animals, such as elephants reacting to imminent danger, could involve mental time travel mechanisms.
Another study of foresight in great apes was conducted by Martin-Ordas et al. These scientists were able to show that "apes remember in an integrated fashion what, where and when" a particular event had happened. Two experiments were conducted: the first investigated the content of the apes' memories, i.e., whether the animals could remember where and when two types of food they had previously been shown were hidden; the second explored the structure of those memories. It was found that the apes' memories were formed in an integrated what–where–when structure. These findings suggest that the animals' behavior reflected neither instinct nor learned predispositions, but rather an ability to mentally time travel. However, comparative psychologists are divided about this conclusion.
Episodic-like memory in western scrub-jays
In their study showing that birds exhibit episodic-like memory, Clayton et al. used three behavioral criteria (content, structure, and flexibility) to decide whether the food-caching habits of these birds were evidence of their ability to recall the past and plan for the future. Content involved remembering what happened based on a specific past experience. Structure required the formation of a 'what-where-when' representation of the event. Finally, flexibility was used to see how well the information could be organized and re-organized, based on facts and experiences. Mental time travel involves the use of both episodic future thinking and semantic knowledge. This study also contradicts the Bischof-Köhler hypothesis by showing that some animals may mentally time travel into the future or back to the past. However, this interpretation has remained controversial.
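The integrated what–where–when structure at the heart of these criteria can be made concrete with a short sketch. The following Python toy model is not Clayton et al.'s actual protocol; the food labels, freshness limits, field names, and retention intervals are all invented for illustration. It shows how a single record binding what, where, and when supports a flexible recovery decision: a perishable item is worth retrieving only if too little time has elapsed for it to have decayed.

    from dataclasses import dataclass

    @dataclass
    class CacheEvent:
        """A toy 'what-where-when' record of a single caching episode."""
        what: str    # food type cached (hypothetical labels)
        where: str   # cache site identifier
        when: float  # caching time, in hours since the start of the experiment

    # Hypothetical freshness limits: perishable items decay, durable ones do not.
    FRESHNESS_LIMIT_HOURS = {"worm": 28.0, "peanut": float("inf")}

    def worth_recovering(event: CacheEvent, now: float) -> bool:
        """Combine what and when: recover an item only if the elapsed time
        since caching is within the freshness limit for its food type."""
        return (now - event.when) <= FRESHNESS_LIMIT_HOURS[event.what]

    caches = [
        CacheEvent("worm", "tray_left", when=0.0),
        CacheEvent("peanut", "tray_right", when=0.0),
    ]

    for now in (4.0, 124.0):  # one short and one long retention interval
        targets = [c.where for c in caches if worth_recovering(c, now)]
        print(f"after {now:.0f} h, recover from: {targets}")

Under these toy assumptions both caches are recovered after the short delay, but only the durable cache after the long one, which is the kind of behavioral signature the flexibility criterion looks for.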
Development in children
Studies into the development of mental time travel in infancy suggest that the involved component processes come online piece by piece. Most of the required psychological subcomponents appear to be available by approximately age four. This includes the fundamental capacity to prepare for two mutually exclusive possible future events, which appears to develop between the ages of 3 and 5.
Two- and three-year-old children can report some information about upcoming events, and by ages four and five children can talk more clearly about future situations. However, there is concern that children may understand more than they can articulate, and that they may say things without fully understanding them. Researchers have therefore tried to examine future-oriented action. A carefully controlled study found that four-year-olds could remember a specific problem they had seen in another room well enough to prepare for its future solution. These results suggest that by the end of the preschool years children have developed some fundamental capacity for foresight, one that continues to develop throughout childhood and adolescence.
Measurement
Studies of mental time travel require the measurement of both episodic memory and episodic foresight.
Episodic memory
Episodic memory is typically measured in human adults by asking people to report or describe past events that they have experienced. Many studies provide participants with information at one point in the study and then assess their memory for this information at a later point. The advantage of such studies is that they allow the accuracy of recall to be assessed. However, Cheke and Clayton found that different measures of memory do not correspond well with one another and may therefore capture different facets of memory. A more general limitation of studies in this field is that they do not capture people's memories of actual real-life events, a limitation shared with studies of other animals.
Many studies focus on asking people to recall episodes from their own lives. Some attempt to verify the accuracy of recall by comparing participants' memories with those of family members or friends who experienced the same event, or in some cases by comparing people's memories of an event with public information about it. However, it is not always easy to verify the accuracy of recall, so many measures of episodic memory do not attempt this, focusing instead on aspects of people's verbal descriptions of their memories.
Three commonly used measures that do not verify the accuracy of people's memories are as follows (a toy scoring sketch follows the list):
Dritschel et al. adapted the Controlled Oral Word Association Test to assess the fluency with which people recall personal autobiographical episodes from specific time periods (e.g., last week, last year, last 5 years) within a specific time limit (e.g., 1 minute).
Baddeley and Wilson used a 4-point scale to rate participants' memories as (3) specific, (2) intermediate, (1) general, or (0) nil, based on the level of detail provided in their descriptions.
Levine and colleagues designed the Autobiographical Interview to distinguish between episodic and semantic components of episodic memories based on participants' verbal descriptions.
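As a concrete illustration of how such verbal-report measures can be scored, the following minimal Python sketch tallies hypothetical rater judgments. The category labels follow the 4-point scale described above; the example data, and the choice to count "specific" memories per period as a fluency score, are assumptions made purely for illustration.

    # Baddeley and Wilson's 4-point specificity scale, as described above.
    SPECIFICITY = {"specific": 3, "intermediate": 2, "general": 1, "nil": 0}

    # Hypothetical rater judgments: one label per memory a participant
    # reported for each cued time period within the time limit.
    ratings = {
        "last week": ["specific", "specific", "general"],
        "last year": ["intermediate", "nil"],
        "last 5 years": ["specific"],
    }

    for period, labels in ratings.items():
        fluency = sum(1 for lab in labels if lab == "specific")
        mean_spec = sum(SPECIFICITY[lab] for lab in labels) / len(labels)
        print(f"{period}: fluency={fluency}, mean specificity={mean_spec:.2f}")

The same tallying applies unchanged when the cues ask about future rather than past events, which is how several of the episodic foresight measures described in the next section were derived.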
Episodic foresight
Miloyan and McFarlane performed a systematic review of episodic foresight measurement instruments used in human adults and found that most of these measures were adapted from measures of episodic memory.
The measure by Dritschel et al. based on the Controlled Oral Word Association Test was adapted by MacLeod and colleagues to assess episodic foresight.
Williams et al. adapted the 4-point scale from Baddeley and Wilson to assess episodic foresight.
The Autobiographical Interview by Levine and colleagues, which was designed to distinguish the episodic and semantic components of episodic memories, was adapted by Addis and colleagues to measure episodic foresight.
The authors of the systematic review noted that a limitation of all such episodic foresight measures is that they do not compare people's simulations of future events with objective preparatory behaviors or with the actual occurrence of future events. Thus, none of the available measures verify the accuracy or relevance of people's imaginings. This contrasts with studies of episodic foresight in children and animals, which require participants to demonstrate foresight behaviorally to compensate for their limited verbal ability.
See also
Episodic memory
Foresight (psychology)
Semantic memory
Prospective memory
Prospection
Time perception
Intertemporal choice
Practice (learning method)
Affective forecasting
Predictive coding
Free energy principle
References
Cognitive neuroscience
Cognitive psychology
Gestalt therapy

Gestalt therapy is a form of psychotherapy that emphasizes personal responsibility and focuses on the individual's experience in the present moment, the therapist–client relationship, the environmental and social contexts of a person's life, and the self-regulating adjustments people make as a result of their overall situation. It was developed by Fritz Perls, Laura Perls and Paul Goodman in the 1940s and 1950s, and was first described in the 1951 book Gestalt Therapy.
Overview
Edwin Nevis, co-founder of the Gestalt Institute of Cleveland, founder of the Gestalt International Study Center, and faculty member at the MIT Sloan School of Management, described Gestalt therapy as "a conceptual and methodological base from which helping professionals can craft their practice". In the same volume, Joel Latner stated that Gestalt therapy is built upon two central ideas:
that the most helpful focus of psychotherapy is the experiential present moment; and
that everyone is caught in webs of relationships; thus, it is only possible to know ourselves against the background of our relationships to others.
The historical development of Gestalt therapy (described below) discloses the influences that generated these two ideas. Expanded, they support the four chief theoretical constructs (explained in the theory and practice section) that comprise Gestalt theory, and that guide the practice and application of Gestalt therapy.
Gestalt therapy was forged from various influences upon the lives of its founders during the times in which they lived, including the new physics, Eastern religion, existential phenomenology, Gestalt psychology, psychoanalysis, experimental theatre, systems theory, and field theory. Gestalt therapy rose from its beginnings in the middle of the 20th century to rapid and widespread popularity during the 1960s and early 1970s. During the 1970s and 1980s Gestalt therapy training centers spread globally, but they were, for the most part, not aligned with formal academic settings. As the cognitive revolution eclipsed Gestalt theory in psychology, many came to believe Gestalt was an anachronism. Because Gestalt therapists disdained the positivism underlying what they perceived to be the concern of research, they largely ignored the need to use research to further develop Gestalt theory and Gestalt therapy practice (with a few exceptions, such as Les Greenberg; see the interview "Validating Gestalt"). However, the new century has seen a sea change in attitudes toward research and Gestalt practice. In March 2020, Vikram Kolmannskog became the world's first Professor of Gestalt Therapy, at the Norwegian Gestalt Institute, where he has been teaching and researching since 2015.
Gestalt therapy is not identical to Gestalt psychology, but Gestalt psychology influenced the development of Gestalt therapy to a large extent.
Gestalt therapy focuses on process (what is actually happening) over content (what is being talked about). The emphasis is on what is being done, thought, and felt at the present moment (the phenomenality of both client and therapist), rather than on what was, might be, could be, or should have been. Gestalt therapy is a method of awareness practice (also called "mindfulness" in other clinical domains), by which perceiving, feeling, and acting are understood as distinct from interpreting, explaining, and conceptualizing (the hermeneutics of experience). This distinction between direct experience and indirect or secondary interpretation is developed in the process of therapy. The client learns to become aware of what they are doing, and that awareness opens the ability to risk a shift or change.
The objective of Gestalt therapy is to enable the client to become more fully and creatively alive, to become free from the blocks and unfinished business that may diminish satisfaction, fulfillment, and growth, and to experiment with new ways of being. For this reason Gestalt therapy falls within the category of humanistic psychotherapies. As Gestalt therapy includes perception and the meaning-making processes by which experience forms, it can also be considered a cognitive approach. Because Gestalt therapy relies on the contact between therapist and client, and because a relationship can be considered to be contact over time, it can be considered a relational or interpersonal approach. As it appreciates the larger picture, the complex situation involving multiple influences, it can also be considered a multi-systemic approach. In addition, because the processes of Gestalt therapy are experimental, involving action, it can be considered both a paradoxical and an experiential/experimental approach.
When Gestalt therapy is compared to other clinical domains, a person can find many matches, or points of similarity. "Probably the clearest case of consilience is between gestalt therapy's field perspective and the various organismic and field theories that proliferated in neuroscience, medicine, and physics in the early and mid-20th century. Within social science there is a consilience between gestalt field theory and systems or ecological psychotherapy; between the concept of dialogical relationship and object relations, attachment theory, client-centered therapy and the transference-oriented approaches; between the existential, phenomenological, and hermeneutical aspects of gestalt therapy and the constructivist aspects of cognitive therapy; and between gestalt therapy's commitment to awareness and the natural processes of healing and mindfulness, acceptance and Buddhist techniques adopted by cognitive behavioral therapy."
Contemporary theory and practice
The theoretical foundations of Gestalt therapy essentially rest atop four "load-bearing walls": phenomenological method, dialogical relationship, field-theoretical strategies, and experimental freedom. Although all these tenets were present in the early formulation and practice of Gestalt therapy, as described in Ego, Hunger and Aggression (Perls, 1947) and in Gestalt Therapy: Excitement and Growth in the Human Personality (Perls, Hefferline, & Goodman, 1951), the early development of Gestalt therapy theory emphasized personal experience and the experiential episodes understood as "safe emergencies" or experiments. Indeed, half of the Perls, Hefferline, and Goodman book consists of such experiments. Later, through the influence of such people as Erving and Miriam Polster, a second theoretical emphasis emerged: contact between self and other, and ultimately the dialogical relationship between therapist and client. Later still, field theory emerged as an emphasis. At various times over the decades since Gestalt therapy first emerged, one or more of these tenets, and the associated constructs that go with them, have captured the imagination of those who have continued developing the contemporary theory of Gestalt therapy. Since 1990 the literature focused upon Gestalt therapy has flourished, including the development of several professional Gestalt journals. Along the way, Gestalt therapy theory has also been applied in organizational development and coaching work. More recently, Gestalt methods have been combined with meditation practices into a unified program of human development called Gestalt Practice, which is used by some practitioners.
Richard G. Erskine, the originator of Integrative Psychotherapy (Developmentally Based, Relationally Focused), has written about the treatment of shame and self-righteousness in "A Gestalt therapy approach to shame and self-righteousness: theory and methods" from his book Relational Patterns, Therapeutic Presence: Concepts and Practice of Integrative Psychotherapy (2015).
Phenomenological method
The goal of a phenomenological exploration is awareness. This exploration works systematically to reduce the effects of bias through repeated observations and inquiry.
The phenomenological method comprises three steps:
Applying the rule of epoché - one sets aside one's initial biases and prejudices in order to suspend expectations and assumptions.
Applying the rule of description - one occupies oneself with describing instead of explaining.
Applying the rule of horizontalization - one treats each item of description as having equal value or significance.
The rule of epoché sets aside any initial theories with regard to what is presented in the meeting between therapist and client. The rule of description implies immediate and specific observations, abstaining from interpretations or explanations, especially those formed from the application of a clinical theory superimposed over the circumstances of experience. The rule of horizontalization avoids any hierarchical assignment of importance such that the data of experience become prioritized and categorized as they are received. A Gestalt therapist using the phenomenological method might say something like, "I notice a slight tension at the corners of your mouth when I say that, and I see you shifting on the couch and folding your arms across your chest ... and now I see you rolling your eyes back". Of course, the therapist may make a clinically relevant evaluation, but when applying the phenomenological method, temporarily suspends the need to express it.
Dialogical relationship
To create the conditions under which a dialogic moment might occur, the therapist attends to their own presence, creates the space for the client to enter in and become present as well (called inclusion), and commits themself to the dialogic process, surrendering to what takes place rather than attempting to control it. With presence, the therapist judiciously "shows up" as a whole and authentic person, instead of assuming a role, false self or persona. To be judicious, the therapist takes into account the specific strengths, weaknesses and values of the client. The only good client is a live client, so driving a client away through injudicious exposure of aspects of the therapist's experience that this particular client cannot tolerate is obviously counter-productive. For example, for an atheistic therapist to tell a devout client that religion is myth would not be useful, especially in the early stages of the relationship. To practice inclusion is to accept however the client chooses to be present, whether in a defensive and obnoxious stance or a superficially cooperative one. To practice inclusion is to support the presence of the client, including their resistance, not as a gimmick but in full realization that this is how the client is actually present and is the best this client can do at this time. Finally, the Gestalt therapist is committed to the process, trusts in that process, and does not attempt to save themself from it.
Field-theoretical strategies
Field theory is a concept borrowed from physics in which people and events are no longer considered discrete units but as parts of something larger, which are influenced by everything including the past, and observation itself. "The field" can be considered in two ways. There are ontological dimensions and there are phenomenological dimensions to one's field. The ontological dimensions are all those physical and environmental contexts in which we live and move. They might be the office in which one works, the house in which one lives, the city and country of which one is a citizen, and so forth. The ontological field is the objective reality that supports our physical existence. The phenomenological dimensions are all mental and physical dynamics that contribute to a person's sense of self, one's subjective experience—not merely elements of the environmental context. These might be the memory of an uncle's inappropriate affection, one's color blindness, one's sense of the social matrix in operation at the office in which one works, and so forth. The way that Gestalt therapists choose to work with field dynamics makes what they do strategic. Gestalt therapy focuses upon character structure; according to Gestalt theory, the character structure is dynamic rather than fixed in nature. To become aware of one's character structure, the focus is upon the phenomenological dimensions in the context of the ontological dimensions.
Experimental freedom
Gestalt therapy is distinct because it moves toward action, away from mere talk therapy, and for this reason is considered an experiential approach. Through experiments, the therapist supports the client's direct experience of something new, instead of merely talking about the possibility of something new. Indeed, the entire therapeutic relationship may be considered experimental, because at one level it is a corrective, relational experience for many clients, and it is a "safe emergency" that is free to turn out however it will. An experiment can also be conceived as a teaching method that creates an experience in which a client might learn something as part of their growth.
Examples might include:
Rather than talking about the client's critical parent, a Gestalt therapist might ask the client to imagine the parent is present, or that the therapist is the parent, and talk to that parent directly
If a client is struggling with how to be assertive, a Gestalt therapist could either:
have the client say some assertive things to the therapist or members of a therapy group
have the client give a talk about how one should never be assertive (a paradoxical experiment)
A Gestalt therapist might notice something about the non-verbal behavior or tone of voice of the client; then the therapist might have the client exaggerate the non-verbal behavior and pay attention to that experience
A Gestalt therapist might work with the breathing or posture of the client, and direct awareness to changes that might happen when the client talks about different content.
With all these experiments the Gestalt therapist is working with process rather than content, the how rather than the what.
Noteworthy issues
Self
In field theory, self is a phenomenological concept, existing in comparison with other. Without the other there is no self, and how one experiences the other is inseparable from how one experiences oneself. The continuity of selfhood (functioning personality) is something that is achieved in relationship, rather than something inherently "inside" the person. This can have its advantages and disadvantages. At one end of the spectrum, someone may not have enough self-continuity to be able to make meaningful relationships, or to have a workable sense of who they are. In the middle, their personality is a loose set of ways of being that work for them, including commitments to relationships, work, culture and outlook, always open to change where they need to adapt to new circumstances or just want to try something new. At the other end, their personality is a rigid defensive denial of the new and spontaneous. They act in stereotyped ways, and either induce other people to act in particular and fixed ways towards them, or they redefine their actions to fit with fixed stereotypes.
In Gestalt therapy, the process is not about the self of the client being helped or healed by the fixed self of the therapist; rather it is an exploration of the co-creation of self and other in the here-and-now of the therapy. There is no assumption that the client will act in all other circumstances as they do in the therapy situation. However, the areas that cause problems will be either the lack of self-definition leading to chaotic or psychotic behaviour, or the rigid self-definition in some area of functioning that denies spontaneity and makes dealing with particular situations impossible. Both of these conditions show up very clearly in the therapy, and can be worked with in the relationship with the therapist.
The experience of the therapist is also very much part of the therapy. Since we co-create our self-other experiences, the way a therapist experiences being with a client is significant information about how the client experiences themselves. The proviso here is that a therapist is not operating from their own fixed responses. This is why Gestalt therapists are required to undertake significant therapy of their own during training.
From the perspective of this theory of self, neurosis can be seen as fixed predictability—a fixed Gestalt—and the process of therapy can be seen as facilitating the client to become unpredictable: more responsive to what is in the client's present environment, rather than responding in a stuck way to past introjects or other learning. If the therapist has expectations of how the client should end up, this defeats the aim of therapy.
Change
In what has now become a classic of Gestalt therapy literature, Arnold R. Beisser described Gestalt's paradoxical theory of change. The paradox is that the more one attempts to be who one is not, the more one remains the same. Conversely, when people identify with their current experience, the conditions of wholeness and growth support change. Put another way, change comes about as a result of "full acceptance of what is, rather than a striving to be different" (Houston, 2003).
The empty chair technique
The empty chair technique, or chairwork, is typically used in Gestalt therapy when a patient has deep-rooted emotional problems involving someone or something in their life: relationships with themselves, with aspects of their personality, with their concepts, ideas and feelings, or with other people. The patient addresses whatever is placed in the empty chair, commonly another person, an aspect of their own personality, or a certain feeling, as if that thing were sitting in the chair. They may also move between chairs and act out two or more sides of a discussion, typically involving the patient and persons significant to them. A form of role-playing, the technique focuses on exploration of self and is used by therapists to help patients self-adjust. It offers a way of opening up the patient's emotions and pent-up feelings so they can let go of what they have been holding back; the aim is for the patient to become more in touch with their feelings and to have an emotional conversation that clears up long-held feelings or reactions to the person or object in the chair. Gestalt techniques were originally a form of psychotherapy, but are now often used in counseling as well, for instance by encouraging clients to act out their feelings to help them prepare for a new job.
Historical development
Fritz Perls was a German-Jewish psychoanalyst who fled Europe with his wife Laura Perls in 1933, escaping Nazi oppression by settling in South Africa. After World War II, the couple emigrated to New York City, which had become a center of intellectual, artistic and political experimentation by the late 1940s and early 1950s.
Early influences
Perls grew up on the bohemian scene in Berlin, participated in Expressionism and Dadaism, and experienced the turning of the artistic avant-garde toward the revolutionary left. Deployment to the front line, the trauma of war, anti-Semitism, intimidation, escape, and the Holocaust are further key sources of biographical influence.
Perls served in the German Army during World War I, and was wounded in the conflict. After the war he was educated as a medical doctor. He became an assistant to Kurt Goldstein, who worked with brain-injured soldiers. Perls went through a psychoanalysis with Wilhelm Reich and became a psychiatrist. Perls assisted Goldstein at Frankfurt University where he met his wife Lore (Laura) Posner, who had earned a doctorate in Gestalt psychology. They fled Nazi Germany in 1933 and settled in South Africa. Perls established a psychoanalytic training institute and joined the South African armed forces, serving as a military psychiatrist. During these years in South Africa, Perls was influenced by Jan Smuts and his ideas about "holism".
In 1936 Fritz Perls attended a psychoanalysts' conference in Marienbad, Czechoslovakia, where he presented a paper on oral resistances, based mainly on Laura Perls's notes on breastfeeding their children. The paper had initially been turned down, and when Perls did present it in 1936 it met, according to him, with "deep disapproval." Perls wrote his first book, Ego, Hunger and Aggression (1942, 1947), in South Africa, based in part on the rejected paper. It was later re-published in the United States. Laura Perls wrote two chapters of this book, but she was not given adequate recognition for her work.
The seminal book
Perls's seminal work was Gestalt Therapy: Excitement and Growth in the Human Personality, published in 1951, co-authored by Fritz Perls, Paul Goodman, and Ralph Hefferline (a university psychology professor and sometimes-patient of Fritz Perls). Most of Part II of the book was written by Paul Goodman from Perls's notes, and it contains the core of Gestalt theory. This part was supposed to appear first, but the publishers decided that Part I, written by Hefferline, fit into the nascent self-help ethos of the day, and they made it an introduction to the theory. Isadore From, a leading early theorist of Gestalt therapy, taught Goodman's Part II for an entire year to his students, going through it phrase by phrase.
First instances of Gestalt therapy
Fritz and Laura founded the first Gestalt Institute in 1952, running it out of their Manhattan apartment. Isadore From became a patient, first of Fritz, and then of Laura. Fritz soon made From a trainer, and also gave him some patients. From lived in New York until his death, at age seventy-five, in 1993. He was known worldwide for his philosophical and intellectually rigorous take on Gestalt therapy. Acknowledged as a supremely gifted clinician, he was indisposed to writing, so what remains of his work is merely transcripts of interviews.
Of great importance to understanding the development of Gestalt therapy is the early training which took place in experiential groups in the Perls's apartment, led by both Fritz and Laura before Fritz left for the West Coast, and after by Laura alone. These "trainings" were unstructured, with little didactic input from the leaders, although many of the principles were discussed in the monthly meetings of the institute, as well as at local bars after the sessions. Many notable Gestalt therapists emerged from these crucibles in addition to Isadore From, e.g., Richard Kitzler, Dan Bloom, Bud Feder, Carl Hodges, and Ruth Ronall. In these sessions, both Fritz and Laura used some variation of the "hot seat" method, in which the leader essentially works with one individual in front of an audience with little or no attention to group dynamics. In reaction to this omission emerged a more interactive approach in which Gestalt-therapy principles were blended with group dynamics; in 1980, the book Beyond the Hot Seat, edited by Feder and Ronall, was published, with contributions from members of both the New York and Cleveland Institutes, as well as others.
Fritz left Laura and New York in 1960, briefly lived in Miami, and ended up in California. Jim Simkin was a psychotherapist who became a client of Perls in New York and then a co-therapist with Perls in Los Angeles. Simkin was responsible for Perls's going to California, where Perls began a psychotherapy practice. Ultimately, the life of a peripatetic trainer and workshop leader was better suited to Fritz's personality—starting in 1963, Simkin and Perls co-led some of the early Gestalt workshops and training groups at Esalen Institute in Big Sur, California, where Perls eventually settled and built a home. Jim Simkin then purchased property next to Esalen and started his own training center, which he ran until his death in 1984. Simkin refined his precise version of Gestalt therapy, training psychologists, psychiatrists, counselors and social workers within a very rigorous, residential training model.
The schism
In the 1960s, Perls became infamous among the professional elite for his public workshops at Esalen Institute. Isadore From referred to some of Fritz's brief workshops as "hit-and-run" therapy, because of Perls's alleged emphasis on showmanship with little or no follow-through—but Perls never considered these workshops to be complete therapy; rather, he felt he was giving demonstrations of key points for a largely professional audience. Unfortunately, some films and tapes of his work were all that most graduate students were exposed to, along with the misperception that these represented the entirety of Perls's work.
When Fritz Perls left New York for California, there began to be a split with those who saw Gestalt therapy as a therapeutic approach similar to psychoanalysis. This view was represented by Isadore From, who practiced and taught mainly in New York, as well as by the members of the Cleveland Institute, which was co-founded by From. An entirely different approach was taken, primarily in California, by those who saw Gestalt therapy not just as a therapeutic modality, but as a way of life. The East Coast, New York–Cleveland axis was often appalled by the notion of Gestalt therapy leaving the consulting room and becoming a way of life on the West Coast in the 1960s (see the "Gestalt prayer").
An alternative view of this split saw Perls in his last years continuing to develop his a-theoretical and phenomenological methodology, while others, inspired by From, were inclined to theoretical rigor which verged on replacing experience with ideas.
The split continues between what has been called "East Coast Gestalt" and "West Coast Gestalt," at least from an Amerocentric point of view. While the communitarian form of Gestalt continues to flourish, Gestalt therapy was largely replaced in the United States by Cognitive Behavioral Therapy, and many Gestalt therapists in the U.S. drifted toward organizational management and coaching. At the same time, contemporary Gestalt Practice (to a large extent based upon Gestalt therapy theory and practice) was developed by Dick Price, the co-founder of Esalen Institute. Price was one of Perls's students at Esalen.
Post-Perls
In 1969, Fritz Perls left the United States to start a Gestalt community at Lake Cowichan on Vancouver Island, Canada. He died almost one year later, on 14 March 1970, in Chicago. One member of the Gestalt community was Barry Stevens. Her book about that phase of her life, Don't Push the River, became very popular. She developed her own form of Gestalt therapy body work, which is essentially a concentration on the awareness of body processes.
The Polsters
Erving and Miriam Polster started a training center in La Jolla, California, and published a book, Gestalt Therapy Integrated, in the 1970s.
They were influential in advancing the idea of contact boundary phenomena, a key part of Gestalt theory. The standard contact boundary resistances were confluence, introjection, projection, and retroflection; the Polsters added "deflection" as a further way of avoiding contact. Boundary phenomena can have beneficial or harmful effects depending on the situation. For example, it is normal for a baby and mother to merge, but not for a therapist and client: if therapist and client become too merged, there can be no progress, because there is no boundary at which they can make contact, and the client cannot learn anything new because the therapist simply becomes a part of them.
Influences upon Gestalt therapy
Some examples
There were a variety of psychological and philosophical influences upon the development of Gestalt therapy, not the least of which were the social forces at the time and place of its inception. Gestalt therapy is an approach that is holistic (including mind, body, and culture). It is present-centered and related to existential therapy in its emphasis on personal responsibility for action, and on the value of "I–thou" relationship in therapy. In fact, Perls considered calling Gestalt therapy existential-phenomenological therapy. "The I and thou in the Here and Now" was a semi-humorous shorthand mantra for Gestalt therapy, referring to the substantial influence of the work of Martin Buber—in particular his notion of the I–Thou relationship—on Perls and Gestalt. Buber's work emphasized immediacy, and required that any method or theory answer to the therapeutic situation, seen as a meeting between two people. Any process or method that turns the patient into an object (the I–It) must be strictly secondary to the intimate, and spontaneous, I–Thou relation. This concept became important in much of Gestalt theory and practice.
Both Fritz and Laura Perls were students and admirers of the neuropsychiatrist Kurt Goldstein. Gestalt therapy was based in part on Goldstein's Organismic theory. Goldstein viewed a person in terms of a holistic and unified experience; he encouraged a "big picture" perspective, taking into account the whole context of a person's experience. The word Gestalt means whole, or configuration. Laura Perls, in an interview, identified Organismic theory as the basis of Gestalt therapy.
There were additional influences on Gestalt therapy from existentialism, particularly the emphasis upon personal choice and responsibility.
The late 1950s–1960s movement toward personal growth and the human potential movement in California fed into, and was itself influenced by, Gestalt therapy. In this process Gestalt therapy somehow became a coherent Gestalt, which is the Gestalt psychology term for a perceptual unit that holds together and forms a unified whole.
Psychoanalysis
Fritz Perls trained as a neurologist at major medical institutions and as a Freudian psychoanalyst in Berlin and Vienna, the most important international centers of the discipline in his day. He worked as a training analyst for several years with the official recognition of the International Psychoanalytic Association (IPA), and must be considered an experienced clinician.
Gestalt therapy was influenced by psychoanalysis: it was part of a continuum moving from the early work of Freud, to the later Freudian ego analysis, to Wilhelm Reich and his character analysis and notion of character armor, with attention to nonverbal behavior; this was consonant with Laura Perls's background in dance and movement therapy. To this was added the insights of academic Gestalt psychology, including perception, Gestalt formation, and the tendency of organisms to complete an incomplete Gestalt and to form "wholes" in experience.
Central to Fritz and Laura Perls's modifications of psychoanalysis was the concept of dental or oral aggression. In Ego, Hunger and Aggression (1947), Fritz Perls's first book, to which Laura Perls contributed (ultimately without recognition), Perls suggested that when the infant develops teeth, he or she has the capacity to chew, to break food apart, and, by analogy, to experience, taste, accept, reject, or assimilate. This was opposed to Freud's notion that only introjection takes place in early experience. Thus Perls made assimilation, as opposed to introjection, a focal theme in his work, and the prime means by which growth occurs in therapy.
In contrast to the psychoanalytic stance, in which the "patient" introjects the (presumably more healthy) interpretations of the analyst, in Gestalt therapy the client must "taste" his or her own experience and either accept or reject it—but not introject or "swallow whole." Hence, the emphasis is on avoiding interpretation, and instead encouraging discovery. This is the key point in the divergence of Gestalt therapy from traditional psychoanalysis: growth occurs through gradual assimilation of experience in a natural way, rather than by accepting the interpretations of the analyst; thus, the therapist should not interpret, but lead the client to discover for him- or herself.
The Gestalt therapist contrives experiments that lead the client to greater awareness and fuller experience of his or her possibilities. Experiments can be focused on undoing projections or retroflections. The therapist can work to help the client with closure of unfinished Gestalts ("unfinished business" such as unexpressed emotions towards somebody in the client's life). There are many kinds of experiments that might be therapeutic, but the essence of the work is that it is experiential rather than interpretive, and in this way, Gestalt therapy distinguishes itself from psychoanalysis.
Principal influences: a summary list
Otto Rank's invention of "here-and-now" therapy and Rank's post-Freudian book Art and Artist (1932), both of which strongly influenced Paul Goodman
Wilhelm Reich's psychoanalytic developments, especially his early character analysis, and the later concept of character armor and its focus on the body
Jacob Moreno's psychodrama, principally the development of enactment techniques for the resolution of psychological conflicts
Kurt Goldstein's holistic theory of the organism, based on Gestalt theory
Martin Buber's philosophy of dialogue and relationship ("I–Thou")
Kurt Lewin's field theory as applied to the social sciences and group dynamics
European phenomenology of Franz Brentano, Edmund Husserl, Martin Heidegger, and Maurice Merleau-Ponty
The existentialism of Kierkegaard over that of Sartre, rejecting nihilism
The Jungian psychology of Carl Jung, particularly the polarities concept
Some elements from Zen Buddhism
Differentiation between thing and concept from Zen and the works of Alfred Korzybski
The American pragmatism of William James, George Herbert Mead, and John Dewey
Therapies influenced by Gestalt therapy
Psychotherapies influenced by Gestalt therapy include:
Acceptance and commitment therapy
Emotion-focused therapy
Current status
Gestalt therapy reached a zenith in the United States in the late 1970s and early 1980s. Since then, it has influenced other fields like organizational development, coaching, and teaching. Many of its contributions have become assimilated into other schools of therapy. In recent years, it has seen a resurgence in popularity as an active, psychodynamic form of therapy which has also incorporated some elements of recent developments in attachment theory. There are, for example, four Gestalt training institutes in the New York City metropolitan area alone, in addition to dozens of others worldwide.
Gestalt therapy continues to thrive as a widespread form of psychotherapy, especially throughout Europe, where there are many practitioners and training institutions. Dan Rosenblatt led Gestalt therapy training groups and public workshops at the Tokyo Psychotherapy Academy for seven years. Stewart Kiritz continued in this role from 1997 to 2006.
Training of Gestalt therapists
Pedagogical approach
Many Gestalt therapy training organizations exist worldwide. Ansel Woldt asserted that Gestalt teaching and training are built upon the belief that people are, by nature, health-seeking. Thus, such commitments as authenticity, optimism, holism, health, and trust become important principles to consider when engaged in the activity of teaching and learning—especially Gestalt therapy theory and practice.
Associations
The Association for the Advancement of Gestalt Therapy holds a biennial international conference in various locations; the first was in New Orleans in 1995. Subsequent conferences have been held in San Francisco, Cleveland, New York, Dallas, St. Pete's Beach, Vancouver (British Columbia), Manchester (England), and Philadelphia. In addition, the association holds regional conferences, and its regional network has spawned conferences in Amsterdam, the Southwest and Southeast of the United States, England, and Australia. Its Research Task Force generates and nurtures active research projects and an international conference on research.
The European Association for Gestalt Therapy was founded in 1985 to gather European individual Gestalt therapists, training institutes, and national associations from more than twenty European nations.
Gestalt Australia and New Zealand was formally established at the first Gestalt Therapy Conference held in Perth in September 1998.
Limitations
Gestalt therapy is neither designed nor intended for the treatment of young adolescents, especially those who manifest severe psychiatric or behavioral disorders.
See also
Role reversal
Topdog vs. underdog
Violet Oaklander
References
Further reading
Perls, F. (1969) Ego, Hunger, and Aggression: The Beginning of Gestalt Therapy. New York, NY: Random House. (originally published in 1942, and re-published in 1947)
Perls, F. (1969) Gestalt Therapy Verbatim. Moab, UT: Real People Press.
Perls, F., Hefferline, R., & Goodman, P. (1951) Gestalt Therapy: Excitement and growth in the human personality. New York, NY: Julian.
Perls, F. (1973) The Gestalt Approach & Eye Witness to Therapy. New York, NY: Bantam Books.
Brownell, P. (2012) Gestalt Therapy for Addictive and Self-Medicating Behaviors. New York, NY: Springer Publishing.
Levine, T.B-Y. (2011) Gestalt Therapy: Advances in Theory and Practice. New York, NY: Routledge.
Bloom, D. & Brownell, P. (eds) (2011) Continuity and Change: Gestalt Therapy Now. Newcastle, UK: Cambridge Scholars Publishing.
Mann, D. (2010) Gestalt Therapy: 100 Key Points & Techniques. London & New York: Routledge.
Staemmler, F-M. (2009) Aggression, Time, and Understanding: Contributions to the Evolution of Gestalt Therapy. New York, NY: Routledge/Taylor & Francis Group; GestaltPress.
Woldt, A. & Toman, S. (2005) Gestalt Therapy: History, Theory and Practice. Thousand Oaks, CA: Sage Publications.
External links
Overview effect

The overview effect is a cognitive shift reported by some astronauts while viewing the Earth from space. Researchers have characterized the effect as "a state of awe with self-transcendent qualities, precipitated by a particularly striking visual stimulus". The most prominent common aspects of personally experiencing the Earth from space are appreciation and perception of beauty, unexpected and even overwhelming emotion, and an increased sense of connection to other people and the Earth as a whole. The effect can cause changes in the observer's self concept and value system, and can be transformative.
Immersive virtual reality simulations have been designed to try to induce the overview effect in earthbound participants.
Characteristics
Broadly, Yaden et al. (2016) state that the most prominent common aspects of the astronauts' experience were appreciation and perception of beauty, unexpected and even overwhelming emotion, and an increased sense of connection to other people and the Earth as a whole. Yaden et al. proposed that the overview effect can be understood in terms of awe and self-transcendence, which they describe as "among the deepest and most powerful aspects of the human experience". More specifically, they write that the effect might best be understood as "a state of awe with self-transcendent qualities, precipitated by a particularly striking visual stimulus". Yaden posited that the overview effect triggers awe through both perceptual vastness (like seeing the Grand Canyon) and conceptual vastness (like contemplating an idea such as infinity).
Yaden et al. (2016) write that some astronauts viewing Earth from space "report overwhelming emotion and feelings of identification with humankind and the planet as a whole". The effect can cause changes in the observer's self concept and value system, and is sometimes transformative. Voski (2020) demonstrated a marked influence on astronauts' environmental attitudes and behaviors, and a new level of environmental awareness and consciousness. Though astronaut Leland Melvin said the effect seems to take hold of astronauts regardless of culture or nation of origin, Yaden et al. observed that cultural differences, including differences in religious and social identity, affect the ways in which the effect is experienced and interpreted. Expressions range from the religious, to the "vaguely spiritual", to the naturalistic, to calls to social duty.
Author Frank White, who in the 1980s coined the term overview effect after interviewing many astronauts, said that the overview effect is "beyond words", requiring experience to understand, even likening it in this regard to Zen Buddhism. He said that astronauts' very first views of the planet were generally very significant, adding that some experience the effect "in a moment" while in others it grows over time; and generally that the effect "does accumulate".
Not all astronauts experience the overview effect. Further, White distinguished experiences in low Earth orbit, where the planet takes up most of an astronaut's view, from experiences on the Moon, in which one sees the whole Earth "against a backdrop of the entire cosmos". He described a "big difference" between professional astronauts, who are focused on their missions, and people who have recently been going into space "with an intention to have an experience" and who may already be aware of the overview effect.
Alternative characterizations
Beginning in the 2010s, science historian Jordan Bimm argued against White's interpretation that the overview effect is, in Bimm's words, "a reliably produced mental effect—a naturally occurring phenomenon between the environment and the human mind". Instead, Bimm asserts that the effect is "both a natural and cultural object" that is variable over particular individuals, divergent cultures, and different time periods. Preliminarily, Bimm noted that studies of early test pilots' negative-experience break-off phenomenon ended in 1973 (displaced by White's "positive conversion narrative"), that astronauts in a "lie to fly" culture feel career pressure to avoid reporting negative psychological reactions, and that individuals already aware of the overview effect may make it a self-fulfilling prophecy. He posited that it was Cold War and mastery-of-Earth mentalities of Western technological supremacy that contributed to the rise of borderless-world concepts such as the Gaia hypothesis, spaceship Earth, and the Blue Marble.
Bimm expressed concern over White's perception that the effect embodies a natural imperative for humans to pursue space travel and colonization, Bimm saying the attitude resembles the 18th-century American colonialist, expansionist concept of manifest destiny. Bimm warned of hubris underlying a perception of having achieved a new level of enlightenment that he called the "overlord effect".
History
English astronomer Fred Hoyle wrote in 1948 that, "once a photograph of the Earth, taken from the outside, is available, a new idea as powerful as any in history will be let loose". After Apollo 8 astronaut William Anders' December 1968 Earthrise photograph of the Earth from lunar orbit, the Apollo missions were credited with inspiring the environmental movement, the first Earth Day being held in April 1970. Hoyle said that people suddenly seemed to care about protecting Earth's natural environment, though others attribute that awareness to Rachel Carson's 1962 book Silent Spring and reactions to several environmental disasters in the 1960s.
The term overview effect was coined by author and researcher Frank White, who said he thought he first had "a mild experience" of the effect while flying across the country and looking out the aircraft's window. That experience led him to imagine living in an O'Neill cylinder (habitat in space), which inspired him to become involved with the Space Studies Institute and begin speaking with astronauts.
White's astronaut interviews confirmed the importance of the difference between intellectual knowledge and experience, of perceiving the "striking thinness of the atmosphere", of thinking of ourselves as interconnected and part of the Earth as an organic system, and of the sense that we as different people "are all in this together". The first public use of the term was in a poster at a Space Studies Institute meeting in 1985. Eventually, White wrote about the effect in his book The Overview Effect: Space Exploration and Human Evolution (1987), now in its fourth edition (2021). White's work did not attain broad influence until the 2010s, a period of increased societal divisions and a new prospect of private space travel.
Accounts
Michael Collins,
Yuri Gagarin,
Ron Garan,
Chris Hadfield,
James Irwin,
Mae Jemison,
Scott Kelly,
André Kuipers,
Jerry Linenger,
Mike Massimino,
Anne McClain,
Leland Melvin,
Edgar Mitchell,
Sian Proctor,
Rusty Schweickart,
William Shatner, and
Nicole Stott
are among those reported to have experienced the effect.
Michael Collins (Apollo 11; 1969) said that "the thing that really surprised me was that it [Earth] projected an air of fragility. And why, I don't know. I don't know to this day. I had a feeling it's tiny, it's shiny, it's beautiful, it's home, and it's fragile".
Edgar Mitchell (Apollo 14; 1971) described it as an "explosion of awareness" and an "overwhelming sense of oneness and connectedness... accompanied by an ecstasy... an epiphany".
William Shatner (Blue Origin NS-18, 2021) said immediately after landing that "everybody in the world needs to do this. ... The covering of blue was... the sheet, this blanket, this comforter of blue that we have around us... And then suddenly you shoot through it... as though you whip off a sheet off you when you're asleep, and you're looking into blackness, into black ugliness, and you look down, there's the blue down there, and the black up there and it's... Mother Earth and comfort, and there is—is there death? I don't know".
Lasting impact
Researchers have recognized that awe-based experiences—such as interaction with nature, religious or spiritual or mystical experiences, meditation, and peak and flow experiences during high task performance—can change a person and promote the feeling of unity or interconnectedness. Gallagher et al. (2015) defined a set of consensus categories for awe that included being captured by the view or drawn to the phenomenon, experiences of elation, desiring more of the experience, feeling overwhelmed, and scale effects – feelings of the vastness of the universe or of one's own smallness when faced with that vastness. Besides being an enjoyable experience, such phenomena can have short and long-term positive outcomes such as increased well-being, pro-social and pro-environmental attitudes, and improved physical health. The self-transcendent experience can cause long-term changes in personal outlook, and can influence peoples' very sense of self by affecting their self-schema ("the particular framework through which (people) imagine themselves in relation to the world").
Specifically, Frank White noted that upon return, some astronauts became involved in humanitarian activities, or became artists, with astronaut Edgar Mitchell founding the Institute of Noetic Sciences. Though Yaden et al. (2016) noted that the experience can be transformative, White said in 2019 that generally there was no "dramatic transformation" or "marching in peace parades"; that the lasting effect was more subtle.
A 2018 questionnaire survey of 39 astronauts and cosmonauts found that humanistic changes predominated over spiritual changes. In particular, the survey found a moderate degree of change in the Perceptions of Earth subscale (Earth as "a beautiful, fragile object to be treasured"), which significantly correlated with subsequent involvement in environmental causes. In contrast, the survey found "no to very small change" in the Spiritual Change subscale, which the researchers said likely reflected established pre-launch values.
Immediately after his October 2021 Blue Origin flight, William Shatner told founder Jeff Bezos, "what you have given me is the most profound experience. I hope I never recover from this. I hope that I can maintain what I feel now". However, in October 2022 he recounted that it took hours for him to realize why he wept after stepping out of the spacecraft: "I realized I was in grief for the Earth". He later said that "I saw more clearly than I have... (the) slow death of Earth and we on it". His biography Boldly Go recounted that "it was among the strongest feelings of grief I have ever encountered. The contrast between the vicious coldness of space and the warm nurturing of Earth below filled me with overwhelming sadness. Every day, we are confronted with the knowledge of further destruction of Earth at our hands... It filled me with dread. My trip to space was supposed to be a celebration; instead, it felt like a funeral."
At one point the Skylab 4 (1970s) crew refused to work, asserting, in the flight director's words, "their needs to reflect, to observe, to find their place amid these baffling, fascinating, unprecedented experiences". This event, plus research indicating that actively photographing the Earth has positive psychological effects, caused Yaden et al. to posit that studying the overview effect might improve understanding of psychological well-being in isolated, confined, extreme (ICE) environments such as space flight.
Early photos of Earth taken from space have inspired a mild version of the overview effect in earthbound viewers. The images became prominent symbols of environmental concern and have been credited for raising the public's consciousness about the fragility of Earth and expanding concern for long-term survival on a finite planet.
The accumulating experience of astronauts and space tourists inspires in many of them a strong desire to protect the Earth by actively communicating their broadened perspective, for example by speaking at international climate summits. Virgin Galactic officials specifically cite the overview effect as a motivation for carrying people to the edge of space, to fundamentally change the way people think about their home world. Critics note, however, that the space travel needed to personally experience the full overview effect, itself involves significant environmental pollution. A less polluting approach is to simulate the effect on Earth, with virtual reality technology.
Simulating the effect
Researchers have found that virtual reality (VR) technology elicits components of awe-based experiences and can induce minor cognitive shifts in participants' world views similar to those of the overview effect. Perceived safety, personal background and familiarity with the environment, and induction of a small visceral fear reaction were found to be key contributors to the immersive experience. VR studies through 2019 had not observed a transformative experience on a scale similar to the overview effect itself, but VR experience can trigger profound emotional responses such as awe.
A 2019 study found that a virtual experience invoked "minor transformative experiences in some participants", including appreciation of beauty and vastness, realization of interconnectedness, and a potential intent to change one's behavior. Recognizing the relatively early state of VR technology, the researchers urged using knowledge of profound transformative experiences to motivate the design of VR installations, and thereafter study the VR experience itself as its own phenomenon.
On December 24, 2018—the fiftieth anniversary of the first day on which humans saw an earthrise with their own eyes—the Spacebuzz project was unveiled in Hilversum, Netherlands. Within a mobile, rocket-shaped vehicle, Spacebuzz's nine moving seats and VR headsets simulate spaceflight in an experience designed especially for children.
Researchers at the University of Missouri tried to reproduce the experience with a water-filled flotation tank, half a tonne of Epsom salts, and a waterproof VR headset.
"The Infinite" provides an hour-long simulation of being on the International Space Station using 360-degree, 3D, astronaut-recorded footage from the VR film Space Explorers: The ISS Experience. Visitors share a 12,500 square foot (1150 square meter) area allowing them to physically explore the ISS and look outside.
A three-dimensional model of the Earth, created from detailed NASA imagery and appearing to float in the air, toured the U.K. in 2022, with the installation "aim(ing) to create a sense of the Overview Effect".
Other names
The overview effect has been referred to as the big picture effect (Edgar Mitchell), orbital perspective (Ronald J. Garan Jr.), and the astronaut's secret (Albert Sacco).
Referring to how profound Mitchell's experience on the Moon was, as distinguished from experiences in low Earth orbit, author Frank White called Mitchell's experience a universal insight because it reflected a more universal perspective.
Related effects
A 1957 article in The Journal of Aviation Medicine studied the break-off phenomenon, which it defined as "a feeling of physical separation from the earth when piloting an aircraft at high altitude". The main precipitating factors were concluded to be flying alone, at high altitude, with relatively little to do. Researchers summarized pilots' descriptions as "a feeling of being isolated, detached, or separated physically from the earth" or as a perception of "somehow losing their connection with the world". Individual reactions ranged from exhilaration or feeling nearer to God, to anxiety, fear, or loneliness. Alan Shepard reported feeling underwhelmed, and others reported feeling an attachment with the spacecraft rather than Earth, effects one researcher interpreted as a form of disorientation. Scientific literature covering the break-off phenomenon ended in 1973.
Anthropologist Deana L. Weibel introduced the term ultraview effect as a response to an unobscured view of stars, an effect that she concluded was experienced more rarely than the overview effect. In contrast to the overview effect's sense of connection, the ultraview effect responds to the limitations of our knowledge, causing "a transformative sense of incomprehension and a feeling of shrinking or self-diminution".
Frank White posited the term Copernicus Perspective—awareness of being part of the Solar System when one is on another planet.
Science historian Jordan Bimm described how the concept shares similarities with the British concept of the sublime—an experience associated with views from high mountains.
See also
Collective consciousness
Earth in culture
Earth phase
Effect of spaceflight on the human body
Extraterrestrial sky
Pale Blue Dot
The Real
Scale (analytical tool)
Space For Humanity
Spaceship Earth
Explanatory notes
References
External links
Overview Institute
Overview, short film from Planetary Collective
"Speech on the Overview Effect and Its Importance in Civilization"—five minute talk by JP Chastain at Ignite Boise (2012)
Articles containing video clips
Concepts in ethics
Concepts in philosophical anthropology
Concepts in the philosophy of mind
Psychological concepts
Space psychology
Spaceflight concepts
Spiritual concepts
Economics

Economics is a social science that studies the production, distribution, and consumption of goods and services.
Economics focuses on the behaviour and interactions of economic agents and how economies work. Microeconomics analyses what is viewed as basic elements within economies, including individual agents and markets, their interactions, and the outcomes of interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyses economies as systems where production, distribution, consumption, savings, and investment expenditure interact, and the factors affecting them: factors of production, such as labour, capital, land, and enterprise; inflation; economic growth; and public policies that have an impact on these elements. It also seeks to analyse and describe the global economy.
Other broad distinctions within economics include those between positive economics, describing "what is", and normative economics, advocating "what ought to be"; between economic theory and applied economics; between rational and behavioural economics; and between mainstream economics and heterodox economics.
Economic analysis can be applied throughout society, including business, finance, cybersecurity, health care, engineering and government. It is also applied to such diverse subjects as crime, education, the family, feminism, law, philosophy, politics, religion, social institutions, war, science, and the environment.
Definitions of economics
The earlier term for the discipline was "political economy", but since the late 19th century, it has commonly been called "economics". The term is ultimately derived from the Ancient Greek oikonomia, a term for the "way (nomos) to run a household (oikos)", or in other words the know-how of an oikonomikos, or "household or homestead manager". Derived terms such as "economy" can therefore often mean "frugal" or "thrifty". By extension, "political economy" was the way to manage a polis or state.
There are a variety of modern definitions of economics; some reflect evolving views of the subject or different views among economists. Scottish philosopher Adam Smith (1776) defined what was then called political economy as "an inquiry into the nature and causes of the wealth of nations".
Jean-Baptiste Say (1803), distinguishing the subject matter from its public-policy uses, defined it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle (1849) coined "the dismal science" as an epithet for classical economics, in this context commonly linked to the pessimistic analysis of Malthus (1798). John Stuart Mill (1844) delimited the subject matter further.
Alfred Marshall provided a still widely cited definition in his textbook Principles of Economics (1890) that extended analysis beyond wealth and from the societal to the microeconomic level, characterising economics as "a study of mankind in the ordinary business of life".
Lionel Robbins (1932) developed implications of what has been termed "[p]erhaps the most commonly accepted current definition of the subject": that economics is "the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses".
Robbins described the definition as not classificatory in "pick[ing] out certain kinds of behaviour" but rather analytical in "focus[ing] attention on a particular aspect of behaviour, the form imposed by the influence of scarcity." He affirmed that previous economists had usually centred their studies on the analysis of wealth: how wealth is created (production), distributed, and consumed; and how wealth can grow. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning as its goal (a sought-after end), generates both costs and benefits, and uses resources (human life and other costs) to attain that goal. If the war is not winnable or if the expected costs outweigh the benefits, the deciding actors (assuming they are rational) may never go to war (a decision) but rather explore other alternatives. Economics cannot be defined as the science that studies wealth, war, crime, education, and every other field economic analysis can be applied to; rather, it is the science that studies a particular common aspect of each of those subjects (they all use scarce resources to attain a sought-after end).
Some subsequent comments criticised the definition as overly broad in failing to limit its subject matter to analysis of markets. From the 1960s, however, such comments abated as the economic theory of maximizing behaviour and rational-choice modelling expanded the domain of the subject to areas previously treated in other fields. There are other criticisms as well, such as in scarcity not accounting for the macroeconomics of high unemployment.
Gary Becker, a contributor to the expansion of economics into new areas, described the approach he favoured as "combin[ing the] assumptions of maximizing behaviour, stable preferences, and market equilibrium, used relentlessly and unflinchingly." One commentary characterises the remark as making economics an approach rather than a subject matter but with great specificity as to the "choice process and the type of social interaction that [such] analysis involves." The same source reviews a range of definitions included in principles of economics textbooks and concludes that the lack of agreement need not affect the subject-matter that the texts treat. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving, or should evolve.
Many economists, including Nobel Prize winners James M. Buchanan and Ronald Coase, reject the method-based definition of Robbins and continue to prefer definitions like those of Say, in terms of the subject matter. Ha-Joon Chang has, for example, argued that the definition of Robbins would make economics very peculiar because all other sciences define themselves in terms of the area or object of inquiry rather than the methodology. In a biology department, it is not said that all biology should be studied with DNA analysis. People study living organisms in many different ways: some perform DNA analysis, others analyse anatomy, and still others build game-theoretic models of animal behaviour, but all of it is called biology because it studies living organisms. According to Chang, the view that the economy can and should be studied in only one way (for example, by studying only rational choices) and, going one step further, essentially redefining economics as a theory of everything, is peculiar.
History of economic thought
From antiquity through the physiocrats
Questions regarding the distribution of resources are found throughout the writings of the Boeotian poet Hesiod, and several economic historians have described Hesiod as the "first economist". However, oikos, the Greek word from which economy derives, was used for issues regarding how to manage a household (understood to be the landowner, his family, and his slaves) rather than to refer to some normative societal system of distribution of resources, which is a more recent phenomenon. Xenophon, the author of the Oeconomicus, is credited by philologists as the source of the word economy. Joseph Schumpeter described 16th and 17th century scholastic writers, including Tomás de Mercado, Luis de Molina, and Juan de Lugo, as "coming nearer than any other group to being the 'founders' of scientific economics" as to monetary, interest, and value theory within a natural-law perspective.
Two groups, who later were called "mercantilists" and "physiocrats", more directly influenced the subsequent development of the subject. Both groups were associated with the rise of economic nationalism and modern capitalism in Europe. Mercantilism was an economic doctrine that flourished from the 16th to 18th century in a prolific pamphlet literature, whether of merchants or statesmen. It held that a nation's wealth depended on its accumulation of gold and silver. Nations without access to mines could obtain gold and silver from trade only by selling goods abroad and restricting imports other than of gold and silver. The doctrine called for importing inexpensive raw materials to be used in manufacturing goods, which could be exported, and for state regulation to impose protective tariffs on foreign manufactured goods and prohibit manufacturing in the colonies.
Physiocrats, a group of 18th-century French thinkers and writers, developed the idea of the economy as a circular flow of income and output. Physiocrats believed that only agricultural production generated a clear surplus over cost, so that agriculture was the basis of all wealth. Thus, they opposed the mercantilist policy of promoting manufacturing and trade at the expense of agriculture, including import tariffs. Physiocrats advocated replacing administratively costly tax collections with a single tax on income of land owners. In reaction against copious mercantilist trade regulations, the physiocrats advocated a policy of laissez-faire, which called for minimal government intervention in the economy.
Adam Smith (1723–1790) was an early economic theorist. Smith was harshly critical of the mercantilists but described the physiocratic system "with all its imperfections" as "perhaps the purest approximation to the truth that has yet been published" on the subject.
Classical political economy
The publication of Adam Smith's The Wealth of Nations in 1776 has been described as "the effective birth of economics as a separate discipline." The book identified land, labour, and capital as the three factors of production and the major contributors to a nation's wealth, as distinct from the physiocratic idea that only agriculture was productive.
Smith discusses potential benefits of specialisation by division of labour, including increased labour productivity and gains from trade, whether between town and country or across countries. His "theorem" that "the division of labor is limited by the extent of the market" has been described as the "core of a theory of the functions of firm and industry" and a "fundamental principle of economic organization." To Smith has also been ascribed "the most important substantive proposition in all of economics" and foundation of resource-allocation theory—that, under competition, resource owners (of labour, land, and capital) seek their most profitable uses, resulting in an equal rate of return for all uses in equilibrium (adjusted for apparent differences arising from such factors as training and unemployment).
In an argument that includes "one of the most famous passages in all economics," Smith represents every individual as trying to employ any capital they might command for their own advantage, not that of the society, and for the sake of profit, which is necessary at some level for employing capital in domestic industry, and positively related to the value of produce. In this, the individual is, in Smith's famous phrase, "led by an invisible hand to promote an end which was no part of his intention".
The Reverend Thomas Robert Malthus (1798) used the concept of diminishing returns to explain low living standards. Human population, he argued, tended to increase geometrically, outstripping the production of food, which increased arithmetically. The force of a rapidly growing population against a limited amount of land meant diminishing returns to labour. The result, he claimed, was chronically low wages, which prevented the standard of living for most of the population from rising above the subsistence level. Economist Julian Simon has criticised Malthus's conclusions.
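As a toy illustration of Malthus's contrast between geometric and arithmetic growth, the following minimal Python sketch (with invented starting values and rates, not Malthus's own figures) shows how a fixed-ratio series eventually overwhelms a fixed-increment one:

```python
# Minimal sketch of Malthus's contrast: population grows geometrically
# (multiplied by a fixed ratio each period) while food grows arithmetically
# (a fixed increment each period). All numbers are illustrative.

population = 1.0  # index value
food = 1.0        # index value

for period in range(100):
    population *= 1.03  # geometric: +3% per period
    food += 0.03        # arithmetic: +0.03 units per period

print(f"population index: {population:.1f}, food index: {food:.1f}")
# population ~19.2 vs food 4.0 -- the geometric series dominates eventually,
# which is the mechanism behind Malthus's diminishing living standards.
```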
While Adam Smith emphasised production and income, David Ricardo (1817) focused on the distribution of income among landowners, workers, and capitalists. Ricardo saw an inherent conflict between landowners on the one hand and labour and capital on the other. He posited that the growth of population and capital, pressing against a fixed supply of land, pushes up rents and holds down wages and profits. Ricardo was also the first to state and prove the principle of comparative advantage, according to which each country should specialise in producing and exporting the goods in which it has a lower relative cost of production, rather than relying only on its own production. It has been termed a "fundamental analytical explanation" for gains from trade.
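The logic of comparative advantage can be made concrete with the labour-cost figures traditionally associated with Ricardo's England–Portugal example; the minimal sketch below computes each country's relative (opportunity) cost of wine in units of cloth:

```python
# Hours of labour needed per unit of output, in the numbers traditionally
# associated with Ricardo's England-Portugal example.
hours = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England": {"wine": 120, "cloth": 100},
}

# Opportunity cost of one unit of wine, measured in units of cloth forgone.
for country, cost in hours.items():
    print(country, "wine costs", round(cost["wine"] / cost["cloth"], 2), "cloth")

# Portugal: 0.89 cloth per wine; England: 1.2 cloth per wine. Portugal is
# absolutely more productive in both goods, yet its relative cost of wine is
# lower, so Portugal gains by exporting wine and importing cloth; any terms of
# trade between 0.89 and 1.2 cloth per wine leave both countries better off.
```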
Coming at the end of the classical tradition, John Stuart Mill (1848) parted company with the earlier classical economists on the inevitability of the distribution of income produced by the market system. Mill pointed to a distinct difference between the market's two roles: allocation of resources and distribution of income. The market might be efficient in allocating resources but not in distributing income, he wrote, making it necessary for society to intervene.
Value theory was important in classical theory. Smith wrote that the "real price of every thing ... is the toil and trouble of acquiring it". Smith maintained that, with rent and profit, other costs besides wages also enter the price of a commodity. Other classical economists presented variations on Smith, termed the 'labour theory of value'. Classical economics focused on the tendency of any market economy to settle in a final stationary state made up of a constant stock of physical wealth (capital) and a constant population size.
Marxian economics
Marxist (later, Marxian) economics descends from classical economics, and it derives from the work of Karl Marx. The first volume of Marx's major work, Das Kapital, was published in 1867. Marx focused on the labour theory of value and the theory of surplus value, which, he wrote, were mechanisms used by capital to exploit labour. The labour theory of value held that the value of an exchanged commodity was determined by the labour that went into its production, and the theory of surplus value demonstrated how workers were only paid a proportion of the value their work had created.
Marxian economics was further developed by Karl Kautsky (1854–1938)'s The Economic Doctrines of Karl Marx and The Class Struggle (Erfurt Program), Rudolf Hilferding's (1877–1941) Finance Capital, Vladimir Lenin (1870–1924)'s The Development of Capitalism in Russia and Imperialism, the Highest Stage of Capitalism, and Rosa Luxemburg (1871–1919)'s The Accumulation of Capital.
Neoclassical economics
At its inception as a social science, economics was defined and discussed at length as the study of production, distribution, and consumption of wealth by Jean-Baptiste Say in his Treatise on Political Economy or, The Production, Distribution, and Consumption of Wealth (1803). These three items were considered only in relation to the increase or diminution of wealth, and not in reference to their processes of execution. Say's definition has survived in part up to the present, modified by substituting "goods and services" for the word "wealth", meaning that wealth may include non-material objects as well. One hundred and thirty years later, Lionel Robbins noticed that this definition no longer sufficed, because many economists were making theoretical and philosophical inroads in other areas of human activity. In his Essay on the Nature and Significance of Economic Science, he proposed a definition of economics as a study of human behaviour, subject to and constrained by scarcity, which forces people to choose, allocate scarce resources to competing ends, and economise (seeking the greatest welfare while avoiding the wasting of scarce resources). According to Robbins: "Economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses". Robbins' definition eventually became widely accepted by mainstream economists, and found its way into current textbooks. Although far from unanimous, most mainstream economists would accept some version of Robbins' definition, even though many have raised serious objections to the scope and method of economics emanating from that definition.
A body of theory later termed "neoclassical economics" formed from about 1870 to 1910. The term "economics" was popularised by such neoclassical economists as Alfred Marshall and Mary Paley Marshall as a concise synonym for "economic science" and a substitute for the earlier "political economy". This corresponded to the influence on the subject of mathematical methods used in the natural sciences.
Neoclassical economics systematically integrated supply and demand as joint determinants of both price and quantity in market equilibrium, influencing the allocation of output and income distribution. It rejected the classical economics' labour theory of value in favour of a marginal utility theory of value on the demand side and a more comprehensive theory of costs on the supply side. In the 20th century, neoclassical theorists departed from an earlier idea that suggested measuring total utility for a society, opting instead for ordinal utility, which posits behaviour-based relations across individuals.
In microeconomics, neoclassical economics represents incentives and costs as playing a pervasive role in shaping decision making. An immediate example of this is the consumer theory of individual demand, which isolates how prices (as costs) and income affect quantity demanded. In macroeconomics it is reflected in an early and lasting neoclassical synthesis with Keynesian macroeconomics.
Neoclassical economics is occasionally referred to as orthodox economics, whether by its critics or sympathisers. Modern mainstream economics builds on neoclassical economics but with many refinements that either supplement or generalise earlier analysis, such as econometrics, game theory, analysis of market failure and imperfect competition, and the neoclassical model of economic growth for analysing long-run variables affecting national income.
Neoclassical economics studies the behaviour of individuals, households, and organisations (called economic actors, players, or agents), when they manage or use scarce resources, which have alternative uses, to achieve desired ends. Agents are assumed to act rationally, have multiple desirable ends in sight, limited resources to obtain these ends, a set of stable preferences, a definite overall guiding objective, and the capability of making a choice. There exists an economic problem, subject to study by economic science, when a decision (choice) is made by one or more players to attain the best possible outcome.
Keynesian economics
Keynesian economics derives from John Maynard Keynes, in particular his book The General Theory of Employment, Interest and Money (1936), which ushered in contemporary macroeconomics as a distinct field. The book focused on determinants of national income in the short run when prices are relatively inflexible. Keynes attempted to explain in broad theoretical detail why high labour-market unemployment might not be self-correcting due to low "effective demand" and why even price flexibility and monetary policy might be unavailing. The term "revolutionary" has been applied to the book in its impact on economic analysis.
During the following decades, many economists followed Keynes' ideas and expanded on his works. John Hicks and Alvin Hansen developed the IS–LM model which was a simple formalisation of some of Keynes' insights on the economy's short-run equilibrium. Franco Modigliani and James Tobin developed important theories of private consumption and investment, respectively, two major components of aggregate demand. Lawrence Klein built the first large-scale macroeconometric model, applying the Keynesian thinking systematically to the US economy.
Post-WWII economics
Immediately after World War II, Keynesian economics was the dominant economic view of the United States establishment and its allies, while Marxian economics was the dominant economic view of the Soviet Union nomenklatura and its allies.
Monetarism
Monetarism appeared in the 1950s and 1960s, its intellectual leader being Milton Friedman. Monetarists contended that monetary policy and other monetary shocks, as represented by the growth in the money stock, were an important cause of economic fluctuations, and consequently that monetary policy was more important than fiscal policy for purposes of stabilisation. Friedman was also skeptical about the ability of central banks to conduct a sensible active monetary policy in practice, advocating instead using simple rules such as a steady rate of money growth.
Monetarism rose to prominence in the 1970s and 1980s, when several major central banks followed a monetarist-inspired policy, but was later abandoned because the results were unsatisfactory.
New classical economics
A more fundamental challenge to the prevailing Keynesian paradigm came in the 1970s from new classical economists like Robert Lucas, Thomas Sargent and Edward Prescott. They introduced the notion of rational expectations in economics, which had profound implications for many economic discussions, among which were the so-called Lucas critique and the presentation of real business cycle models.
New Keynesians
During the 1980s, a group of researchers appeared who became known as New Keynesian economists, including among others George Akerlof, Janet Yellen, Gregory Mankiw and Olivier Blanchard. They adopted the principle of rational expectations and other monetarist or new classical ideas, such as building upon models employing micro foundations and optimizing behaviour, but simultaneously emphasised the importance of various market failures for the functioning of the economy, as had Keynes. Not least, they proposed various reasons that potentially explained the empirically observed features of price and wage rigidity, usually made to be endogenous features of the models, rather than simply assumed as in older Keynesian-style ones.
New neoclassical synthesis
After decades of often heated discussions between Keynesians, monetarists, new classical and new Keynesian economists, a synthesis emerged by the 2000s, often given the name the new neoclassical synthesis. It integrated the rational expectations and optimizing framework of the new classical theory with a new Keynesian role for nominal rigidities and other market imperfections like imperfect information in goods, labour and credit markets. The monetarist importance of monetary policy in stabilizing the economy and in particular controlling inflation was recognised as well as the traditional Keynesian insistence that fiscal policy could also play an influential role in affecting aggregate demand. Methodologically, the synthesis led to a new class of applied models, known as dynamic stochastic general equilibrium or DSGE models, descending from real business cycles models, but extended with several new Keynesian and other features. These models proved useful and influential in the design of modern monetary policy and are now standard workhorses in most central banks.
After the financial crisis
After the 2007–2008 financial crisis, macroeconomic research has put greater emphasis on understanding and integrating the financial system into models of the general economy and shedding light on the ways in which problems in the financial sector can turn into major macroeconomic recessions. In this and other research branches, inspiration from behavioural economics has started playing a more important role in mainstream economic theory. Also, heterogeneity among the economic agents, e.g. differences in income, plays an increasing role in recent economic research.
Other schools and approaches
Other schools or trends of thought referring to a particular style of economics practised at and disseminated from well-defined groups of academicians that have become known worldwide, include the Freiburg School, the School of Lausanne, the Stockholm school and the Chicago school of economics. During the 1970s and 1980s mainstream economics was sometimes separated into the Saltwater approach of those universities along the Eastern and Western coasts of the US, and the Freshwater, or Chicago school approach.
Within macroeconomics there is, in general order of their historical appearance in the literature: classical economics, neoclassical economics, Keynesian economics, the neoclassical synthesis, monetarism, new classical economics, New Keynesian economics and the new neoclassical synthesis.
Beside the mainstream development of economic thought, various alternative or heterodox economic theories have evolved over time, positioning themselves in contrast to mainstream theory. These include:
Austrian School, emphasizing human action, property rights and the freedom to contract and transact to have a thriving and successful economy. It also emphasises that the state should play as small a role as possible (if any role) in the regulation of economic activity between two transacting parties. Friedrich Hayek and Ludwig von Mises are the two most prominent representatives of the Austrian school.
Post-Keynesian economics concentrates on macroeconomic rigidities and adjustment processes. It is generally associated with the University of Cambridge and the work of Joan Robinson.
Ecological economics like environmental economics studies the interactions between human economies and the ecosystems in which they are embedded, but in contrast to environmental economics takes an oppositional position towards general mainstream economic principles. A major difference between the two subdisciplines is their assumptions about the substitution possibilities between human-made and natural capital.
Additionally, alternative developments include Marxian economics, constitutional economics, institutional economics, evolutionary economics, dependency theory, structuralist economics, world systems theory, econophysics, econodynamics, feminist economics and biophysical economics.
Feminist economics emphasises the role that gender plays in economies, challenging analyses that render gender invisible or support gender-oppressive economic systems. The goal is to create economic research and policy analysis that is inclusive and gender-aware to encourage gender equality and improve the well-being of marginalised groups.
Methodology
Theoretical research
Mainstream economic theory relies upon analytical economic models. When creating theories, the objective is to find assumptions which are at least as simple in information requirements, more precise in predictions, and more fruitful in generating additional research than prior theories. While neoclassical economic theory constitutes both the dominant or orthodox theoretical as well as methodological framework, economic theory can also take the form of other schools of thought such as in heterodox economic theories.
In microeconomics, principal concepts include supply and demand, marginalism, rational choice theory, opportunity cost, budget constraints, utility, and the theory of the firm. Early macroeconomic models focused on modelling the relationships between aggregate variables, but as the relationships appeared to change over time macroeconomists, including new Keynesians, reformulated their models with microfoundations, in which microeconomic concepts play a major part.
Sometimes an economic hypothesis is only qualitative, not quantitative.
Expositions of economic reasoning often use two-dimensional graphs to illustrate theoretical relationships. At a higher level of generality, mathematical economics is the application of mathematical methods to represent theories and analyse problems in economics. Paul Samuelson's treatise Foundations of Economic Analysis (1947) exemplifies the method, particularly as to maximizing behavioural relations of agents reaching equilibrium. The book focused on examining the class of statements called operationally meaningful theorems in economics, which are theorems that can conceivably be refuted by empirical data.
Empirical research
Economic theories are frequently tested empirically, largely through the use of econometrics using economic data. The controlled experiments common to the physical sciences are difficult and uncommon in economics, and instead broad data is observationally studied; this type of testing is typically regarded as less rigorous than controlled experimentation, and the conclusions typically more tentative. However, the field of experimental economics is growing, and increasing use is being made of natural experiments.
Statistical methods such as regression analysis are common. Practitioners use such methods to estimate the size, economic significance, and statistical significance ("signal strength") of the hypothesised relation(s) and to adjust for noise from other variables. By such means, a hypothesis may gain acceptance, although in a probabilistic, rather than certain, sense. Acceptance is dependent upon the falsifiable hypothesis surviving tests. Use of commonly accepted methods need not produce a final conclusion or even a consensus on a particular question, given different tests, data sets, and prior beliefs.
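As a minimal illustration of the kind of estimation described here, the following sketch fits a hypothesised linear relation by ordinary least squares; the data points are invented purely for the example:

```python
# Ordinary least squares fit of y = a + b*x to invented observational data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope = sample covariance of (x, y) divided by sample variance of x.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x  # intercept: the fitted line passes through the means

print(f"estimated relation: y = {a:.2f} + {b:.2f} * x")
# In practice an economist would also report standard errors and significance
# tests for b ("signal strength") and control for other variables ("noise").
```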
Experimental economics has promoted the use of scientifically controlled experiments. This has reduced the long-noted distinction of economics from natural sciences because it allows direct tests of what were previously taken as axioms. In some cases these have found that the axioms are not entirely correct.
In behavioural economics, psychologist Daniel Kahneman won the Nobel Prize in economics in 2002 for his and Amos Tversky's empirical discovery of several cognitive biases and heuristics. Similar empirical testing occurs in neuroeconomics. Another example is the assumption of narrowly selfish preferences versus a model that tests for selfish, altruistic, and cooperative preferences. These techniques have led some to argue that economics is a "genuine science".
Microeconomics
Microeconomics examines how entities, forming a market structure, interact within a market to create a market system. These entities include private and public players with various classifications, typically operating under scarcity of tradable units and regulation. The item traded may be a tangible product such as apples or a service such as repair services, legal counsel, or entertainment.
Various market structures exist. In perfectly competitive markets, no participants are large enough to have the market power to set the price of a homogeneous product. In other words, every participant is a "price taker" as no participant influences the price of a product. In the real world, markets often experience imperfect competition.
Forms of imperfect competition include monopoly (in which there is only one seller of a good), duopoly (in which there are only two sellers of a good), oligopoly (in which there are few sellers of a good), monopolistic competition (in which there are many sellers producing highly differentiated goods), monopsony (in which there is only one buyer of a good), and oligopsony (in which there are few buyers of a good). Firms under imperfect competition have the potential to be "price makers", which means that they can influence the prices of their products.
In the partial equilibrium method of analysis, it is assumed that activity in the market being analysed does not affect other markets. This method aggregates (sums all activity) in only one market. General-equilibrium theory studies various markets and their behaviour. It aggregates (sums all activity) across all markets. This method studies both changes in markets and their interactions leading towards equilibrium.
Production, cost, and efficiency
In microeconomics, production is the conversion of inputs into outputs. It is an economic process that uses inputs to create a commodity or a service for exchange or direct use. Production is a flow and thus a rate of output per period of time. Distinctions include such production alternatives as for consumption (food, haircuts, etc.) vs. investment goods (new tractors, buildings, roads, etc.), public goods (national defence, smallpox vaccinations, etc.) or private goods, and "guns" vs "butter".
Inputs used in the production process include such primary factors of production as labour services, capital (durable produced goods used in production, such as an existing factory), and land (including natural resources). Other inputs may include intermediate goods used in production of final goods, such as the steel in a new car.
Economic efficiency measures how well a system generates desired output with a given set of inputs and available technology. Efficiency is improved if more output is generated without changing inputs. A widely accepted general standard is Pareto efficiency, which is reached when no further change can make someone better off without making someone else worse off.
The production–possibility frontier (PPF) is an expository figure for representing scarcity, cost, and efficiency. In the simplest case an economy can produce just two goods (say "guns" and "butter"). The PPF is a table or graph (as at the right) showing the different quantity combinations of the two goods producible with a given technology and total factor inputs, which limit feasible total output. Each point on the curve shows potential total output for the economy, which is the maximum feasible output of one good, given a feasible output quantity of the other good.
Scarcity is represented in the figure by people being willing but unable in the aggregate to consume beyond the PPF (such as at X) and by the negative slope of the curve. If production of one good increases along the curve, production of the other good decreases, an inverse relationship. This is because increasing output of one good requires transferring inputs to it from production of the other good, decreasing the latter.
The slope of the curve at a point on it gives the trade-off between the two goods. It measures what an additional unit of one good costs in units forgone of the other good, an example of a real opportunity cost. Thus, if one more gun costs 100 units of butter, the opportunity cost of one gun is 100 butter. Along the PPF, scarcity implies that choosing more of one good in the aggregate entails doing with less of the other good. Still, in a market economy, movement along the curve may indicate that the choice of the increased output is anticipated to be worth the cost to the agents.
By construction, each point on the curve shows productive efficiency in maximizing output for given total inputs. A point inside the curve (as at A), is feasible but represents production inefficiency (wasteful use of inputs), in that output of one or both goods could increase by moving in a northeast direction to a point on the curve. Examples cited of such inefficiency include high unemployment during a business-cycle recession or economic organisation of a country that discourages full use of resources. Being on the curve might still not fully satisfy allocative efficiency (also called Pareto efficiency) if it does not produce a mix of goods that consumers prefer over other points.
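A tabular PPF makes these ideas easy to check by hand; the minimal sketch below, using an invented guns–butter schedule, reads off the rising opportunity cost along the frontier and classifies points as inefficient or infeasible:

```python
# Invented guns-butter frontier: each pair is a maximum feasible combination.
ppf = [(0, 1000), (1, 900), (2, 750), (3, 550), (4, 300), (5, 0)]

# Opportunity cost of each successive gun = butter forgone; it rises along the
# curve as inputs less suited to gun-making must be transferred.
for (g0, b0), (g1, b1) in zip(ppf, ppf[1:]):
    print(f"gun #{g1}: costs {b0 - b1} butter")

def classify(guns, butter):
    """Classify a point relative to the frontier (linear interpolation)."""
    for (g0, b0), (g1, b1) in zip(ppf, ppf[1:]):
        if g0 <= guns <= g1:
            frontier = b0 + (b1 - b0) * (guns - g0) / (g1 - g0)
            if butter < frontier:
                return "feasible but inefficient (inside the curve)"
            if butter > frontier:
                return "infeasible (outside the curve: scarcity)"
            return "productively efficient (on the curve)"
    return "out of range"

print(classify(2, 500))  # inside the curve: butter output could rise to 750
print(classify(3, 700))  # outside: unattainable with given inputs/technology
```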
Much applied economics in public policy is concerned with determining how the efficiency of an economy can be improved. Recognizing the reality of scarcity and then figuring out how to organise society for the most efficient use of resources has been described as the "essence of economics", where the subject "makes its unique contribution."
Specialisation
Specialisation is considered key to economic efficiency based on theoretical and empirical considerations. Different individuals or nations may have different real opportunity costs of production, say from differences in stocks of human capital per worker or capital/labour ratios. According to theory, this may give a comparative advantage in production of goods that make more intensive use of the relatively more abundant, thus relatively cheaper, input.
Even if one region has an absolute advantage as to the ratio of its outputs to inputs in every type of output, it may still specialise in the output in which it has a comparative advantage and thereby gain from trading with a region that lacks any absolute advantage but has a comparative advantage in producing something else.
It has been observed that a high volume of trade occurs among regions even with access to a similar technology and mix of factor inputs, including high-income countries. This has led to investigation of economies of scale and agglomeration to explain specialisation in similar but differentiated product lines, to the overall benefit of respective trading parties or regions.
The general theory of specialisation applies to trade among individuals, farms, manufacturers, service providers, and economies. Among each of these production systems, there may be a corresponding division of labour with different work groups specializing, or correspondingly different types of capital equipment and differentiated land uses.
An example that combines features above is a country that specialises in the production of high-tech knowledge products, as developed countries do, and trades with developing nations for goods produced in factories where labour is relatively cheap and plentiful, resulting in differences in opportunity costs of production. More total output and utility thereby results from specializing in production and trading than if each country produced its own high-tech and low-tech products.
Theory and observation set out the conditions such that market prices of outputs and productive inputs select an allocation of factor inputs by comparative advantage, so that (relatively) low-cost inputs go to producing low-cost outputs. In the process, aggregate output may increase as a by-product or by design. Such specialisation of production creates opportunities for gains from trade whereby resource owners benefit from trade in the sale of one type of output for other, more highly valued goods. A measure of gains from trade is the increased income levels that trade may facilitate.
Supply and demand
Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power.
For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximisation" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesised relation of each individual consumer for ranking different commodity bundles as more or less preferred.
The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply.
Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesised to be profit maximisers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged.
That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. The "Law of Supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factor inputs of production, are all taken to be constant for a specific time period of evaluation of supply.
Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilise at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply.
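With linear curves this prediction can be computed directly; the sketch below uses invented demand and supply schedules and verifies the shortage and surplus logic:

```python
# Invented linear schedules: quantity demanded falls with price, supplied rises.
def demand(p):
    return 100 - 2 * p  # Qd

def supply(p):
    return 10 + 4 * p   # Qs

# Equilibrium: 100 - 2p = 10 + 4p  =>  6p = 90  =>  p* = 15, q* = 70.
p_star = 90 / 6
print(p_star, demand(p_star), supply(p_star))  # 15.0 70.0 70.0

# Below p*, quantity demanded exceeds quantity supplied (a shortage bids the
# price up); above p*, supply exceeds demand (a surplus pushes it down).
print(demand(10) - supply(10))  # 30 -> shortage at p = 10
print(demand(20) - supply(20))  # -30 -> surplus at p = 20
```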
Firms
People frequently do not trade directly on markets. Instead, on the supply side, they may work in and produce through firms. The most obvious kinds of firms are corporations, partnerships and trusts. According to Ronald Coase, people begin to organise their production in firms when the cost of doing business within a firm becomes lower than the cost of doing it on the market. Firms combine labour and capital, and can achieve far greater economies of scale (when the average cost per unit declines as more units are produced) than individual market trading.
In perfectly competitive markets studied in the theory of supply and demand, there are many producers, none of which significantly influence price. Industrial organisation generalises from that special case to study the strategic behaviour of firms that do have significant control of price. It considers the structure of such markets and their interactions. Common market structures studied besides perfect competition include monopolistic competition, various forms of oligopoly, and monopoly.
Managerial economics applies microeconomic analysis to specific decisions in business firms or other management units. It draws heavily from quantitative methods such as operations research and programming and from statistical methods such as regression analysis in the absence of certainty and perfect knowledge. A unifying theme is the attempt to optimise business decisions, including unit-cost minimisation and profit maximisation, given the firm's objectives and constraints imposed by technology and market conditions.
Uncertainty and game theory
Uncertainty in economics is an unknown prospect of gain or loss, whether quantifiable as risk or not. Without it, household behaviour would be unaffected by uncertain employment and income prospects, financial and capital markets would reduce to exchange of a single instrument in each market period, and there would be no communications industry. Given its different forms, there are various ways of representing uncertainty and modelling economic agents' responses to it.
Game theory is a branch of applied mathematics that considers strategic interactions between agents, one kind of uncertainty. It provides a mathematical foundation of industrial organisation, discussed above, to model different types of firm behaviour, for example in an oligopolistic industry (few sellers), but is equally applicable to wage negotiations, bargaining, contract design, and any situation where individual agents are few enough to have perceptible effects on each other. In behavioural economics, it has been used to model the strategies agents choose when interacting with others whose interests are at least partially adverse to their own.
In this, it generalises maximisation approaches developed to analyse market actors such as in the supply and demand model and allows for incomplete information of actors. The field dates from the 1944 classic Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. It has significant applications seemingly outside of economics in such diverse subjects as the formulation of nuclear strategies, ethics, political science, and evolutionary biology.
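A minimal sketch of the kind of strategic analysis game theory formalises: the code below brute-forces the pure-strategy Nash equilibria of a 2x2 prisoner's dilemma with standard illustrative payoffs (the numbers are not from the text):

```python
# Brute-force search for pure-strategy Nash equilibria in a 2x2 game.
# Entries are (row player's payoff, column player's payoff);
# actions: 0 = cooperate, 1 = defect. Payoffs are illustrative.
payoffs = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def is_nash(r, c):
    row_pay, col_pay = payoffs[(r, c)]
    # Nash: neither player can gain by deviating unilaterally.
    row_ok = all(payoffs[(r2, c)][0] <= row_pay for r2 in (0, 1))
    col_ok = all(payoffs[(r, c2)][1] <= col_pay for c2 in (0, 1))
    return row_ok and col_ok

print([rc for rc in payoffs if is_nash(*rc)])
# [(1, 1)]: mutual defection is the unique equilibrium even though mutual
# cooperation (0, 0) would pay both players more -- interests partially adverse.
```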
Risk aversion may stimulate activity that in well-functioning markets smooths out risk and communicates information about risk, as in markets for insurance, commodity futures contracts, and financial instruments. Financial economics or simply finance describes the allocation of financial resources. It also analyses the pricing of financial instruments, the financial structure of companies, the efficiency and fragility of financial markets, financial crises, and related government policy or regulation.
Some market organisations may give rise to inefficiencies associated with uncertainty. Based on George Akerlof's "Market for Lemons" article, the paradigm example is of a dodgy second-hand car market. Customers without knowledge of whether a car is a "lemon" depress its price below what a quality second-hand car would be. Information asymmetry arises here, if the seller has more relevant information than the buyer but no incentive to disclose it. Related problems in insurance are adverse selection, such that those at most risk are most likely to insure (say reckless drivers), and moral hazard, such that insurance results in riskier behaviour (say more reckless driving).
Both problems may raise insurance costs and reduce efficiency by driving otherwise willing transactors from the market ("incomplete markets"). Moreover, attempting to reduce one problem, say adverse selection by mandating insurance, may add to another, say moral hazard. Information economics, which studies such problems, has relevance in subjects such as insurance, contract law, mechanism design, monetary economics, and health care. Applied subjects include market and legal remedies to spread or reduce risk, such as warranties, government-mandated partial insurance, restructuring or bankruptcy law, inspection, and regulation for quality and information disclosure.
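The unravelling logic of Akerlof's "lemons" example above can be sketched with hypothetical numbers: when buyers cannot verify quality they offer only the expected value, which drives high-quality sellers out and worsens the remaining pool:

```python
# Hypothetical numbers illustrating Akerlof's "lemons" logic.
good_value = 10_000   # what a buyer would pay for a known-good car
lemon_value = 4_000   # what a buyer would pay for a known lemon
share_lemons = 0.5    # buyers' belief about the fraction of lemons

# With no way to verify quality, a rational buyer offers the expected value.
offer = (1 - share_lemons) * good_value + share_lemons * lemon_value
print(offer)  # 7000.0

# Sellers of good cars who value them at, say, 8000 refuse this price and
# exit; only lemons remain, so share_lemons rises toward 1 and the offer
# falls further -- the market unravels.
reservation_good_seller = 8_000
print(offer < reservation_good_seller)  # True: good cars leave the market
```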
Market failure
The term "market failure" encompasses several problems which may undermine standard economic assumptions. Although economists categorise market failures differently, the following categories emerge in the main texts.
Information asymmetries and incomplete markets may result in economic inefficiency but also a possibility of improving efficiency through market, legal, and regulatory remedies, as discussed above.
Natural monopoly, or the overlapping concepts of "practical" and "technical" monopoly, is an extreme case of failure of competition as a restraint on producers. Extreme economies of scale are one possible cause.
Public goods are goods which are under-supplied in a typical market. The defining features are that people can consume public goods without having to pay for them and that more than one person can consume the good at the same time.
Externalities occur where there are significant social costs or benefits from production or consumption that are not reflected in market prices. For example, air pollution may generate a negative externality, and education may generate a positive externality (less crime, etc.). Governments often tax and otherwise restrict the sale of goods that have negative externalities and subsidise or otherwise promote the purchase of goods that have positive externalities in an effort to correct the price distortions caused by these externalities. Elementary demand-and-supply theory predicts equilibrium but not the speed of adjustment for changes of equilibrium due to a shift in demand or supply.
In many areas, some form of price stickiness is postulated to account for quantities, rather than prices, adjusting in the short run to changes on the demand side or the supply side. This includes standard analysis of the business cycle in macroeconomics. Analysis often revolves around causes of such price stickiness and their implications for reaching a hypothesised long-run equilibrium. Examples of such price stickiness in particular markets include wage rates in labour markets and posted prices in markets deviating from perfect competition.
Some specialised fields of economics deal in market failure more than others. The economics of the public sector is one example. Much environmental economics concerns externalities or "public bads".
Policy options include regulations that reflect cost–benefit analysis or market solutions that change incentives, such as emission fees or redefinition of property rights.
Welfare
Welfare economics uses microeconomics techniques to evaluate well-being from allocation of productive factors as to desirability and economic efficiency within an economy, often relative to competitive general equilibrium. It analyses social welfare, however measured, in terms of economic activities of the individuals that compose the theoretical society considered. Accordingly, individuals, with associated economic activities, are the basic units for aggregating to social welfare, whether of a group, a community, or a society, and there is no "social welfare" apart from the "welfare" associated with its individual units.
Macroeconomics
Macroeconomics, another branch of economics, examines the economy as a whole to explain broad aggregates and their interactions "top down", that is, using a simplified form of general-equilibrium theory. Such aggregates include national income and output, the unemployment rate, and price inflation and subaggregates like total consumption and investment spending and their components. It also studies effects of monetary policy and fiscal policy.
Since at least the 1960s, macroeconomics has been characterised by further integration as to micro-based modelling of sectors, including rationality of players, efficient use of market information, and imperfect competition. This has addressed a long-standing concern about inconsistent developments of the same subject.
Macroeconomic analysis also considers factors affecting the long-term level and growth of national income. Such factors include capital accumulation, technological change and labour force growth.
Growth
Growth economics studies factors that explain economic growth – the increase in output per capita of a country over a long period of time. The same factors are used to explain differences in the level of output per capita between countries, in particular why some countries grow faster than others, and whether countries converge at the same rates of growth.
Much-studied factors include the rate of investment, population growth, and technological change. These are represented in theoretical and empirical forms (as in the neoclassical and endogenous growth models) and in growth accounting.
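Growth accounting, mentioned above, decomposes observed growth into input contributions plus a residual attributed to technological change; below is a minimal sketch under a Cobb–Douglas assumption, with illustrative numbers:

```python
# Growth accounting under Y = A * K**alpha * L**(1 - alpha):
#   g_A = g_Y - alpha * g_K - (1 - alpha) * g_L
# All rates below are illustrative annual growth rates.

alpha = 0.3   # capital's share of income (a common benchmark assumption)
g_Y = 0.04    # output growth
g_K = 0.05    # capital stock growth
g_L = 0.01    # labour force growth

g_A = g_Y - alpha * g_K - (1 - alpha) * g_L
print(f"TFP growth: {g_A:.3%}")  # 1.800%: growth not explained by inputs
```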
Business cycle
The economics of a depression were the spur for the creation of "macroeconomics" as a separate discipline. During the Great Depression of the 1930s, John Maynard Keynes authored a book entitled The General Theory of Employment, Interest and Money outlining the key theories of Keynesian economics. Keynes contended that aggregate demand for goods might be insufficient during economic downturns, leading to unnecessarily high unemployment and losses of potential output.
He therefore advocated active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy actions by the government to stabilise output over the business cycle. Thus, a central conclusion of Keynesian economics is that, in some situations, no strong automatic mechanism moves output and employment towards full employment levels. John Hicks' IS/LM model has been the most influential interpretation of The General Theory.
Over the years, understanding of the business cycle has branched into various research programmes, mostly related to or distinct from Keynesianism. The neoclassical synthesis refers to the reconciliation of Keynesian economics with classical economics, stating that Keynesianism is correct in the short run but qualified by classical-like considerations in the intermediate and long run.
New classical macroeconomics, as distinct from the Keynesian view of the business cycle, posits market clearing with imperfect information. It includes Friedman's permanent income hypothesis on consumption and "rational expectations" theory, led by Robert Lucas, and real business cycle theory.
In contrast, the new Keynesian approach retains the rational expectations assumption but assumes a variety of market failures. In particular, New Keynesians assume that prices and wages are "sticky", meaning they do not adjust instantaneously to changes in economic conditions.
Thus, the new classicals assume that prices and wages adjust automatically to attain full employment, whereas the new Keynesians see full employment as being automatically achieved only in the long run, and hence government and central-bank policies are needed because the "long run" may be very long.
Unemployment
The amount of unemployment in an economy is measured by the unemployment rate, the percentage of workers without jobs in the labour force. The labour force includes only those who are employed or actively looking for work; people who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are excluded. Unemployment can generally be broken down into several types that are related to different causes.
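As a minimal arithmetic sketch of that definition, using hypothetical survey counts:

```python
# Hypothetical survey counts for computing an unemployment rate.
employed = 60_000              # people with jobs
unemployed = 4_000             # people without jobs who are actively looking
not_in_labour_force = 36_000   # retirees, students, discouraged workers, etc.

labour_force = employed + unemployed   # excludes those not seeking work
unemployment_rate = 100 * unemployed / labour_force

print(labour_force)                 # 64000
print(f"{unemployment_rate:.2f}%")  # 6.25%
print(f"{100 * labour_force / (labour_force + not_in_labour_force):.0f}% participation")  # 64%
```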
Classical models of unemployment suggest that unemployment occurs when wages are too high for employers to be willing to hire more workers. Consistent with classical unemployment theory, frictional unemployment occurs when appropriate job vacancies exist for a worker, but the length of time needed to search for and find the job leads to a period of unemployment.
Structural unemployment covers a variety of possible causes of unemployment including a mismatch between workers' skills and the skills required for open jobs. Large amounts of structural unemployment can occur when an economy is transitioning industries and workers find their previous set of skills are no longer in demand. Structural unemployment is similar to frictional unemployment since both reflect the problem of matching workers with job vacancies, but structural unemployment covers the time needed to acquire new skills not just the short term search process.
While some types of unemployment may occur regardless of the condition of the economy, cyclical unemployment occurs when growth stagnates. Okun's law represents the empirical relationship between unemployment and economic growth. The original version of Okun's law states that a 3% increase in output would lead to a 1% decrease in unemployment.
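A minimal sketch of the original form of Okun's law as stated above; later formulations adjust for trend growth and use different coefficients, so this is illustrative only.

```python
# Original rule of thumb: a 3% rise in output goes with a 1% fall in unemployment.
def okun_unemployment_change(output_growth_pct):
    """Predicted change in the unemployment rate, in percentage points."""
    return -output_growth_pct / 3.0

print(okun_unemployment_change(3.0))  # -1.0: unemployment falls by one point
print(okun_unemployment_change(6.0))  # -2.0
```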
Money and monetary policy
Money is a means of final payment for goods in most price system economies, and is the unit of account in which prices are typically stated. Money has general acceptability, relative consistency in value, divisibility, durability, portability, elasticity in supply, and longevity with mass public confidence. It includes currency held by the nonbank public and checkable deposits. It has been described as a social convention, like language, useful to one largely because it is useful to others. In the words of Francis Amasa Walker, a well-known 19th-century economist, "Money is what money does" ("Money is that money does" in the original).
As a medium of exchange, money facilitates trade. It is essentially a measure of value and, more importantly, a store of value, being a basis for credit creation. Its economic function can be contrasted with barter (non-monetary exchange). Given a diverse array of produced goods and specialised producers, barter may entail a hard-to-locate double coincidence of wants as to what is exchanged, say apples and a book. Money can reduce the transaction cost of exchange because of its ready acceptability. It is then less costly for the seller to accept money in exchange, rather than what the buyer produces.
Monetary policy is the policy that central banks conduct to accomplish their broader objectives. Most central banks in developed countries follow inflation targeting, whereas the main objective for many central banks in developing countries is to uphold a fixed exchange rate system. The primary monetary tool is normally the adjustment of interest rates, either directly via administratively changing the central bank's own interest rates or indirectly via open market operations. Via the monetary transmission mechanism, interest rate changes affect investment, consumption and net exports, and hence aggregate demand, output and employment, and ultimately the development of wages and inflation.
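The text does not specify a decision rule for interest rates. One common illustration is a Taylor-type rule for an inflation-targeting central bank; the sketch below uses the 0.5 coefficients of Taylor's original 1993 formulation, and all inputs are hypothetical.

```python
# Taylor-type policy rule: raise the nominal rate when inflation exceeds target
# or when output is above potential (positive output gap). All rates in percent.
def taylor_rule(inflation, target_inflation=2.0, neutral_real_rate=2.0, output_gap=0.0):
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

print(taylor_rule(inflation=2.0))                  # 4.0: at target, rate = neutral + inflation
print(taylor_rule(inflation=4.0, output_gap=1.0))  # 7.5: tighten when inflation runs high
```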
Fiscal policy
Governments implement fiscal policy to influence macroeconomic conditions by adjusting spending and taxation policies to alter aggregate demand. When aggregate demand falls below the potential output of the economy, there is an output gap where some productive capacity is left unemployed. Governments increase spending and cut taxes to boost aggregate demand. Resources that have been idled can be used by the government.
For example, unemployed home builders can be hired to expand highways. Tax cuts allow consumers to increase their spending, which boosts aggregate demand. Both tax cuts and spending have multiplier effects where the initial increase in demand from the policy percolates through the economy and generates additional economic activity.
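As a worked illustration of the multiplier logic just described, the sketch below sums the successive rounds of re-spending; the marginal propensity to consume and the initial stimulus are hypothetical numbers.

```python
# Spending multiplier: each round of income is partly re-spent, so total activity
# sums the geometric series 1 + c + c**2 + ... = 1 / (1 - c) for an MPC c < 1.
mpc = 0.75                 # hypothetical: 75 cents of each extra dollar is spent
initial_spending = 100.0   # hypothetical stimulus

multiplier = 1 / (1 - mpc)            # 4.0 here
print(initial_spending * multiplier)  # 400.0 via the closed form

total, injection = 0.0, initial_spending  # same result, accumulated round by round
for _ in range(200):
    total += injection
    injection *= mpc
print(round(total, 2))                # 400.0
```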
The effects of fiscal policy can be limited by crowding out. When there is no output gap, the economy is producing at full capacity and there are no excess productive resources. If the government increases spending in this situation, the government uses resources that otherwise would have been used by the private sector, so there is no increase in overall output. Some economists think that crowding out is always an issue while others do not think it is a major issue when output is depressed.
Sceptics of fiscal policy also make the argument of Ricardian equivalence. They argue that an increase in debt will have to be paid for with future tax increases, which will cause people to reduce their consumption and save money to pay for the future tax increase. Under Ricardian equivalence, any boost in demand from tax cuts will be offset by the increased saving intended to pay for future higher taxes.
Inequality
Economic inequality includes income inequality, measured using the distribution of income (the amount of money people receive), and wealth inequality measured using the distribution of wealth (the amount of wealth people own), and other measures such as consumption, land ownership, and human capital. Inequality exists at different extents between countries or states, groups of people, and individuals. There are many methods for measuring inequality, the Gini coefficient being widely used for income differences among individuals. An example measure of inequality between countries is the Inequality-adjusted Human Development Index, a composite index that takes inequality into account. Important concepts of equality include equity, equality of outcome, and equality of opportunity.
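Because the Gini coefficient is named above as the most widely used measure of income inequality, here is a minimal sketch of computing it from a small, hypothetical list of incomes (0 means perfect equality; values approaching 1 mean extreme inequality).

```python
# Gini coefficient from individual incomes, via the standard ordered-incomes formula:
# G = (2 * sum(i * x_i)) / (n * sum(x_i)) - (n + 1) / n, with incomes sorted ascending.
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    weighted_sum = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * weighted_sum) / (n * sum(xs)) - (n + 1) / n

print(round(gini([10, 10, 10, 10]), 3))       # 0.0  -- everyone equal
print(round(gini([10, 20, 30, 40, 100]), 3))  # 0.4  -- moderate inequality
print(round(gini([0, 0, 0, 100]), 3))         # 0.75 -- one person has everything
```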
Research has linked economic inequality to political and social instability, including revolution, democratic breakdown and civil conflict. Research suggests that greater inequality hinders economic growth and macroeconomic stability, and that land and human capital inequality reduce growth more than inequality of income. Inequality is at centre stage of the economic policy debate across the globe, as government tax and spending policies have significant effects on income distribution. In advanced economies, taxes and transfers decrease income inequality by one-third, with most of this being achieved via public social spending (such as pensions and family benefits).
Other branches of economics
Public economics
Public economics is the field of economics that deals with economic activities of a public sector, usually government. The subject addresses such matters as tax incidence (who really pays a particular tax), cost–benefit analysis of government programmes, effects on economic efficiency and income distribution of different kinds of spending and taxes, and fiscal politics. The latter, an aspect of public choice theory, models public-sector behaviour analogously to microeconomics, involving interactions of self-interested voters, politicians, and bureaucrats.
Much of economics is positive, seeking to describe and predict economic phenomena. Normative economics seeks to identify what economies ought to be like.
Welfare economics is a normative branch of economics that uses microeconomic techniques to simultaneously determine the allocative efficiency within an economy and the income distribution associated with it. It attempts to measure social welfare by examining the economic activities of the individuals that comprise society.
International economics
International trade studies determinants of goods-and-services flows across international boundaries. It also concerns the size and distribution of gains from trade. Policy applications include estimating the effects of changing tariff rates and trade quotas. International finance is a macroeconomic field which examines the flow of capital across international borders, and the effects of these movements on exchange rates. Increased trade in goods, services and capital between countries is a major effect of contemporary globalisation.
Labour economics
Labour economics seeks to understand the functioning and dynamics of the markets for wage labour. Labour markets function through the interaction of workers and employers. Labour economics looks at the suppliers of labour services (workers) and the demanders of labour services (employers), and attempts to understand the resulting pattern of wages, employment, and income. In economics, labour is a measure of the work done by human beings. It is conventionally contrasted with such other factors of production as land and capital. Theories have developed around a concept called human capital (referring to the skills that workers possess, not necessarily their actual work), although there are also counterposing macroeconomic system theories that hold human capital to be a contradiction in terms.
Development economics
Development economics examines the economic aspects of the development process in relatively low-income countries, focusing on structural change, poverty, and economic growth. Approaches in development economics frequently incorporate social and political factors.
Related subjects
Economics is one social science among several and has fields bordering on other areas, including economic geography, economic history, public choice, energy economics, cultural economics, family economics and institutional economics.
Law and economics, or economic analysis of law, is an approach to legal theory that applies methods of economics to law. It includes the use of economic concepts to explain the effects of legal rules, to assess which legal rules are economically efficient, and to predict what the legal rules will be. A seminal article by Ronald Coase published in 1961 suggested that well-defined property rights could overcome the problems of externalities.
Political economy is the interdisciplinary study that combines economics, law, and political science in explaining how political institutions, the political environment, and the economic system (capitalist, socialist, mixed) influence each other. It studies questions such as how monopoly, rent-seeking behaviour, and externalities should impact government policy. Historians have employed political economy to explore the ways in which persons and groups with common economic interests have historically used politics to effect changes beneficial to their interests.
Energy economics is a broad scientific subject area which includes topics related to energy supply and energy demand. Georgescu-Roegen reintroduced the concept of entropy in relation to economics and energy from thermodynamics, as distinguished from what he viewed as the mechanistic foundation of neoclassical economics drawn from Newtonian physics. His work contributed significantly to thermoeconomics and to ecological economics. He also did foundational work which later developed into evolutionary economics.
The sociological subfield of economic sociology arose, primarily through the work of Émile Durkheim, Max Weber and Georg Simmel, as an approach to analysing the effects of economic phenomena in relation to the overarching social paradigm (i.e. modernity). Classic works include Max Weber's The Protestant Ethic and the Spirit of Capitalism (1905) and Georg Simmel's The Philosophy of Money (1900). More recently, the works of James S. Coleman, Mark Granovetter, Peter Hedstrom and Richard Swedberg have been influential in this field.
Gary Becker in 1974 presented an economic theory of social interactions, whose applications included the family, charity, merit goods and multiperson interactions, and envy and hatred. He and Kevin Murphy authored a book in 2001 that analysed market behaviour in a social environment.
Profession
The professionalisation of economics, reflected in the growth of graduate programmes on the subject, has been described as "the main change in economics since around 1900". Most major universities and many colleges have a major, school, or department in which academic degrees are awarded in the subject, whether in the liberal arts, business, or for professional study. See Bachelor of Economics and Master of Economics.
In the private sector, professional economists are employed as consultants and in industry, including banking and finance. Economists also work for various government departments and agencies, for example, the national treasury, central bank or National Bureau of Statistics. See Economic analyst.
There are dozens of prizes awarded to economists each year for outstanding intellectual contributions to the field, the most prominent of which is the Nobel Memorial Prize in Economic Sciences, though it is not a Nobel Prize.
Contemporary economics uses mathematics. Economists draw on the tools of calculus, linear algebra, statistics, game theory, and computer science. Professional economists are expected to be familiar with these tools, while a minority specialise in econometrics and mathematical methods.
Women in economics
Harriet Martineau (1802–1876) was a widely read populariser of classical economic thought. Mary Paley Marshall (1850–1944), the first woman lecturer at a British economics faculty, wrote The Economics of Industry with her husband Alfred Marshall. Joan Robinson (1903–1983) was an important post-Keynesian economist. The economic historian Anna Schwartz (1915–2012) coauthored A Monetary History of the United States, 1867–1960 with Milton Friedman. Three women have received the Nobel Prize in Economics: Elinor Ostrom (2009), Esther Duflo (2019) and Claudia Goldin (2023). Five have received the John Bates Clark Medal: Susan Athey (2007), Esther Duflo (2010), Amy Finkelstein (2012), Emi Nakamura (2019) and Melissa Dell (2020).
Women's authorship share in prominent economics journals declined from the 1940s to the 1970s, but has subsequently risen, with different patterns of gendered coauthorship. Women remain globally under-represented in the profession (19% of authors in the RePEc database in 2018), with national variation.
See also
Asymmetric cointegration
Critical juncture theory
Democracy and economic growth
Economic democracy
Economic ideology
Economic union
Economics terminology that differs from common usage
Free trade
Glossary of economics
Happiness economics
Humanistic economics
Index of economics articles
List of economics awards
List of economics films
Outline of economics
Socioeconomics
Solidarity economy
Further reading
Post, Louis F. (1927), The Basic Facts of Economics: A Common-Sense Primer for Advanced Students. United States: Columbian Printing Company, Incorporated.
External links
General information
Economic journals on the web.
Economics at Encyclopædia Britannica
Economics A–Z. Definitions from The Economist.
Economics Online (UK-based), with drop-down menus at top, incl. Definitions.
Intute: Economics: Internet directory of UK universities.
Research Papers in Economics (RePEc)
Resources For Economists : American Economic Association-sponsored guide to 2,000+ Internet resources from "Data" to "Neat Stuff", updated quarterly.
Institutions and organizations
Economics Departments, Institutes and Research Centers in the World
Organisation for Economic Co-operation and Development (OECD) Statistics
United Nations Statistics Division
World Bank Data
American Economic Association
Study resources
Economics at About.com
Economics textbooks on Wikibooks
MERLOT Learning Materials: Economics : US-based database of learning materials
Online Learning and Teaching Materials: UK Economics Network's database of text, slides, glossaries and other resources
Medical humanities
Medical humanities is an interdisciplinary field of medicine which includes the humanities (philosophy of medicine, medical ethics and bioethics, history of medicine, literary studies and religion), social science (psychology, medical sociology, medical anthropology, cultural studies, health geography) and the arts (literature, theater, film, and visual arts) and their application to medical education and practice.
Medical humanities uses interdisciplinary research to explore experiences of health and illness, often focusing on subjective, hidden, or invisible experience. This interdisciplinary strength has given the field a noted diversity and encouraged creative 'epistemological innovation'.
Medical humanities is sometimes conflated with health humanities which also broadly links health and social care disciplines with the arts and humanities.
Definitions
Medical humanities can be defined as an interdisciplinary and increasingly international endeavor that draws on the creative and intellectual strengths of diverse disciplines, including literature, art, creative writing, drama, film, music, philosophy, ethical decision making, anthropology, and history, in pursuit of medical educational goals. The humanities become relevant when multiple people's perspectives on an issue are brought together to answer, or even to generate, questions. The arts can provide additional perspective to the sciences.
Critical medical humanities is an approach which argues that the arts and humanities have more to offer to healthcare than simply improving medical education. It proposes that the arts and humanities offer different ways of thinking about human history, culture, behaviour and experience which can be used to dissect, critique and influence healthcare practices and priorities.
The arts
Medical books, pictures, and diagrams help medical students build an appreciation for subjects across the medical field, from the human body to diseases.
The medical humanities can help medical practitioners view issues from more than one perspective, as the visual arts and culture are meant to do. Both patients and medical professionals face decision-making. Each person's perspective on medical ethics differs owing to different cultures, religions, societies, and traditions. The humanities also attempt to create a closer, more meaningful relationship between medical practitioners and their peers and patients. Because ethics are perceived differently from person to person, answering ethical questions requires the viewpoints of several people who may hold different opinions of what is right and wrong.
Bioethics
The first category is bioethics, which concerns the morals of healthcare. As science and technology develop, so do healthcare and medicine, and society and healthcare committees discuss and debate the ethics of the situations that arise. For example, one such case involves the practice of body enhancement, whose ethics are questioned because biomedical and technological practices alter a person's body to improve it or its appearance.
Clinical ethics
The second category in the ethics of the medical humanities is clinical ethics, which refers to the respect that healthcare professionals owe patients and families; it helps develop the professionalism, respectability, and expertise that healthcare professionals must exercise with respect to their patients. Another example in the ethics of the medical humanities is the bias that people and society hold against others with disabilities, and how such disabilities are assumed to correlate with success or with what the disabled person is able to do. It is unethical to judge or assume the incapability of a disabled person, because disabled people find ways to succeed through modern technology and through self-determination.
Various academic institutions offer courses of study in the ethics of the medical humanities. These programs help students learn professionalism in the medical field so that they may respectfully help their patients and do what is right in any situation that may arise.
Literature and medicine
Formerly called medicine in literature, literature and medicine is an interdisciplinary subfield of the medical humanities considered a "dialogue rather than a merger" between the literary and the medical. Literature and medicine is flourishing in undergraduate programs and in medical schools at all levels. The Pennsylvania State University College of Medicine-Hershey was the first to introduce literature into a medical school curriculum when Joanne Trautmann (Banks), an English professor, was appointed to a position in literature there in 1972. The rationale for using literature and medicine in medical education is three-fold: reading the stories of patients and writing about their experiences gives doctors in training the tools they need to better understand their patients; discussing and reflecting on literature brings the medical practitioner's biases and assumptions into focus, heightening awareness; and reading literature requires critical thinking and empathetic awareness about moral issues in medicine.
See also
Biopolitics
Cinemeducation, the use of film in medical education
Disability studies
Health communication
Health humanities
Medical anthropology
Medical journalism
Medical literature
Narrative medicine
Graphic medicine
Philosophy of medicine
Philosophy of healthcare
Public health
Further reading
http://philpapers.org/rec/HARWAS
Literature, Arts, and Medicine Database http://medhum.med.nyu.edu (includes works and issues)
Literature and Medicine Series http://www.kentstateuniversitypress.com/category/series/lit_med/
Literature and Medicine Track, Georgetown University School of Medicine https://web.archive.org/web/20160707114840/http://som.georgetown.edu/academics/lamt
https://web.archive.org/web/20160818092843/http://www.fondazionelanza.it/medicalhumanities/texts/Jones%20AH,%20Literature%20and%20medicine%20an%20evolving%20canon.pdf
Teaching Literature and Medicine: Ken Kesey's One Flew over the Cuckoo's Nest. https://www.cpcc.edu/taltp/archives/.../file
Literature and Medicine, 1982 journal
External links
Medical Humanities (Articles)
Medical Humanities
Medical Humanities (Blog)
Journal of Medical Humanities
Centre for Medical Humanities, Durham University (Blog)
Medicinae Humanistica (Blog)
Medical Humanities Research Centre (MHRC), University of Glasgow
SCOPE: The Health Humanities Learning Lab, University of Toronto
Northwest Narrative Medicine Collaborative - community of narrative medicine, medical humanities, and health humanities practitioners in the U.S. Pacific Northwest
Biologist
A biologist is a scientist who conducts research in biology. Biologists are interested in studying life on Earth, whether it is an individual cell, a multicellular organism, or a community of interacting populations. They usually specialize in a particular branch (e.g., molecular biology, zoology, and evolutionary biology) of biology and have a specific research focus (e.g., studying malaria or cancer).
Biologists who are involved in basic research have the aim of advancing knowledge about the natural world. They conduct their research using the scientific method, which is an empirical method for testing hypotheses. Their discoveries may have applications for some specific purpose such as in biotechnology, which has the goal of developing medically useful products for humans.
In modern times, most biologists have one or more academic degrees such as a bachelor's degree, as well as an advanced degree such as a master's degree or a doctorate. Like other scientists, biologists can be found working in different sectors of the economy such as in academia, nonprofits, private industry, or government.
History
Francesco Redi, often described as the founder of experimental biology, is recognized as one of the greatest biologists of all time. Robert Hooke, an English natural philosopher, coined the term cell, suggesting plant structure's resemblance to honeycomb cells.
Charles Darwin and Alfred Wallace independently formulated the theory of evolution by natural selection, which was described in detail in Darwin's book On the Origin of Species, which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes of descent with accumulated modification leading to divergence over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics.
In 1953, James D. Watson and Francis Crick, building on the work of Maurice Wilkins and Rosalind Franklin, described the basic structure of DNA, the genetic material for expressing life in all its forms, and proposed that its structure was a double helix.
Ian Wilmut led a research group that in 1996 first cloned a mammal from an adult somatic cell, a Finnish Dorset lamb named Dolly.
Education
An undergraduate degree in biology typically requires coursework in molecular and cellular biology, development, ecology, genetics, microbiology, anatomy, physiology, botany, and zoology. Additional requirements may include physics, chemistry (general, organic, and biochemistry), calculus, and statistics.
Students who aspire to a research-oriented career usually pursue a graduate degree such as a master's or a doctorate (e.g., PhD) whereby they would receive training from a research head based on an apprenticeship model that has been in existence since the 1800s. Students in these graduate programs often receive specialized training in a particular subdiscipline of biology.
Research
Biologists who work in basic research formulate theories and devise experiments to advance human knowledge on life including topics such as evolution, biochemistry, molecular biology, neuroscience and cell biology.
Biologists typically conduct laboratory experiments involving animals, plants, microorganisms or biomolecules. However, a small part of biological research also occurs outside the laboratory and may involve natural observation rather than experimentation. For example, a botanist may investigate the plant species present in a particular environment, while an ecologist might study how a forest area recovers after a fire.
Biologists who work in applied research use instead the accomplishments gained by basic research to further knowledge in particular fields or applications. For example, this applied research may be used to develop new pharmaceutical drugs, treatments and medical diagnostic tests. Biological scientists conducting applied research and product development in private industry may be required to describe their research plans or results to non-scientists who are in a position to veto or approve their ideas. These scientists must consider the business effects of their work.
Swift advances in knowledge of genetics and organic molecules spurred growth in the field of biotechnology, transforming the industries in which biological scientists work. Biological scientists can now manipulate the genetic material of animals and plants, attempting to make organisms (including humans) more productive or resistant to disease. Basic and applied research on biotechnological processes, such as recombining DNA, has led to the production of important substances, including human insulin and growth hormone. Many other substances not previously available in large quantities are now produced by biotechnological means. Some of these substances are useful in treating diseases.
Those working on various genome (chromosomes with their associated genes) projects isolate genes and determine their function. This work continues to lead to the discovery of genes associated with specific diseases and inherited health risks, such as sickle cell anemia. Advances in biotechnology have created research opportunities in almost all areas of biology, with commercial applications in areas such as medicine, agriculture, and environmental remediation.
Specializations
Most biological scientists specialize in the study of a certain type of organism or in a specific activity, although recent advances have blurred some traditional classifications.
Geneticists study genetics, the science of genes, heredity, and variation of organisms.
Neuroscientists study the nervous system.
Developmental biologists study the process of development and growth of organisms
Biochemists study the chemical composition of living things. They analyze the complex chemical combinations and reactions involved in metabolism, reproduction, and growth.
Molecular biologists study the biological activity between biomolecules.
Microbiologists investigate the growth and characteristics of microscopic organisms such as bacteria, algae, or fungi.
Physiologists study life functions of plants and animals, in the whole organism and at the cellular or molecular level, under normal and abnormal conditions. Physiologists often specialize in functions such as growth, reproduction, photosynthesis, respiration, or movement, or in the physiology of a certain area or system of the organism.
Biophysicists use experimental methods traditionally employed in physics to answer biological questions.
Computational biologists apply the techniques of computer science, applied mathematics and statistics to address biological problems. The main focus lies on developing mathematical modeling and computational simulation techniques; by these means, scientific research topics and their theoretical and experimental questions can be addressed without a laboratory (a minimal simulation sketch follows this list).
Zoologists and wildlife biologists study animals and wildlife—their origin, behavior, diseases, and life processes. Some experiment with live animals in controlled or natural surroundings, while others dissect dead animals to study their structure. Zoologists and wildlife biologists also may collect and analyze biological data to determine the environmental effects of current and potential uses of land and water areas. Zoologists usually are identified by the animal group they study. For example, ornithologists study birds, mammalogists study mammals, herpetologists study reptiles and amphibians, ichthyologists study fish, cnidariologists study jellyfishes and entomologists study insects.
Botanists study plants and their environments. Some study all aspects of plant life, including algae, lichens, mosses, ferns, conifers, and flowering plants; others specialize in areas such as identification and classification of plants, the structure and function of plant parts, the biochemistry of plant processes, the causes and cures of plant diseases, the interaction of plants with other organisms and the environment, the geological record of plants and their evolution. Mycologists study fungi, such as yeasts, mold and mushrooms, which are a separate kingdom from plants.
Aquatic biologists study micro-organisms, plants, and animals living in water. Marine biologists study salt water organisms, and limnologists study fresh water organisms. Much of the work of marine biology centers on molecular biology, the study of the biochemical processes that take place inside living cells. Marine biology is a branch of oceanography, which is the study of the biological, chemical, geological, and physical characteristics of oceans and the ocean floor.
Ecologists investigate the relationships among organisms and between organisms and their environments, examining the effects of population size, pollutants, rainfall, temperature, and altitude. Using knowledge of various scientific disciplines, ecologists may collect, study, and report data on the quality of air, food, soil, and water.
Evolutionary biologists investigate the evolutionary processes that produced the diversity of life on Earth, starting from a single common ancestor. These processes include natural selection, common descent, and speciation.
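As referenced in the computational biology entry above, the following is a minimal, hypothetical example of the kind of simulation such work can involve: logistic population growth integrated with simple Euler steps. The model and all parameter values are illustrative rather than drawn from the text.

```python
# Logistic population growth, dN/dt = r * N * (1 - N / K), via Euler integration.
r = 0.5       # intrinsic growth rate per day (hypothetical)
K = 1000.0    # carrying capacity (hypothetical)
N = 10.0      # initial population size
dt = 0.1      # time step in days

for _ in range(1200):  # simulate 120 days
    N += dt * r * N * (1 - N / K)

print(round(N))  # ~1000: the population approaches the carrying capacity K
```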
Employment
Biologists typically work regular hours but longer hours are not uncommon. Researchers may be required to work odd hours in laboratories or other locations (especially while in the field), depending on the nature of their research.
Many biologists depend on grant money to fund their research. They may be under pressure to meet deadlines and to conform to rigid grant-writing specifications when preparing proposals to seek new or extended funding.
Marine biologists encounter a variety of working conditions. Some work in laboratories; others work on research ships, and those who work underwater must practice safe diving while working around sharp coral reefs and hazardous marine life. Although some marine biologists obtain their specimens from the sea, many still spend a good deal of their time in laboratories and offices, conducting tests, running experiments, recording results, and compiling data.
Biologists are not usually exposed to unsafe or unhealthy conditions. Those who work with dangerous organisms or toxic substances in the laboratory must follow strict safety procedures to avoid contamination. Many biological scientists, such as botanists, ecologists, and zoologists, conduct field studies that involve strenuous physical activity and primitive living conditions. Biological scientists in the field may work in warm or cold climates, in all kinds of weather.
Honors and awards
The highest honor awarded to biologists is the Nobel Prize in Physiology or Medicine, awarded since 1901 by the Nobel Assembly at the Karolinska Institute. Another significant award is the Crafoord Prize in Biosciences, established in 1980.
See also
Biology
Glossary of biology
List of biologists
Lists of biologists by author abbreviation
References
U.S. Department of Labor, Occupational Outlook Handbook
Cynicism (contemporary)
Cynicism is an attitude characterized by a general distrust of the motives of others. A cynic may have a general lack of faith or hope in people motivated by ambition, desire, greed, gratification, materialism, goals, and opinions that a cynic perceives as vain, unobtainable, or ultimately meaningless. The term originally derives from the ancient Greek philosophers, the Cynics, who rejected conventional goals of wealth, power, and honor. They practiced shameless nonconformity with social norms in religion, manners, housing, dress, or decency, instead advocating the pursuit of virtue in accordance with a simple and natural way of life.
By the 19th century, emphasis on the ascetic ideals and the critique of current civilization based on how it might fall short of an ideal civilization or negativistic aspects of Cynic philosophy led the modern understanding of cynicism to mean a disposition of disbelief in the sincerity or goodness of human motives and actions. Modern cynicism is a distrust toward professed ethical and social values, especially when there are high expectations concerning society, institutions, and authorities that are unfulfilled. It can manifest itself as a result of frustration, disillusionment, and distrust perceived as owing to organizations, authorities, and other aspects of society.
Cynicism is often confused with pessimism or nihilism, perhaps due to their shared association with a lack of faith in humanity. The difference among the three is that cynicism is a distrust counseled by prudence; pessimism, owing to a sense of defeatism, is a distrust of the potential for success; and nihilism is a general disbelief that anything in life (including life itself) has any valuable meaning.
Overview
Modern cynicism has been defined as an attitude of distrust toward claimed ethical and social values and a rejection of the need to be socially involved. It is pessimistic about the capacity of human beings to make correct ethical choices; in this aspect, naiveté is an antonym. Modern cynicism is sometimes regarded as a product of mass society, especially in those circumstances where the individual believes there is a conflict between society's stated motives and goals and actual motives and goals.
Critical evaluation
Cynicism can appear more active in depression. In Critique of Cynical Reason (1983), Peter Sloterdijk defined modern cynics as "borderline melancholics, who can keep their symptoms of depression under control and yet retain the ability to work, whatever might happen ... indeed, this is the essential point in modern cynicism: the ability of its bearers to work—in spite of anything that might happen."
One active aspect of cynicism involves the desire to expose hypocrisy and to point out gaps between ideals and practices. George Bernard Shaw allegedly expressed this succinctly: "The power of accurate observation is commonly called cynicism by those who don't have it".
Health effects
A study published in the journal Neurology in 2014 found an association between high levels of late-life "cynical distrust" (interpreted and measured in the study in terms of hostility) and dementia. The study followed 622 people who were tested for dementia over a period of eight years; in that period, 46 people were diagnosed with dementia. "Once researchers adjusted for other factors that could affect dementia risk, such as high blood pressure, high cholesterol and smoking, people with high levels of cynical distrust were three times more likely to develop dementia than people with low levels of cynicism. Of the 164 people with high levels of cynicism, 14 people developed dementia, compared to nine of the 212 people with low levels of cynicism."
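As a worked check of the raw figures quoted above (before the study's statistical adjustments), the crude risks and their ratio can be computed directly; note the unadjusted ratio is about 2, while the reported threefold figure is the adjusted estimate.

```python
# Crude (unadjusted) dementia risks from the counts quoted in the study.
high_cases, high_n = 14, 164   # high cynical-distrust group
low_cases, low_n = 9, 212      # low cynical-distrust group

risk_high = high_cases / high_n   # ~0.085 (8.5%)
risk_low = low_cases / low_n      # ~0.042 (4.2%)
print(round(risk_high / risk_low, 2))  # ~2.01 crude risk ratio; ~3x after adjustment
```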
Research has also shown that cynicism is related to feelings of disrespect. According to a study published in the Journal of Experimental Psychology: General in 2020, "everyday experiences of disrespect elevated cynical beliefs and vice versa. Moreover, cynical individuals tended to treat others with disrespect, which in turn predicted more disrespectful treatment by others."
In politics
In a 1996 paper, J. N. Cappella and K. H. Jamieson claimed that "healthy skepticism may have given way to corrosive cynicism". Cynicism regarding government or politics can logically lead to political withdrawal and effective political helplessness. In 2013 conservative politician and political theorist William J. Bennett warned that the United States could "crumble from within; that we would become cynical and withdraw".
Possible effects
A 2004 experiment and paper called The Effects of Strategic News on Political Cynicism, Issue Evaluations, and Policy Support: A Two-Wave Experiment found that the way the news media presents the news can cause political cynicism. The experiment also demonstrated "a negative relation between efficacy and cynicism suggesting that efficacious citizens were less likely to be cynical about politics". It was found that straight, dry, "issues-based" news did not cause political cynicism, but that "strategic news" and "game news" did; these forms of presentation emphasize politicians' strategies and standing in the political "game" rather than the substance of the issues.
Social cynicism
Social cynicism results from high expectations concerning society, institutions and authorities; unfulfilled expectations lead to disillusionment, which releases feelings of disappointment and betrayal.
In organizations, cynicism manifests itself as a general or specific attitude, characterized by frustration, hopelessness, disillusionment and distrust in regard to economic or governmental organizations, managers or other aspects of work.
See also
Doubt
Doomer
Fanaticism
Human nature
Melancholia
Misanthropy
Moral realism
Nihilism
"No good deed goes unpunished"
Pessimism
Pragmatism
Resentment
Rational choice theory
Skepticism
Weltschmerz
Further reading
Mazella, David (2007), The Making of Modern Cynicism, University of Virginia Press.
Sloterdijk, Peter (1988), Critique of Cynical Reason, University of Minnesota Press.
External links
The Cynic's Sanctuary
The Cynical Web Site
Cynicism / Conspiracism from Project Worldview
"Is it worse to be cynical or jaded?" Retrieved 2008-05-30
Self-knowledge (psychology)
Self-knowledge is a term used in psychology to describe the information that an individual draws upon when finding answers to the questions "What am I like?" and "Who am I?".
While seeking to develop the answer to this question, self-knowledge requires ongoing self-awareness and self-consciousness (which is not to be confused with consciousness). Young infants and chimpanzees display some traits of self-awareness and agency/contingency, yet they are not considered to have self-consciousness. At some greater level of cognition, however, a self-conscious component emerges in addition to an increased self-awareness component, and then it becomes possible to ask "What am I like?" and to answer with self-knowledge, though self-knowledge has limits, as introspection has been said to be limited and complex.
Self-knowledge is a component of the self or, more accurately, the self-concept. It is the knowledge of oneself and one's properties and the desire to seek such knowledge that guide the development of the self-concept, even if that concept is flawed. Self-knowledge informs us of our mental representations of ourselves, which contain attributes that we uniquely pair with ourselves, and theories on whether these attributes are stable or dynamic, to the best that we can evaluate ourselves.
The self-concept is thought to have three primary aspects:
The cognitive self
The affective self
The executive self
The affective and executive selves are also known as the felt and active selves respectively, as they refer to the emotional and behavioral components of the self-concept.
Self-knowledge is linked to the cognitive self in that its motives guide our search to gain greater clarity and assurance that our own self-concept is an accurate representation of our true self; for this reason the cognitive self is also referred to as the known self. The cognitive self is made up of everything we know (or think we know) about ourselves. This implies physiological properties such as hair color, race, and height etc.; and psychological properties like beliefs, values, and dislikes to name but a few.
Self-knowledge can also be described as introspecting on one's own behaviour and actions, as if from a third-person view, across the various situations faced in life, and then trying to identify the causes of the issues that arise.
Relationship with memory
Self-knowledge and its structure affect how events we experience are encoded, how they are selectively retrieved and recalled, and what conclusions we draw from how we interpret the memory. The analytical interpretation of our own memory can also be called metamemory, and it is an important factor of metacognition.
The connection between our memory and our self-knowledge has been recognized for many years by leading minds in both philosophy and psychology, yet the precise specification of the relation remains a point of controversy.
Specialized memory
Studies have shown there is a memory advantage for information encoded with reference to the self.
Somatic markers, that is, memories carrying an emotional charge, can be helpful or dysfunctional; they reflect correlation rather than causation, and therefore cannot be relied upon.
Patients with Alzheimer's who have difficulty recognizing their own family have not shown evidence of self-knowledge.
The division of memory
Self-theories have traditionally failed to distinguish between the different sources that inform self-knowledge: episodic memory and semantic memory. Both episodic and semantic memory are facets of declarative memory, which contains memory of facts. Declarative memory is the explicit counterpart to procedural memory, which is implicit in that it applies to skills we have learnt; they are not facts that can be stated.
Episodic memory
Episodic memory is the autobiographical memory that individuals possess which contains events, emotions, and knowledge associated with a given context.
Semantic memory
Semantic memory does not refer to concept-based knowledge stored about a specific experience like episodic memory. Instead it includes the memory of meanings, understandings, general knowledge about the world, and factual information etc. This makes semantic knowledge independent of context and personal information. Semantic memory enables an individual to know information, including information about their selves, without having to consciously recall the experiences that taught them such knowledge.
Semantic self as the source
People are able to maintain a sense of self that is supported by semantic knowledge of personal facts in the absence of direct access to the memories that describe the episodes on which the knowledge is based.
Individuals have been shown to maintain a sense of self despite catastrophic impairments in episodic recollection. For example, subject W.J. suffered dense retrograde amnesia that left her unable to recall any events that occurred before the amnesia developed; however, her memory for general facts about her life during the period covered by the amnesia remained intact.
This suggests that a separate type of knowledge contributes to the self-concept, as W.J.'s knowledge could not have come from her episodic memory.
A similar dissociation occurred in K.C. who suffered a total loss of episodic memory, but still knew a variety of facts about himself.
Evidence also exists that shows how patients with severe amnesia can have accurate and detailed semantic knowledge of what they are like as a person, for example which particular personality traits and characteristics they possess.
This evidence for the dissociation between episodic and semantic self-knowledge has made several things clear:
Episodic memory is not the only drawing point for self-knowledge, contrary to long-held beliefs. Self-knowledge must therefore be expanded to include the semantic component of memory.
Self-knowledge about the traits one possesses can be accessed without the need for episodic retrieval. This is shown through study of individuals with neurological impairments that make it impossible to recollect trait-related experiences, yet who can still make reliable and accurate trait-ratings of themselves, and even revise these judgements based on new experiences they cannot even recall.
Motives that guide our search
People have goals that lead them to seek, notice, and interpret information about themselves. These goals begin the quest for self-knowledge.
There are three primary motives that lead us in the search for self-knowledge:
Self-enhancement
Accuracy
Consistency
Self-enhancement
Self-enhancement refers to the fact that people seem motivated to experience positive emotional states and to avoid experiencing negative emotional states. People are motivated to feel good about themselves in order to maximize their feelings of self-worth, thus enhancing their self-esteem.
The emphasis on feelings differs slightly from how other theories have previously defined self-enhancement needs, for example the Contingencies of Self-Worth Model.
Other theorists have taken the term to mean that people are motivated to think about themselves in highly favorable terms, rather than feel they are "good".
In many situations and cultures, feelings of self-worth are promoted by thinking of oneself as highly capable or better than one's peers. However, in some situations and cultures, feelings of self-worth are promoted by thinking of oneself as average or even worse than others. In both cases, thoughts about the self still serve to enhance feelings of self-worth.
The universal need is not a need to think about oneself in any specific way, rather a need to maximize one's feelings of self-worth. This is the meaning of the self enhancement motive with respect to self-knowledge.
Arguments
In Western societies, feelings of self-worth are in fact promoted by thinking of oneself in favorable terms.
In this case, self-enhancement needs lead people to seek information about themselves in such a way that they are likely to conclude that they truly possess what they see as a positive defining quality.
See "Self-verification theory" section.
Accuracy
Accuracy needs influence the way in which people search for self-knowledge. People frequently wish to know the truth about themselves without regard as to whether they learn something positive or negative.
There are three considerations which underlie this need:
Occasionally people simply want to reduce any uncertainty. They may want to know for the sheer intrinsic pleasure of knowing what they are truly like.
Some people believe they have a moral obligation to know what they are really like. This view holds particularly strong in theology and philosophy, particularly existentialism.
Knowing what one is really like can sometimes help an individual to achieve their goals. The basic fundamental goal to any living thing is survival, therefore accurate self-knowledge can be adaptive to survival.
Accurate self-knowledge can also be instrumental in maximizing feelings of self-worth. Success is one of the number of things that make people feel good about themselves, and knowing what we are like can make successes more likely, so self-knowledge can again be adaptive. This is because self-enhancement needs can be met by knowing that one can not do something particularly well, thus protecting the person from pursuing a dead-end dream that is likely to end in failure.
Consistency
Many theorists believe that we have a motive to protect the self-concept (and thus our self-knowledge) from change. This motive to have consistency leads people to look for and welcome information that is consistent with what they believe to be true about themselves; likewise, they will avoid and reject information which presents inconsistencies with their beliefs. This phenomenon is also known as self-verification theory.
Not everyone has been shown to pursue a self-consistency motive; but it has played an important role in various other influential theories, such as cognitive dissonance theory.
Self-verification theory
This theory was put forward by William Swann of the University of Texas at Austin in 1983 to put a name to the aforementioned phenomena. The theory states that once a person develops an idea about what they are like, they will strive to verify the accompanying self-views.
Two considerations are thought to drive the search for self-verifying feedback:
We feel more comfortable and secure when we believe that others see us in the same way that we see ourselves. Actively seeking self-verifying feedback helps people avoid finding out that they are wrong about their self-views.
Self-verification theory assumes that social interactions will proceed more smoothly and profitably when other people view us the same way as we view ourselves. This provides a second reason to selectively seek self-verifying feedback.
These factors of self-verification theory create controversy when persons suffering from low-self-esteem are taken into consideration. People who hold negative self-views about themselves selectively seek negative feedback in order to verify their self-views. This is in stark contrast to self-enhancement motives that suggest people are driven by the desire to feel good about themselves.
Sources
There are three sources of information available to an individual through which to search for knowledge about the self:
The physical world
The social world
The psychological world
The physical world
The physical world is generally a highly visible, and quite easily measurable source of information about one's self. Information one may be able to obtain from the physical world may include:
Weight - by weighing oneself.
Strength - by measuring how much one can lift.
Height - by measuring oneself.
Limitations
Many attributes are not measurable in the physical world, such as kindness, cleverness and sincerity.
Even when attributes can be assessed with reference to the physical world, the knowledge that we gain is not necessarily the knowledge we are seeking. Every measure is simply relative to the level of that attribute in, say, the general population or another specific individual.
This means that any measurement only acquires meaning when it is expressed with respect to the measurements of others.
Most of our personal identity is therefore framed in comparative terms drawn from the social world.
The social world
The comparative nature of self-views means that people rely heavily on the social world when seeking information about their selves. Two particular processes are important:
Social Comparison Theory
Reflected Appraisals
Social comparison
People compare attributes with others and draw inferences about what they themselves are like. However, the conclusions a person ultimately draws depend on whom in particular they compare themselves with. The need for accurate self-knowledge was originally thought to guide the social comparison process, and researchers assumed that comparing with others who are similar to us in the important ways is more informative.
Complications of the social comparison theory
People are also known to compare themselves with people who are slightly better off than they themselves are (known as an upward comparison); and with people who are slightly worse off or disadvantaged (known as a downward comparison).
There is also substantial evidence that the need for accurate self-knowledge is neither the only, nor most important factor that guides the social comparison process, the need to feel good about ourselves affects the social comparison process.
Reflected appraisals
Reflected appraisals occur when a person observes how others respond to them. The process was first explained by the sociologist Charles H. Cooley in 1902 as part of his discussion of the "looking-glass self", which describes how we see ourselves reflected in other peoples' eyes. He argued that a person's feelings towards themselves are socially determined via a three-step process:
"A self-idea of this sort seems to have three principled elements: the imagination of our appearance to the other person; the imagination of his judgment of that appearance; and some sort of self-feeling, such as pride or mortification. The comparison with a looking-glass hardly suggests the second element, the imagined judgment which is quite essential. The thing that moves us to pride or shame is not the mere mechanical reflection of ourselves, but an imputed sentiment, the imagined effect of this reflection upon another's mind." (Cooley, 1902, p. 153)
In simplified terms, Cooley's three stages are:
We imagine how we appear in the eyes of another person.
We then imagine how that person is evaluating us.
The imagined evaluation leads us to feel good or bad, in accordance with the judgement we have conjured.
Note that this model is of a phenomenological nature.
In 1963, John W. Kinch adapted Cooley's model to explain how a person's thoughts about themselves develop rather than their feelings.
Kinch's three stages were:
Actual appraisals - what other people actually think of us.
Perceived appraisals - our perception of these appraisals.
Self-appraisals - our ideas about what we are like based on the perceived appraisals.
This model, too, is phenomenological in nature.
Arguments against the reflected appraisal models
Research has revealed only limited support for these models, and several objections have been raised:
People are not generally good at knowing what an individual thinks about them.
Felson believes this is due to communication barriers and imposed social norms which place limits on the information people receive from others. This is especially true when the feedback would be negative; people rarely give one another negative feedback, so people rarely conclude that another person dislikes them or is evaluating them negatively.
Despite being largely unaware of how one person in particular is evaluating them, people are better at knowing what other people on the whole think.
The reflected appraisal model assumes that actual appraisals determine perceived appraisals. Although this may in fact occur, the influence of a common third variable could also produce an association between the two.
The sequence of reflected appraisals may accurately characterize patterns in early childhood due to the large amount of feedback infants receive from their parents, yet it appears to be less relevant later in life. This is because people are not passive, as the model assumes. People actively and selectively process information from the social world. Once a person's ideas about themselves take shape, these also influence the manner in which new information is gathered and interpreted, and thus the cycle continues.
The psychological world
The psychological world describes our "inner world". There are three processes that influence how people acquire knowledge about themselves:
Introspection
Self-perception processes
Causal attributions
Introspection
Introspection involves looking inwards and directly consulting our attitudes, feelings and thoughts for meaning.
Consulting one's own thoughts and feelings can sometimes result in meaningful self-knowledge. The accuracy of introspection, however, has been called into question since the 1970s. Generally, introspection relies on people's explanatory theories of the self and their world, the accuracy of which is not necessarily related to the form of self-knowledge that they are attempting to assess.
A stranger's ratings of a participant correspond more closely to the participant's self-ratings when the stranger has access to the participant's thoughts and feelings than when the stranger observes the participant's behavior alone, or a combination of the two.
Comparing sources of introspection: people believe that spontaneous forms of thought provide more meaningful self-insight than more deliberate forms of thinking. Morewedge, Giblin, and Norton (2014) found that the more spontaneous a kind of thought, or a particular thought, was perceived to be, the more insight into the self it was attributed. In addition, the more meaning a thought was attributed, the more that thought influenced judgment and decision making. People asked to let their mind wander until they randomly thought of a person to whom they were attracted, for example, reported that the person they identified provided them with more self-insight than did people asked to simply think of a person to whom they were attracted. Moreover, the greater self-insight attributed to the person identified by the former, random thought process than by the latter, deliberate thought process led those in the random condition to report feeling more attracted to the person they identified.
Arguments against introspection
Whether introspection always fosters self-insight is not entirely clear. Thinking too much about why we feel the way we do about something can sometimes confuse us and undermine true self-knowledge. Participants in an introspection condition are less accurate when predicting their own future behavior than controls and are less satisfied with their choices and decisions. It is also important to note that introspection allows the exploration of the conscious mind only and does not take into account the unconscious motives and processes identified and formulated by Freud.
Self-perception processes
Wilson's work is based on the assumption that people are not always aware of why they feel the way they do. Bem's self-perception theory makes a similar assumption.
The theory is concerned with how people explain their behavior. It argues that people do not always know why they do what they do. When this occurs, they infer the causes of their behavior by analyzing it in the context in which it occurred. Outside observers of the behavior would reach a similar conclusion to that of the individual performing it; the individual thus draws logical conclusions about why they behaved as they did.
"Individuals come to "know" their own attitudes, emotions, and other internal states partially by inferring them from observations of their own overt behavior and/or the circumstances in which this behavior occurs. Thus, to the extent that internal cues are weak, ambiguous, or uninterpretable, the individual is functionally in the same position as an outside observer, an observer who must necessarily rely upon those same external cues to infer the individual's inner states." (Bem, 1972, p.2)
The theory has been applied to a wide range of phenomena. Under particular conditions, people have been shown to infer their attitudes, emotions, and motives, in the same manner described by the theory.
Self-perception is similar to introspection, but with an important difference: with introspection we directly examine our attitudes, feelings and motives, whereas with self-perception processes we indirectly infer them by analyzing our behavior.
Causal attributions
Causal attributions are an important source of self-knowledge, especially when people make attributions for positive and negative events. The key elements in self-perception theory are the explanations people give for their actions; these explanations are known as causal attributions.
Causal attributions provide answers to "Why?" questions by attributing a person's behavior (including our own) to a cause.
People also gain self-knowledge by making attributions for other people's behavior; for example "If nobody wants to spend time with me it must be because I'm boring".
Activation
Individuals think of themselves in many different ways, yet only some of these ideas are active at any one given time. The idea that is active at a given time is known as the current self-representation. Other theorists have referred to the same thing in several different ways:
The phenomenal self
Spontaneous self-concept
Self-identifications
Aspects of the working self-concept
The current self-representation influences information processing, emotion, and behavior and is influenced by both personal and situational factors.
Personal factors that influence current self-representation
Self-concept
Self-concept, or how people usually think of themselves, is the most important personal factor influencing current self-representation. This is especially true for attributes that are important and self-defining.
Self-concept is also known as the self-schema, made of innumerable smaller self-schemas that are "chronically accessible".
Self-esteem
Self-esteem affects the way people feel about themselves. People with high self-esteem are more likely to be thinking of themselves in positive terms at a given time than people with low self-esteem.
Mood state
Mood state influences the accessibility of positive and negative self-views.
When we are happy we tend to think more about our positive qualities and attributes, whereas when we are sad our negative qualities and attributes become more accessible.
This link is particularly strong for people with low self-esteem.
Goals
People can deliberately activate particular self-views. We select appropriate images of ourselves depending on what role we wish to play in a given situation.
One particular goal that influences activation of self-views is the desire to feel good.
Situational factors that influence current self-representation
Social roles
How a person thinks of themselves depends largely on the social role they are playing. Social roles influence our personal identities.
Social context and self-description
People tend to think of themselves in ways that distinguish them from their social surroundings.
The more distinctive the attribute, the more likely it will be used to describe oneself.
Distinctiveness also influences the salience of group identities.
Self-categorization theory proposes that whether people are thinking about themselves in terms of either their social groups or various personal identities depends partly on the social context.
Group identities are more salient in intergroup contexts.
Group size
The size of the group affects the salience of group-identities. Minority groups are more distinctive, so group identity should be more salient among minority group members than majority group members.
Group status
Group status interacts with group size to affect the salience of social identities.
Social context and self-evaluation
The social environment has an influence on the way people evaluate themselves as a result of social-comparison processes.
The contrast effect
People regard themselves as being at the opposite end of the spectrum of a given trait from the people in their company. However, this effect has come under criticism as to whether it is a primary effect, since it seems to share space with the assimilation effect, which states that people evaluate themselves more positively when they are in the company of others who are exemplary on some dimension.
Whether the assimilation or the contrast effect prevails depends on psychological closeness: people who feel psychologically disconnected from their social surroundings are more likely to show contrast effects, whereas assimilation effects occur when the subject feels psychologically connected to their social surroundings.
Significant others and self-evaluations
Imagining how one appears to others has an effect on how one thinks about oneself.
Recent events
Recent events can cue particular views of the self, either as a direct result of failure, or via mood.
The extent of the effect depends on personal variables. For example, people with high self-esteem do not show this effect and sometimes show the opposite.
Memory for prior events influences how people think about themselves.
Fazio et al. found that selective memory for prior events can temporarily activate self-representations which, once activated, guide our behavior.
Deficiencies
Specific types
Misperceiving
Deficiency in knowledge of the present self.
Giving reasons but not feelings disrupts self-insight.
Misremembering
Deficiency of knowledge of the past self.
Knowledge from the present overinforms the knowledge of the past.
False theories shape autobiographical memory.
Misprediction
Deficiency of knowledge of the future self.
Knowledge of the present overinforms predictions of future knowledge.
Affective forecasting can be affected by durability bias.
Miswanting
Deficiency in knowing what one will want or enjoy in the future.
See also
References
Further reading
Brown, J. D. (1998). The self. New York: McGraw Hill.
Sedikides, C., & Brewer, M. B. (2001). Individual self, relational self, collective self. Philadelphia, PA: Psychology Press.
Suls, J. (1982). Psychological perspectives on the self (Vol. 1). Hillsdale, NJ: Lawrence Erlbaum Associates.
Sedikides, C., & Spencer, S. J. (Eds.) (2007). The self. New York: Psychology Press.
Thinking and Action: A Cognitive Perspective on Self-Regulation during Endurance Performance
External links
William Swann's Homepage including many of his works
International Society for Self and Identity
Journal of Self and Identity
Know Thyself - The Value and Limits of Self-Knowledge: The Examined Life is a course offered by Coursera, created through a partnership between The University of Edinburgh and the Humility & Conviction in Public Life project, a research project based at the University of Connecticut.
Self
Knowledge
Organic brain syndrome
Organic brain syndrome, also known as organic brain disease, organic brain damage, organic brain disorder, organic mental syndrome, or organic mental disorder, refers to any syndrome or disorder of mental function whose cause is known or presumed to be organic (physiologic) rather than purely of the mind. These names are older and nearly obsolete general terms from psychiatry, referring to many physical disorders that cause impaired mental function. They are meant to exclude psychiatric disorders (mental disorders). Originally, the term was created to distinguish physical (termed "organic") causes of mental impairment from psychiatric (termed "functional") disorders, but during the era when this distinction was drawn, not enough was known about brain science (including neuroscience, cognitive science, neuropsychology, and mind-brain correlation) for this cause-based classification to be more than educated guesswork labeled with misplaced certainty, which is why it has been deemphasized in current medicine. While mental or behavioural abnormalities related to the dysfunction can be permanent, treating the disease early may prevent permanent damage and allow mental functions to be fully restored. An organic cause of brain dysfunction is suspected when there is no indication of a clearly defined psychiatric or "inorganic" cause, such as a mood disorder.
Types
Organic brain syndrome can be divided into two major subgroups: acute (delirium or acute confusional state) and chronic (dementia). A third entity, encephalopathy (amnestic), denotes a gray zone between delirium and dementia. The Diagnostic and Statistical Manual of Mental Disorders has broken up the diagnoses that once fell under the diagnostic category organic mental disorder into three categories: delirium, dementia, and amnestic.
Delirium
Delirium, or acute organic brain syndrome, is a recently appearing state of mental impairment resulting from intoxication, drug overdose, infection, pain, and many other physical problems affecting mental status. In medical contexts, "acute" means "of recent onset". As is the case with most acute disease problems, acute organic brain syndrome is often temporary, although this does not guarantee that it will not recur or progress to become chronic, that is, long-term. A more specific medical term for the acute subset of organic brain syndromes is delirium.
Dementia
Dementia or chronic organic brain syndrome is long-term. For example, some forms of chronic drug or alcohol dependence can cause organic brain syndrome due to their long-lasting or permanent toxic effects on brain function. Other common causes of chronic organic brain syndrome sometimes listed are the various types of dementia, which result from permanent brain damage due to strokes, Alzheimer's disease, or other damaging causes which are irreversible. Amnestic pertains to amnesia and is the impairment in ability to learn or recall new information, or recall previously learned information. Although similar, it is not coupled with dementia or delirium.
Amnestic
Amnestic conditions denote a gray zone between delirium and dementia; their early course may fluctuate, but they are often persistent and progressive. Damage to brain functioning can be due not only to organic (physical) injury (a severe blow to the head, stroke, chemical and toxic exposures, organic brain disease, substance use, etc.) but also to non-organic means such as severe deprivation, abuse, neglect, and severe psychological trauma.
Symptoms
Many of the symptoms of organic mental disorder depend on the cause of the disorder, but they are similar and include physical or behavioral elements. Dementia and delirium underlie the impairment of confusion, orientation, cognition, or alertness, and these symptoms require particular attention because hallucinations, delusions, amnesia, and personality changes can result. These effects of dementia and delirium are not joined with changes in sensory or perceptual abilities. The most common symptoms of OBS are confusion; impairment of memory, judgment, logical function, and intellect; and agitation. Often these symptoms are attributed to psychiatric illness, which makes diagnosis difficult.
Associated conditions
Disorders that are related to injury or damage to the brain and contribute to OBS include, but are not limited to:
Alcoholism
Alzheimer's disease
Attention deficit/hyperactivity disorder
Autism
Concussion
Encephalitis
Epilepsy
Fetal alcohol syndrome
Hypoxia
Parkinson's disease
Intoxication/overdose caused by substance use disorders including alcohol use disorder
Non-medical use of sedative hypnotics
Intracranial hemorrhage/trauma
Korsakoff syndrome
Mastocytosis
Meningitis
Psychoorganic syndrome
Stroke/transient ischemic attack (TIA)
Withdrawal from drugs, especially sedative hypnotics, e.g. alcohol or benzodiazepines
Other conditions that may be related to organic brain syndrome include: clinical depression, neuroses, and psychoses, which may occur simultaneously with the OBS.
Treatment
Treatment depends on which particular disorder is involved in organic mental disorder, but several approaches are possible. Treatments can include, but are not limited to, rehabilitation therapy such as physical or occupational therapy, pharmacological modification of neurotransmitter function, or medication. The affected parts of the brain can recover some function with the help of different types of therapy or of tractographically guided psychosurgery. Online therapy can be just as intensive and helpful as in-person rehabilitation therapy and can help those affected regain function in daily life.
Prognosis
Some disorders are short-term and treatable, and their prognosis is correspondingly short. Rest and medication are the most common courses of action for these treatable cases, helping the patient return to proper health. Many cases, however, are long-term, with no well-defined prognosis, and the course of action can include extensive counseling and therapy. Long-term cases are harder to treat for several reasons: they normally worsen over time, and medication or therapy may not work. In these cases, the aim of care is often to help the patient and their family become more comfortable and to understand what will happen.
Associated conditions
Brain injury caused by trauma
Bleeding into the brain (intracerebral hemorrhage)
Bleeding into the space around the brain (subarachnoid hemorrhage)
Blood clot inside the skull causing pressure on brain (subdural hematoma)
Concussion
Breathing conditions
Low oxygen in the body (hypoxia)
High carbon dioxide levels in the body (hypercapnia)
Cardiovascular disorders
Abnormal heart rhythm (arrhythmias)
Brain injury due to high blood pressure (hypertensive brain injury)
Dementia due to many strokes (multi-infarct dementia)
Heart infections (endocarditis, myocarditis)
Stroke
Transient ischemic attack (TIA)
Degenerative disorders
Alzheimer's disease (also called senile dementia, Alzheimer's type)
Creutzfeldt–Jakob disease
Diffuse Lewy Body disease
Huntington's disease
Multiple sclerosis
Normal pressure hydrocephalus
Parkinson's disease
Pick's disease
Dementia due to metabolic causes
Drug and alcohol-related conditions
Alcohol withdrawal state
Intoxication from drug or alcohol use
Wernicke–Korsakoff syndrome (a long-term effect of excessive alcohol consumption or malnutrition)
Withdrawal from drugs (especially sedative-hypnotics and corticosteroids)
Infections
Any sudden onset (acute) or long-term (chronic) infection
Blood poisoning (sepsis)
Brain infection (encephalitis)
Meningitis (infection of the lining of the brain and spinal cord)
Prion infections such as mad cow disease
Late-stage syphilis (general paresis)
Other medical disorders
Cancer
Kidney disease
Liver disease
Thyroid disease (high or low)
Vitamin deficiency (B1, B12, or folate)
Lithium toxicity can cause permanent organic brain damage
Accumulation of metals in the brain
Aluminum
Mercury poisoning
References
External links
AllRefer Health.com. 13 December 2006.
Mental disorders due to brain damage
Syndromes
Depersonalization
Depersonalization is a dissociative phenomenon characterized by a subjective feeling of detachment from oneself, manifesting as a sense of disconnection from one's thoughts, emotions, sensations, or actions, and often accompanied by a feeling of observing oneself from an external perspective. Subjects perceive that the world has become vague, dreamlike, surreal, or strange, leading to a diminished sense of individuality or identity. Sufferers often feel as though they are observing the world from a distance, as if separated by a barrier "behind glass". They maintain insight into the subjective nature of their experience, recognizing that it pertains to their own perception rather than altering objective reality. This distinction between subjective experience and objective reality distinguishes depersonalization from delusions, where individuals firmly believe in false perceptions as genuine truths. Depersonalization is also distinct from derealization, which involves a sense of detachment from the external world rather than from oneself.
Depersonalization-derealization disorder refers to chronic depersonalization, classified as a dissociative disorder in both the DSM-IV and the DSM-5, which underscores its association with disruptions in consciousness, memory, identity, or perception. This classification is based on findings that depersonalization and derealization are prevalent in other dissociative disorders, including dissociative identity disorder.
Though degrees of depersonalization can happen to anyone who is subject to temporary anxiety or stress, chronic depersonalization is more related to individuals who have experienced a severe trauma or prolonged stress/anxiety. Depersonalization-derealization is the single most important symptom in the spectrum of dissociative disorders, including dissociative identity disorder and "dissociative disorder not otherwise specified" (DD-NOS). It is also a prominent symptom in some other non-dissociative disorders, such as anxiety disorders, clinical depression, bipolar disorder, schizophrenia, schizoid personality disorder, hypothyroidism or endocrine disorders, schizotypal personality disorder, borderline personality disorder, obsessive–compulsive disorder, migraines, and sleep deprivation; it can also be a symptom of some types of neurological seizure, and it has been suggested that there could be common aetiology between depersonalization symptoms and panic disorder, on the basis of their high co-occurrence rates.
In social psychology, and in particular self-categorization theory, the term depersonalization has a different meaning and refers to "the stereotypical perception of the self as an example of some defining social category".
Description
Individuals who experience depersonalization feel divorced from their own personal self, sensing their body sensations, feelings, emotions, behaviors, and so on as not belonging to the same person or identity. Often a person who has experienced depersonalization claims that things seem unreal or hazy. Recognition of a self breaks down (hence the name). Depersonalization can result in very high anxiety levels, which further increase these perceptions.
Depersonalization is a subjective experience of unreality in one's self, while derealization is unreality of the outside world. Although most authors currently regard depersonalization (personal/self) and derealization (reality/surroundings) as independent constructs, many do not want to separate derealization from depersonalization.
Epidemiology
Despite the distressing nature of symptoms, estimating the prevalence rates of depersonalization is challenging due to inconsistent definitions and variable timeframes.
Depersonalization is a symptom of anxiety disorders, such as panic disorder. It can also accompany sleep deprivation (often occurring when experiencing jet lag), migraine, epilepsy (especially temporal lobe epilepsy and complex-partial seizures, both as part of the aura and during the seizure), obsessive-compulsive disorder, severe stress or trauma, anxiety, the use of recreational drugs (especially cannabis, hallucinogens, ketamine, and MDMA), certain types of meditation, deep hypnosis, extended mirror or crystal gazing, sensory deprivation, and mild-to-moderate head injury with little or full loss of consciousness (less likely if unconscious for more than 30 minutes).
Interoceptive exposure is a non-pharmacological method that can be used to induce depersonalization.
In the general population, transient depersonalization and derealization are common, having a lifetime prevalence between 26 and 74%. A random community-based survey of 1,000 adults in the US rural south found a 1-year depersonalization prevalence rate at 19%. Standardized diagnostic interviews have reported prevalence rates of 1.2% to 1.7% over one month in UK samples, and a rate of 2.4% in a single-point Canadian sample. In clinical populations, prevalence rates range from 1% to 16%, with varying rates in specific psychiatric disorders such as panic disorder and unipolar depression. Co-occurrence between depersonalization/derealization and panic disorder is common, suggesting a possible common etiology. Co-morbidity with other disorders does not influence symptom severity consistently.
Depersonalization is reported 2-4 times more in women than in men, but depersonalization/derealization disorder is diagnosed approximately equally across men and women, with symptoms typically emerging around the age of 16.
A similar and overlapping concept called ipseity disturbance (ipse is Latin for "self" or "itself") may be part of the core process of schizophrenia spectrum disorders. However, specific to the schizophrenia spectrum seems to be "a dislocation of first-person perspective such that self and other or self and world may seem to be non-distinguishable, or in which the individual self or field of consciousness takes on an inordinate significance in relation to the objective or intersubjective world" (emphasis in original).
For the purposes of evaluation and measurement, depersonalization can be conceived of as a construct, and scales are now available to map its dimensions. A study of undergraduate students found that individuals high on the depersonalization/derealization subscale of the Dissociative Experiences Scale exhibited a more pronounced cortisol response to stress. Individuals high on the absorption subscale, which measures a subject's experiences of concentration to the exclusion of awareness of other events, showed weaker cortisol responses.
Causes
Depersonalization can arise from a variety of factors, of both a psychological and physiological nature. Common immediate precipitants include instances of severe stress, depressive episodes, panic attacks, and the consumption of psychoactive substances such as marijuana and hallucinogens. Additionally, there exists a correlation between frequent depersonalization and childhood interpersonal trauma, particularly cases involving emotional maltreatment.
A case-control study conducted at a specialized depersonalization clinic included 164 individuals with chronic depersonalization symptoms, of which 40 linked their symptoms to illicit drug use. Phenomenological similarity between drug-induced and non-drug groups was observed, and comparison with matched controls further supported the lack of distinction. The severity of clinical depersonalization symptoms remains consistent regardless of whether they are triggered by illicit drugs or psychological factors.
Pharmacological
Depersonalization has been described by some as a desirable state, particularly by those who have experienced it under the influence of mood-altering recreational drugs. It is an effect of dissociatives and psychedelics, as well as a possible side effect of caffeine, alcohol, amphetamine, cannabis, and antidepressants. It is a classic withdrawal symptom from many drugs.
Benzodiazepine dependence, which can occur with long-term use of benzodiazepines, can induce chronic depersonalization symptomatology and perceptual disturbances in some people, even in those who are taking a stable daily dosage, and it can also become a protracted feature of the benzodiazepine withdrawal syndrome.
Lieutenant Colonel Dave Grossman, in his book On Killing, suggests that military training artificially creates depersonalization in soldiers, suppressing empathy and making it easier for them to kill other human beings.
Graham Reed (1974) claimed that depersonalization occurs in relation to the experience of falling in love.
Situational
Experiences of depersonalization/derealization occur on a continuum, ranging from momentary episodes in healthy individuals under conditions of stress, fatigue, or drug use, to severe and chronic disorders that can persist for decades. Several studies found that up to 66% of individuals in life-threatening accidents report at least transient depersonalization during or immediately after the accidents.
Several studies, but not all, found age to be a significant factor: adolescents and young adults in the normal population reported the highest rates. In one study, 46% of college students reported at least one significant episode in the previous year. In another study, 20% of patients with minor head injury experienced significant depersonalization and derealization.
In general infantry and special forces soldiers, measures of depersonalization and derealization increased significantly after training that includes experiences of uncontrollable stress, semi-starvation, sleep deprivation, as well as lack of control over hygiene, movement, communications, and social interactions.
Psychobiological mechanisms
Proximate mechanism
Depersonalization involves disruptions in the integration of interoceptive and exteroceptive signals, particularly in response to acute anxiety or trauma-related events. Studies spanning from 1992 to 2020 have highlighted abnormalities in primary somatosensory cortex processing and insula activity as contributing factors to depersonalization experiences. Additionally, abnormal EEG activities, notably in the theta band, suggest potential biomarkers for emotion processing, attention, and working memory, though specific oscillatory signatures associated with depersonalization are yet to be determined. Reduced brain activities in sensory processing units, along with alterations in visceral signal processing regions, are observed, particularly in the early stages of information processing.
Furthermore, vestibular signal processing, crucial for balance and spatial orientation, is increasingly recognized as a factor contributing to feelings of disembodiment during depersonalization experiences. Research suggests that abnormal activity in the left hemisphere may play a role, although abnormalities in right hemisphere brain activity, responsible for self-awareness and emotion processing, may also contribute to depersonalization symptoms. Higher activity in the right parietal lobe's angular gyrus has been linked to more severe depersonalization, supporting this idea.
Potential involvement of serotonergic, endogenous opioid, and glutamatergic NMDA pathways has also been proposed, alongside alterations in metabolic activity in the sensory association cortex, prefrontal hyperactivation, and limbic inhibition in response to aversive stimuli revealed by brain imaging studies.
In addition to this, research suggests that individuals with depersonalization often exhibit autonomic blunting, characterized by reduced physiological responses to stressors or emotional stimuli. This blunting may reflect a diminished capacity to engage with the external world or to experience emotions fully, contributing to the subjective sense of detachment from oneself. Additionally, dysregulation of the HPA axis, which governs the body's stress response system, is frequently observed in individuals who experience depersonalization. This dysregulation can manifest as alterations in cortisol levels and responsiveness to stress, potentially exacerbating feelings of detachment and unreality.
Ultimate mechanism
Depersonalization is a classic response to acute trauma, and may be highly prevalent in individuals involved in different traumatic situations including motor vehicle collision and imprisonment.
Psychologically depersonalization can, just like dissociation in general, be considered a type of coping mechanism, used to decrease the intensity of unpleasant experience, whether that is something as mild as stress or something as severe as chronically high anxiety and post-traumatic stress disorder.
The decrease in anxiety and psychobiological hyperarousal helps preserve adaptive behaviors and resources under threat or danger.
Depersonalization is an overgeneralized reaction in that it does not diminish just the unpleasant experience but more or less all experience, leading to a feeling of being detached from the world and experiencing it in a more bland way. An important distinction must be made between depersonalization as a mild, short-term reaction to unpleasant experience and depersonalization as a chronic symptom stemming from a severe mental disorder such as PTSD or dissociative identity disorder.
Chronic symptoms may represent persistence of depersonalization beyond the situations under threat.
Treatment
Currently, no universally accepted treatment guidelines have been established for depersonalization. Pharmacotherapy remains a primary avenue of treatment, with medications such as clomipramine, fluoxetine, lamotrigine, and opioid antagonists being commonly prescribed. However, it is important to note that none of these medications have demonstrated a potent anti-dissociative effect in managing symptoms.
In addition to pharmacological interventions, various psychotherapeutic techniques have been employed in attempts to alleviate depersonalization symptoms. Modalities such as trauma-focused therapy and cognitive-behavioral techniques have been utilized, although their efficacy remains uncertain and not firmly established.
Treatment is dependent on the underlying cause, whether it is organic or psychological in origin. If depersonalization is a symptom of neurological disease, then diagnosis and treatment of the specific disease is the first approach. Depersonalization can be a cognitive symptom of such diseases as amyotrophic lateral sclerosis, Alzheimer's disease, multiple sclerosis (MS), or any other neurological disease affecting the brain. For those with both depersonalization and migraine, tricyclic antidepressants are often prescribed.
If depersonalization is a symptom of psychological causes such as developmental trauma, treatment depends on the diagnosis. In case of dissociative identity disorder or DD-NOS as a developmental disorder, in which extreme developmental trauma interferes with formation of a single cohesive identity, treatment requires proper psychotherapy, and—in the case of additional (co-morbid) disorders such as eating disorders—a team of specialists treating such an individual. It can also be a symptom of borderline personality disorder, which can be treated in the long term with proper psychotherapy and psychopharmacology.
The treatment of chronic depersonalization is considered in depersonalization disorder.
A 2001 Russian study showed that naloxone, a drug used to reverse the intoxicating effects of opioid drugs, can successfully treat depersonalization disorder. According to the study: "In three of 14 patients, depersonalization symptoms disappeared entirely and seven patients showed a marked improvement. The therapeutic effect of naloxone provides evidence for the role of the endogenous opioid system in the pathogenesis of depersonalization." The anticonvulsant drug lamotrigine has shown some success in treating symptoms of depersonalization, often in combination with a selective serotonin reuptake inhibitor and is the first drug of choice at the depersonalisation research unit at King's College London.
Research directions
Interest in DPDR has increased over the past few decades, leading to a large accumulation of literature on dissociative disorders. There has been a shift towards the use of research studies, rather than case studies, to understand depersonalization. However, there remains a lack of solid consensus on its definition and on the scales used for assessment. Salami and colleagues argued that studies of electrophysiological depersonalization-derealization markers are urgently needed, and that future research should use analysis methods that can account for the integration of interoceptive and exteroceptive signals.
The Depersonalisation Research Unit at the Institute of Psychiatry in London conducts research into depersonalization disorder. Researchers there use the acronym DPAFU (Depersonalisation and Feelings of Unreality) as a shortened label for the disorder.
In a 2020 article in the journal Nature, Vesuna et al. describe experimental findings which show that layer 5 of the retrosplenial cortex is likely responsible for dissociative states of consciousness in mammals.
See also
References
Other references
Dissociative disorders
Symptoms and signs of mental disorders
Lacanianism
Lacanianism or Lacanian psychoanalysis is a theoretical system that explains the mind, behaviour, and culture through a structuralist and post-structuralist extension of classical psychoanalysis, initiated by the work of Jacques Lacan from the 1950s to the 1980s. Lacanian perspectives contend that the human mind is structured by the world of language, known as the Symbolic. They stress the importance of desire, which is conceived of as perpetual and impossible to satisfy. Contemporary Lacanianism is characterised by a broad range of thought and extensive debate among Lacanians.
Lacanianism has been particularly influential in post-structuralism, literary theory, and feminist theory, as well as in various branches of critical theory, including queer theory. Equally, it has been criticised by the post-structuralists Deleuze and Guattari and by various feminist theorists. Its clinical relevance is limited and outside France it has had no influence on psychiatry. There is a Lacanian strand in left-wing politics, including Saul Newman's and Duane Rousselle's post-anarchism, Louis Althusser's structural Marxism, and the works of Slavoj Žižek and Alain Badiou. Influential figures in Lacanianism include Slavoj Žižek, Julia Kristeva and Serge Leclaire.
Overview
Lacanians view the structure of the mind as defined by the individual's entry as an infant into the world of language, the Symbolic, through an Oedipal process. Like other post-structuralist approaches, Lacanianism regards the subject as an illusion created when an individual is signified (represented in language). However, this initial signification is incomplete, as there is always something about the subject which cannot be properly represented in language, which means that signification also divides the subject. The Symbolic is defined by the Other, those parts of the outside world with which the subject cannot identify, which is the place where signifiers are given meaning. Language is hence a discourse of the Other, outside conscious control.
The unconscious mind is constituted by a network of empty signifiers that resurface in language—particularly dreams and Freudian slips—and Lacanian clinical practice focuses closely on the precise words used by the analysand (patient), which Lacan characterised as a "return to Freud". Analysis focuses largely on desire. Lacanians contend that desire cannot be satisfied, as the object and cause of desire is an unobtainable object, the objet petit a, which the subject continually associates with different things that they wrongly believe will satisfy their desire. Objet a exists as a consequence of the division of the subject in signification, so desire is said to result from an unsolvable lack at the heart of the subject.
Lacanianism posits that all people belong to one of three "clinical structures" and are either psychotic, perverse, or, most commonly, neurotic. Neurotic subjects—that is to say, most people—are then always either hysterical or obsessional. The three clinical structures describe the subject's relationship to the Other and are each associated with a different defence mechanism: psychotics use foreclosure, a rejection of the father's authority in the Oedipus complex that results in a failure to form a Symbolic unconscious; perverts use disavowal, failing to accept that lack causes desire and nominating a specific object as its cause, their fetish; and neurotics use repression.
Psychical reality is constituted by the Symbolic, the Imaginary, the Real, and for Lacanians who follow Kristeva, the Semiotic.
Mirror stage
Lacan's first official contribution to psychoanalysis was the mirror stage, which he described as "formative of the function of the 'I' as revealed in psychoanalytic experience." By the early 1950s, he came to regard the mirror stage as more than a moment in the life of the infant; instead, it formed part of the permanent structure of subjectivity. In the "imaginary order", the subject's own image permanently catches and captivates the subject. Lacan explains that "the mirror stage is a phenomenon to which I assign a twofold value. In the first place, it has historical value as it marks a decisive turning-point in the mental development of the child. In the second place, it typifies an essential libidinal relationship with the body-image".
As this concept developed further, the stress fell less on its historical value and more on its structural value. In his fourth seminar, "La relation d'objet", Lacan states that "the mirror stage is far from a mere phenomenon which occurs in the development of the child. It illustrates the conflictual nature of the dual relationship."
The mirror stage describes the formation of the ego via the process of objectification, the ego being the result of a conflict between one's perceived visual appearance and one's emotional experience. This identification is what Lacan called "alienation". At six months, the baby still lacks physical co-ordination. The child is able to recognize themselves in a mirror prior to the attainment of control over their bodily movements. The child sees their image as a whole and the synthesis of this image produces a sense of contrast with the lack of co-ordination of the body, which is perceived as a fragmented body. The child experiences this contrast initially as a rivalry with their image, because the wholeness of the image threatens the child with fragmentation—thus the mirror stage gives rise to an aggressive tension between the subject and the image. To resolve this aggressive tension, the child identifies with the image: this primary identification with the counterpart forms the ego. Lacan understood this moment of identification as a moment of jubilation, since it leads to an imaginary sense of mastery; yet when the child compares their own precarious sense of mastery with the omnipotence of the mother, a depressive reaction may accompany the jubilation.
Lacan calls the specular image "orthopaedic", since it leads the child to anticipate the overcoming of its "real specific prematurity of birth". The vision of the body as integrated and contained, in opposition to the child's actual experience of motor incapacity and the sense of his or her body as fragmented, induces a movement from "insufficiency to anticipation". In other words, the mirror image initiates and then aids, like a crutch, the process of the formation of an integrated sense of self.
In the mirror stage a "misunderstanding" (méconnaissance) constitutes the ego—the "me" (moi) becomes alienated from itself through the introduction of an imaginary dimension to the subject. The mirror stage also has a significant symbolic dimension, due to the presence of the figure of the adult who carries the infant. Having jubilantly assumed the image as their own, the child turns their head towards this adult, who represents the big other, as if to call on the adult to ratify this image.
Desire
Lacan's concept of desire is related to Hegel's Begierde, a term that implies a continuous force, and therefore somehow differs from Freud's concept of Wunsch. Lacan's desire refers always to unconscious desire because it is unconscious desire that forms the central concern of psychoanalysis.
The aim of psychoanalysis is to lead the analysand to recognize their desire and by doing so to uncover the truth about their desire. However this is possible only if desire is articulated in speech: "It is only once it is formulated, named in the presence of the other, that desire appears in the full sense of the term." And again in The Ego in Freud's Theory and in the Technique of Psychoanalysis: "what is important is to teach the subject to name, to articulate, to bring desire into existence. The subject should come to recognize and to name their desire. But it isn't a question of recognizing something that could be entirely given. In naming it, the subject creates, brings forth, a new presence in the world." The truth about desire is somehow present in discourse, although discourse is never able to articulate the entire truth about desire; whenever discourse attempts to articulate desire, there is always a leftover or surplus.
Lacan distinguishes desire from need and from demand. Need is a biological instinct where the subject depends on the Other to satisfy its own needs: in order to get the Other's help, "need" must be articulated in "demand". But the presence of the Other not only ensures the satisfaction of the "need", it also represents the Other's love. Consequently, "demand" acquires a double function: on the one hand, it articulates "need", and on the other, acts as a "demand for love". Even after the "need" articulated in demand is satisfied, the "demand for love" remains unsatisfied since the Other cannot provide the unconditional love that the subject seeks. "Desire is neither the appetite for satisfaction, nor the demand for love, but the difference that results from the subtraction of the first from the second." Desire is a surplus, a leftover, produced by the articulation of need in demand: "desire begins to take shape in the margin in which demand becomes separated from need". Unlike need, which can be satisfied, desire can never be satisfied: it is constant in its pressure and eternal. The attainment of desire does not consist in being fulfilled but in its reproduction as such. As Slavoj Žižek puts it, "desire's raison d'être is not to realize its goal, to find full satisfaction, but to reproduce itself as desire".
Lacan also distinguishes between desire and the drives: desire is one and drives are many. The drives are the partial manifestations of a single force called desire. Lacan's concept of "objet petit a" is the object of desire, although this object is not that towards which desire tends, but rather the cause of desire. Desire is not a relation to an object but a relation to a lack (manque).
In The Four Fundamental Concepts of Psychoanalysis Lacan argues that "man's desire is the desire of the Other." This entails the following:
Desire is the desire of the Other's desire, meaning that desire is the object of another's desire and that desire is also desire for recognition. Here Lacan follows Alexandre Kojève, who follows Hegel: for Kojève the subject must risk his own life if he wants to achieve the desired prestige. This desire to be the object of another's desire is best exemplified in the Oedipus complex, when the subject desires to be the phallus of the mother.
In "The Subversion of the Subject and the Dialectic of Desire in the Freudian Unconscious", Lacan contends that the subject desires from the point of view of another whereby the object of someone's desire is an object desired by another one: what makes the object desirable is that it is precisely desired by someone else. Again Lacan follows Kojève. who follows Hegel. This aspect of desire is present in hysteria, for the hysteric is someone who converts another's desire into their own (see Sigmund Freud's "Fragment of an Analysis of a Case of Hysteria" in SE VII, where Dora desires Frau K because she identifies with Herr K). What matters then in the analysis of a hysteric is not to find out the object of her desire but to discover the subject with whom she identifies.
Désir de l'Autre, which is translated as "desire for the Other" (though it could also be "desire of the Other"). The fundamental desire is the incestuous desire for the mother, the primordial Other.
Desire is "the desire for something else", since it is impossible to desire what one already has. The object of desire is continually deferred, which is why desire is a metonymy.
Desire appears in the field of the Other, that is, in the unconscious.
Last but not least for Lacan, the first person who occupies the place of the Other is the mother and at first the child is at her mercy. Only when the father articulates desire with the Law by castrating the mother is the subject liberated from desire for the mother.
History
Jacques Lacan's lifetime
Lacan considered the human psyche to be framed within the three orders of The Imaginary, The Symbolic and The Real (RSI). The three divisions in their varying emphases also correspond roughly to the development of Lacan's thought. As he himself put it in Seminar XXII, "I began with the Imaginary, I then had to chew on the story of the Symbolic...and I finished by putting out for you this famous Real".
Lacan's early psychoanalytic period spans the 1930s and 1940s. His contributions from this period centered on the questions of image, identification and unconscious fantasy. Developing Henri Wallon's concept of infant mirroring, he used the idea of the mirror stage to demonstrate the imaginary nature of the ego, in opposition to the views of ego psychology.
In the fifties, the focus of Lacan's interest shifted to the symbolic order of kinship, culture, social structure and roles—all mediated by the acquisition of language—into which each one of us is born and with which we all have to come to terms.
The focus of therapy became that of dealing with disruptions, on the part of the Imaginary, of the structuring role played by the signifier/Other/Symbolic Order.
Lacan's approach to psychoanalysis created a dialectic between Freud's thinking and that of both Structuralist thinkers such as Ferdinand de Saussure, as well as with Heidegger, Hegel and other continental philosophers.
The sixties saw Lacan's attention increasingly focused on what he termed the Real—not external consensual reality, but rather that unconscious element in the personality, linked to trauma, dream and the drive, which resists signification.
The Real was what was lacking or absent from every totalising structural theory; in the form of jouissance, and the persistence of the symptom or sinthome, it marked Lacan's shifting of psychoanalysis from modernity to postmodernity.
The Real, together with the Imaginary and the Symbolic, came to form a triad of "elementary registers". Lacan believed these three concepts were inseparably intertwined, and by the 1970s they were an integral part of his thought.
Multiple "Lacanianisms"
Lacan's thinking was intimately geared not only to the work of Freud but to that of the most prominent of his psychoanalytic successors—Heinz Hartmann, Melanie Klein, Michael Balint, D. W. Winnicott and more. With Lacan's break with official psychoanalysis in 1963–1964, however, a tendency developed to look for a pure, self-contained Lacanianism, without psychoanalytic trappings. Jacques-Alain Miller's index to Écrits had already written of "the Lacanian epistemology...the analytic experience (in its Lacanian definition...)"; and where the old guard of first-generation disciples like Serge Leclaire continued to stress the importance of the re-reading of Freud, the new recruits of the sixties and seventies favoured instead an ahistorical Lacan, systematised after the event into a rigorous if over-simplified theoretical whole.
Three main phases may be identified in Lacan's mature work: his Fifties exploration of the Imaginary and the Symbolic; his concern with the Real and the lost object of desire, the objet petit a, during the Sixties; and a final phase highlighting jouissance and the mathematical formulation of psychoanalytic teaching.
As Lacan developed his distinctive style of teaching in the fifties, based on a linguistic reading of Freud, so too he built up a substantial following within the Société Française de Psychanalyse [SFP], with Serge Leclaire only the first of many French "Lacanians". It was this phase of his teaching that was memorialised in Écrits, and which first found its way into the English-speaking world, where more Lacanians were thus to be found in English or philosophy departments than in clinical practice.
However, the very extent of Lacan's following raised serious criticisms: he was accused both of abusing the positive transference to tie his analysands to himself, and of magnifying their numbers by the use of shortened analytic sessions. The questionable nature of his following was one of the reasons for his failure to gain recognition from the International Psychoanalytical Association for the French form of Freudianism that was "Lacanianism", a failure that led to his founding the École Freudienne de Paris (EFP) in 1964. Many of his closest and most creative followers, such as Jean Laplanche, chose the IPA over Lacan at this point, in the first of many subsequent Lacanian schisms.
Lacan's 1973 Letter to the Italians nominated Muriel Drazien, Giacomo Contri and Armando Verdiglione to carry his teaching in Italy.
As a body of thought, Lacanianism began to make its way into the English-speaking world from the sixties onwards, influencing film theory, feminist thought, queer theory, and psychoanalytic criticism (J. Childers and G. Hentzi, The Columbia Dictionary of Modern Literary and Cultural Criticism, 1996, pp. 246–48, 270), as well as politics and social sciences, primarily through the concepts of the Imaginary and the Symbolic. As the role of the Real and of jouissance in opposing structure became more widely recognised, however, so too Lacanianism developed as a tool for the exploration of the divided subject of postmodernity.
Since Lacan's death, however, much of the public attention focused on his work has declined. Lacan was always criticised for an obscurantist writing style, and many of his disciples simply replicated the mystificatory elements in his work (in a sort of transferential identification) without his freshness.
Where interest in Lacanianism did revive in the 21st century, it was in large part the work of figures like Slavoj Žižek who have been able to use Lacan's thought for their own intellectual ends, without the sometimes stifling orthodoxy of many of the formal Lacanian traditions. The continued influence of Lacanianism is thus paradoxically strongest in those who seem to have embraced Malcolm Bowie's recommendation: "learn to unlearn the Lacanian idiom in the way Lacan unlearns the Freudian idiom".
During Lacan's lifetime
Élisabeth Roudinesco has suggested that, after the founding of the EFP, "the history of psychoanalysis in France became subordinate to that of Lacanianism...the Lacanian movement occupied thereafter the motor position in relation to which the other movements were obliged to determine their course". There was certainly a large expansion in the numbers of the school, if arguably at the expense of quality, as a flood of psychologists submerged the analysts who had come with Lacan from the SFP. Protests against the new regime reached a head with the introduction of the self-certifying 'passe' to analytic status, and old comrades such as François Perrier broke away in the bitter schism of 1968 to found the Quatrième Groupe.
However, major divisions remained within the EFP, which underwent another split over the question of analytic qualifications. There remained within the movement a broad division between the old guard of first-generation Lacanians, focused on the symbolic (on the study of Freud through the structural linguistic tools of the fifties), and the younger group of mathematicians and philosophers centred on Jacques-Alain Miller, who favoured a self-contained Lacanianism, formalised and free of its Freudian roots.
As Lacan spoke during the seventies of the mathematicisation of psychoanalysis and coined the term 'matheme' to describe its formulaic abstraction, so Leclaire brusquely dismissed the new formulas as "graffiti". Nevertheless, despite these and other tensions, the EFP held together under the charisma of its Master until, despairing of his followers, Lacan himself dissolved the school in 1980, the year before his death.
Post-Lacanianism
The start of the eighties saw the post-Lacanian movement dissolve into a plethora of new organisations, of which the Millerite École de la Cause freudienne (ECF, 273 members) and the Centre de formation et de recherches psychanalytiques (CFRP, 390 members) were perhaps the most important. By 1993 another fourteen associations had grown out of the former EFP; nor did the process stop there. Early resignations and splits from the ECF were followed in the late 1990s by a massive exodus of analysts worldwide from Miller's organisation amid allegations of misuse of authority.
Attempts were made to re-unite the various factions, Leclaire arguing that Lacanianism was "becoming ossified, stiffening into a kind of war of religion, into theoretical debates that no longer contribute anything new". But with French Lacanianism in particular haunted by a past of betrayals and conflict, by faction after faction claiming its segment of Lacanian thought as the only genuine one, reunification of any kind has proven very problematic; and Roudinesco was perhaps correct to conclude that "Lacanianism, born of subversion and a wish to transgress, is essentially doomed to fragility and dispersal".
Contemporary Lacanianism
Three main divisions can be made in contemporary Lacanianism.
In one form, the academic reading of a de-clinicalised Lacan has become a pursuit in itself.
The (self-styled) legitimism of the ECF, which developed into an international movement with strong Spanish support as well as Latin American roots, set itself up as a rival to the IPA.
The third form is a plural Lacanianism, best epitomised in the moderate CFRP, with its abandonment of the passe and openness to traditional psychoanalysis, and (after the 1995 dissolution) in its two successors.
Attempts to rejoin the IPA remain problematic, however, not least due to the persistence of the 'short session' and of Lacan's rejection of countertransference as a therapeutic tool.
Schools of thought
Gender theory
Judith Butler, Bracha L. Ettinger and Jane Gallop have used Lacanian work, though in a critical way, to develop gender theory.
Criticism
Deleuzoguattarian
Gilles Deleuze and Félix Guattari, the latter a trained Lacanian analyst, launched a major attack on Lacanian psychoanalysis from within post-structuralism in Anti-Oedipus: Capitalism and Schizophrenia (1972). Frederick Crews writes that when they "indicted Lacanian psychoanalysis as a capitalist disorder" and "pilloried analysts as the most sinister priest-manipulators of a psychotic society" in Anti-Oedipus, their "demonstration was widely regarded as unanswerable" and "devastated the already shrinking Lacanian camp in Paris."
The Deleuzoguattarian critique of Lacanianism attacks its conception of desire as "negative", in that it results from a lack in the subject, and its belief that the unconscious mind is "structured like a language". Deleuze and Guattari argued that the unconscious mind was schizophrenic, characterised by rhizomes of libidinal investment, and that desire was a creative force powering the essential building blocks of psychical structures, desiring-machines. The networks of signifiers to which so much weight is given in Lacanianism are, on this account, structures created by desiring-machines, above the level of the unconscious. Hence Lacanian analysis, like Freud's classical psychoanalysis, works to treat neurosis but fails to see that neuroses are a second-order problem revealing nothing about the unconscious.
Deleuze and Guattari proposed an alternative post-structuralist extension of classical psychoanalysis, schizoanalysis, which was defined in opposition to these apparent flaws in Lacanianism. Unlike Lacanianism, schizoanalysis openly repudiates parts of Freud, particularly his neurotic conception of the unconscious, and Deleuze and Guattari insisted that it was distinct from psychoanalysis. Schizoanalysis was further elaborated on in A Thousand Plateaus (1980) and Guattari's individual work in the 1980s and early 90s.
Feminist
Elizabeth Grosz accuses Lacan of maintaining a sexist tradition in psychoanalysis.
Luce Irigaray accuses Lacan of perpetuating phallocentric mastery in philosophical and psychoanalytic discourse. Others have echoed this accusation, seeing Lacan as trapped in the very phallocentric mastery his language ostensibly sought to undermine. The result—Cornelius Castoriadis would maintain—was to make all thought depend upon himself, and thus to stifle the capacity for independent thought among all those around him.
See also
Aphanisis
Four discourses
Freudo-Marxism
Male gaze
Name of the Father
Screen theory
World Association of Psychoanalysis
Notable Lacanians
References
Further reading
David Macey, Lacan in Contexts (1988)
Marcelle Marini, Jacques Lacan: The French Context (1992)
Élisabeth Roudinesco, Jacques Lacan & Co: A History of Psychoanalysis in France (1990)
Jean-Michel Rabaté, Jacques Lacan: Psychoanalysis and the Subject of Literature (2001)
External links
Practice
École de la Cause freudienne
World Association of Psychoanalysis
CFAR – The Centre for Freudian Analysis and Research. London-based Lacanian psychoanalytic training agency
Homepage of the Lacanian School of Psychoanalysis and the San Francisco Society for Lacanian Studies
The London Society of the New Lacanian School. Site includes online library of clinical & theoretical texts
The Freudian School of Melbourne, School of Lacanian Psychoanalysis – Clinical and theoretical teaching and training of psychoanalysts
Theory
Lacan Dot Com
Links about Jacques Lacan at Lacan.com
"How to Read Lacan" by Slavoj Zizek – full version
Jacques Lacan at The Internet Encyclopedia of Philosophy
LacanOnline.com
Structuralism
Post-structuralism
Psychoanalytic schools
Jacques Lacan
Freudian psychology
Theory X and Theory Y
Theory X and Theory Y are theories of human work motivation and management. They were created by Douglas McGregor while he was working at the MIT Sloan School of Management in the 1950s, and developed further in the 1960s. McGregor's work was rooted in motivation theory alongside the works of Abraham Maslow, who created the hierarchy of needs. The two theories proposed by McGregor describe contrasting models of workforce motivation applied by managers in human resource management, organizational behavior, organizational communication and organizational development. Theory X explains the importance of heightened supervision, external rewards, and penalties, while Theory Y highlights the motivating role of job satisfaction and encourages workers to approach tasks without direct supervision. Management use of Theory X and Theory Y can affect employee motivation and productivity in different ways, and managers may choose to incorporate strategies from both theories into their practices.
McGregor and Maslow
McGregor's Theory X and Theory Y and Maslow's hierarchy of needs are both rooted in motivation theory. Maslow's hierarchy of needs consists of physiological needs (lowest level), safety needs, love needs, esteem needs, and self-actualization (highest level). According to Maslow, a human is motivated by the level they have not yet reached, and self-actualization cannot be met until each of the lower levels has been fulfilled. Assumptions of Theory Y, in relation to Maslow's hierarchy, put an emphasis on employees' higher-level needs, such as esteem needs and self-actualization.
McGregor also believed that self-actualization was the highest level of reward for employees. He theorized that the motivation employees use to reach self-actualization allows them to reach their full potential. This led companies to focus on how their employees were motivated, managed, and led, creating a Theory Y management style which focuses on the drive for individual self-fulfillment. McGregor's perspective places the responsibility for performance on managers as well as subordinates.
Theory X
Theory X is based on negative assumptions regarding the typical worker. This management style assumes that the typical worker has little ambition, avoids responsibility, and is individual-goal oriented. In general, Theory X style managers believe their employees are less intelligent, lazier, and work solely for a sustainable income. Management believes employees' work is based on their own self-interest. Managers who believe employees operate in this manner are more likely to use rewards or punishments as motivation. Due to these assumptions, Theory X concludes the typical workforce operates more efficiently under a hands-on approach to management. Theory X managers believe all actions should be traceable to the individual responsible. This allows the individual to receive either a direct reward or a reprimand, depending on the outcome's positive or negative nature. This managerial style is more effective when used in a workforce that is not intrinsically motivated to perform.
According to McGregor, there are two opposing approaches to implementing Theory X: the hard approach and the soft approach. The hard approach depends on close supervision, intimidation, and immediate punishment. This approach can potentially yield a hostile, minimally cooperative workforce and resentment towards management. Managers are always looking for mistakes from employees, because they do not trust their work. Theory X is a "we versus they" approach, meaning it is the management versus the employees.
The soft approach is characterized by leniency and less strict rules in the hope of creating high workplace morale and cooperative employees. Implementing a system that is too soft could result in an entitled, low-output workforce. McGregor believes both ends of the spectrum are too extreme for efficient real-world application. Instead, McGregor feels that an approach located in the middle would be the most effective implementation of Theory X.
Because managers and supervisors are in almost complete control of the work, this produces a more systematic and uniform product or work flow. Theory X can benefit a work place that utilizes an assembly line or manual labor. Using this theory in these types of work conditions allows employees to specialize in particular work areas which in turn allows the company to mass-produce a higher quantity and quality of work.
Theory Y
Theory Y is based on positive assumptions regarding the typical worker. Theory Y managers assume employees are internally motivated, enjoy their job, and work to better themselves without a direct reward in return. These managers view their employees as one of the most valuable assets to the company, driving the internal workings of the corporation. Employees additionally tend to take full responsibility for their work and do not need close supervision to create a quality product. It is important to note, however, that before an employee carries out their task, they must first obtain the manager's approval. This ensures work stays efficient, productive, and in-line with company standards.
Theory Y managers gravitate towards relating to the worker on a more personal level, as opposed to a more directive, teaching-based relationship. As a result, Theory Y followers may have a better relationship with their boss, creating a healthier atmosphere in the workplace. In comparison to Theory X, Theory Y incorporates a pseudo-democratic environment into the workforce. This allows the employee to design, construct, and publish their work in a timely manner, in accordance with their workload and projects.
Although Theory Y encompasses creativity and discussion, it does have limitations. While there is a more personal and individualistic feel, this leaves room for error in terms of consistency and uniformity. The workplace lacks unvarying rules and practices, which could potentially be detrimental to the quality standards of the product and strict guidelines of a given company.
Theory Z
Humanistic psychologist Abraham Maslow, upon whose work McGregor drew for Theories X and Y, went on to propose his own model of workplace motivation, Theory Z. Unlike Theories X and Y, Theory Z recognizes a transcendent dimension to work and worker motivation. An optimal managerial style would help cultivate worker creativity, insight, meaning and moral excellence.
Choosing a management style
For McGregor, Theory X and Theory Y are not opposite ends of the same continuum, but rather two different continua in themselves. In order to achieve the most efficient production, a combination of both theories may be appropriate. This approach is derived from Fred Fiedler's research on various leadership styles, known as contingency theory. This theory states that managers evaluate the workplace and choose their leadership style based upon both internal and external conditions presented. Managers who choose the Theory X approach have an authoritarian style of management. An organization with this style of management is made up of several levels of supervisors and managers who actively intervene and micromanage the employees. On the contrary, managers who choose the Theory Y approach have a hands-off style of management. An organization with this style of management encourages participation and values individuals' thoughts and goals. However, because there is no optimal way for a manager to choose between adopting either Theory X or Theory Y, it is likely that a manager will need to adopt both approaches depending on the evolving circumstances and levels of internal and external locus of control throughout the workplace.
Military command and control
Theory X and Theory Y also have implications in military command and control (C2). Older, strictly hierarchical conceptions of C2, with narrow centralization of decision rights, highly constrained patterns of interaction, and limited information distribution tend to arise from cultural and organizational assumptions compatible with Theory X. On the other hand, more modern, network-centric, and decentralized concepts of C2, that rely on individual initiative and self-synchronization, tend to arise more from a "Theory Y" philosophy. Mission Command, for example, is a command philosophy to which many modern military establishments aspire, and which involves individual judgment and action within the overall framework of the commander's intent. Its assumptions about the value of individual initiative make it more a Theory-Y than a Theory X philosophy.
See also
Outline of management
Scientific management
References
External links
A diagram representing Theory X and Theory Y, Alan Chapman, 2002.
Another diagram representing Theory X and Theory Y
Organizational behavior
Motivational theories
Human resource management
Cognitive revolution
The cognitive revolution was an intellectual movement that began in the 1950s as an interdisciplinary study of the mind and its processes, from which emerged a new field known as cognitive science. The preexisting relevant fields were psychology, linguistics, computer science, anthropology, neuroscience, and philosophy. The approaches used were developed within the then-nascent fields of artificial intelligence, computer science, and neuroscience. In the 1960s, the Harvard Center for Cognitive Studies and the Center for Human Information Processing at the University of California, San Diego were influential in developing the academic study of cognitive science. By the early 1970s, the cognitive movement had surpassed behaviorism as a psychological paradigm. Furthermore, by the early 1980s the cognitive approach had become the dominant line of research inquiry across most branches in the field of psychology.
A key goal of early cognitive psychology was to apply the scientific method to the study of human cognition. Some of the main ideas and developments from the cognitive revolution were the use of the scientific method in cognitive science research, the necessity of mental systems to process sensory input, the innateness of these systems, and the modularity of the mind. Publications important in triggering the cognitive revolution include psychologist George Miller's 1956 article "The Magical Number Seven, Plus or Minus Two" (one of the most frequently cited papers in psychology), linguist Noam Chomsky's Syntactic Structures (1957) and "Review of B. F. Skinner's Verbal Behavior" (1959), and foundational works in the field of artificial intelligence by John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, such as the 1958 article "Elements of a Theory of Human Problem Solving". Ulric Neisser's 1967 book Cognitive Psychology was also a landmark contribution.
Historical background
Prior to the cognitive revolution, behaviorism was the dominant trend in psychology in the United States. Behaviorists were interested in "learning", which was seen as "the novel association of stimuli with responses." Animal experiments played a significant role in behaviorist research, and prominent behaviorist J. B. Watson, interested in describing the responses of humans and animals as one group, stated that there was no need to distinguish between the two. Watson hoped to learn to predict and control behavior through his research. The popular Hull-Spence stimulus-response approach was, according to George Mandler, impossible to use to research topics that held the interest of cognitive scientists, like memory and thought, because both the stimulus and the response were thought of as completely physical events. Behaviorists typically did not research these subjects. B. F. Skinner, a functionalist behaviorist, criticized certain mental concepts like instinct as "explanatory fiction(s)", ideas that assume more than humans actually know about a mental concept. Various types of behaviorists had different views on the exact role (if any) that consciousness and cognition played in behavior. Although behaviorism was popular in the United States, Europe was not particularly influenced by it, and research on cognition could easily be found in Europe during this time.
Noam Chomsky has framed the cognitive and behaviorist positions as rationalist and empiricist, respectively, which are philosophical positions that arose long before behaviorism became popular and the cognitive revolution occurred. Empiricists believe that humans acquire knowledge only through sensory input, while rationalists believe that there is something beyond sensory experience that contributes to human knowledge. However, whether Chomsky's position on language fits into the traditional rationalist approach has been questioned by philosopher John Cottingham.
George Miller, one of the scientists involved in the cognitive revolution, sets the date of its beginning as September 11, 1956, when several researchers from fields like experimental psychology, computer science, and theoretical linguistics presented their work on cognitive science-related topics at a meeting of the 'Special Interest Group in Information Theory' at the Massachusetts Institute of Technology. This interdisciplinary cooperation went by several names like cognitive studies and information-processing psychology but eventually came to be known as cognitive science. Grants from the Alfred P. Sloan Foundation in the 1970s advanced interdisciplinary understanding in the relevant fields and supported the research that led to the field of cognitive neuroscience.
Main ideas
George Miller states that six fields participated in the development of cognitive science: psychology, linguistics, computer science, anthropology, neuroscience, and philosophy, with the first three playing the main roles.
Scientific method
A key goal of early cognitive psychology was to apply the scientific method to the study of human cognition. This was done by designing experiments that used computational models of artificial intelligence to systematically test theories about human mental processes in a controlled laboratory setting.
Mediation and information processing
When defining the "Cognitive Approach," Ulric Neisser says that humans can only interact with the "real world" through intermediary systems that process information like sensory input. As understood by a cognitive scientist, the study of cognition is the study of these systems and the ways they process information from the input. The processing includes not just the initial structuring and interpretation of the input but also the storage and later use.
Steven Pinker claims that the cognitive revolution bridged the gap between the physical world and the world of ideas, concepts, meanings and intentions. It unified the two worlds with a theory that mental life can be explained in terms of information, computation and feedback.
Innateness
In his 1975 book Reflections on Language, Noam Chomsky questions how humans can know so much, despite relatively limited input. He argues that they must have some kind of innate, domain-specific learning mechanism that processes input. Chomsky observes that physical organs do not develop based on their experience, but based on some inherent genetic coding, and wrote that the mind should be treated the same way. He says that there is no question that there is some kind of innate structure in the mind, but it is less agreed upon whether the same structure is used by all organisms for different types of learning. He compares humans to rats in the task of maze running to show that the same learning theory cannot be used for different species: if it could, the two species would be equally good at what they are learning, which is not the case. He also says that even within humans, using the same learning theory for multiple types of learning could be possible, but there is no solid evidence to suggest it. He proposes a hypothesis that claims that there is a biologically based language faculty that organizes the linguistic information in the input and constrains human language to a set of particular types of grammars. He introduces universal grammar, a set of inherent rules and principles that all humans have to govern language, and says that the components of universal grammar are biological. To support this, he points out that children seem to know that language has a hierarchical structure, and they never make mistakes that one would expect from a hypothesis that language is linear.
Steven Pinker has also written on this subject from the perspective of modern-day cognitive science. He says that modern cognitive scientists, like figures in the past such as Gottfried Wilhelm Leibniz (1646-1716), don't believe in the idea of the mind starting as a "blank slate". Though they dispute where to draw the line between nature and nurture, they all believe that learning is based on something innate to humans; without this innateness, there would be no learning process. He points out that human behavior is open-ended even though the underlying biological machinery is finite. An example of this from linguistics is the fact that humans can produce infinite sentences, most of which are brand new to the speakers themselves, even though the words and phrases they have heard are not infinite.
Pinker, who agrees with Chomsky's idea of innate universal grammar, claims that although humans speak around six thousand mutually unintelligible languages, the grammatical programs in their minds differ far less than the actual speech. Many different languages can be used to convey the same concepts or ideas, which suggests there may be a common ground for all the languages.
Modularity of the mind
Pinker claims another important idea from the cognitive revolution was that the mind is modular, with many parts cooperating to generate a train of thought or an organized action. It has different distinct systems for different specific missions. Behaviors can vary across cultures, but the mental programs that generate them need not vary.
Criticism
There have been criticisms of the typical characterization of the shift from behaviorism to cognitivism.
Henry L. Roediger III argues that the common narrative most people believe about the cognitive revolution is inaccurate. The narrative he describes states that psychology started out well but lost its way and fell into behaviorism, but this was corrected by the Cognitive Revolution, which essentially put an end to behaviorism. He claims that behavior analysis is actually still an active area of research that produces successful results in psychology and points to the Association for Behavior Analysis International as evidence. He claims that behaviorist research is responsible for successful treatments of autism, stuttering, and aphasia, and that most psychologists actually study observable behavior, even if they interpret their results cognitively. He believes that the change from behaviorism to cognitivism was gradual, slowly evolving by building on behaviorism.
Lachman and Butterfield were among the first to imply that cognitive psychology has a revolutionary origin. Thomas H. Leahey has criticized the idea that the introduction of behaviorism and the cognitive revolution were actually revolutions and proposed an alternative history of American psychology as "a narrative of research traditions."
Other authors criticize behaviorism, but they also criticize the cognitive revolution for having adopted new forms of anti-mentalism.
Cognitive psychologist Jerome Bruner criticized the adoption of the computational theory of mind and the exclusion of meaning from cognitive science, and he characterized one of the primary objects of the cognitive revolution as changing the study of psychology so that meaning was its core.
His understanding of the cognitive revolution revolves entirely around "meaning-making" and the hermeneutic description of how people go about this. He believes that the cognitive revolution steered psychology away from behaviorism and this was good, but then another form of anti-mentalism took its place: computationalism. Bruner states that the cognitive revolution should replace behaviorism rather than only modify it.
Neuroscientist Gerald Edelman argues in his book Bright Air, Brilliant Fire (1991) that a positive result of the emergence of "cognitive science" was the departure from "simplistic behaviorism". However, he adds, a negative result was the growing popularity of a total misconception of the nature of thought: the computational theory of mind or cognitivism, which asserts that the brain is a computer that processes symbols whose meanings are entities of the objective world. In this view, the symbols of the mind correspond exactly to entities or categories in the world defined by criteria of necessary and sufficient conditions, that is, classical categories. The representations would be manipulated according to certain rules that constitute a syntax.
Edelman rejects the idea that objects of the world come in classical categories, and also rejects the idea that the brain/mind is a computer. He rejects behaviorism (a point he also makes in his 2006 book Second Nature: Brain Science and Human Knowledge), but also cognitivism (the computational-representational theory of the mind), since the latter conceptualizes the mind as a computer and meaning as objective correspondence. Furthermore, Edelman criticizes "functionalism", the idea that formal and abstract functional properties of the mind can be analyzed without making direct reference to the brain and its processes.
Edelman asserts that most of those who work in the field of cognitive psychology and cognitive science seem to adhere to this computational view, but he mentions some important exceptions. Exceptions include John Searle, Jerome Bruner, George Lakoff, Ronald Langacker, Alan Gauld, Benny Shanon, Claes von Hofsten, and others. Edelman argues that he agrees with the critical and dissenting approaches of these authors that are exceptions to the majority view of cognitivism.
Perceptual symbols, imagery and the cognitive neuroscience revolution
In their paper "The cognitive neuroscience revolution", Gualtiero Piccinini and Worth Boone argue that cognitive neuroscience emerged as a discipline in the late 1980s. Prior to that time, cognitive science and neuroscience had largely developed in isolation. Cognitive science developed between the 1950s and 1970s as an interdisciplinary field composed primarily of aspects of psychology, linguistics, and computer science. However, both classical symbolic computational theories and connectionist models developed largely independently of biological considerations. The authors argue that connectionist models were closer to symbolic models than to neurobiology.
Piccinini and Boone state that a revolutionary change is currently taking place: the move from cognitive science (autonomous from neuroscience) to cognitive neuroscience. The authors point out that many researchers who previously carried out psychological and behavioral studies now give properly cognitive neuroscientific explanations. They mention the example of Stephen Kosslyn, who postulated his theory of the pictorial format of mental images in the 1980s based on behavioral studies. Later, with the advent of magnetic resonance imaging technology, Kosslyn was able to show that when people imagine, the visual cortex is activated. This lent strong neuroscientific evidence to his theory of the pictorial format, refuting speculations about a supposed non-pictorial format of mental images.
Neuroscientist Joseph LeDoux argues in his book The Emotional Brain that cognitive science emerged around the middle of the 20th century and is often described as 'the new science of the mind.' In fact, however, cognitive science is a science of only one part of the mind, the part that has to do with thinking, reasoning, and intellect; it leaves emotions out. "And minds without emotions are not really minds at all…"
Psychologist Lawrence Barsalou argues that human cognitive processing involves the simulation of perceptual, motor, and emotional states. The classical, 'intellectualist' view of cognition holds that it is essentially the processing of propositional information of a verbal or numerical type. Barsalou's theory, by contrast, explains human conceptual processing through the activation of regions of the sensory cortices of different modalities, as well as of the motor cortex, and through the simulation of embodied experiences (visual, auditory, emotional, motor) that ground meaning in experience situated in the world.
Modal symbols are those analogical mental representations linked to a specific sensory channel: for example, the representation of 'dog' through a visual image similar to a dog or through an auditory image of the barking of dogs, based on the memory of the experiences of seeing a dog or hearing its barking. Lawrence Barsalou's 'perceptual symbols' theory asserts that mental processes operate with modal symbols that maintain the sensory properties of perceptual experiences.
According to Barsalou (2020), the "grounded cognition" perspective in which his theory is framed asserts that cognition emerges from the interaction between amodal symbols, modal symbols, the body and the world. This perspective therefore does not rule out 'classical' amodal symbols, such as those typical of verbal language or numerical reasoning, but rather considers that these interact with imagination, perception and action situated in the world.
See also
Digital infinity
Embodied cognition
Enactivism (psychology)
Human factors
Postcognitivism
Notes
References
Bruner, J. S. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
Mandler, G. (2007) A history of modern experimental psychology: From James and Wundt to cognitive science. Cambridge, MA: MIT Press.
Further reading
Books
Baars, Bernard J. (1986). The cognitive revolution in psychology. Guilford Press, New York.
Gardner, Howard (1986). The mind's new science: a history of the cognitive revolution. Basic Books, New York; reissued in 1998 with an epilogue by the author: "Cognitive science after 1984".
Johnson, David Martel and Erneling, Christina E. (1997). The future of the cognitive revolution. Oxford University Press, New York.
LePan, Don (1989). The cognitive revolution in Western culture. Macmillan, Basingstoke, England.
Murray, David J. (1995). Gestalt psychology and the cognitive revolution. Harvester Wheatsheaf, New York.
Olson, David R. (2007). Jerome Bruner: the cognitive revolution in educational theory. Continuum, London.
Richardson, Alan and Steen, Francis F. (editors) (2002). Literature and the cognitive revolution. Duke University Press, Durham, North Carolina (being Poetics Today 23(1)).
Royer, James M. (2005). The cognitive revolution in educational psychology. Information Age Publishing, Greenwich, Connecticut.
Simon, Herbert A. et al. (1992). Economics, bounded rationality and the cognitive revolution. E. Elgar, Aldershot, England.
Todd, James T. and Morris, Edward K. (editors) (1995). Modern perspectives on B. F. Skinner and contemporary behaviorism (Series: Contributions in Psychology, no. 28). Greenwood Press, Westport, Connecticut.
Articles
Pinker, Steven (2011) "The Cognitive Revolution" Harvard Gazette
Cognitive psychology
Cognitive science
History of psychology
Philosophical schools and traditions
Revolutions by type
Western culture
Zoology
Zoology is the scientific study of animals. Its studies include the structure, embryology, classification, habits, and distribution of all animals, both living and extinct, and how they interact with their ecosystems. Zoology is one of the primary branches of biology. The term is derived from Ancient Greek ζῷον (zōion, 'animal') and λόγος (logos, 'knowledge', 'study').
Although humans have always been interested in the natural history of the animals they saw around them, and used this knowledge to domesticate certain species, the formal study of zoology can be said to have originated with Aristotle. He viewed animals as living organisms, studied their structure and development, and considered their adaptations to their surroundings and the function of their parts. Modern zoology has its origins during the Renaissance and early modern period, with Carl Linnaeus, Antonie van Leeuwenhoek, Robert Hooke, Charles Darwin, Gregor Mendel and many others.
The study of animals has largely moved on to deal with form and function, adaptations, relationships between groups, behaviour and ecology. Zoology has increasingly been subdivided into disciplines such as classification, physiology, biochemistry and evolution. With the discovery of the structure of DNA by Francis Crick and James Watson in 1953, the realm of molecular biology opened up, leading to advances in cell biology, developmental biology and molecular genetics.
History
The history of zoology traces the study of the animal kingdom from ancient to modern times. Prehistoric people needed to study the animals and plants in their environment to exploit them and survive. Cave paintings, engravings and sculptures in France dating back 15,000 years show bison, horses, and deer in carefully rendered detail. Similar images from other parts of the world illustrated mostly the animals hunted for food and dangerous wild animals.
The Neolithic Revolution, which is characterized by the domestication of animals, continued throughout Antiquity. Ancient knowledge of wildlife is illustrated by the realistic depictions of wild and domestic animals in the Near East, Mesopotamia, and Egypt, including husbandry practices and techniques, hunting and fishing. The invention of writing is reflected in zoology by the presence of animals in Egyptian hieroglyphics.
Although the concept of zoology as a single coherent field arose much later, the zoological sciences emerged from natural history reaching back to the biological works of Aristotle and Galen in the ancient Greco-Roman world. In the fourth century BC, Aristotle looked at animals as living organisms, studying their structure, development and vital phenomena. He divided them into two groups: animals with blood, equivalent to our concept of vertebrates, and animals without blood, invertebrates. He spent two years on Lesbos, observing and describing the animals and plants, considering the adaptations of different organisms and the function of their parts. Four hundred years later, Roman physician Galen dissected animals to study their anatomy and the function of the different parts, because the dissection of human cadavers was prohibited at the time. This resulted in some of his conclusions being false, but for many centuries it was considered heretical to challenge any of his views, so the study of anatomy stultified.
During the post-classical era, Middle Eastern science and medicine was the most advanced in the world, integrating concepts from Ancient Greece, Rome, Mesopotamia and Persia as well as the ancient Indian tradition of Ayurveda, while making numerous advances and innovations. In the 13th century, Albertus Magnus produced commentaries and paraphrases of all Aristotle's works; his books on topics like botany, zoology, and minerals included information from ancient sources, but also the results of his own investigations. His general approach was surprisingly modern, and he wrote, "For it is [the task] of natural science not simply to accept what we are told but to inquire into the causes of natural things." An early pioneer was Conrad Gessner, whose monumental 4,500-page encyclopedia of animals, Historia animalium, was published in four volumes between 1551 and 1558.
In Europe, Galen's work on anatomy remained largely unsurpassed and unchallenged up until the 16th century. During the Renaissance and early modern period, zoological thought was revolutionized in Europe by a renewed interest in empiricism and the discovery of many novel organisms. Prominent in this movement were Andreas Vesalius and William Harvey, who used experimentation and careful observation in physiology, and naturalists such as Carl Linnaeus, Jean-Baptiste Lamarck, and Buffon who began to classify the diversity of life and the fossil record, as well as studying the development and behavior of organisms. Antonie van Leeuwenhoek did pioneering work in microscopy and revealed the previously unknown world of microorganisms, laying the groundwork for cell theory. Van Leeuwenhoek's observations were endorsed by Robert Hooke; according to cell theory, all living organisms are composed of one or more cells and cannot arise by spontaneous generation. Cell theory provided a new perspective on the fundamental basis of life.
Having previously been the realm of gentlemen naturalists, over the 18th, 19th and 20th centuries, zoology became an increasingly professional scientific discipline. Explorer-naturalists such as Alexander von Humboldt investigated the interaction between organisms and their environment, and the ways this relationship depends on geography, laying the foundations for biogeography, ecology and ethology. Naturalists began to reject essentialism and consider the importance of extinction and the mutability of species.
These developments, as well as the results from embryology and paleontology, were synthesized in the 1859 publication of Charles Darwin's theory of evolution by natural selection; in this Darwin placed the theory of organic evolution on a new footing, by explaining the processes by which it can occur, and providing observational evidence that it had done so. Darwin's theory was rapidly accepted by the scientific community and soon became a central axiom of the rapidly developing science of biology. The basis for modern genetics began with the work of Gregor Mendel on peas in 1865, although the significance of his work was not realized at the time.
Darwin gave a new direction to morphology and physiology, by uniting them in a common biological theory: the theory of organic evolution. The result was a reconstruction of the classification of animals upon a genealogical basis, fresh investigation of the development of animals, and early attempts to determine their genetic relationships. The end of the 19th century saw the fall of spontaneous generation and the rise of the germ theory of disease, though the mechanism of inheritance remained a mystery. In the early 20th century, the rediscovery of Mendel's work led to the rapid development of genetics, and by the 1930s the combination of population genetics and natural selection in the modern synthesis created evolutionary biology.
Research in cell biology is interconnected to other fields such as genetics, biochemistry, medical microbiology, immunology, and cytochemistry. With the determination of the double helical structure of the DNA molecule by Francis Crick and James Watson in 1953, the realm of molecular biology opened up, leading to advances in cell biology, developmental biology and molecular genetics. The study of systematics was transformed as DNA sequencing elucidated the degrees of affinity between different organisms.
Scope
Zoology is the branch of science dealing with animals. A species can be defined as the largest group of organisms in which any two individuals of the appropriate sex can produce fertile offspring; about 1.5 million species of animal have been described and it has been estimated that as many as 8 million animal species may exist.
An early necessity was to identify the organisms and group them according to their characteristics, differences and relationships, and this is the field of the taxonomist. Originally it was thought that species were immutable, but with the arrival of Darwin's theory of evolution, the field of cladistics came into being, studying the relationships between the different groups or clades. Systematics is the study of the diversification of living forms; the evolutionary history of a group is known as its phylogeny, and the relationship between the clades can be shown diagrammatically in a cladogram, as in the sketch below.
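Since a cladogram is simply a tree of nested clades, the idea can be made concrete with a short sketch. The following Python snippet is illustrative only: the simplified tetrapod topology is chosen for this example, and the tree is printed in Newick notation, a common plain-text format for phylogenetic trees.

# A minimal sketch: a cladogram as nested tuples, rendered in Newick
# notation. The topology is a simplified tetrapod cladogram in which
# birds nest within the reptile clade, next to the crocodilians.
def to_newick(node):
    """Recursively render a nested-tuple tree as a Newick string."""
    if isinstance(node, str):  # a leaf is a single taxon name
        return node
    return "(" + ",".join(to_newick(child) for child in node) + ")"

tetrapods = ("frog", ("mammal", ("lizard", ("crocodile", "bird"))))

print(to_newick(tetrapods) + ";")
# -> (frog,(mammal,(lizard,(crocodile,bird))));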
Although someone who made a scientific study of animals would historically have described themselves as a zoologist, the term has come to refer to those who deal with individual animals, with others describing themselves more specifically as physiologists, ethologists, evolutionary biologists, ecologists, pharmacologists, endocrinologists or parasitologists.
Branches of zoology
Although the study of animal life is ancient, its scientific incarnation is relatively modern. This mirrors the transition from natural history to biology at the start of the 19th century. Since Hunter and Cuvier, comparative anatomical study has been associated with morphography, shaping the modern areas of zoological investigation: anatomy, physiology, histology, embryology, teratology and ethology. Modern zoology first arose in German and British universities. In Britain, Thomas Henry Huxley was a prominent figure. His ideas were centered on the morphology of animals. Many consider him the greatest comparative anatomist of the latter half of the 19th century. Similar to Hunter, his courses were composed of lectures and laboratory practical classes in contrast to the previous format of lectures only.
Classification
Scientific classification in zoology is a method by which zoologists group and categorize organisms by biological type, such as genus or species. Biological classification is a form of scientific taxonomy. Modern biological classification has its root in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to improve consistency with the Darwinian principle of common descent. Molecular phylogenetics, which uses nucleic acid sequence as data, has driven many recent revisions and is likely to continue to do so. Biological classification belongs to the science of zoological systematics.
Many scientists now consider the five-kingdom system outdated. Modern alternative classification systems generally start with the three-domain system: Archaea (originally Archaebacteria), Bacteria (originally Eubacteria), and Eukaryota (including protists, fungi, plants, and animals). These domains reflect whether the cells have nuclei or not, as well as differences in the chemical composition of the cell exteriors.
Further, each kingdom is broken down recursively until each species is separately classified. The order is:
Domain; kingdom; phylum; class; order; family; genus; species. The scientific name of an organism is generated from its genus and species. For example, humans are listed as Homo sapiens. Homo is the genus and sapiens the specific epithet; combined, they make up the species name. When writing the scientific name of an organism, it is proper to capitalize the first letter of the genus and put all of the specific epithet in lowercase. Additionally, the entire term may be italicized or underlined.
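These ranking and naming conventions are mechanical enough to express in a short sketch. The following Python snippet is illustrative only and not any standard zoological software: it stores the eight ranks for Homo sapiens as given above and formats the binomial name with the capitalization convention just described.

# A minimal sketch of Linnaean ranks as a data structure. The rank order
# and the Homo sapiens example follow the text above; the function and
# variable names are illustrative, not a standard API.
RANKS = ["domain", "kingdom", "phylum", "class", "order",
         "family", "genus", "species"]

def binomial_name(genus, species):
    """Format a scientific name: capitalized genus, lowercase epithet."""
    return genus.capitalize() + " " + species.lower()

human = {
    "domain": "Eukaryota", "kingdom": "Animalia", "phylum": "Chordata",
    "class": "Mammalia", "order": "Primates", "family": "Hominidae",
    "genus": "Homo", "species": "sapiens",
}

for rank in RANKS:  # print from most to least inclusive rank
    print(rank + ":", human[rank])
print("Scientific name:", binomial_name(human["genus"], human["species"]))  # Homo sapiens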
The dominant classification system is called the Linnaean taxonomy. It includes ranks and binomial nomenclature. The classification, taxonomy, and nomenclature of zoological organisms is administered by the International Code of Zoological Nomenclature. A merging draft, BioCode, was published in 1997 in an attempt to standardize nomenclature, but has yet to be formally adopted.
Vertebrate and invertebrate zoology
Vertebrate zoology is the biological discipline that consists of the study of vertebrate animals, that is, animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. The various taxonomically oriented disciplines, i.e. mammalogy, biological anthropology, herpetology, ornithology, and ichthyology, seek to identify and classify species and study the structures and mechanisms specific to those groups. The rest of the animal kingdom is dealt with by invertebrate zoology, a vast and very diverse group of animals that includes sponges, echinoderms, tunicates, worms, molluscs, arthropods and many other phyla, but single-celled organisms or protists are not usually included.
Structural zoology
Cell biology studies the structural and physiological properties of cells, including their behavior, interactions, and environment. This is done on both the microscopic and molecular levels for single-celled organisms such as bacteria as well as the specialized cells in multicellular organisms such as humans. Understanding the structure and function of cells is fundamental to all of the biological sciences. The similarities and differences between cell types are particularly relevant to molecular biology.
Anatomy considers the forms of macroscopic structures such as organs and organ systems. It focuses on how organs and organ systems work together in the bodies of humans and other animals, in addition to how they work independently. Anatomy and cell biology are two studies that are closely related, and can be categorized under "structural" studies. Comparative anatomy is the study of similarities and differences in the anatomy of different groups. It is closely related to evolutionary biology and phylogeny (the evolution of species).
Physiology
Physiology studies the mechanical, physical, and biochemical processes of living organisms by attempting to understand how all of the structures function as a whole. The theme of "structure to function" is central to biology. Physiological studies have traditionally been divided into plant physiology and animal physiology, but some principles of physiology are universal, no matter what particular organism is being studied. For example, what is learned about the physiology of yeast cells can also apply to human cells. The field of animal physiology extends the tools and methods of human physiology to non-human species. Physiology studies how, for example, the nervous, immune, endocrine, respiratory, and circulatory systems function and interact.
Developmental biology
Developmental biology is the study of the processes by which animals and plants reproduce and grow. The discipline includes the study of embryonic development, cellular differentiation, regeneration, asexual and sexual reproduction, metamorphosis, and the growth and differentiation of stem cells in the adult organism. Development of both animals and plants is further considered in the articles on evolution, population genetics, heredity, genetic variability, Mendelian inheritance, and reproduction.
Evolutionary biology
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. Evolutionary research is concerned with the origin and descent of species, as well as their change over time, and includes scientists from many taxonomically oriented disciplines. For example, it generally involves scientists who have special training in particular organisms such as mammalogy, ornithology, herpetology, or entomology, but use those organisms as systems to answer general questions about evolution.
Evolutionary biology is partly based on paleontology, which uses the fossil record to answer questions about the mode and tempo of evolution, and partly on the developments in areas such as population genetics and evolutionary theory. Following the development of DNA fingerprinting techniques in the late 20th century, the application of these techniques in zoology has increased the understanding of animal populations. In the 1980s, developmental biology re-entered evolutionary biology from its initial exclusion from the modern synthesis through the study of evolutionary developmental biology. Related fields often considered part of evolutionary biology are phylogenetics, systematics, and taxonomy.
Ethology
Ethology is the scientific and objective study of animal behavior under natural conditions, as opposed to behaviorism, which focuses on behavioral response studies in a laboratory setting. Ethologists have been particularly concerned with the evolution of behavior and the understanding of behavior in terms of the theory of natural selection. In one sense, the first modern ethologist was Charles Darwin, whose book, The Expression of the Emotions in Man and Animals, influenced many future ethologists.
A subfield of ethology is behavioral ecology which attempts to answer Nikolaas Tinbergen's four questions with regard to animal behavior: what are the proximate causes of the behavior, the developmental history of the organism, the survival value and phylogeny of the behavior? Another area of study is animal cognition, which uses laboratory experiments and carefully controlled field studies to investigate an animal's intelligence and learning.
Biogeography
Biogeography studies the spatial distribution of organisms on the Earth, focusing on topics like dispersal and migration, plate tectonics, climate change, and cladistics. It is an integrative field of study, uniting concepts and information from evolutionary biology, taxonomy, ecology, physical geography, geology, paleontology and climatology. The origin of this field of study is widely credited to Alfred Russel Wallace, a British biologist who had some of his work jointly published with Charles Darwin.
Molecular biology
Molecular biology studies the common genetic and developmental mechanisms of animals and plants, attempting to answer the questions regarding the mechanisms of genetic inheritance and the structure of the gene. In 1953, James Watson and Francis Crick described the structure of DNA and the interactions within the molecule, and this publication jump-started research into molecular biology and increased interest in the subject. While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry. Much of molecular biology is quantitative, and recently a significant amount of work has been done using computer science techniques such as bioinformatics and computational biology.
Molecular genetics, the study of gene structure and function, has been among the most prominent sub-fields of molecular biology since the early 2000s. Other branches of biology are informed by molecular biology, by either directly studying the interactions of molecules in their own right such as in cell biology and developmental biology, or indirectly, where molecular techniques are used to infer historical attributes of populations or species, as in fields in evolutionary biology such as population genetics and phylogenetics. There is also a long tradition of studying biomolecules "from the ground up", or molecularly, in biophysics.
Reproduction
Animals generally reproduce by sexual reproduction, a process involving the union of a male and a female haploid gamete, each gamete formed by meiosis. Ordinarily, gametes produced by separate individuals unite by a process of fertilization to form a diploid zygote that can then develop into a genetically unique individual progeny. However, some animals are also capable of reproducing parthenogenetically as an alternative reproductive process. Parthenogenesis has been described in snakes and lizards, in amphibians, and in numerous other species. Generally, meiosis in parthenogenetically reproducing animals occurs by a similar process to that in sexually reproducing animals, but the diploid zygote nucleus is generated by the union of two haploid genomes from the same individual rather than from different individuals.
See also
Animal science, the biology of domesticated animals
Astrobiology
Cognitive zoology
Evolutionary biology
List of zoologists
Outline of zoology
Palaeontology
Timeline of zoology
Zoological distribution
Notes
References
External links
Books on Zoology at Project Gutenberg
Online Dictionary of Invertebrate Zoology
Branches of biology
4+1 architectural view model
4+1 is a view model used for "describing the architecture of software-intensive systems, based on the use of multiple, concurrent views". The views are used to describe the system from the viewpoint of different stakeholders, such as end-users, developers, system engineers, and project managers. The four views of the model are the logical, development, process and physical views. In addition, selected use cases or scenarios are used to illustrate the architecture, serving as the 'plus one' view. Hence, the model contains 4+1 views (see the sketch after the list below):
Logical view: The logical view is concerned with the functionality that the system provides to end-users. UML diagrams used to represent the logical view include class diagrams and state diagrams.
Process view: The process view deals with the dynamic aspects of the system, explains the system processes and how they communicate, and focuses on the run-time behavior of the system. The process view addresses concurrency, distribution, integration, performance, and scalability. UML diagrams used to represent the process view include the sequence diagram, communication diagram, and activity diagram.
Development view: The development view (aka the implementation view) illustrates a system from a programmer's perspective and is concerned with software management. UML diagrams used to represent the development view include the package diagram and the component diagram.
Physical view: The physical view (aka the deployment view) depicts the system from a system engineer's point of view. It is concerned with the topology of software components on the physical layer as well as the physical connections between these components. UML diagrams used to represent the physical view include the deployment diagram.
Scenarios: The description of an architecture is illustrated using a small set of use cases, or scenarios, which become a fifth view. The scenarios describe sequences of interactions between objects and between processes. They are used to identify architectural elements and to illustrate and validate the architecture design. They also serve as a starting point for tests of an architecture prototype. This view is also known as the use case view.
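As a rough illustration of how the five views partition an architecture description, the following Python sketch models them as a simple record. It is a sketch only: Kruchten's model prescribes the views themselves, not any particular data structure, and the class and field names here are invented for this example.

# A minimal sketch of a 4+1 architecture description as a data structure.
# Illustrative only: the dataclass and its field names are not part of
# Kruchten's model, which prescribes the views, not a representation.
from dataclasses import dataclass, field

@dataclass
class ArchitectureDescription:
    logical: list = field(default_factory=list)      # end-user functionality (class/state diagrams)
    process: list = field(default_factory=list)      # run-time behavior, concurrency, distribution
    development: list = field(default_factory=list)  # code organization (packages, components)
    physical: list = field(default_factory=list)     # deployment topology
    scenarios: list = field(default_factory=list)    # the "+1": use cases tying the views together

doc = ArchitectureDescription(
    logical=["Order/Customer class diagram"],
    process=["Checkout sequence diagram"],
    development=["billing and inventory package diagram"],
    physical=["Deployment diagram: web tier, app tier, database"],
    scenarios=["Use case: customer places an order"],
)

# Each scenario should be traceable to elements of the four structural views.
print(len(doc.scenarios), "scenario(s) illustrating 4 structural views")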
The 4+1 view model is generic and, as Kruchten notes, is not restricted to any notation, tool or design method.
See also
View model
C4 model
ISO/IEC 42010
References
Software architecture