Piaget's theory of cognitive development

Piaget's theory of cognitive development, or his genetic epistemology, is a comprehensive theory about the nature and development of human intelligence. It was originated by the Swiss developmental psychologist Jean Piaget (1896–1980). The theory deals with the nature of knowledge itself and how humans gradually come to acquire, construct, and use it. Piaget's theory is mainly known as a developmental stage theory.
In 1919, while working at the Alfred Binet Laboratory School in Paris, Piaget "was intrigued by the fact that children of different ages made different kinds of mistakes while solving problems". His experience and observations at the Alfred Binet Laboratory were the beginnings of his theory of cognitive development.
He believed that children of different ages made different mistakes because of the "quality rather than quantity" of their intelligence. Piaget proposed four stages to describe the development process of children: the sensorimotor stage, the pre-operational stage, the concrete operational stage, and the formal operational stage. Each stage describes a specific age group and the cognitive skills children develop within it. For example, he believed that children first experience the world through actions, then represent things with words, then think logically about concrete events, and finally reason abstractly.
To Piaget, cognitive development was a progressive reorganisation of mental processes resulting from biological maturation and environmental experience. He believed that children construct an understanding of the world around them, experience discrepancies between what they already know and what they discover in their environment, then adjust their ideas accordingly. Moreover, Piaget claimed that cognitive development is at the centre of the human organism, and language is contingent on knowledge and understanding acquired through cognitive development. Piaget's earlier work received the greatest attention.
Child-centred classrooms and "open education" are direct applications of Piaget's views. Despite its huge success, Piaget's theory has some limitations that Piaget recognised himself: for example, the theory posits sharp stage transitions rather than continuous development, even though development is often uneven across tasks and domains (horizontal and vertical décalage).
Nature of intelligence: operative and figurative
Piaget argued that reality is a construction. Reality is defined in reference to the two conditions that define dynamic systems. Specifically, he argued that reality involves transformations and states. Transformations refer to all manners of changes that a thing or person can undergo. States refer to the conditions or the appearances in which things or persons can be found between transformations. For example, there might be changes in shape or form (for instance, liquids are reshaped as they are transferred from one vessel to another, and similarly humans change in their characteristics as they grow older), in size (for example, a toddler cannot walk or run without falling, but after about 7 years of age the child's sensorimotor anatomy is well developed and new skills are acquired faster), or in placement or location in space and time (e.g., various objects or persons might be found at one place at one time and at a different place at another time). Thus, Piaget argued, if human intelligence is to be adaptive, it must have functions to represent both the transformational and the static aspects of reality. He proposed that operative intelligence is responsible for the representation and manipulation of the dynamic or transformational aspects of reality, and that figurative intelligence is responsible for the representation of the static aspects of reality.
Operative intelligence is the active aspect of intelligence. It involves all actions, overt or covert, undertaken in order to follow, recover, or anticipate the transformations of the objects or persons of interest. Figurative intelligence is the more or less static aspect of intelligence, involving all means of representation used to retain in mind the states (i.e., successive forms, shapes, or locations) that intervene between transformations. That is, it involves perception, imitation, mental imagery, drawing, and language. Therefore, the figurative aspects of intelligence derive their meaning from the operative aspects of intelligence, because states cannot exist independently of the transformations that interconnect them. Piaget stated that the figurative or the representational aspects of intelligence are subservient to its operative and dynamic aspects, and therefore, that understanding essentially derives from the operative aspect of intelligence.
At any time, operative intelligence frames how the world is understood and it changes if understanding is not successful. Piaget stated that this process of understanding and change involves two basic functions: assimilation and accommodation.
Assimilation and accommodation
Through his study of the field of education, Piaget focused on two processes, which he named assimilation and accommodation. To Piaget, assimilation meant integrating external elements into existing structures of knowledge, whether already formed or acquired through experience. Assimilation is how humans perceive and adapt to new information: it is the process of fitting new information into pre-existing cognitive schemas, reinterpreting new experiences so that they fit with old ideas, and analysing new facts accordingly. It occurs when humans are faced with new or unfamiliar information and refer to previously learned information in order to make sense of it. In contrast, accommodation is the process of taking in new information from one's environment and altering pre-existing schemas in order to fit the new information. This happens when the existing schema (knowledge) does not work and needs to be changed to deal with a new object or situation. Accommodation is imperative because it is how people continue to interpret new concepts, schemas, and frameworks.
Various teaching methods have been developed based on Piaget's insights; they call for the use of questioning and inquiry-based education to help learners confront directly the sorts of contradictions to their pre-existing schemas that are conducive to learning.
Piaget believed that the human brain has been programmed through evolution to seek equilibrium, and that this drive toward equilibrium ultimately shapes cognitive structures through the internal and external processes of assimilation and accommodation.
Piaget's understanding was that assimilation and accommodation cannot exist without each other; they are two sides of the same coin. To assimilate an object into an existing mental schema, one first needs to take into account, or accommodate to, the particularities of this object to a certain extent. For instance, to recognize (assimilate) an apple as an apple, one must first focus (accommodate) on the contour of this object, which requires roughly registering the object's size. Development increases the balance, or equilibration, between these two functions. When in balance with each other, assimilation and accommodation generate mental schemas of the operative intelligence. When one function dominates over the other, they generate representations which belong to figurative intelligence.
Cognitive equilibration
Piaget agreed with most other developmental psychologists in that there are three very important factors that are attributed to development: maturation, experience, and the social environment. But where his theory differs involves his addition of a fourth factor, equilibration, which "refers to the organism's attempt to keep its cognitive schemes in balance".
Equilibration is the motivational element that guides cognitive development. As humans, we have a biological need to make sense of the things we encounter in every aspect of our world in order to build a greater understanding of it and, therefore, to flourish in it. This is where the concept of equilibration comes into play. If a child is confronted with information that does not fit into his or her previously held schemes, disequilibrium is said to occur. This, as one would imagine, is unsatisfactory to the child, so he or she will try to fix it. The incongruence will be fixed in one of three ways: the child will either ignore the newly discovered information, assimilate the information into a preexisting scheme, or accommodate the information by modifying an existing scheme. Any of these methods will return the child to a state of equilibrium; however, depending on the information being presented to the child, that state of equilibrium is not likely to be permanent.
For example, suppose Dave, a three-year-old boy who has grown up on a farm and is accustomed to seeing horses regularly, is brought to the zoo by his parents and sees an elephant for the first time. Immediately he shouts "look mommy, horsey!" Because Dave does not have a scheme for elephants, he interprets the elephant as a horse due to its large size, color, tail, and long face. He believes the elephant is a horse until his mother corrects him. The new information Dave has received has put him in a state of disequilibrium. He now has to do one of three things. He can either: (1) turn his head, move towards another section of animals, and ignore this newly presented information; (2) distort the defining characteristics of an elephant so that he can assimilate it into his "horsey" scheme; or (3) modify his preexisting "animal" schema to accommodate this new information regarding elephants by slightly altering his knowledge of animals as he knows them.
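Viewed abstractly, this three-way resolution resembles a small decision procedure. The following Python sketch is purely illustrative: the feature-set representation of a scheme, the 0.5 discrepancy threshold, and all names are invented here for exposition, not part of Piaget's theory.

```python
# Purely illustrative sketch of equilibration: a scheme is modeled as a set
# of expected features, and a discrepant observation is ignored, assimilated,
# or accommodated. Representation and threshold are hypothetical.

def resolve(scheme: set[str], observation: set[str]) -> tuple[set[str], str]:
    unexplained = observation - scheme
    if not unexplained:
        # Every observed feature already fits: plain assimilation.
        return scheme, "assimilation: observation fits the existing scheme"
    if len(unexplained) / len(observation) < 0.5:
        # Mostly familiar: modify the scheme to absorb the new features.
        return scheme | unexplained, "accommodation: scheme modified"
    # Overwhelmingly unfamiliar: the child may simply disregard it.
    return scheme, "ignored: observation too discrepant to integrate"

horsey = {"large", "four legs", "tail", "long face"}
elephant = {"large", "four legs", "tail", "long face", "trunk", "grey"}
scheme, outcome = resolve(horsey, elephant)
print(outcome)  # accommodation: scheme modified
```

Note that option (2), assimilating by distorting the observation, would correspond to simply dropping the unexplained features rather than integrating them.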
With age comes entry into a higher stage of development, and as children grow older, their previously held schemes are increasingly likely to be confronted with discrepant information. Silverman and Geiringer propose that one would be more successful in attempting to change a child's mode of thought by exposing that child to concepts that reflect a higher rather than a lower stage of development. Furthermore, children are better influenced by modeled performances that are one stage above their developmental level, as opposed to modeled performances that are either lower or two or more stages above their level.
Four stages of development
In his theory of cognitive development, Jean Piaget proposed that humans progress through four developmental stages: the sensorimotor stage, preoperational stage, concrete operational stage, and formal operational stage.
Sensorimotor stage
The first of these, the sensorimotor stage "extends from birth to the acquisition of language". In this stage, infants progressively construct knowledge and understanding of the world by coordinating experiences (such as vision and hearing) from physical interactions with objects (such as grasping, sucking, and stepping). Infants gain knowledge of the world from the physical actions they perform within it. They progress from reflexive, instinctual action at birth to the beginning of symbolic thought toward the end of the stage.
Children learn that they are separate from the environment. They can think about aspects of the environment, even though these may be outside the reach of the child's senses. In this stage, according to Piaget, the development of object permanence is one of the most important accomplishments. Object permanence is a child's understanding that an object continues to exist even though they cannot see or hear it. Peek-a-boo is a game in which children who have yet to fully develop object permanence respond to sudden hiding and revealing of a face. By the end of the sensorimotor period, children develop a permanent sense of self and object and will quickly lose interest in Peek-a-boo.
Piaget divided the sensorimotor stage into six sub-stages.
Preoperational stage
By observing sequences of play, Piaget was able to demonstrate the second stage of his theory, the pre-operational stage. He said that this stage starts towards the end of the second year: it begins when the child starts to learn to speak and lasts up until the age of seven. During the pre-operational stage of cognitive development, Piaget noted that children do not yet understand concrete logic and cannot mentally manipulate information. Children's playing and pretending increase in this stage. However, the child still has trouble seeing things from different points of view. Children's play in this stage is mainly characterized by symbolic play and the manipulation of symbols. Such play is demonstrated by the idea of checkers being snacks, pieces of paper being plates, and a box being a table. This use of symbols exemplifies play in the absence of the actual objects involved.
The pre-operational stage is sparse and logically inadequate in regard to mental operations. The child is able to form stable concepts as well as magical beliefs (magical thinking). The child, however, is still not able to perform operations: tasks that the child can do mentally rather than physically. Thinking in this stage is still egocentric, meaning the child has difficulty seeing the viewpoint of others. The pre-operational stage is split into two substages: the symbolic function substage and the intuitive thought substage. In the symbolic function substage, children are able to understand, represent, remember, and picture objects in their mind without having the object in front of them. In the intuitive thought substage, children tend to ask questions such as "why?" and "how come?"; this is the stage when children want to understand everything.
Symbolic function substage
At about two to four years of age, children cannot yet manipulate and transform information in a logical way. However, they now can think in images and symbols. Other examples of mental abilities are language and pretend play. Symbolic play is when children develop imaginary friends or role-play with friends. Children's play becomes more social and they assign roles to each other. Some examples of symbolic play include playing house, or having a tea party. The type of symbolic play in which children engage is connected with their level of creativity and ability to connect with others. Additionally, the quality of their symbolic play can have consequences on their later development. For example, young children whose symbolic play is of a violent nature tend to exhibit less prosocial behavior and are more likely to display antisocial tendencies in later years.
In this stage, there are still limitations, such as egocentrism and precausal thinking.
Egocentrism occurs when a child is unable to distinguish between their own perspective and that of another person. Children tend to stick to their own viewpoint, rather than consider the view of others. Indeed, they are not even aware that such a concept as "different viewpoints" exists. Egocentrism can be seen in an experiment performed by Piaget and Swiss developmental psychologist Bärbel Inhelder, known as the three mountain problem. In this experiment, three views of a mountain are shown to the child, who is asked what a traveling doll would see at the various angles. The child will consistently describe what they can see from the position from which they are seated, regardless of the angle from which they are asked to take the doll's perspective. Egocentrism would also cause a child to believe, "I like The Lion Guard, so the high school student next door must like The Lion Guard, too."
Similar to preoperational children's egocentric thinking is their structuring of cause-and-effect relationships. Piaget coined the term "precausal thinking" to describe the way in which preoperational children use their own existing ideas or views, as in egocentrism, to explain cause-and-effect relationships. Three main concepts of causality displayed by children in the preoperational stage are animism, artificialism, and transductive reasoning.
Animism is the belief that inanimate objects are capable of actions and have lifelike qualities. An example could be a child believing that the sidewalk was mad and made them fall down, or that the stars twinkle in the sky because they are happy. Artificialism refers to the belief that environmental characteristics can be attributed to human actions or interventions. For example, a child might say that it is windy outside because someone is blowing very hard, or the clouds are white because someone painted them that color. Finally, precausal thinking is also characterized by transductive reasoning, in which a child fails to understand the true relationships between cause and effect. Unlike deductive or inductive reasoning (general to specific, or specific to general), transductive reasoning refers to when a child reasons from specific to specific, drawing a relationship between two separate events that are otherwise unrelated. For example, if a child hears a dog bark and then sees a balloon pop, the child would conclude that the dog's barking made the balloon pop.
Intuitive thought substage
A main feature of the pre-operational stage of development is primitive reasoning. Between the ages of four and seven, reasoning changes from symbolic thought to intuitive thought. This stage is "marked by greater dependence on intuitive thinking rather than just perception." Children begin to have more automatic thoughts that don't require evidence. During this stage there is a heightened sense of curiosity and a need to understand how and why things work. Piaget named this substage "intuitive thought" because children at this age are starting to develop more logical thought but cannot explain their reasoning. Thought during this stage is still immature and cognitive errors occur. Children in this stage depend on their own subjective perception of the object or event. This stage is characterized by centration and irreversibility, and by difficulties with conservation, class inclusion, and transitive inference.
Centration is the act of focusing all attention on one characteristic or dimension of a situation, whilst disregarding all others. Conservation is the awareness that altering a substance's appearance does not change its basic properties. Children at this stage are unaware of conservation and exhibit centration. Both centration and conservation can be more easily understood once familiarized with Piaget's most famous experimental task.
In this task, a child is presented with two identical beakers containing the same amount of liquid. The child usually notes that the beakers do contain the same amount of liquid. When one of the beakers is poured into a taller and thinner container, children who are younger than seven or eight years old typically say that the two beakers no longer contain the same amount of liquid, and that the taller container holds the larger quantity (centration), without taking into consideration the fact that both beakers were previously noted to contain the same amount of liquid. Due to superficial changes, the child was unable to comprehend that the properties of the substances continued to remain the same (conservation).
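In terms of elementary geometry, what the conserving child eventually grasps is that pouring changes the shape of the liquid but not its volume. A worked example with invented dimensions:

\[
V = \pi r^{2} h, \qquad \pi \cdot 4^{2} \cdot 5 = 80\pi \ \text{cm}^{3} = \pi \cdot 2^{2} \cdot h' \;\Rightarrow\; h' = 20 \ \text{cm},
\]

so halving the radius quadruples the height of the column. The liquid looks like "more" precisely because the same volume is redistributed upward, which is exactly what the centrating child misses.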
Irreversibility is a concept developed in this stage which is closely related to the ideas of centration and conservation. Irreversibility refers to when children are unable to mentally reverse a sequence of events. In the same beaker situation, the child does not realize that, if the sequence of events was reversed and the water from the tall beaker was poured back into its original beaker, then the same amount of water would exist. Another example of children's reliance on visual representations is their misunderstanding of "less than" or "more than". When two rows containing equal numbers of blocks are placed in front of a child, one row spread farther apart than the other, the child will think that the row spread farther contains more blocks.
Class inclusion refers to a kind of conceptual thinking that children in the preoperational stage cannot yet grasp. Children's inability to focus on two aspects of a situation at once inhibits them from understanding the principle that one category or class can contain several different subcategories or classes. For example, a four-year-old girl may be shown a picture of eight dogs and three cats. The girl knows what cats and dogs are, and she is aware that they are both animals. However, when asked, "Are there more dogs or animals?" she is likely to answer "more dogs". This is due to her difficulty focusing on the two subclasses and the larger class all at the same time. She may have been able to view the dogs as dogs or animals, but struggled when trying to classify them as both, simultaneously. Similar to this is a concept relating to intuitive thought, known as "transitive inference".
Transitive inference is using previous knowledge to determine the missing piece, using basic logic. Children in the preoperational stage lack this logic. An example of transitive inference would be when a child is presented with the information "A" is greater than "B" and "B" is greater than "C". This child may have difficulty understanding that "A" is also greater than "C".
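As a concrete rendering, transitive inference over a "greater than" relation amounts to computing the transitive closure of the known facts. A minimal Python sketch; the facts and function names are invented for illustration:

```python
# Minimal sketch of transitive inference: deriving A > C from A > B and B > C.
# The facts and names are illustrative only.

def transitive_closure(pairs: set[tuple[str, str]]) -> set[tuple[str, str]]:
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))  # a > b and b > d imply a > d
                    changed = True
    return closure

facts = {("A", "B"), ("B", "C")}  # "A is greater than B", "B is greater than C"
print(("A", "C") in transitive_closure(facts))  # True: the inference the
# preoperational child cannot yet make
```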
Concrete operational stage
The concrete operational stage is the third stage of Piaget's theory of cognitive development. This stage, which follows the preoperational stage, occurs between the ages of 7 and 11 (middle childhood and preadolescence) and is characterized by the appropriate use of logic. During this stage, a child's thought processes become more mature and "adult like". They start solving problems in a more logical fashion. Abstract, hypothetical thinking is not yet developed, and children can only solve problems that apply to concrete events or objects. At this stage, children undergo a transition in which they learn rules such as conservation. Piaget determined that children are able to incorporate inductive reasoning, which involves drawing inferences from observations in order to make a generalization. In contrast, children struggle with deductive reasoning, which involves using a generalized principle in order to try to predict the outcome of an event. Children in this stage commonly experience difficulties with figuring out logic in their heads. For example, a child will understand that "A is more than B" and "B is more than C"; however, when asked "is A more than C?", the child might not be able to work the question out mentally.
Two other important processes in the concrete operational stage are logic and the elimination of egocentrism.
Egocentrism is the inability to consider or understand a perspective other than one's own. It is the phase in which the thought and morality of the child are completely self-focused. During this stage, the child acquires the ability to view things from another individual's perspective, even if they think that perspective is incorrect. For instance, show a child a comic in which Jane puts a doll under a box, leaves the room, and then Melissa moves the doll to a drawer, and Jane comes back. A child in the concrete operations stage will say that Jane will still think it's under the box even though the child knows it is in the drawer. (See also False-belief task.)
Children in this stage can, however, only solve problems that apply to actual (concrete) objects or events, and not abstract concepts or hypothetical tasks. The ability to apply full common-sense reasoning has not yet been completely developed.
Piaget determined that children in the concrete operational stage were able to incorporate inductive logic. On the other hand, children at this age have difficulty using deductive logic, which involves using a general principle to predict the outcome of a specific event. What does develop is mental reversibility: the ability to reverse the order of relationships between mental categories. For example, a child might be able to recognize that his or her dog is a Labrador, that a Labrador is a dog, and that a dog is an animal, and draw conclusions from the information available; applying all these processes to hypothetical situations, however, remains difficult.
The abstract quality of the adolescent's thought at the formal operational level is evident in the adolescent's verbal problem-solving ability. Whereas younger children are more likely to solve problems in a trial-and-error fashion, adolescents begin to think more as a scientist thinks, devising plans to solve problems and systematically testing solutions. They use hypothetical-deductive reasoning, which means that they develop hypotheses or best guesses, and systematically deduce, or conclude, which is the best path to follow in solving the problem. During this stage the adolescent is able to understand love, logical proofs, and values. During this stage the young person begins to entertain possibilities for the future and is fascinated with what they can be.
Adolescents are also changing cognitively in the way they think about social matters. One driver of this change is adolescent egocentrism, which heightens self-consciousness and gives adolescents a sense of their personal uniqueness and invincibility. Adolescent egocentrism can be dissected into two types of social thinking: imaginary audience and personal fable. Imaginary audience is the adolescent's belief that others are watching them and the things they do. Personal fable, often confused with imaginary audience but distinct from it, is the belief that one is exceptional in some way. These types of social thinking begin in the concrete stage but carry on into the formal operational stage of development.
Testing for concrete operations
Piagetian tests are well known and widely used to test for concrete operations. The most prevalent tests are those for conservation. There are some important aspects that the experimenter must take into account when performing experiments with these children.
One example of an experiment for testing conservation is the water level task. An experimenter will have two glasses that are the same size, fill them to the same level with liquid, and make sure the child understands that both of the glasses have the same amount of water in them. Then, the experimenter will pour the liquid from one of the glasses into a tall, thin glass. The experimenter will then ask the child if the taller glass has more liquid, less liquid, or the same amount of liquid. The child will then give his answer. There are three keys for the experimenter to keep in mind with this experiment: justification, number of times asking, and word choice.
Justification: After the child has answered the question being posed, the experimenter must ask why the child gave that answer. This is important because the answers they give can help the experimenter to assess the child's developmental age.
Number of times asking: Some argue that a child's answers can be influenced by the number of times an experimenter asks them about the amount of water in the glasses. For example, a child is asked about the amount of liquid in the first set of glasses and then asked once again after the water is moved into a different-sized glass. Some children will doubt their original answer and say something they would not have said if they did not doubt their first answer.
Word choice: The phrasing that the experimenter uses may affect how the child answers. If, in the liquid and glass example, the experimenter asks, "Which of these glasses has more liquid?", the child may think that his thoughts of them being the same is wrong because the adult is saying that one must have more. Alternatively, if the experimenter asks, "Are these equal?", then the child is more likely to say that they are, because the experimenter is implying that they are.
Classification: As children's experiences and vocabularies grow, they build schemata and are able to organize objects in many different ways. They also understand classification hierarchies and can arrange objects into a variety of classes and subclasses.
Identity: One feature of concrete operational thought is the understanding that objects have qualities that do not change even if the object is altered in some way. For instance, the mass of an object does not change by rearranging it. A piece of chalk is still chalk even when the piece is broken in two.
Reversibility: The child learns that some things that have been changed can be returned to their original state. Water can be frozen and then thawed to become liquid again; however, eggs cannot be unscrambled. Children use reversibility a lot in mathematical problems such as: 2 + 3 = 5 and 5 – 3 = 2.
Conservation: The ability to understand that the quantity of something (mass, weight, volume) does not change due to a change in appearance.
Decentration: The ability to focus on more than one feature of a scenario or problem at a time. This also describes the ability to attend to more than one task at a time. Decentration is what allows conservation to occur.
Seriation: Arranging items along a quantitative dimension, such as length or weight, in a methodical way is now demonstrated by the concrete operational child. For example, they can logically arrange a series of different-sized sticks in order by length. Younger children not yet in the concrete stage approach a similar task in a haphazard way.
These new cognitive skills increase the child's understanding of the physical world. However, according to Piaget, they still cannot think in abstract ways. Additionally, they do not think in systematic scientific ways. For example, most children under age twelve would not be able to come up with the variables that influence the period that a pendulum takes to complete its arc. Even if they were given weights they could attach to strings in order to do this experiment, they would not be able to draw a clear conclusion.
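The pendulum task has a compact answer that pre-formal children cannot isolate experimentally: for small swings, the period depends only on the string's length and gravity, not on the attached weight:

\[
T \approx 2\pi \sqrt{\frac{L}{g}},
\]

where L is the pendulum's length and g is the gravitational acceleration. The mass does not appear in the formula, which is why unsystematic variation of weights and lengths together yields no clear conclusion.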
Formal operational stage
The final stage is known as the formal operational stage (early to middle adolescence, beginning at age 11 and consolidating around ages 14–15): intelligence is demonstrated through the logical use of symbols related to abstract concepts. This form of thought includes "assumptions that have no necessary relation to reality." At this point, the person is capable of hypothetical and deductive reasoning. During this time, people develop the ability to think about abstract concepts.
Piaget stated that "hypothetico-deductive reasoning" becomes important during the formal operational stage. This type of thinking involves hypothetical "what-if" situations that are not always rooted in reality, i.e. counterfactual thinking. It is often required in science and mathematics.
Abstract thought emerges during the formal operational stage. Children tend to think very concretely and specifically in earlier stages; now they begin to consider possible outcomes and consequences of actions.
Metacognition, the capacity for "thinking about thinking", also emerges: it allows adolescents and adults to reason about their thought processes and monitor them.
Whereas younger children demonstrate problem-solving through trial and error, the ability to solve a problem systematically, in a logical and methodical way, now emerges.
Children in primary school years mostly use inductive reasoning, but adolescents start to use deductive reasoning. Inductive reasoning is when children draw general conclusions from personal experiences and specific facts. Adolescents learn how to use deductive reasoning by applying logic to create specific conclusions from abstract concepts. This capability results from their capacity to think hypothetically.
"However, research has shown that not all persons in all cultures reach formal operations, and most people do not use formal operations in all aspects of their lives".
Experiments
Piaget and his colleagues conducted several experiments to assess formal operational thought.
In one of the experiments, Piaget evaluated the cognitive capabilities of children of different ages through the use of a scale and varying weights. The task was to balance the scale by hooking weights on the ends of the scale. To successfully complete the task, the children had to use formal operational thought to realize that both the distance of the weights from the center and the heaviness of the weights affected the balance: a heavier weight has to be placed closer to the center of the scale, and a lighter weight has to be placed farther from the center, so that the two weights balance each other. While 3- to 5-year-olds could not comprehend the concept of balancing at all, children by the age of 7 could balance the scale by placing the same weights on both ends, but they failed to realize the importance of the location. By age 10, children could think about location but failed to use logic and instead used trial-and-error. Finally, by ages 13 and 14, in early to middle adolescence, some children more clearly understood the relationship between weight and distance and could successfully implement their hypothesis.
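The relation the successful adolescents have discovered is the law of the lever: the scale balances when the products of weight and distance on the two sides are equal,

\[
w_{1} d_{1} = w_{2} d_{2} \quad\Longrightarrow\quad d_{2} = \frac{w_{1}}{w_{2}}\, d_{1},
\]

so, for example, a weight twice as heavy balances at half the distance from the center.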
The stages and causation
Piaget sees children's conception of causation as a march from "primitive" conceptions of cause to those of a more scientific, rigorous, and mechanical nature. These primitive concepts are characterized as supernatural, with a decidedly non-natural or non-mechanical tone. Piaget has as his most basic assumption that babies are phenomenists. That is, their knowledge "consists of assimilating things to schemas" from their own action such that they appear, from the child's point of view, "to have qualities which, in fact, stem from the organism". Consequently, these "subjective conceptions," so prevalent during Piaget's first stage of development, are dashed upon discovering deeper empirical truths.
Piaget gives the example of a child believing that the moon and stars follow him on a night walk. Upon learning that the same is true for his friends, he must separate himself from the object, resulting in a theory that the moon is immobile, or moves independently of other agents.
The second stage, from around three to eight years of age, is characterized by a mix of this type of magical, animistic, or "non-natural" conceptions of causation and mechanical or "naturalistic" causation. This conjunction of natural and non-natural causal explanations supposedly stems from experience itself, though Piaget does not make much of an attempt to describe the nature of the differences in conception. In his interviews with children, he asked questions specifically about natural phenomena, such as: "What makes clouds move?", "What makes the stars move?", "Why do rivers flow?" The nature of all the answers given, Piaget says, are such that these objects must perform their actions to "fulfill their obligations towards men". He calls this "moral explanation".
Postulated physical mechanisms underlying schemes, schemas, and stages
First note the distinction between 'schemes' (analogous to 1D lists of action-instructions, e.g. leading to separate pen-strokes) and figurative 'schemas' (aka 'schemata', akin to 2D drawings/sketches or virtual 3D models); see schema. This distinction (often overlooked by translators) is emphasized by Piaget and Inhelder, among others.
In 1967, Piaget considered the possibility of RNA molecules as likely embodiments of his still-abstract schemes (which he promoted as units of action) — though he did not come to any firm conclusion. At that time, due to work such as that of Swedish biochemist Holger Hydén, RNA concentrations had, indeed, been shown to correlate with learning.
To date, with one exception, it has been impossible to investigate such RNA hypotheses by traditional direct observation and logical deduction. The one exception is that such ultra-micro sites would almost certainly have to use optical communication, and recent studies have demonstrated that nerve fibres can indeed transmit light/infra-red (in addition to their acknowledged role). However, it accords with the philosophy of science, especially scientific realism, to investigate indirectly such phenomena which are intrinsically unobservable for practical reasons. The art then is to build up a plausible interdisciplinary case from the indirect evidence (as indeed the child does during concept development), and then retain that model until it is disproved by observable or other new evidence which then calls for new accommodation.
In that spirit, it might now be argued that the RNA/infra-red model is tenable as an explanation for Piagetian higher intelligence. In any case, the current situation opens the way for more testing and further development in several directions, including the finer points of Piaget's agenda.
Practical applications
Parents can use Piaget's theory in many ways to support their child's growth. Teachers can also use Piaget's theory to help their students. For example, recent studies have shown that children in the same grade and of the same age perform differently on tasks measuring basic addition and subtraction accuracy. Children in the preoperational and concrete operational levels of cognitive development perform arithmetic operations (such as addition and subtraction) with similar accuracy; however, children in the concrete operational level have been able to perform both addition problems and subtraction problems with overall greater precision. Teachers can use Piaget's theory to see where each child in their class stands with each subject by discussing the syllabus with their students and the students' parents.
The stage of cognitive growth differs from one person to another. Cognitive development, or thinking, is an active process from the beginning to the end of life. Intellectual advancement happens because people at every age and developmental period look for cognitive equilibrium. The easiest way to achieve this balance is to interpret new experiences through the lens of preexisting ideas: infants learn that new objects can be grabbed in the same way as familiar objects, and adults explain the day's headlines as evidence for their existing worldview.
However, the application of standardized Piagetian theory and procedures in different societies produced widely varying results, leading some to speculate that some cultures foster more cognitive development than others, and that without specific kinds of cultural experience, and formal schooling in particular, development might cease at a certain level, such as the concrete operational level. A procedure was carried out following methods developed in Geneva (i.e. the water level task): participants were presented with two beakers of equal circumference and height, filled with equal amounts of water, and the water from one beaker was transferred into a taller beaker with a smaller circumference. Children and young adults from non-literate societies of a given age were more likely to think that the taller, thinner beaker had more water in it. On the other hand, an experiment on the effects of modifying the testing procedures to match the local culture produced a different pattern of results: in the revised procedures, the participants answered in their own language and indicated that while the water level was now "more", the quantity was the same. Piaget's water level task has also been applied to the elderly by Formann, and the results showed an age-associated non-linear decline of performance.
Relation to psychometric theories of intelligence
Researchers have linked Piaget's theory to Cattell and Horn's theory of fluid and crystallized abilities. Piaget's operative intelligence corresponds to the Cattell-Horn formulation of fluid ability in that both concern logical thinking and the "eduction of relations" (an expression Cattell used to refer to the inferring of relationships). Piaget's treatment of everyday learning corresponds to the Cattell-Horn formulation of crystallized ability in that both reflect the impress of experience. Piaget's operativity is considered to be prior to, and ultimately provides the foundation for, everyday learning, much like fluid ability's relation to crystallized intelligence.
Piaget's theory also aligns with another psychometric theory, namely the psychometric theory of g, general intelligence. Piaget designed a number of tasks to assess hypotheses arising from his theory. The tasks were not intended to measure individual differences and they have no equivalent in psychometric intelligence tests. Notwithstanding the different research traditions in which psychometric tests and Piagetian tasks were developed, the correlations between the two types of measures have been found to be consistently positive and generally moderate in magnitude. g is thought to underlie performance on the two types of tasks. It has been shown that it is possible to construct a battery consisting of Piagetian tasks that is as good a measure of g as standard IQ tests.
Challenges to Piagetian stage theory
Piagetian accounts of development have been challenged on several grounds. First, as Piaget himself noted, development does not always progress in the smooth manner his theory seems to predict. Décalage, the uneven progression of development across tasks within a specific domain, suggests that the stage model is, at best, a useful approximation. Furthermore, studies have found that children may be able to learn, with relative ease, concepts and forms of complex reasoning supposedly characteristic of more advanced stages (Lourenço & Machado, 1996, p. 145). More broadly, Piaget's theory is "domain general," predicting that cognitive maturation occurs concurrently across different domains of knowledge (such as mathematics, logic, and understanding of physics or language). Piaget did not take into account variability in a child's performance, notably how a child can differ in sophistication across several domains.
During the 1980s and 1990s, cognitive developmentalists were influenced by "neo-nativist" and evolutionary psychology ideas. These ideas de-emphasized domain-general theories and emphasized domain specificity or modularity of mind. Modularity implies that different cognitive faculties may be largely independent of one another, and thus develop according to quite different timetables, which are "influenced by real world experiences". In this vein, some cognitive developmentalists argued that, rather than being domain-general learners, children come equipped with domain-specific theories, sometimes referred to as "core knowledge", which allow them to break into learning within that domain. For example, even young infants appear to be sensitive to some predictable regularities in the movement and interactions of objects (for example, an object cannot pass through another object) or in human behavior (for example, a hand repeatedly reaching for an object has that object as its goal, not just a particular path of motion), and this core knowledge becomes the building block from which more elaborate knowledge is constructed.
Piaget's theory has been said to undervalue the influence that culture has on cognitive development. Piaget demonstrates that a child goes through several stages of cognitive development and comes to conclusions on their own; however, a child's sociocultural environment plays an important part in their cognitive development. Social interaction teaches the child about the world and helps them develop through the cognitive stages, which Piaget neglected to consider.
More recent work from a dynamic systems approach has strongly challenged some of the basic presumptions of the "core knowledge" school, as well as some of Piaget's. Dynamic systems approaches draw on modern neuroscientific research that was not available to Piaget when he was constructing his theory, and new techniques such as brain imaging have provided new understanding of cognitive development. One important finding is that domain-specific knowledge is constructed as children develop and integrate knowledge, which improves the accuracy of the knowledge as well as the organization of memories. This suggests more of a "smooth integration" of learning and development than either Piaget or his neo-nativist critics had envisioned. Additionally, some psychologists, such as Lev Vygotsky and Jerome Bruner, thought differently from Piaget, suggesting that language was more important for cognitive development than Piaget implied.
Post-Piagetian and neo-Piagetian stages
In recent years, several theorists have attempted to address concerns with Piaget's theory by developing new theories and models that can accommodate evidence which violates Piagetian predictions and postulates.
The neo-Piagetian theories of cognitive development, advanced by Robbie Case, Andreas Demetriou, Graeme S. Halford, Kurt W. Fischer, Michael Lamport Commons, and Juan Pascual-Leone, attempted to integrate Piaget's theory with cognitive and differential theories of cognitive organization and development. Their aim was to better account for the cognitive factors of development and for intra-individual and inter-individual differences in cognitive development. They suggested that development along Piaget's stages is due to increasing working memory capacity and processing efficiency by "biological maturation". Moreover, Demetriou's theory ascribes an important role to hypercognitive processes of "self-monitoring, self-recording, self-evaluation, and self-regulation", and it recognizes the operation of several relatively autonomous domains of thought (Demetriou, 1998; Demetriou, Mouyi, Spanoudis, 2010; Demetriou, 2003, p. 153).
Piaget's theory stops at the formal operational stage, but other researchers have observed that the thinking of adults is more nuanced than formal operational thought. A fifth stage, named post-formal thought or operation, has been proposed. Michael Commons presented evidence for four post-formal stages in the model of hierarchical complexity: systematic, meta-systematic, paradigmatic, and cross-paradigmatic (Commons & Richards, 2003, p. 206–208; Oliver, 2004, p. 31). There are many theorists, however, who have criticized "post-formal thinking," because the concept lacks both theoretical and empirical verification. The term "integrative thinking" has been suggested for use instead.
A "sentential" stage, said to occur before the early preoperational stage, has been proposed by Fischer, Biggs and Biggs, Commons, and Richards.
Jerome Bruner has expressed views on cognitive development in a "pragmatic orientation" in which humans actively use knowledge for practical applications, such as problem solving and understanding reality.
Michael Lamport Commons proposed the model of hierarchical complexity (MHC) in two dimensions: horizontal complexity and vertical complexity (Commons & Richards, 2003, p. 205).
Kieran Egan has proposed five stages of understanding. These are "somatic", "mythic", "romantic", "philosophic", and "ironic". These stages are developed through cognitive tools such as "stories", "binary oppositions", "fantasy" and "rhyme, rhythm, and meter" to enhance memorization to develop a long-lasting learning capacity.
Lawrence Kohlberg developed three stages of moral development: "Preconventional", "Conventional" and "Postconventional". Each level is composed of two orientation stages, with a total of six orientation stages: (1) "Punishment-Obedience", (2) "Instrumental Relativist", (3) "Good Boy-Nice Girl", (4) "Law and Order", (5) "Social Contract", and (6) "Universal Ethical Principle".
Andreas Demetriou has expressed neo-Piagetian theories of cognitive development.
Jane Loevinger's stages of ego development occur through "an evolution of stages". "First is the Presocial Stage followed by the Symbiotic Stage, Impulsive Stage, Self-Protective Stage, Conformist Stage, Self-Aware Level: Transition from Conformist to Conscientious Stage, Individualistic Level: Transition from Conscientious to the Autonomous Stage, Autonomous Stage, and Integrated Stage".
Ken Wilber has incorporated Piaget's theory in his multidisciplinary field of integral theory. Human consciousness is structured in hierarchical order and organized in "holon" chains, or a "great chain of being", based on the level of spiritual and psychological development.
Oliver Kress published a model that connected Piaget's theory of development and Abraham Maslow's concept of self-actualization.
Cheryl Armon has proposed five stages of "the Good Life". These are "Egoistic Hedonism", "Instrumental Hedonism", "Affective/Altruistic Mutuality", "Individuality", and "Autonomy/Community" (Andreoletti & Demick, 2003, p. 284) (Armon, 1984, p. 40–43).
Christopher R. Hallpike proposed that human cognitive and moral understanding has evolved over human history from a primitive state to its present form.
Robert Kegan extended Piaget's developmental model to adults in describing what he called constructive-developmental psychology.
Human enhancement

Human enhancement is the natural, artificial, or technological alteration of the human body in order to enhance physical or mental capabilities.
Technologies
Existing technologies
Three forms of human enhancement currently exist: reproductive, physical, and mental. Reproductive enhancements include embryo selection by preimplantation genetic diagnosis, cytoplasmic transfer, and in vitro-generated gametes. Physical enhancements include cosmetic enhancements (plastic surgery and orthodontics), drug-induced enhancements (doping and performance-enhancing drugs), functional enhancements (prosthetics and powered exoskeletons), medical enhancements (implants, e.g. pacemakers, and organ replacements, e.g. bionic lenses), and strength training (weights, e.g. barbells, and dietary supplements). Examples of mental enhancements are nootropics, neurostimulation, and supplements that improve mental functions.
Computers, mobile phones, and the Internet can also be used to enhance cognitive efficiency. Notable efforts in human augmentation are driven by interconnected Internet of Things (IoT) devices, including wearable electronics (e.g., augmented reality glasses, smart watches, smart textiles), personal drones, and on-body and in-body nanonetworks.
Emerging technologies
Many different forms of human enhancing technologies are either on the way or are currently being tested and trialed. A few of these emerging technologies include: human genetic engineering (gene therapy), neurotechnology (neural implants and brain–computer interfaces), cyberware, strategies for engineered negligible senescence, nanomedicine, and 3D bioprinting. Variants of human genetic engineering with so far limited usage include the artificial creation of human-animal hybrids (where each cell has partly human and partly animal genetic contents) and human-animal chimeras (where some cells are human and some cells are animal in origin).
Speculative technologies
Some other human enhancement technologies are still speculative, such as mind uploading, the exocortex, and endogenous artificial nutrition. Mind uploading is the hypothetical process of "transferring"/"uploading" or copying a conscious mind from a brain to a non-biological substrate by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The exocortex can be defined as a theoretical artificial external information processing system that would augment a brain's biological high-level cognitive processes. Endogenous artificial nutrition could resemble a radioisotope generator that resynthesizes glucose (similarly to photosynthesis), amino acids, and vitamins from their degradation products, theoretically making it possible to go for weeks without food if necessary.
Nick Bostrom listed some additional capabilities that are expected to be physically possible in theory, given a sufficient technological level, such as:
Reversal of aging
Cures for all diseases
Arbitrary sensory inputs (e.g. generating subjective experience of taste without eating anything)
Precise control of personality, mood, motivation, well-being
Nootropics
There are many substances that are purported to have promise in augmenting human cognition by various means. These substances are called nootropics and can potentially benefit individuals with cognitive decline and many different disorders, but may also be capable of yielding results in cognitively healthy persons. Generally speaking, nootropics are said to be effective for enhancing focus, learning, memory function, mood, and in some cases, physical brain development. Some examples of these include Citicoline, Huperzine A, Phosphatidylserine, Bacopa monnieri, Acetyl-L-carnitine, Uridine monophosphate, L-theanine, Rhodiola rosea, and Pycnogenol which are all forms of dietary supplement. There are also nootropic drugs such as the common racetams, e.g. piracetam (Nootropil) and omberacetam (Noopept) along with the neuroprotective Semax, and N-Acetyl Semax. There are also nootropics related to naturally occurring substances but that are either modified in a lab or are analogs such as Vinpocetine and Sulbutiamine. Some authors have explored nootropics as relationship enhancements to help couples maintain bonds over time.
Ethics
Much debate surrounds the topic of human enhancement and the means used to achieve one's enhancement goals. Ethical attitudes toward human enhancement can depend on many factors such as religious affiliation, age, gender, ethnicity, culture of origin, and nationality.
In some circles the expression "human enhancement" is roughly synonymous with human genetic engineering, but most often it refers to the general application of the convergence of nanotechnology, biotechnology, information technology and cognitive science (NBIC) to improve human performance.
Since the 1990s, several academics (such as some of the fellows of the Institute for Ethics and Emerging Technologies) have risen to become advocates of the case for human enhancement while other academics (such as the members of President Bush's Council on Bioethics) have become outspoken critics.
Advocacy of the case for human enhancement is increasingly becoming synonymous with "transhumanism", a controversial ideology and movement which has emerged to support the recognition and protection of the right of citizens to either maintain or modify their own minds and bodies, so as to guarantee them the freedom of choice and informed consent in using human enhancement technologies on themselves and their children. Transhumanists tend to understand the world from a physical rather than a biological perspective and, drawing on the idea of the technological singularity, see human enhancement as merging with technological innovation to advance post-humanism.
Neuromarketing consultant Zack Lynch argues that neurotechnologies will have a more immediate effect on society than gene therapy and will face less resistance as a pathway of radical human enhancement. He also argues that the concept of "enablement" needs to be added to the debate over "therapy" versus "enhancement".
The prospect of human enhancement has sparked public controversy. The main ethical question in the debate about human enhancement involves which legal restrictions, if any, should exist.
Dale Carrico wrote that "human enhancement" is a loaded term which has eugenic overtones because it may imply the improvement of human hereditary traits to attain a universally accepted norm of biological fitness (at the possible expense of human biodiversity and neurodiversity), and therefore can evoke negative reactions far beyond the specific meaning of the term. Michael Selgelid terms this a phase of "neugenics", suggesting that gene enhancements occurring now have already revived the idea of eugenics in our society. Practices of prenatal diagnosis, selective abortion, and in-vitro fertilization aim to improve human life by allowing parents to decide, via genetic information, whether to continue or terminate a pregnancy.
A criticism of human enhancement is that it will create unfair physical or mental advantages, and that unequal access to such enhancements can and will further the gulf between the "haves" and "have-nots".
Futurist Ray Kurzweil has expressed concern that, within the century, humans may be required to merge with this technology in order to compete in the marketplace, since enhanced individuals would have a better chance of being chosen for opportunities in careers, entertainment and access to resources. Life-extending technologies, for example, could increase the average individual life span, affecting the distribution of pensions throughout society, and an increasing lifespan would add to the human population, further dividing limited resources such as food, energy, money and habitat. Other critics of human enhancement fear that such capabilities would change, for the worse, the dynamic relations within a family: given a choice of superior qualities, parents make their child, as opposed to merely giving birth to it, and the newborn becomes a product of their will rather than a gift of nature to be loved unconditionally.
Effects on identity
Human enhancement technologies can impact human identity by affecting one's self-conception. The concern is not necessarily about improving the individual but about changing who they are and making them someone new. Altering an individual's identity affects their personal story, development and mental capabilities. The basis of this argument comes from two main points: the charge of inauthenticity and the charge of violating an individual's core characteristics. Gene therapy has the ability to alter one's mental capacity and, by this argument, the ability to affect one's narrative identity. An individual's core characteristics may include internal psychological style, personality, general intelligence, the necessity to sleep, normal aging, gender, and being Homo sapiens. Technologies threaten to alter the self fundamentally, to the point where the result is essentially a different person entirely. For example, extreme changes in personality may affect the individual's relationships because others can no longer relate to the new person.
The capability approach focuses on a normative framework that can be applied to how human enhancement technologies affect human capabilities. Its ethics do not necessarily focus on the makeup of the individual but rather on what the technology allows individuals to do in today's society. The approach was first developed by Amartya Sen, who focused mainly on its objectives rather than on the means to those objectives, such as resources, technological processes, and economic arrangements. The central human capabilities include life, bodily health, bodily integrity, senses, emotions, practical reason, affiliation, other species, play, and control over one's environment. This normative framework recognizes that human capabilities are always changing and that technology has already played a part in this.
See also
References
Further reading
External links
Enhancement Technologies Group
Institute for Ethics and Emerging Technologies
Humanity+
RTÉ's Big Science Debate 2007
Human Enhancement Study (European Parliament STOA 2009)
Ethics + Emerging Sciences Group (Cal Poly, San Luis Obispo)
"Ethics of Human Enhancement: 25 Questions & Answers" (an NSF-funded report), August 31, 2009
NeoHumanitas: Thinking our Future. Think tank reflecting on enhancing technologies
The Case for Perfection: Ethics in the Age of Human Enhancement (Peter Lang, 2016)
Future-Human.Life (NeoHumanitas, 2017)
Augmented Human International Conferences
Bioethics
Human evolution
Dysgenics | Dysgenics refers to any decrease in the prevalence of traits deemed to be either socially desirable or generally adaptive to their environment due to selective pressure disfavouring their reproduction.
In 1915 the term was used by David Starr Jordan to describe the supposed deleterious effects of modern warfare on group-level genetic fitness because of its tendency to kill physically healthy men while preserving the disabled at home. Similar concerns had been raised by early eugenicists and social Darwinists during the 19th century, and continued to play a role in scientific and public policy debates throughout the 20th century.
More recent concerns about supposed dysgenic effects in human populations have been advanced by the controversial psychologist Richard Lynn, notably in his 1996 book Dysgenics: Genetic Deterioration in Modern Populations, which argued that changes in selection pressures and decreased infant mortality since the Industrial Revolution have resulted in an increased propagation of deleterious traits and genetic disorders.
Despite these concerns, genetic studies have shown no evidence for dysgenic effects in human populations. Reviewing Lynn's book, the scholar John R. Wilmoth notes: "Overall, the most puzzling aspect of Lynn's alarmist position is that the deterioration of average intelligence predicted by the eugenicists has not occurred."
See also
Behavioural genetics
Degeneration theory
Devolution (biology)
Fertility and income
Fertility and intelligence
Flynn effect
Heritability of IQ
List of congenital disorders
List of biological development disorders
New eugenics
Recent human evolution
Further reading
Loehlin, John C. (1997). "Dysgenesis and IQ: What evidence is relevant?" (PDF). American Psychologist, 52(11), 1236–1239. doi:10.1037/0003-066X.52.11.1236
References
Eugenics
Evolutionary biology
Futures studies
Double standard | A double standard is the application of different sets of principles for situations that are, in principle, the same. It is often used to describe treatment whereby one group is given more latitude than another. A double standard arises when two or more people, groups, organizations, circumstances, or events are treated differently even though they should be treated the same way. A double standard "implies that two things which are the same are measured by different standards".
Applying different principles to similar situations may or may not indicate a double standard. To distinguish between the application of a double standard and a valid application of different standards toward circumstances that only appear to be the same, several factors must be examined. One is the sameness of those circumstances – what are the parallels between those circumstances, and in what ways do they differ? Another is the philosophy or belief system informing which principles should be applied to those circumstances. Different standards can be applied to situations that appear similar based on a qualifying truth or fact that, upon closer examination, renders those situations distinct (a physical reality or moral obligation, for example). However, if similar-looking situations have been treated according to different principles and there is no truth, fact or principle that distinguishes those situations, then a double standard has been applied.
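The test just described can be summarized procedurally. The sketch below is purely illustrative, not drawn from any formal source; the case fields and example values are hypothetical. It flags a double standard only when two cases share the same principle-relevant facts yet are judged by different principles.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A hypothetical, simplified representation of a judged situation."""
    relevant_facts: frozenset   # facts that bear on which principle applies
    principle_applied: str      # the standard actually used to judge it

def is_double_standard(a: Case, b: Case) -> bool:
    """Per the text: a double standard exists only when the circumstances
    are genuinely the same (no distinguishing fact or obligation) yet the
    cases are judged by different principles."""
    same_circumstances = a.relevant_facts == b.relevant_facts
    different_principles = a.principle_applied != b.principle_applied
    return same_circumstances and different_principles

# Two identical first offenses judged by different standards:
case_a = Case(frozenset({"late report", "first offense"}), "verbal warning")
case_b = Case(frozenset({"late report", "first offense"}), "formal reprimand")
print(is_double_standard(case_a, case_b))  # True
```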
If correctly identified, a double standard usually indicates the presence of hypocrisy, bias or unjust behaviors.
Causes and explanations
Double standards are believed to develop in people's minds for a multitude of possible reasons, including: finding an excuse for oneself, emotions clouding judgement, twisting facts to support beliefs (such as confirmation biases, cognitive biases, attraction biases, prejudices or the desire to be right). Human beings have a tendency to evaluate people's actions based on who did them.
In a study conducted in 2000, Dr. Martha Foschi observed the application of double standards in group competency tests. She concluded that status characteristics, such as gender, ethnicity and socioeconomic class, can provide a basis for the formation of double standards in which stricter standards are applied to people who are perceived to be of lower status. Dr. Foschi also noted the ways in which double standards can form based on other socially valued attributes such as beauty, morality, and mental health.
Dr. Tristan Botelho and Dr. Mabel Abraham, Assistant Professors at the Yale School of Management and Columbia Business School, studied the effect that gender has on the way people rank others in financial markets. Their research showed that average-quality men were given the benefit of the doubt more than average-quality women, who were more often "penalized" in people's judgments. Botelho and Abraham also showed that women and men are similarly risk-loving, contrary to popular belief. Altogether, their research showed that double standards (at least in financial markets) do exist around gender. They encourage the adoption of controls to eliminate gender bias in application, hiring, and evaluation processes within organizations. Examples of such controls include using only initials on applications so that applicants' genders are not apparent, or auditioning musicians from behind a screen so that their skills, and not their gender, influence their acceptance or rejection into orchestras. Practices like these are, according to Botelho and Abraham, already being implemented in a number of organizations.
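One of the controls described above, removing gender cues from applications, can be sketched in a few lines. This is an illustrative sketch only; the record fields are hypothetical, and real processes would have to cover many more identifying cues.

```python
def anonymize(application: dict) -> dict:
    """Reduce a hypothetical application record to the fields an evaluator
    should see: initials instead of a full name, and no gender field."""
    initials = "".join(part[0].upper() + "." for part in application["name"].split())
    return {
        "initials": initials,
        "qualifications": application["qualifications"],
        "experience_years": application["experience_years"],
    }

applicant = {
    "name": "Jane Doe",
    "gender": "female",                # dropped before review
    "qualifications": ["CFA", "MBA"],
    "experience_years": 7,
}
print(anonymize(applicant))
# {'initials': 'J.D.', 'qualifications': ['CFA', 'MBA'], 'experience_years': 7}
```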
Common areas
Gender
It has long been debated how gender roles affect moral, social, political and legal responses. Some believe that differences in the way men and women are perceived and treated are a function of social norms, thus indicating a double standard. For example, one claim is that a double standard exists in society's judgment of women's and men's sexual conduct. Research has found that casual sexual activity is regarded as more acceptable for men than for women. According to William G. Axinn, double standards between men and women can potentially exist with regard to: dating, cohabitation, virginity, marriage/remarriage, sexual abuse/assault/harassment, domestic violence and singleness.
Kennair et al. (2023) found no signs of a sexual double standard in long- or short-term mating contexts, nor in choosing a friend. They did find, however, that women's self-stimulation was judged positively while men's self-stimulation was judged negatively. A 2017 study of American college students also found no evidence of a gendered double standard around promiscuity.
Law
A double standard may arise if two or more groups who have equal legal rights are given different degrees of legal protection or representation. Such double standards are seen as unjustified because they violate a common maxim of modern legal jurisprudence: that all parties should stand equal before the law. Judges are expected to be impartial and must therefore apply the same standards to all people, regardless of their own subjective biases or favoritism based on social class, rank, ethnicity, gender, sexual orientation, religion, age or other distinctions.
Politics
A double standard arises in politics when the treatment of the same political matters between two or more parties (such as the response to a public crisis or the allocation of funding) is handled differently.
Double standard policies can include situations when a country's or commentator's assessment of the same phenomenon, process or event in international relations depends on their relationship with or attitude to the parties involved. In Harry's Game (1975), Gerald Seymour wrote: "One man's terrorist is another man's freedom fighter".
Ethnicity
Double standards exist when people are preferred or rejected on the basis of their ethnicity in situations in which ethnicity is not a relevant or justifiable factor for discrimination (as might be the case for a cultural performance or ethnic ceremony).
The intentional efforts of some people to counteract racism and ethnic double standards can sometimes be interpreted by others as actually perpetuating racism and double standards among ethnic groups. Writing for The American Conservative, Rod Dreher quotes the account published in Quillette by Coleman Hughes, a black student at Columbia University, who said he was given an opportunity to play in a backup band for Grammy Award-winning pop artist Rihanna at the 2016 MTV Video Music Awards Show. According to Hughes, several of his friends were also invited; however, one of them was fired and replaced because, according to Hughes, his white Hispanic background did not suit the all-black aesthetic that Rihanna's team had chosen for her show. The team had decided that all performers on stage were to be black, aside from Rihanna's regular guitar player. Hughes was uncertain about whether he believed this action was unethical, given that the show was racially themed to begin with. He observed what he believed to be a double standard in the entertainment industry, saying, "if a black musician had been fired in order to achieve an all-white aesthetic — it would have made front page headlines. It would have been seen as an unambiguous moral infraction."
Dreher argues that Hughes's observations highlight the difficulty in distinguishing between the exclusion of one ethnic group in order to celebrate another, and the exclusion of an ethnic group as an exercise of racism or a double standard. Dreher also discussed another incident, in which New York Times columnist Bari Weiss, who is Jewish, was heavily criticized for tweeting, "Immigrants: They get the job done", in a positive reference to Mirai Nagasu, a Japanese-American Olympic ice skater whom Weiss was trying to honor. The public debate about ethnicity and double standards remains controversial and, by all appearances, will continue.
See also
Discrimination
Double bind
Doublethink
Golden Rule/ethic of reciprocity
Honne and tatemae
Hypocrisy
In-group and out-group
In-group favoritism
Nordic sexual morality debate
Political hypocrisy
Quod licet Iovi, non licet bovi
Psychological projection
Reciprocity (social and political philosophy)
Social exclusion
References
Further reading
Axinn, William G., et al. "Gender Double Standards in Parenting Attitudes." Social Science Research, vol. 40, no. 2, 2011, pp. 417–432., doi:10.1016/j.ssresearch.2010.08.010.
Hudspeth, Christopher. "8 Modern Day Double Standards." Thought Catalog, 26 July 2012, thoughtcatalog.com/cehudspeth/2012/07/8-modern-day-double-standards/.
Thomas, Keith. "The Double Standard." Journal of the History of Ideas, vol. 20, no. 2, Apr. 1959, pp. 195–216., doi:10.2307/2707819.
Injustice
Barriers to critical thinking
Discrimination
Social inequality
Cognitive dissonance
Hypocrisy
Bias
Externalization (psychology) | Externalization is a term used in psychoanalytic theory which describes the tendency to project one's internal states onto the outside world. It is generally regarded as an unconscious defense mechanism, thus the person is unaware they are doing it. Externalization takes on a different meaning in narrative therapy, where the client is encouraged to externalize a problem in order to gain a new perspective on it.
Psychoanalysis
In Freudian psychology, externalization (or externalisation) is a defense mechanism by which an individual projects their own internal characteristics onto the outside world, particularly onto other people. For example, a patient who is overly argumentative might instead perceive others as argumentative and themselves as blameless.
Like other defense mechanisms, externalization can be a protection against anxiety and is, therefore, part of a healthy, normally functioning mind. However, if taken to excess, it can lead to the development of a neurosis.
Narrative therapy
In narrative therapy, Michael White states that the client's problem is externalized in order to alter the client's point of view.
Neuroscience of externalization
Problems with self-regulation, including impulsivity, violence, sensation-seeking, and rule-breaking, are indicative of an externalizing risk pathway. Externalizing behaviors have been linked to a discrepancy between bottom-up reward-related circuitry, such as the ventral striatum, and top-down inhibitory control circuitry located in the prefrontal cortex. Externalization is often related to substance use disorders; alcohol use disorder in particular has been the focus of much externalization research. Vulnerabilities in self-regulation within the externalizing risk pathway may affect the development of alcohol use disorder differently across stages of the addiction cycle. Likewise, marijuana use has been linked to an externalizing pathway marked by aggressive and delinquent behavior. Antisocial personality disorder is another disorder linked to the externalizing pathway, through its association with low behavioral constraint, and much research has examined the similarities between antisocial personality disorder and substance use disorder in relation to externalizing behaviors.
See also
Internalization
Notes
References
Defence mechanisms
Biomedical engineering | Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). BME also draws on the logical and life sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or clinical engineer.
Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs including biopharmaceuticals.
Subfields and related fields
Bioinformatics
Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data.
Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single-nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences.
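As a toy version of the SNP-identification task mentioned above, the sketch below scans a gap-free alignment of sequences from several individuals and reports the positions where more than one nucleotide is observed. Real pipelines work on read alignments and must handle sequencing quality, indels and much more; the example data here are invented.

```python
def find_snps(aligned_sequences):
    """Return {position: observed_bases} for every column of a gap-free
    alignment where more than one nucleotide occurs."""
    snps = {}
    for pos, column in enumerate(zip(*aligned_sequences)):
        bases = set(column)
        if len(bases) > 1:          # polymorphic site
            snps[pos] = bases
    return snps

population = [
    "ATGCGTAC",
    "ATGAGTAC",   # variant at position 3 (A instead of C)
    "ATGCGTGC",   # variant at position 6 (G instead of A)
]
print(find_snps(population))  # {3: {'A', 'C'}, 6: {'A', 'G'}}
```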
Biomechanics
Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics.
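A minimal example of the kind of calculation biomechanics involves at the organ level: estimating compressive stress in a long bone as force divided by cross-sectional area. The mass and area below are assumed, illustrative values, not measurements.

```python
# Illustrative compressive-stress estimate for a long bone: sigma = F / A.
body_mass_kg = 70.0          # assumed subject mass
g = 9.81                     # gravitational acceleration, m/s^2
area_m2 = 5.0e-4             # assumed bone cross-section (~5 cm^2)

force_n = body_mass_kg * g   # static load from body weight
stress_pa = force_n / area_m2
print(f"Compressive stress: {stress_pa / 1e6:.2f} MPa")  # ~1.37 MPa
```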
Biomaterials
A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science.
Biomedical optics
Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment. It has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy. Examples of biomedical optics techniques and technologies include optical coherence tomography (OCT), fluorescence microscopy, confocal microscopy, and photodynamic therapy (PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as the retina in the eye or the coronary arteries in the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently, adaptive optics is helping imaging by correcting aberrations in biological tissue, enabling higher resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging.
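Much of biomedical optics rests on how light attenuates as it passes through tissue. A first-order model is the Beer-Lambert law, I(d) = I0 * exp(-mu * d); the attenuation coefficient used below is an assumed, order-of-magnitude value rather than a property of any particular tissue.

```python
import math

def transmitted_intensity(i0, mu_per_mm, depth_mm):
    """Beer-Lambert law: light intensity remaining after traversing
    depth_mm of a medium with attenuation coefficient mu_per_mm."""
    return i0 * math.exp(-mu_per_mm * depth_mm)

mu = 1.0   # assumed attenuation coefficient, 1/mm
for d in (0.5, 1.0, 2.0):
    frac = transmitted_intensity(1.0, mu, d)
    print(f"{d:.1f} mm: {frac:.1%} of incident light remains")
# 0.5 mm: 60.7%; 1.0 mm: 36.8%; 2.0 mm: 13.5%
```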
Tissue engineering
Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology – which overlaps significantly with BME.
One of the goals of tissue engineering is to create artificial organs (via biological material) for patients that need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones and tracheas from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients. Bioartificial organs, which use both synthetic and biological components, are also a focus area in research, such as hepatic assist devices that use liver cells within an artificial bioreactor construct.
Genetic engineering
Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but see biological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in Chinese hamster ovary (CHO) cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research.
Neural engineering
Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering can assist with numerous things, including the future development of prosthetics. For example, cognitive neural prosthetics (CNP) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices.
Pharmaceutical engineering
Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of Chemical Engineering, and Pharmaceutical Analysis. It may be deemed as a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment.
Hospital and medical devices
This is an extremely broad category—essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism.
A medical device is intended for use in:
the diagnosis of disease or other conditions
the cure, mitigation, treatment, or prevention of disease.
Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants.
Stereolithography is a practical example of medical modeling being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies and treatments, and for the monitoring of patients with complex diseases.
Medical devices are regulated and classified (in the US) as follows (see also Regulation); a simplified summary appears in the sketch after this list:
Class I devices present minimal potential for harm to the user and are often simpler in design than Class II or Class III devices. Devices in this category include tongue depressors, bedpans, elastic bandages, examination gloves, and hand-held surgical instruments, and other similar types of common equipment.
Class II devices are subject to special controls in addition to the general controls of Class I devices. Special controls may include special labeling requirements, mandatory performance standards, and postmarket surveillance. Devices in this class are typically non-invasive and include X-ray machines, PACS, powered wheelchairs, infusion pumps, and surgical drapes.
Class III devices generally require premarket approval (PMA), a scientific review to ensure the device's safety and effectiveness, in addition to general controls. Examples include replacement heart valves, hip and knee joint implants, silicone gel-filled breast implants, implanted cerebellar stimulators, implantable pacemaker pulse generators and endosseous (intra-bone) implants.
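The classification above lends itself to a simple lookup table. The sketch below only restates the scheme as described in this section; it is not a substitute for FDA guidance, and there are exceptions (some Class I devices, for instance, are not exempt from premarket review).

```python
# Simplified summary of the US FDA device classes described above.
DEVICE_CLASSES = {
    "I":   {"controls": ["general controls"],
            "typical_pathway": "most are exempt from premarket review",
            "examples": ["tongue depressor", "elastic bandage"]},
    "II":  {"controls": ["general controls", "special controls"],
            "typical_pathway": "510(k) premarket notification",
            "examples": ["infusion pump", "powered wheelchair"]},
    "III": {"controls": ["general controls"],
            "typical_pathway": "premarket approval (PMA)",
            "examples": ["replacement heart valve", "implantable pacemaker"]},
}

def summarize(device_class: str) -> str:
    info = DEVICE_CLASSES[device_class]
    return (f"Class {device_class}: {', '.join(info['controls'])}; "
            f"typical pathway: {info['typical_pathway']}")

print(summarize("II"))
# Class II: general controls, special controls; typical pathway: 510(k) premarket notification
```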
Medical imaging
Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (such as due to their size, and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means.
Alternatively, navigation-guided equipment utilizes electromagnetic tracking technology, such as catheter placement into the brain or feeding tube placement systems. For example, ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several EM passive sensors enabling scaling of the display to the patient's body contour, and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure the correct placement in the GI tract.
Imaging technologies are often essential to medical diagnosis, and are typically the most complex equipment found in a hospital including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy.
Medical implants
An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.
Bionics
Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools.
Biomedical sensors
In recent years biomedical sensors based on microwave technology have gained attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray to monitor lower extremity trauma. Such a sensor monitors dielectric properties and can thus detect changes in the tissue (bone, muscle, fat, etc.) under the skin, so when measurements are taken at different times during the healing process, the sensor response changes as the trauma heals.
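A heavily simplified illustration of the monitoring idea just described: compare a sensor reading taken at the time of injury with readings taken later, and report the relative change as the tissue heals. The reading values are invented and do not model any real sensor.

```python
def relative_change(baseline: float, reading: float) -> float:
    """Fractional change in a (hypothetical) dielectric-response reading
    relative to the measurement taken at the time of injury."""
    return (reading - baseline) / baseline

baseline = 0.82                    # assumed reading at time of trauma
follow_ups = [0.80, 0.74, 0.66]    # assumed readings over healing weeks
for week, r in enumerate(follow_ups, start=1):
    print(f"week {week}: {relative_change(baseline, r):+.1%} vs baseline")
```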
Clinical engineering
Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experiences, as well as monitor the progression of the state of the art so as to redirect procurement patterns accordingly.
Their inherent focus on the practical implementation of technology has tended to keep them oriented more towards incremental redesigns and reconfigurations, as opposed to revolutionary research & development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, by combining the perspectives of being close to the point-of-use while also being trained in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. Also, see safety engineering for a discussion of the procedures used to design safe systems. A clinical engineering department is typically constructed with a manager, supervisor, engineers, and technicians, and a common staffing ratio is one engineer per eighty hospital beds (see the sketch after this paragraph). Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items.
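The staffing ratio quoted above (one clinical engineer per eighty beds) reduces to simple arithmetic; the hospital size below is just an example.

```python
import math

BEDS_PER_ENGINEER = 80   # ratio quoted in the text

def engineers_needed(beds: int) -> int:
    """Round up: a 250-bed hospital still needs a fourth engineer
    to cover the 10 beds beyond 240."""
    return math.ceil(beds / BEDS_PER_ENGINEER)

print(engineers_needed(250))  # 4
```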
Rehabilitation engineering
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community.
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. Qualification to become a rehabilitation engineer in the UK is possible via a university BSc Honours degree course such as that of the Health Design & Technology Institute, Coventry University.
The rehabilitation process for people with disabilities often entails the design of assistive devices, such as walking aids, intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation.
Regulatory issues
Regulatory requirements have steadily increased in recent decades in response to the many incidents caused by devices to patients. For example, from 2008 to 2011, in the US, there were 119 FDA recalls of medical devices classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death"
Regardless of country-specific legislation, the main regulatory objectives coincide worldwide. For example, under the medical device regulations a product must be: 1) safe, 2) effective, and 3) both of these for all items manufactured.
A product is safe if patients, users, and third parties do not run unacceptable risks of physical harm (death, injury, ...) in its intended use. Protective measures have to be introduced on devices to reduce residual risks to a level that is acceptable when compared with the benefit derived from use of the device.
A product is effective if it performs as specified by the manufacturer in the intended use. Effectiveness is demonstrated through clinical evaluation, compliance with performance standards, or demonstration of substantial equivalence with an already marketed device.
These properties have to be ensured for all the manufactured items of the medical device. This requires that a quality system be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle.
The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. The paramount objectives driving policy decisions by the FDA are the safety and effectiveness of healthcare products, which have to be assured through a quality system in place as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510(k) "clearance" (typically for Class II devices) or pre-market "approval" (typically for drugs and Class III devices).
In the European context, safety, effectiveness and quality are ensured through the "Conformity Assessment", which is defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device, ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality assurance (Annex V), Product quality assurance (Annex VI) and Full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliverables such as the risk management file, the technical file, and the quality system deliverables. The risk management file is the first deliverable and conditions the following design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced to an acceptable level with respect to the benefits expected for the patients from the use of the device. The technical file contains all the documentation data and records supporting medical device certification. The FDA technical file has similar content, although organized in a different structure. The quality system deliverables usually include procedures that ensure quality throughout the whole product life cycle. The same standard (ISO EN 13485) is usually applied for quality management systems in the US and worldwide.
In the European Union, there are certifying entities named "Notified Bodies", accredited by the European Member States. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from the class I devices where a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is believed to be safe and effective when used as intended, and, therefore, it can be marketed within the European Union area.
The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or in Europe depending on the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about the optimal extent of regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments.
RoHS II
Directive 2011/65/EU, better known as RoHS 2 is a recast of legislation originally introduced in 2002. The original EU legislation "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC) was replaced and superseded by 2011/65/EU published in July 2011 and commonly known as RoHS 2.
RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled.
The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure Electrical and Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and have a CE mark on their products.
IEC 60601
The international standard IEC 60601 for home healthcare electro-medical devices defines the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point of care medical devices, along with other applicable standards in the IEC 60601 3rd edition series.
The mandatory date for implementation of the EN European version of the standard was June 1, 2013. The US FDA required use of the standard from June 30, 2013, while Health Canada extended its required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more severe approach of requiring all applicable devices being placed on the market to comply with the home healthcare standard.
AS/NZS 3551:2012
AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g. a hospital), and is based on the IEC 60601 standards.
The standard covers a wide range of medical equipment management elements, including procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing) and decommissioning.
Training and certification
Education
Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a bachelor's (B.Sc., B.S., B.Eng. or B.S.E.), master's (M.S., M.Sc., M.S.E., or M.Eng.) or doctoral (Ph.D. or MD-PhD) degree in BME (biomedical engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a biomedical engineering department or program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels. Biomedical engineering has only recently emerged as its own discipline rather than a cross-disciplinary hybrid specialization of other disciplines, and BME programs at all levels are becoming more widespread, including the Bachelor of Science in Biomedical Engineering, which includes enough biological science content that many students use it as a "pre-med" major in preparation for medical school. The number of biomedical engineers is expected to rise as both a cause and effect of improvements in medical technology.
In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are currently accredited by ABET.
In Canada and Australia, accredited graduate programs in biomedical engineering are common. For example, McMaster University offers an M.A.Sc., an MD/PhD, and a PhD in biomedical engineering. The first Canadian undergraduate BME program was offered at the University of Guelph as a four-year B.Eng. program. Polytechnique Montréal also offers a bachelor's degree in biomedical engineering, as does Flinders University.
As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program.
Graduate education is a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them. Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards.
Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, or another engineering discipline (plus certain life science coursework), or life science (plus certain engineering coursework).
Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards. Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education. Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME.
Licensure/certification
As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but in US industry such a license is not required for employment as an engineer in the majority of situations (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been to require licensure only of practicing engineers who offer engineering services that impact the public welfare, safety, safeguarding of life, health, or property, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine.
Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required.
In the UK, mechanical engineers working in the areas of Medical Engineering, Bioengineering or Biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division. The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in Biomedical Engineering and Chartered Engineering status can also be sought through IPEM.
The Fundamentals of Engineering exam – the first (and more general) of two licensure examinations for most U.S. jurisdictions – does now cover biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently no BME option, meaning that any biomedical engineers seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) was, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure.
Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for Clinical engineers.
Career prospects
In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022. Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions.
Notable figures
Julia Tutelman Apter (deceased) – One of the first specialists in neurophysiological research and a founding member of the Biomedical Engineering Society
Earl Bakken (deceased) – Invented the first transistorised pacemaker, co-founder of Medtronic.
Forrest Bird (deceased) – aviator and pioneer in the invention of mechanical ventilators
Y.C. Fung (deceased) – professor emeritus at the University of California, San Diego, considered by many to be the founder of modern biomechanics
Leslie Geddes (deceased) – professor emeritus at Purdue University, electrical engineer, inventor, and educator of over 2000 biomedical engineers; received a National Medal of Technology in 2006 from President George W. Bush for his more than 50 years of contributions, which have spawned innovations ranging from burn treatments to miniature defibrillators, ligament repair to tiny blood pressure monitors for premature infants, as well as a new method for performing cardiopulmonary resuscitation (CPR).
Willem Johan Kolff (deceased) – pioneer of hemodialysis as well as in the field of artificial organs
Robert Langer – Institute Professor at MIT, runs the largest BME laboratory in the world, pioneer in drug delivery and tissue engineering
John Macleod (deceased) – one of the co-discoverers of insulin at the University of Toronto; earlier a professor of physiology at Western Reserve University.
Alfred E. Mann – Physicist, entrepreneur and philanthropist. A pioneer in the field of Biomedical Engineering.
J. Thomas Mortimer – Emeritus professor of biomedical engineering at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Robert M. Nerem – professor emeritus at Georgia Institute of Technology. Pioneer in regenerative tissue, biomechanics, and author of over 300 published works. His works have been cited more than 20,000 times cumulatively.
P. Hunter Peckham – Donnell Professor of Biomedical Engineering and Orthopaedics at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Nicholas A. Peppas – Chaired Professor in Engineering, University of Texas at Austin, pioneer in drug delivery, biomaterials, hydrogels and nanobiotechnology.
Robert Plonsey – professor emeritus at Duke University, pioneer of electrophysiology
Otto Schmitt (deceased) – biophysicist with significant contributions to BME, working with biomimetics
Ascher Shapiro (deceased) – Institute Professor at MIT, contributed to the development of the BME field, medical devices (e.g. intra-aortic balloons)
Gordana Vunjak-Novakovic – University Professor at Columbia University, pioneer in tissue engineering and bioreactor design
John G. Webster – professor emeritus at the University of Wisconsin–Madison, a pioneer in the field of instrumentation amplifiers for the recording of electrophysiological signals
Fred Weibell, coauthor of Biomedical Instrumentation and Measurements
U.A. Whitaker (deceased) – provider of the Whitaker Foundation, which supported research and education in BME by providing over $700 million to various universities, helping to create 30 BME programs and helping finance the construction of 13 buildings
See also
Biomedical Engineering and Instrumentation Program (BEIP)
References
Further reading
External links
Salutogenesis | Salutogenesis is the study of the origins of health and focuses on factors that support human health and well-being, rather than on factors that cause disease (pathogenesis). More specifically, the "salutogenic model" was originally concerned with the relationship between health, stress, and coping, through a study of Holocaust survivors. Despite going through the dramatic tragedy of the Holocaust, some survivors were able to thrive later in life. The discovery that there must be powerful health-promoting factors led to the development of salutogenesis. The term was coined by Aaron Antonovsky (1923-1994), a professor of medical sociology. The salutogenic question posed by Antonovsky is: "How can this person be helped to move toward greater health?"
Antonovsky's theories reject the "traditional medical-model dichotomy separating health and illness". He described the relationship instead as a continuous variable, what he called the "health-ease versus dis-ease continuum". Salutogenesis now encompasses more than the origins of health and has evolved to address the multidimensional causes of higher levels of health. Models associated with salutogenesis generally include holistic approaches covering at least the physical, social, emotional, spiritual, intellectual, vocational, and environmental dimensions.
Derivation
The word "salutogenesis" comes from the Latin salus (meaning health) and the Greek genesis (meaning origin). Antonovsky developed the term from his studies of "how people manage stress and stay well" (unlike pathogenesis which studies the causes of diseases). He observed that stress is ubiquitous, but not all individuals have negative health outcomes in response to stress. Instead, some people achieve health despite their exposure to potentially disabling stress factors.
Development
In his 1979 book, Health, Stress and Coping, Antonovsky described a variety of influences that led him to the question of how people survive, adapt, and overcome in the face of even the most punishing life-stress experiences. In his 1987 book, Unraveling the Mysteries of Health, he focused more specifically on a study of women and aging; he found that 29% of women who had survived Nazi concentration camps had positive emotional health, compared to 51% of a control group. His insight was that 29% of the survivors were not emotionally impaired by the stress. Antonovsky wrote: "this for me was the dramatic experience that consciously set me on the road to formulating what I came to call the 'salutogenic model'."
In salutogenic theory, people continually battle with the effects of hardship. These ubiquitous forces are called generalized resource deficits (GRDs). On the other hand, there are generalized resistance resources (GRRs), which are all of the resources that help a person cope and are effective in avoiding or combating a range of psychosocial stressors. Examples are resources such as money, ego-strength, and social support.
Generalized resource deficits will cause the coping mechanisms to fail whenever the sense of coherence is not robust enough to weather the current situation. This causes illness and possibly even death. However, if the sense of coherence is high, a stressor will not necessarily be harmful: it is the balance between generalized resource deficits and resources that determines whether a factor will be pathogenic, neutral, or salutary.
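That balance can be caricatured as a decision rule. Antonovsky gave no numerical formula, so the sketch below is purely illustrative: the weights, threshold and inputs are all invented for the example.

```python
def classify_stressor(resources: float, deficits: float,
                      sense_of_coherence: float) -> str:
    """Toy rule for the claim above: the balance of generalized resistance
    resources (GRRs) against generalized resource deficits (GRDs),
    moderated by the sense of coherence, determines whether a factor
    acts as pathogenic, neutral, or salutary."""
    balance = (resources - deficits) * sense_of_coherence
    if balance > 0.5:
        return "salutary"
    if balance < -0.5:
        return "pathogenic"
    return "neutral"

# The same stressor under different resource balances:
print(classify_stressor(resources=3.0, deficits=1.0, sense_of_coherence=0.8))  # salutary
print(classify_stressor(resources=1.0, deficits=3.0, sense_of_coherence=0.8))  # pathogenic
```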
Antonovsky's formulation was that the generalized resistance resources enabled individuals to make sense of and manage events. He argued that over time, in response to positive experiences provided by successful use of different resources, an individual would develop an attitude that was "in itself the essential tool for coping".
Sense of coherence
The "sense of coherence" is a theoretical formulation that provides a central explanation for the role of stress in human functioning. "Beyond the specific stress factors that one might encounter in life, and beyond your perception and response to those events, what determines whether stress will cause you harm is whether or not the stress violates your sense of coherence." Antonovsky defined Sense of Coherence as:
"a global orientation that expresses the extent to which one has a pervasive, enduring though dynamic feeling of confidence that (1) the stimuli deriving from one's internal and external environments in the course of living are structured, predictable and explicable; (2) the resources are available to one to meet the demands posed by these stimuli; and (3) these demands are challenges, worthy of investment and engagement."
In his formulation, the sense of coherence has three components:
Comprehensibility: a belief that things happen in an orderly and predictable fashion and a sense that you can understand events in your life and reasonably predict what will happen in the future.
Manageability: a belief that you have the skills or ability, the support, the help, or the resources necessary to take care of things, and that things are manageable and within your control.
Meaningfulness: a belief that things in life are interesting and a source of satisfaction, that things are really worthwhile and that there is good reason or purpose to care about what happens.
According to Antonovsky, the third element is the most important. If a person believes there is no reason to persist and survive and confront challenges, if they have no sense of meaning, then they will have no motivation to comprehend and manage events. His essential argument is that "salutogenesis" depends on experiencing a strong "sense of coherence". His research demonstrated that the sense of coherence predicts positive health outcomes.
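Antonovsky operationalized the sense of coherence with the Orientation to Life Questionnaire, most commonly the 13-item SOC-13: each item is answered on a 7-point scale, several items are reverse-scored, and the total ranges from 13 to 91, with higher scores indicating a stronger sense of coherence. The sketch below shows that scoring scheme in outline; the responses are invented and the choice of reverse-scored items here is illustrative, not the official key.

```python
def soc13_score(responses, reversed_items):
    """Total SOC-13 score: sum 13 responses on 1-7 scales, reverse-scoring
    (1 <-> 7) the designated items. Totals range from 13 to 91."""
    assert len(responses) == 13
    total = 0
    for i, r in enumerate(responses):
        assert 1 <= r <= 7
        total += (8 - r) if i in reversed_items else r
    return total

responses = [5, 6, 4, 5, 3, 6, 5, 4, 6, 5, 4, 5, 6]   # invented answers
print(soc13_score(responses, reversed_items={0, 3, 4, 6, 12}))  # 56
```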
During the COVID-19 pandemic, an individual's sense of coherence was shown to be associated with the likelihood of adherence to the pandemic safety guidelines.
Fields of application
Health and medicine
Antonovsky viewed his work as primarily addressed to the fields of health psychology, behavioral medicine, and the sociology of health. It has been adopted as a term to describe contemporary approaches to nursing, psychiatry, integrative medicine, and healthcare architecture. The salutogenic framework has also been adapted as a method for rapid, in-the-moment decision making, and has been applied in emergency care and in healthcare architecture. Incorporating concepts from salutogenesis can support a transition from curative to preventive medicine.
Workplace
The sense of coherence, with its three components of meaningfulness, manageability and comprehensibility, has also been applied to the workplace.
Meaningfulness is considered to be related to the feeling of participation and motivation and to a perceived meaning of the work. The meaningfulness component has also been linked with job control and task significance. Job control implies that employees have more authority to make decisions concerning their work and the working process. Task significance involves "the experience of congruence between personal values and work activities, which is accompanied by strong feelings of identification with the attitudes, values or goals of the working tasks and feelings of motivation and involvement".
The manageability component is considered to be linked to job control as well as to access to resources. It has also been considered to be linked with social skills and trust. Social relations also relate to the meaningfulness component.
The comprehensibility component may be influenced by consistent feedback at work, for example through performance appraisals.
Salutogenic perspectives are also considered in the design of offices.
See also
References
Further reading
Becker, C. M., Glascoff, M. A., & Felts, W. M. (2010). "Salutogenesis 30 Years Later: Where do we go from here?" International Electronic Journal of Health Education, 13, 25–32.
Studying Health vs. Studying Disease - Aaron Antonovsky. Lecture at the Congress for Clinical Psychology and Psychotherapy, Berlin, 19 February 1990.
Coping with Existential Threats and the Inevitability of Asking for Meaningfulness - Peter Novak. A philosophical perspective.
Start Making Sense: Applying a salutogenic model to architectural design for psychiatric care - Jan Golembiewski. A method of applying salutogenic theory.
Bengt Lindström, "Salutogenesis – an introduction"
Golembiewski, J. (2012). "Salutogenic design: The neural basis for health promoting environments." World Health Design Scientific Review 5(4): 62–68. https://www.academia.edu/2456916/Salutogenic_design_The_neural_basis_for_health_promoting_environments
Mayer, C.-H. & Krause, C. (Eds.) (2012): Exploring Mental Health: Theoretical and Empirical Discourses on Salutogenesis. Pabst Science Publishers.
Mayer, C.-H. & Hausner, S. (Eds.) (2015): Salutogene Aufstellungen. Beiträge zur Gesundheitsförderung in der systemischen Arbeit. Vandenhoeck & Ruprecht.
Mittelmark, M.B., Sagy, S., Eriksson, M., Bauer, G., Pelikan, J.M., Lindström, B., Espnes, G.A. (Eds.) (2016): The Handbook of Salutogenesis. A comprehensive overview of salutogenesis and its contribution to health promotion theory.
Self-report study
A self-report study is a type of survey, questionnaire, or poll in which respondents read the question and select a response by themselves without any outside interference. A self-report is any method which involves asking a participant about their feelings, attitudes, beliefs and so on. Examples of self-reports are questionnaires and interviews; self-reports are often used as a way of gaining participants' responses in observational studies and experiments.
Self-report studies have validity problems. Patients may exaggerate symptoms in order to make their situation seem worse, or they may under-report the severity or frequency of symptoms in order to minimize their problems. Patients might also simply be mistaken or misremember the material covered by the survey.
Questionnaires and interviews
Questionnaires are a type of self-report method which consist of a set of questions usually in a highly structured written form.
Questionnaires can contain both open and closed questions, and participants record their own answers.
Interviews are a type of spoken questionnaire where the interviewer records the responses. Interviews can be structured, with a predetermined set of questions, or unstructured, with no questions decided in advance.
The main strength of self-report methods is that they allow participants to describe their own experiences, rather than leaving researchers to infer these from observation.
Questionnaires and interviews can often study large samples of people fairly easily and quickly. They can examine a large number of variables and can ask people to reveal behaviour and feelings which have been experienced in real situations.
However, participants may not respond truthfully, either because they cannot remember or because they wish to present themselves in a socially acceptable manner. Social desirability bias can be a significant problem with self-report measures, as participants often answer in a way that portrays themselves in a good light.
Questions are not always clear, and it is not known whether respondents have really understood the question, in which case valid data would not be collected.
If questionnaires are sent out, for example via email or through tutor groups, the response rate can be very low.
Questions can often be leading; that is, they may unwittingly force the respondent to give a particular reply.
Unstructured interviews can be very time-consuming and difficult to carry out, whereas structured interviews can restrict the respondents' replies.
Therefore psychologists often carry out semi-structured interviews, which consist of some pre-determined questions followed up with further questions that allow the respondent to develop their answers.
Open and closed questions
Questionnaires and interviews can use open or closed questions or both.
Closed questions are questions that provide a limited choice (for example, a participant's age or their favorite type of football team), especially if the answer must be taken from a predetermined list. Such questions provide quantitative data, which is easy to analyze. However, these questions do not allow the participant to give in-depth insights.
Open questions are those questions that invite the respondent to provide answers in their own words and provide qualitative data. Although these types of questions are more difficult to analyze, they can produce more in-depth responses and tell the researcher what the participant actually thinks, rather than being restricted by categories.
Rating scales
One of the most common rating scales is the Likert scale. A statement is used and the participant decides how strongly they agree or disagree with the statement. For example, the participant decides how far they agree that Mozzarella cheese is great, with the options of "strongly agree", "agree", "undecided", "disagree", and "strongly disagree". One strength of Likert scales is that they can give an idea about how strongly a participant feels about something. This therefore gives more detail than a simple yes/no answer. Another strength is that the data are quantitative, which are easy to analyse statistically. However, there is a tendency with Likert scales for people to respond towards the middle of the scale, perhaps to make them look less extreme. As with any questionnaire, participants may provide the answers that they feel they should. Moreover, because the data are quantitative, they do not provide in-depth replies.
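To show why such quantitative ratings are easy to analyse, the following minimal Python sketch codes verbal Likert responses numerically and computes a mean. The 1-5 coding and the sample responses are assumptions made purely for illustration; in practice researchers often treat Likert data as ordinal and report medians or frequency counts instead.

```python
# Minimal sketch: coding verbal Likert responses numerically and
# summarising them. The 1-5 coding and the sample data are
# illustrative assumptions, not a prescribed standard.
coding = {
    "strongly disagree": 1,
    "disagree": 2,
    "undecided": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Hypothetical responses to the statement "Mozzarella cheese is great".
responses = ["strongly agree", "agree", "undecided",
             "agree", "disagree", "strongly agree"]

scores = [coding[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(f"mean agreement: {mean_score:.2f} on a 1-5 scale")  # 3.83 here
```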
Fixed-choice questions
Fixed-choice questions are phrased so that the respondent has to make a fixed-choice answer, usually 'yes' or 'no'.
This type of questionnaire is easy to measure and quantify. It also prevents a participant from choosing an option that is not in the list. On the other hand, respondents may not feel that their desired response is available. For example, a person who dislikes all alcoholic beverages may feel that it is inaccurate to choose a favorite alcoholic beverage from a list that includes beer, wine, and liquor, but does not include none of the above as an option. Answers to fixed-choice questions are not in-depth.
Reliability
Reliability refers to how consistent a measuring device is. A measurement is said to be reliable or consistent if it produces similar results when used again in similar circumstances. For example, a speedometer that gave the same readings at the same speed would be reliable; one that did not would be unreliable and of little use.
Importantly, the reliability of self-report measures, such as psychometric tests and questionnaires, can be assessed using the split-half method. This involves splitting a test into two halves and having the same participant complete both; if the scores on the two halves correlate strongly, the measure is considered internally consistent.
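As a rough illustration of the split-half method, the sketch below splits a small set of test scores into odd- and even-numbered items and correlates the two halves. The data, the odd/even split, and the Spearman-Brown correction (a standard adjustment that estimates full-test reliability from the half-test correlation) are assumptions added for the example rather than details from the text above.

```python
# Minimal sketch of the split-half method. Each row holds one
# participant's numeric answers to a six-item test; the data are
# invented for illustration.
import numpy as np

answers = np.array([
    [4, 5, 3, 4, 5, 4],
    [2, 3, 2, 3, 2, 3],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 4, 2, 3, 3],
])

# Split the test into odd- and even-numbered items, scoring each half.
half_a = answers[:, ::2].sum(axis=1)
half_b = answers[:, 1::2].sum(axis=1)

# Correlate the two half-scores across participants.
r = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown correction: estimates the reliability of the
# full-length test from the correlation between its two halves.
reliability = 2 * r / (1 + r)
print(f"split-half reliability: {reliability:.2f}")
```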
Validity
Validity refers to whether a study measures or examines what it claims to measure or examine. Questionnaires are said to often lack validity for a number of reasons: participants may lie, may give the answers they believe are desired, and so on.
A way of assessing the validity of self-report measures is to compare the results of the self-report with another self-report on the same topic (this is called concurrent validity). For example, if an interview is used to investigate sixth-grade students' attitudes toward smoking, the scores could be compared with those from a questionnaire measuring the same students' attitudes toward smoking.
Results of self-report studies have been confirmed by other methods. For example, previously self-reported outcomes have been confirmed by studies that used direct observation with smaller participant populations.
The overarching question asked regarding this strategy is, "Why would the researcher trust what people say about themselves?" When the validity of collected data is challenged, however, there are research tools that can be used to address the problem of respondent bias in self-report studies. These include inventories constructed to minimize respondent distortions, such as scales that assess the attitude of the participant, measure personal bias, and identify resistance, confusion, and insufficient self-reporting time, among others. Leading questions can also be avoided, open questions can be added to allow respondents to expand upon their replies, and confidentiality can be reinforced to allow respondents to give more truthful responses.
Disadvantages
Self-report studies have many advantages, but they also suffer from specific disadvantages due to the way that subjects generally behave. Self-reported answers may be exaggerated; respondents may be too embarrassed to reveal private details; various biases may affect the results, like social desirability bias. There are also cases when respondents guess the hypothesis of the study and provide biased responses that 1) confirm the researcher's conjecture; 2) make them look good; or, 3) make them appear more distressed to receive promised services.
Subjects may also forget pertinent details. Self-report studies are inherently biased by the person's feelings at the time they filled out the questionnaire. If a person feels bad at the time they fill out the questionnaire, for example, their answers will be more negative. If the person feels good at the time, then the answers will be more positive.
As with all studies relying on voluntary participation, results can be biased by a lack of respondents, if there are systematic differences between people who respond and people who do not. Care must be taken to avoid biases due to interviewers and their demand characteristics.
See also
Questionnaire
Self-report inventory
References
Social relation
A social relation is the fundamental unit of analysis within the social sciences, and describes any voluntary or involuntary interpersonal relationship between two or more conspecifics within and/or between groups. The group can be a language or kinship group, a social institution or organization, an economic class, a nation, or gender. Social relations are derived from human behavioral ecology, and, as an aggregate, form a coherent social structure whose constituent parts are best understood relative to each other and to the social ecosystem as a whole.
History
Early inquiries into the nature of social relations featured in the work of sociologists such as Max Weber in his theory of social action, in which social relationships composed of both positive (affiliative) and negative (agonistic) interactions represent opposing effects. Categorizing social interactions enables observational and other social research, yielding concepts such as Gemeinschaft and Gesellschaft (lit. 'community and society'), collective consciousness, etc.
Ancient works containing manuals of good practice in social relations include the text of Pseudo-Phocylides, 175–227, Josephus' polemical work Against Apion, 198–210, and the deutero-canonical Jewish Book of Sirach or Ecclesiasticus.
More recent research on social behaviour has demonstrated that newborn infants tend to instinctually gravitate towards prosocial behaviour. As obligate social apes, humans are born highly altricial, and require an extended period of post-natal development for cultural transmission of social organization, language, and moral frameworks. In linguistic and anthropological frameworks, this is reflected in a culture's kinship terminology, with the default mother-child relation emerging as part of the embryological process.
Forms of relation and interaction
According to Piotr Sztompka, forms of relation and interaction in sociology and anthropology may be described as follows: first and most basic are animal-like behaviors, i.e. various physical movements of the body. Then there are actions—movements with a meaning and purpose. Then there are social behaviors, or social actions, which address (directly or indirectly) other people, which solicit a response from another agent.
Next are social contacts, a pair of social actions, which form the beginning of social interactions. Symbols define social relationships. Without symbols, our social life would be no more sophisticated than that of animals. For example, without symbols, people would have no aunts or uncles, employers or teachers—or even brothers and sisters. In sum, symbolic interactionists analyze how social life depends on the ways people define themselves and others. They study face-to-face interaction, examining how people make sense of life and how they determine their relationships.
See also
Affectional action
Communicative action
Dramaturgical action
Instrumental and value-rational action
Interdependence
Interpersonal relationship
Relations of production
Social isolation
Social movement
Social multiplier effect
Social robot
Symbolic interactionism
Traditional action
Related disciplines
Behavioral ecology
Behavioral sciences
Engaged theory
Social ecology
Social philosophy
Social psychology
References
Bibliography
Azarian, Reza. 2010. "Social Ties: Elements of a Substantive Conceptualisation". Acta Sociologica 53(4):323–38.
Piotr Sztompka, Socjologia, Znak, 2002.
Weber, Max. "The Nature of Social Action". In Weber: Selections in Translation, edited by W. G. Runciman. Cambridge: Cambridge University Press. 1991.
Anthroposophy
Anthroposophy is a spiritual new religious movement, founded in the early 20th century by the esotericist Rudolf Steiner, which postulates the existence of an objective, intellectually comprehensible spiritual world, accessible to human experience. Followers of anthroposophy aim to engage in spiritual discovery through a mode of thought independent of sensory experience. Though proponents claim to present their ideas in a manner that is verifiable by rational discourse and say that they seek precision and clarity comparable to that obtained by scientists investigating the physical world, many of these ideas have been termed pseudoscientific by experts in epistemology and debunkers of pseudoscience.
Anthroposophy has its roots in German idealism, Western and Eastern esoteric ideas, various religious traditions, and modern Theosophy. Steiner chose the term anthroposophy (from Greek ἄνθρωπος, 'human', and σοφία sophia, 'wisdom') to emphasize his philosophy's humanistic orientation. He defined it as "a scientific exploration of the spiritual world"; others have variously called it a "philosophy and cultural movement", a "spiritual movement", a "spiritual science", "a system of thought", or "a spiritualist movement".
Anthroposophical ideas have been applied in a range of fields including education (both in Waldorf schools and in the Camphill movement), environmental conservation, and banking, with additional applications in agriculture, organizational development, the arts, and more.
The Anthroposophical Society is headquartered at the Goetheanum in Dornach, Switzerland. Anthroposophy's supporters include the writers Saul Bellow and Selma Lagerlöf, painters Piet Mondrian, Wassily Kandinsky and Hilma af Klint, filmmaker Andrei Tarkovsky, child psychiatrist Eva Frommer, music therapist Maria Schüppel, Romuva religious founder Vydūnas, and former president of Georgia Zviad Gamsakhurdia. While critics and proponents alike acknowledge Steiner's many anti-racist statements, his collected works "contain pervasive internal contradictions and inconsistencies on racial and national questions."
The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history". Many scientists, physicians, and philosophers, including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh have criticized anthroposophy's application in the areas of medicine, biology, agriculture, and education to be dangerous and pseudoscientific. Ideas of Steiner's that are unsupported or disproven by modern science include: racial evolution, clairvoyance (Steiner claimed he was clairvoyant), and the Atlantis myth.
History
The early work of the founder of anthroposophy, Rudolf Steiner, culminated in his Philosophy of Freedom (also translated as The Philosophy of Spiritual Activity and Intuitive Thinking as a Spiritual Path). Here, Steiner developed a concept of free will based on inner experiences, especially those that occur in the creative activity of independent thought. "Steiner was a moral individualist".
By the beginning of the twentieth century, Steiner's interests turned almost exclusively to spirituality. His work began to draw the attention of others interested in spiritual ideas; among these was the Theosophical Society. From 1900 on, thanks to the positive reception his ideas received from Theosophists, Steiner focused increasingly on his work with the Theosophical Society, becoming the secretary of its section in Germany in 1902. During his leadership, membership increased dramatically, from just a few individuals to sixty-nine lodges.
By 1907, a split between Steiner and the Theosophical Society became apparent. While the Society was oriented toward an Eastern and especially Indian approach, Steiner was trying to develop a path that embraced Christianity and natural science. The split became irrevocable when Annie Besant, then president of the Theosophical Society, presented the child Jiddu Krishnamurti as the reincarnated Christ. Steiner strongly objected and considered any comparison between Krishnamurti and Christ to be nonsense; many years later, Krishnamurti also repudiated the assertion. Steiner's continuing differences with Besant led him to separate from the Theosophical Society Adyar. He was subsequently followed by the great majority of the Theosophical Society's German members, as well as many members of other national sections.
By this time, Steiner had reached considerable stature as a spiritual teacher and expert in the occult. He spoke about what he considered to be his direct experience of the Akashic Records (sometimes called the "Akasha Chronicle"), thought to be a spiritual chronicle of the history, pre-history, and future of the world and mankind. In a number of works, Steiner described a path of inner development he felt would let anyone attain comparable spiritual experiences. In Steiner's view, sound vision could be developed, in part, by practicing rigorous forms of ethical and cognitive self-discipline, concentration, and meditation. In particular, Steiner believed a person's spiritual development could occur only after a period of moral development.
In 1912, Steiner broke away from the Theosophical Society to found an independent group, which he named the Anthroposophical Society. After World War I, members of the young society began applying Steiner's ideas to create cultural movements in areas such as traditional and special education, farming, and medicine.
By 1923, a schism had formed between older members, focused on inner development, and younger members eager to become active in contemporary social transformations. In response, Steiner attempted to bridge the gap by establishing an overall School for Spiritual Science. As a spiritual basis for the reborn movement, Steiner wrote a Foundation Stone Meditation which remains a central touchstone of anthroposophical ideas.
Steiner died just over a year later, in 1925. The Second World War temporarily hindered the anthroposophical movement in most of Continental Europe, as the Anthroposophical Society and most of its practical counter-cultural applications were banned by the Nazi government. Though at least one prominent member of the Nazi Party, Rudolf Hess, was a strong supporter of anthroposophy, very few anthroposophists belonged to the National Socialist Party. In reality, Steiner had both enemies and loyal supporters in the upper echelons of the Nazi regime. Staudenmaier speaks of the "polycratic party-state apparatus", so Nazism's approach to Anthroposophy was not characterized by monolithic ideological unity. When Hess flew to the UK and was imprisoned, their most powerful protector was gone, but Anthroposophists were still not left without supporters among higher-placed Nazis.
The Third Reich had banned almost all esoteric organizations, claiming that these were controlled by Jews. The truth was that while Anthroposophists complained of bad press, they were to a surprising extent tolerated by the Nazi regime, "including outspokenly supportive pieces in the Völkischer Beobachter". Ideological purists from Sicherheitsdienst argued largely in vain against Anthroposophy. According to Staudenmaier, "The prospect of unmitigated persecution was held at bay for years in a tenuous truce between pro-anthroposophical and anti-anthroposophical Nazi factions."
The moral: Anthroposophy itself was not the real stake of that dispute; rather, powerful Nazis wanted to get rid of other powerful Nazis. By comparison, Jehovah's Witnesses were treated much more aggressively than Anthroposophists.
Kurlander stated that "the Nazis were hardly ideologically opposed to the supernatural sciences themselves"—rather they objected to the free (i.e. non-totalitarian) pursuit of supernatural sciences.
According to Hans Büchenbacher, an anthroposophist, the Secretary General of the General Anthroposophical Society, Guenther Wachsmuth, as well as Steiner's widow, Marie Steiner, were "completely pro-Nazi." Marie Steiner-von Sivers, Guenther Wachsmuth, and Albert Steffen had publicly expressed sympathy for the Nazi regime since its beginnings. Led by these sympathies of their leadership, the Swiss and German Anthroposophical organizations chose a path conflating accommodation with collaboration, which in the end ensured that, while the Nazi regime hunted esoteric organizations, Gentile Anthroposophists in Nazi Germany and the countries it occupied were left alone to a surprising extent. They suffered some setbacks from the enemies of Anthroposophy in the upper echelons of the Nazi regime, but they also had loyal supporters there, so overall Gentile Anthroposophists were not badly hit.
Staudenmaier's overall argument is that "there were often no clear-cut lines between theosophy, anthroposophy, ariosophy, astrology and the völkisch movement from which the Nazi Party arose."
By 2007, national branches of the Anthroposophical Society had been established in fifty countries and about 10,000 institutions around the world were working on the basis of anthroposophical ideas.
Etymology and earlier uses of the word
Anthroposophy is an amalgam of the Greek terms ἄνθρωπος ('human') and σοφία ('wisdom'). An early English usage is recorded by Nathan Bailey (1742) as meaning "the knowledge of the nature of man."
The first known use of the term anthroposophy occurs within Arbatel de magia veterum, summum sapientiae studium, a book published anonymously in 1575 and attributed to Heinrich Cornelius Agrippa. The work describes anthroposophy (as well as theosophy) variously as an understanding of goodness, nature, or human affairs. In 1648, the Welsh philosopher Thomas Vaughan published his Anthroposophia Theomagica, or a discourse of the nature of man and his state after death.
The term began to appear with some frequency in philosophical works of the mid- and late-nineteenth century. In the early part of that century, Ignaz Troxler used the term anthroposophy to refer to philosophy deepened to self-knowledge, which he suggested allows deeper knowledge of nature as well. He spoke of human nature as a mystical unity of God and world. Immanuel Hermann Fichte used the term anthroposophy to refer to "rigorous human self-knowledge," achievable through thorough comprehension of the human spirit and of the working of God in this spirit, in his 1856 work Anthropology: The Study of the Human Soul. In 1872, the philosopher of religion Gideon Spicker used the term anthroposophy to refer to self-knowledge that would unite God and world: "the true study of the human being is the human being, and philosophy's highest aim is self-knowledge, or Anthroposophy."
In 1882, the philosopher Robert Zimmermann published the treatise, "An Outline of Anthroposophy: Proposal for a System of Idealism on a Realistic Basis," proposing that idealistic philosophy should employ logical thinking to extend empirical experience. Steiner attended lectures by Zimmermann at the University of Vienna in the early 1880s, thus at the time of this book's publication.
In the early 1900s, Steiner began using the term anthroposophy (i.e. human wisdom) as an alternative to the term theosophy (i.e. divine wisdom).
Central ideas
Spiritual knowledge and freedom
Anthroposophical proponents aim to extend the clarity of the scientific method to phenomena of human soul-life and spiritual experiences. Steiner believed this required developing new faculties of objective spiritual perception, which he maintained was still possible for contemporary humans. The steps of this process of inner development he identified as consciously achieved imagination, inspiration, and intuition. Steiner believed results of this form of spiritual research should be expressed in a way that can be understood and evaluated on the same basis as the results of natural science.
Steiner hoped to form a spiritual movement that would free the individual from any external authority. For Steiner, the human capacity for rational thought would allow individuals to comprehend spiritual research on their own and bypass the danger of dependency on an authority such as himself.
Steiner contrasted the anthroposophical approach with both conventional mysticism, which he considered lacking the clarity necessary for exact knowledge, and natural science, which he considered arbitrarily limited to what can be seen, heard, or felt with the outward senses.
Nature of the human being
In Theosophy, Steiner suggested that human beings unite a physical body of substances gathered from and returning to the inorganic world; a life body (also called the etheric body), in common with all living creatures (including plants); a bearer of sentience or consciousness (also called the astral body), in common with all animals; and the ego, which anchors the faculty of self-awareness unique to human beings.
Anthroposophy describes a broad evolution of human consciousness. In early stages of human evolution, people possessed an intuitive perception of reality, including a clairvoyant perception of spiritual realities. Humanity has since progressively evolved an increasing reliance on intellectual faculties, with a corresponding loss of intuitive or clairvoyant experiences, which have become atavistic. The increasing intellectualization of consciousness, initially a progressive direction of evolution, has led to an excessive reliance on abstraction and a loss of contact with both natural and spiritual realities. However, to go further requires new capacities that combine the clarity of intellectual thought with the imagination and with consciously achieved inspiration and intuitive insights.
Anthroposophy speaks of the reincarnation of the human spirit: that the human being passes between stages of existence, incarnating into an earthly body, living on earth, leaving the body behind, and entering into the spiritual worlds before returning to be born again into a new life on earth. After the death of the physical body, the human spirit recapitulates the past life, perceiving its events as they were experienced by the objects of its actions. A complex transformation takes place between the review of the past life and the preparation for the next life. The individual's karmic condition eventually leads to a choice of parents, physical body, disposition, and capacities that provide the challenges and opportunities that further development requires, which includes karmically chosen tasks for the future life.
Steiner described some conditions that determine the interdependence of a person's lives, or karma.
Evolution
The anthroposophical view of evolution considers all animals to have evolved from an early, unspecialized form. As the least specialized animal, human beings have maintained the closest connection to the archetypal form; contrary to the Darwinian conception of human evolution, all other animals devolve from this archetype. The spiritual archetype originally created by spiritual beings was devoid of physical substance; only later did this descend into material existence on Earth. In this view, human evolution has accompanied the Earth's evolution throughout the existence of the Earth.
Anthroposophy adapted Theosophy's complex system of cycles of world development and human evolution. The evolution of the world is said to have occurred in cycles. The first phase of the world consisted only of heat. In the second phase, a more active condition, light, and a more condensed, gaseous state separate out from the heat. In the third phase, a fluid state arose, as well as a sounding, forming energy. In the fourth (current) phase, solid physical matter first exists. This process is said to have been accompanied by an evolution of consciousness which led up to present human culture.
Ethics
The anthroposophical view is that good is found in the balance between two polar influences on world and human evolution. These are often described through their mythological embodiments as spiritual adversaries which endeavour to tempt and corrupt humanity, Lucifer and his counterpart Ahriman. These have both positive and negative aspects. Lucifer is the light spirit, which "plays on human pride and offers the delusion of divinity", but also motivates creativity and spirituality; Ahriman is the dark spirit that tempts human beings to "...deny [their] link with divinity and to live entirely on the material plane", but that also stimulates intellectuality and technology. Both figures exert a negative effect on humanity when their influence becomes misplaced or one-sided, yet their influences are necessary for human freedom to unfold.
Each human being has the task to find a balance between these opposing influences, and each is helped in this task by the mediation of the Representative of Humanity, also known as the Christ being, a spiritual entity who stands between and harmonizes the two extremes.
Claimed applications
Steiner/Waldorf education
There is a pedagogical movement with over 1000 Steiner or Waldorf schools (the latter name stems from the first such school, founded in Stuttgart in 1919) located in some 60 countries; the great majority of these are independent (private) schools. Sixteen of the schools have been affiliated with the United Nations' UNESCO Associated Schools Project Network, which sponsors education projects that foster improved quality of education throughout the world. Waldorf schools receive full or partial governmental funding in some European nations, Australia and in parts of the United States (as Waldorf method public or charter schools) and Canada.
The schools have been founded in a variety of communities, from the favelas of São Paulo to wealthy suburbs of major cities, and in countries including India, Egypt, Australia, the Netherlands, Mexico and South Africa. Though most of the early Waldorf schools were teacher-founded, the schools today are usually initiated and later supported by a parent community. Waldorf schools are among the most visible anthroposophical institutions.
Biodynamic agriculture
Biodynamic agriculture is a form of alternative agriculture based on pseudo-scientific and esoteric concepts. It was also the first intentional form of organic farming, begun in 1924, when Rudolf Steiner gave a series of lectures published in English as The Agriculture Course. Steiner is considered one of the founders of the modern organic farming movement.
"And Himmler, Hess, and Darré all promoted biodynamic (anthroposophic) approaches to farming as an alternative to industrial agriculture." "'[...] with the active cooperation of the Reich League for Biodynamic Agriculture' [...] Pancke, Pohl, and Hans Merkel established additional biodynamic plantations across the eastern territories as well as Dachau, Ravensbrück, and Auschwitz concentration camps. Many were staffed by anthroposophists."
"Steiner’s 'biodynamic agriculture' based on 'restoring the quasi-mystical relationship between earth and the cosmos' was widely accepted in the Third Reich (28)."
Anthroposophical medicine
Anthroposophical medicine is a form of alternative medicine based on pseudoscientific and occult notions rather than in science-based medicine.
Most anthroposophic medical preparations are highly diluted, like homeopathic remedies. While harmless in and of themselves, using them in place of conventional medicine to treat illness is ineffective and risks adverse consequences.
One of the most studied applications has been the use of mistletoe extracts in cancer therapy, but research has found no evidence of benefit.
Special needs education and services
In 1922, Ita Wegman founded an anthroposophical center for special needs education, the Sonnenhof, in Switzerland. In 1940, Karl König founded the Camphill Movement in Scotland. The latter in particular has spread widely, and there are now over a hundred Camphill communities and other anthroposophical homes for children and adults in need of special care in about 22 countries around the world. Karl König, Thomas Weihs and others have written extensively on the ideas underlying this approach to special education.
Architecture
Steiner designed around thirteen buildings in an organic-expressionist architectural style. Foremost among these are his designs for the two Goetheanum buildings in Dornach, Switzerland. Thousands of further buildings have been built by later generations of anthroposophic architects.
Architects who have been strongly influenced by the anthroposophic style include Imre Makovecz in Hungary, Hans Scharoun and Joachim Eble in Germany, Erik Asmussen in Sweden, Kenji Imai in Japan, Thomas Rau, Anton Alberts and Max van Huut in the Netherlands, Christopher Day and Camphill Architects in the UK, Thompson and Rose in America, Denis Bowman in Canada, and Walter Burley Griffin and Gregory Burgess in Australia.
ING House in Amsterdam is a contemporary building by an anthroposophical architect which has received awards for its ecological design and approach to a self-sustaining ecology as an autonomous building and example of sustainable architecture.
Eurythmy
Together with Marie von Sivers, Steiner developed eurythmy, a performance art combining dance, speech, and music.
Social finance and entrepreneurship
Around the world today are a number of banks, companies, charities, and schools for developing co-operative forms of business using Steiner's ideas about economic associations, aiming at harmonious and socially responsible roles in the world economy. The first anthroposophic bank was the Gemeinschaftsbank für Leihen und Schenken in Bochum, Germany, founded in 1974.
Socially responsible banks founded out of anthroposophy include Triodos Bank, founded in the Netherlands in 1980 and also active in the UK, Germany, Belgium, Spain and France. Other examples include Cultura Sparebank, which dates from 1982, when a group of Norwegian anthroposophists began an initiative for ethical banking, though it only began to operate as a savings bank in Norway in the late 1990s; La Nef in France; and RSF Social Finance in San Francisco.
Harvard Business School historian Geoffrey Jones traced the considerable impact both Steiner and later anthroposophical entrepreneurs had on the creation of many businesses in organic food, ecological architecture and sustainable finance.
Organizational development, counselling and biography work
Bernard Lievegoed, a psychiatrist, founded a new method of individual and institutional development oriented towards humanizing organizations and linked with Steiner's ideas of the threefold social order. This work is represented by the NPI Institute for Organizational Development in the Netherlands and sister organizations in many other countries.
Speech and drama
There are also anthroposophical movements to renew speech and drama, the most important of which are based in the work of Marie Steiner-von Sivers (speech formation, also known as Creative Speech) and the Chekhov Method originated by Michael Chekhov (nephew of Anton Chekhov).
Art
Anthroposophic painting, a style inspired by Rudolf Steiner, featured prominently in the first Goetheanum's cupola. The technique frequently begins by filling the surface to be painted with color, out of which forms are gradually developed, often images with symbolic-spiritual significance. Paints that allow for many transparent layers are preferred, and often these are derived from plant materials. Rudolf Steiner appointed the English sculptor Edith Maryon as head of the School of Fine Art at the Goetheanum. Together they carved the 9-metre tall sculpture titled The Representative of Humanity, on display at the Goetheanum.
Other
Phenomenological approaches to science, pseudo-scientific ideas based on Goethe's philosophy of nature.
John Wilkes' fountain-like flowforms, sculptural forms that guide water into rhythmic movement for the purposes of decoration.
Antisemitic legislation in Italy (1938–1945).
The Fellowship Community in Chestnut Ridge, New York, United States, which includes a retirement community and other anthroposophic projects.
The Harduf kibbutz in Israel.
Social goals
For a period after World War I, Steiner was extremely active and well known in Germany, in part because he lectured widely proposing social reforms. Steiner was a sharp critic of nationalism, which he saw as outdated, and a proponent of achieving social solidarity through individual freedom. A petition proposing a radical change in the German constitution and expressing his basic social ideas (signed by Hermann Hesse, among others) was widely circulated. His main book on social reform is Toward Social Renewal.
Anthroposophy continues to aim at reforming society through maintaining and strengthening the independence of the spheres of cultural life, human rights and the economy. It emphasizes a particular ideal in each of these three realms of society:
Liberty in cultural life
Equality of rights, the sphere of legislation
Fraternity in the economic sphere
According to Cees Leijenhorst, "Steiner outlined his vision of a new political and social philosophy that avoids the two extremes of capitalism and socialism."
Steiner did influence Italian Fascism, which exploited "his racial and anti-democratic dogma." The fascist ministers Giovanni Antonio Colonna di Cesarò (nicknamed "the Anthroposophist duke"; he became antifascist after taking part in Benito Mussolini's government) and Ettore Martinoli openly expressed their sympathy for Rudolf Steiner. Most members of the occult pro-fascist UR Group were Anthroposophists.
According to Egil Asprem, "Steiner’s teachings had a clear authoritarian ring, and developed a rather crass polemic against 'materialism', 'liberalism', and cultural 'degeneration'. [...] For example, anthroposophical medicine was developed to contrast with the 'materialistic' (and hence 'degenerate') medicine of the establishment."
Esoteric path
Paths of spiritual development
According to Steiner, a real spiritual world exists, evolving along with the material one. Steiner held that the spiritual world can be researched in the right circumstances through direct experience, by persons practicing rigorous forms of ethical and cognitive self-discipline. Steiner described many exercises he said were suited to strengthening such self-discipline; the most complete exposition of these is found in his book How To Know Higher Worlds. The aim of these exercises is to develop higher levels of consciousness through meditation and observation. Details about the spiritual world, Steiner suggested, could on such a basis be discovered and reported, though no more infallibly than the results of natural science.
Steiner regarded his research reports as being important aids to others seeking to enter into spiritual experience. He suggested that a combination of spiritual exercises (for example, concentrating on an object such as a seed), moral development (control of thought, feelings and will combined with openness, tolerance and flexibility) and familiarity with other spiritual researchers' results would best further an individual's spiritual development. He consistently emphasised that any inner, spiritual practice should be undertaken in such a way as not to interfere with one's responsibilities in outer life. Steiner distinguished between what he considered were true and false paths of spiritual investigation.
In anthroposophy, artistic expression is also treated as a potentially valuable bridge between spiritual and material reality.
Prerequisites to and stages of inner development
Steiner's stated prerequisites to beginning on a spiritual path include a willingness to take up serious cognitive studies, a respect for factual evidence, and a responsible attitude. Central to progress on the path itself is a harmonious cultivation of the following qualities:
Control over one's own thinking
Control over one's will
Composure
Positivity
Impartiality
Steiner sees meditation as a concentration and enhancement of the power of thought. By focusing consciously on an idea, feeling or intention the meditant seeks to arrive at pure thinking, a state exemplified by but not confined to pure mathematics. In Steiner's view, conventional sensory-material knowledge is achieved through relating perception and concepts. The anthroposophic path of esoteric training articulates three further stages of supersensory knowledge, which do not necessarily follow strictly sequentially in any single individual's spiritual progress.
By focusing on symbolic patterns, images, and poetic mantras, the meditant can achieve consciously directed Imaginations that allow sensory phenomena to appear as the expression of underlying beings of a soul-spiritual nature.
By transcending such imaginative pictures, the meditant can become conscious of the meditative activity itself, which leads to experiences of expressions of soul-spiritual beings unmediated by sensory phenomena or qualities. Steiner calls this stage Inspiration.
By intensifying the will-forces through exercises such as a chronologically reversed review of the day's events, the meditant can achieve a further stage of inner independence from sensory experience, leading to direct contact, and even union, with spiritual beings ("Intuition") without loss of individual awareness.
Spiritual exercises
Steiner described numerous exercises he believed would bring spiritual development; other anthroposophists have added many others. A central principle is that "for every step in spiritual perception, three steps are to be taken in moral development." According to Steiner, moral development reveals the extent to which one has achieved control over one's inner life and can exercise it in harmony with the spiritual life of other people; it shows the real progress in spiritual development, the fruits of which are given in spiritual perception. It also guarantees the capacity to distinguish between false perceptions or illusions (which are possible in perceptions of both the outer world and the inner world) and true perceptions: i.e., the capacity to distinguish in any perception between the influence of subjective elements (i.e., viewpoint) and objective reality.
Place in Western philosophy
Steiner built upon Goethe's conception of an imaginative power capable of synthesizing the sense-perceptible form of a thing (an image of its outer appearance) and the concept we have of that thing (an image of its inner structure or nature). Steiner added to this the conception that a further step in the development of thinking is possible when the thinker observes his or her own thought processes. "The organ of observation and the observed thought process are then identical, so that the condition thus arrived at is simultaneously one of perception through thinking and one of thought through perception."
Thus, in Steiner's view, we can overcome the subject-object divide through inner activity, even though all human experience begins by being conditioned by it. In this connection, Steiner examines the step from thinking determined by outer impressions to what he calls sense-free thinking. He characterizes thoughts he considers without sensory content, such as mathematical or logical thoughts, as free deeds. Steiner believed he had thus located the origin of free will in our thinking, and in particular in sense-free thinking.
Some of the epistemic basis for Steiner's later anthroposophical work is contained in the seminal work, Philosophy of Freedom. In his early works, Steiner sought to overcome what he perceived as the dualism of Cartesian idealism and Kantian subjectivism by developing Goethe's conception of the human being as a natural-supernatural entity, that is: natural in that humanity is a product of nature, supernatural in that through our conceptual powers we extend nature's realm, allowing it to achieve a reflective capacity in us as philosophy, art and science. Steiner was one of the first European philosophers to overcome the subject-object split in Western thought. Though not well known among philosophers, his philosophical work was taken up by Owen Barfield (and through him influenced the Inklings, an Oxford group of Christian writers that included J. R. R. Tolkien and C. S. Lewis).
Christian and Jewish mystical thought have also influenced the development of anthroposophy.
Union of science and spirit
Steiner believed in the possibility of applying the clarity of scientific thinking to spiritual experience, which he saw as deriving from an objectively existing spiritual world. Steiner identified mathematics, which attains certainty through thinking itself, thus through inner experience rather than empirical observation, as the basis of his epistemology of spiritual experience.
Anthroposophy regards mainstream science as Ahrimanic.
Relationship to religion
Christ as the center of earthly evolution
Steiner's writing, though appreciative of all religions and cultural developments, emphasizes Western tradition as having evolved to meet contemporary needs. He describes Christ and his mission on earth of bringing individuated consciousness as having a particularly important place in human evolution, whereby:
Christianity has evolved out of previous religions;
The being which manifests in Christianity also manifests in all faiths and religions, and each religion is valid and true for the time and cultural context in which it was born;
All historical forms of Christianity need to be transformed considerably to meet the continuing evolution of humanity.
Thus, anthroposophy considers there to be a being who unifies all religions, and who is not represented by any particular religious faith. This being is, according to Steiner, not only the Redeemer of the Fall from Paradise, but also the unique pivot and meaning of earth's evolutionary processes and of human history. To describe this being, Steiner periodically used terms such as the "Representative of Humanity" or the "good spirit" rather than any denominational term.
Divergence from conventional Christian thought
Steiner's views of Christianity diverge from conventional Christian thought in key places, and include gnostic elements:
One central point of divergence is Steiner's views on reincarnation and karma.
Steiner differentiated three contemporary paths by which he believed it possible to arrive at Christ:
Through heart-felt experiences of the Gospels; Steiner described this as the historically dominant path, but becoming less important in the future.
Through inner experiences of a spiritual reality; this Steiner regarded as increasingly the path of spiritual or religious seekers today.
Through initiatory experiences whereby the reality of Christ's death and resurrection are experienced; Steiner believed this is the path people will increasingly take.
Steiner also believed that there were two different Jesus children involved in the Incarnation of the Christ: one child descended from Solomon, as described in the Gospel of Matthew, the other child from Nathan, as described in the Gospel of Luke. (The genealogies given in the two gospels diverge some thirty generations before Jesus' birth, and 'Jesus' was a common name in biblical times.)
His view of the second coming of Christ is also unusual; he suggested that this would not be a physical reappearance, but that the Christ being would become manifest in non-physical form, visible to spiritual vision and apparent in community life for increasing numbers of people beginning around the year 1933.
He emphasized his belief that in the future humanity would need to be able to recognize the Spirit of Love in all its genuine forms, regardless of what name would be used to describe this being. He also warned that the traditional name of the Christ might be misused, and the true essence of this being of love ignored.
According to Jane Gilmer, "Jung and Steiner were both versed in ancient gnosis and both envisioned a paradigmatic shift in the way it was delivered."
As Gilles Quispel put it, "After all, Theosophy is a pagan, Anthroposophy a Christian form of modern Gnosis."
Maria Carlson stated "Theosophy and Anthroposophy are fundamentally Gnostic systems in that they posit the dualism of Spirit and Matter."
R. McL. Wilson in The Oxford Companion to the Bible agrees that Steiner and Anthroposophy are under the influence of gnosticism.
Robert A. McDermott says Anthroposophy belongs to Christian Rosicrucianism. According to Nicholas Goodrick-Clarke, Rudolf Steiner "blended modern Theosophy with a Gnostic form of Christianity, Rosicrucianism, and German Naturphilosophie".
Geoffrey Ahern states that Anthroposophy belongs to neo-gnosticism broadly conceived, which he identifies with Western esotericism and occultism.
According to Catholic scholars, Anthroposophy belongs to the New Age.
Judaism
Rudolf Steiner wrote and lectured on Judaism and Jewish issues over much of his adult life. He was a fierce opponent of popular antisemitism, but asserted that there was no justification for the existence of Judaism and Jewish culture in the modern world, a radical assimilationist perspective which saw the Jews completely integrating into the larger society. He also supported Émile Zola's position in the Dreyfus affair. Steiner emphasized Judaism's central importance to the constitution of the modern era in the West but suggested that to appreciate the spirituality of the future it would need to overcome its tendency toward abstraction.
Steiner financed the publication of the book Die Entente-Freimaurerei und der Weltkrieg (1919) and also wrote the foreword for it, partly based upon his own ideas. The publication comprised a conspiracy theory according to which World War I was a consequence of a collusion of Freemasons and Jews – still favorite scapegoats of conspiracy theorists – their purpose being the destruction of Germany. In fact, Steiner spent a large sum of money publishing "a now classic work of anti-Masonry and anti-Judaism". The writing was later enthusiastically received by the Nazi Party.
In his later life, Steiner was accused by the Nazis of being Jewish, and Adolf Hitler called anthroposophy "Jewish methods". The anthroposophical institutions in Germany were banned during Nazi rule and several anthroposophists sent to concentration camps.
Important early anthroposophists who were Jewish included two central members on the executive boards of the precursors to the modern Anthroposophical Society, and Karl König, the founder of the Camphill movement, who had converted to Christianity. Martin Buber and Hugo Bergmann, who viewed Steiner's social ideas as a solution to the Arab–Jewish conflict, were also influenced by anthroposophy.
There are numerous anthroposophical organisations in Israel, including the anthroposophical kibbutz Harduf, founded by Jesaiah Ben-Aharon, forty Waldorf kindergartens and seventeen Waldorf schools (as of 2018). A number of these organizations are striving to foster positive relationships between the Arab and Jewish populations: The Harduf Waldorf school includes both Jewish and Arab faculty and students, and has extensive contact with the surrounding Arab communities, while the first joint Arab-Jewish kindergarten was a Waldorf program in Hilf near Haifa.
Christian Community
Towards the end of Steiner's life, a group of theology students (primarily Lutheran, with some Roman Catholic members) approached Steiner for help in reviving Christianity, in particular "to bridge the widening gulf between modern science and the world of spirit". They approached a notable Lutheran pastor, Friedrich Rittelmeyer, who was already working with Steiner's ideas, to join their efforts. Out of their co-operative endeavor, the Movement for Religious Renewal, now generally known as The Christian Community, was born. Steiner emphasized that he considered this movement, and his role in creating it, to be independent of his anthroposophical work, as he wished anthroposophy to be independent of any particular religion or religious denomination.
Reception
Anthroposophy's supporters include Saul Bellow, Selma Lagerlöf, Andrei Bely, Joseph Beuys, Owen Barfield, architect Walter Burley Griffin, Wassily Kandinsky, Andrei Tarkovsky, Bruno Walter, Right Livelihood Award winners Sir George Trevelyan and Ibrahim Abouleish, and child psychiatrist Eva Frommer.
The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history." However, authors, scientists, and physicians including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh have criticized anthroposophy's application in the areas of medicine, biology, agriculture, and education as dangerous and pseudoscientific. Others, including former Waldorf pupil Dan Dugan and historian Geoffrey Ahern, have criticized anthroposophy itself as a dangerous quasi-religious movement that is fundamentally anti-rational and anti-scientific.
Scientific basis
Though Rudolf Steiner studied natural science at the Vienna Technical University at the undergraduate level, his doctorate was in epistemology and very little of his work is directly concerned with the empirical sciences. In his mature work, when he did refer to science it was often to present phenomenological or Goethean science as an alternative to what he considered the materialistic science of his contemporaries.
Steiner's primary interest was in applying the methodology of science to realms of inner experience and the spiritual worlds (his appreciation that the essence of science is its method of inquiry is unusual among esotericists), and Steiner called anthroposophy Geisteswissenschaft (science of the mind, cultural/spiritual science), a term generally used in German to refer to the humanities and social sciences.
Whether this is a sufficient basis for anthroposophy to be considered a spiritual science has been a matter of controversy. As Freda Easton explained in her study of Waldorf schools, "Whether one accepts anthroposophy as a science depends upon whether one accepts Steiner's interpretation of a science that extends the consciousness and capacity of human beings to experience their inner spiritual world."
Sven Ove Hansson has disputed anthroposophy's claim to a scientific basis, stating that its ideas are not empirically derived and neither reproducible nor testable. Carlo Willmann points out that as, on its own terms, anthroposophical methodology offers no possibility of being falsified except through its own procedures of spiritual investigation, no intersubjective validation is possible by conventional scientific methods; it thus cannot stand up to empiricist critics. Peter Schneider describes such objections as untenable, asserting that if a non-sensory, non-physical realm exists, then according to Steiner the experiences of pure thinking possible within the normal realm of consciousness would already be experiences of that, and it would be impossible to exclude the possibility of empirically grounded experiences of other supersensory content.
Olav Hammer suggests that anthroposophy carries scientism "to lengths unparalleled in any other Esoteric position" due to its dependence upon claims of clairvoyant experience and its subsuming of natural science under "spiritual science." Hammer also asserts that the development of what he calls "fringe" sciences such as anthroposophic medicine and biodynamic agriculture is justified partly on the basis of the ethical and ecological values they promote, rather than purely on a scientific basis.
Though Steiner held that spiritual vision itself is difficult for others to achieve, he recommended open-mindedly exploring and rationally testing the results of such research; he also urged others to follow a spiritual training that would allow them to apply his methods directly and achieve comparable results.
Anthony Storr stated about Rudolf Steiner's Anthroposophy: "His belief system is so eccentric, so unsupported by evidence, so manifestly bizarre, that rational skeptics are bound to consider it delusional... But, whereas Einstein's way of perceiving the world by thought became confirmed by experiment and mathematical proof, Steiner's remained intensely subjective and insusceptible of objective confirmation."
According to Dan Dugan, Steiner was a champion of the following pseudoscientific claims, also championed by Waldorf schools:
wrong color theory;
obtuse criticism of the theory of relativity;
weird ideas about motions of the planets;
supporting vitalism;
doubting germ theory;
weird approach to physiological systems;
"the heart is not a pump".
Religious nature
Two German scholars have called Anthroposophy "the most successful form of 'alternative' religion in the [twentieth] century." Other scholars stated that Anthroposophy is "aspiring to the status of religious dogma". According to Maria Carlson, anthroposophy is a "positivistic religion" "offering a seemingly logical theology based on pseudoscience."
According to Swartz, Brandt, Hammer, and Hansson, Anthroposophy is a religion. They also call it a "settled new religious movement", while Martin Gardner called it a cult. Another scholar also calls it a new religious movement or a new spiritual movement. As early as 1924, Anthroposophy was labeled a "new religious movement" and an "occultist movement". Other scholars agree it is a new religious movement. According to one scholar, both the theory and practice of Anthroposophy display characteristics of religion, and, according to Zander, Rudolf Steiner would plead no contest. According to Zander, Steiner's book Geheimwissenschaft [Occult Science] contains Steiner's mythology about cosmogenesis. Hammer notes that Anthroposophy is a synthesis which includes occultism, and that Steiner's occult doctrines bear a strong resemblance to post-Blavatskyan Theosophy (e.g. Annie Besant and Charles Webster Leadbeater). According to Helmut Zander, Steiner's clairvoyant insights always developed according to the same pattern: he took revised texts from theosophical literature and passed them off as his own higher insights. Because he did not want to be an occult storyteller but a (spiritual) scientist, he adapted his reading, which he claimed to have seen supernaturally in the world's memory, to the current state of technology. When, for example, the Wright brothers began flying with gliders and eventually with motorized aircraft in 1903, Steiner transformed the ponderous gondola airships of his Atlantis story into airplanes with elevators and rudders in 1904.
As an explicitly spiritual movement, anthroposophy has sometimes been called a religious philosophy. In 1998, People for Legal and Non-Sectarian Schools (PLANS) started a lawsuit alleging that anthroposophy is a religion for Establishment Clause purposes and that several California school districts should therefore not be chartering Waldorf schools; the lawsuit was dismissed in 2012 for failure to show anthroposophy was a religion. A 2012 paper in legal science reports this verdict as provisional and disagrees with its result, arguing that anthroposophy was declared "not a religion" due to an outdated legal framework. In 2000, a French court ruled that a government minister's description of anthroposophy as a cult was defamatory. The French governmental anti-cults agency MIVILUDES reported that it remains vigilant about Anthroposophy, especially because of its deviant medical applications and its work with underage persons, and that the works of Grégoire Perra, which lambast anthroposophical medicine, do not constitute defamation. According to Perra, anthroposophical physicians believe diseases are caused primarily by karma and demons rather than material causes, and treat the Gospel of Luke as their main handbook of medical science, leading them to believe they have magical powers and that medicine is essentially a form of magic. The professional French organization of anthroposophic physicians sued Perra over such claims; it was ordered to pay 25,000 euros in damages for abusively suing him.
Scholars state that Anthroposophy is influenced by Christian Gnosticism. In 1919, the Catholic Church issued an edict classifying Anthroposophy as "a neognostic heresy", despite the fact that Steiner "very well respected the distinctions on which Catholic dogma insists".
Some Baptist and mainstream academic heresiologists still appear inclined to agree with the narrower 1919 edict on dogma, and the Lutheran (Missouri Synod) apologist and heresiologist Eldon K. Winker quoted Ron Rhodes to the effect that Steiner's Christology is very similar to that of Cerinthus. Steiner did perceive "a distinction between the human person Jesus, and Christ as the divine Logos", which could be construed as Gnostic but not Docetic, since "they do not believe the Christ departed from Jesus prior to the crucifixion". "Steiner's Christology is discussed as a central element of his thought in Johannes Hemleben, Rudolf Steiner: A Documentary Biography, trans. Leo Twyman (East Grinstead, Sussex: Henry Goulden, 1975), pp. 96–100. From the perspective of orthodox Christianity, it may be said that Steiner combined a docetic understanding of Christ's nature with the Adoptionist heresy." Older scholarship says Steiner's Christology is Nestorian. According to Egil Asprem, "Steiner's Christology was, however, quite heterodox, and hardly compatible with official church doctrine."
Statements on race
Rudolf Steiner was an extreme pan-German nationalist and never disavowed that stance.
Some anthroposophical ideas challenged the National Socialist racialist and nationalistic agenda. In contrast, some American educators have criticized Waldorf schools for failing to equally include the fables and myths of all cultures, instead favoring European stories over African ones.
From the mid-1930s on, National Socialist ideologues attacked the anthroposophical worldview as being opposed to Nazi racist and nationalistic principles; anthroposophy considered "Blood, Race and Folk" as primitive instincts that must be overcome.
An academic analysis of the educational approach in public schools noted that "[A] naive version of the evolution of consciousness, a theory foundational to both Steiner's anthroposophy and Waldorf education, sometimes places one race below another in one or another dimension of development. It is easy to imagine why there are disputes [...] about Waldorf educators' insisting on teaching Norse tales and Greek myths to the exclusion of African modes of discourse."
In response to such critiques, the Anthroposophical Society in America published in 1998 a statement clarifying its stance:
We explicitly reject any racial theory that may be construed to be part of Rudolf Steiner's writings. The Anthroposophical Society in America is an open, public society and it rejects any purported spiritual or scientific theory on the basis of which the alleged superiority of one race is justified at the expense of another race.
Tommy Wieringa, a Dutch writer who grew up among Anthroposophists, wrote, commenting upon an essay by an Anthroposophist: "It was a meeting of old acquaintances: Nazi leaders such as Rudolf Hess and Heinrich Himmler already recognized a kindred spirit in Rudolf Steiner, with his theories about racial purity, esoteric medicine and biodynamic agriculture."
The racism of Anthroposophy is spiritual and paternalistic (i.e., benevolent), while the racism of fascism is materialistic and often malign. Olav Hammer, a university professor and expert in new religious movements and Western esotericism, confirms that the racist and anti-Semitic character of Steiner's teachings can no longer be denied, even if it is "spiritual racism".
According to Munoz, from a materialist perspective (i.e., assuming no reincarnation), Anthroposophy is racist, but from the spiritual perspective (i.e., taking reincarnation as given) it is not.
Reception by Nazi regime in Germany
Though several prominent members of the Nazi Party were supporters of anthroposophy and its movements, including an agriculturalist, SS colonel Hermann Schneider, and Gestapo chief Heinrich Müller, anti-Nazis such as Traute Lafrenz, a member of the White Rose resistance movement, were also followers. Rudolf Hess, the Deputy Führer, was a patron of Waldorf schools and a staunch defender of biodynamic agriculture. "Before 1933, Himmler, Walther Darré (the future Reich Agriculture Minister), and Rudolf Höss (the future commandant of Auschwitz) had studied ariosophy and anthroposophy, belonged to the occult-inspired Artamanen movement, [...]"
"One of the most insightful contributions to this area is Peter Staudenmaier's case study of Anthroposophy, which has demonstrated the ambiguous role of Anthroposophists in fascist Italy and Nazi Germany." According to Staudenmaier, the fascist and Nazi authorities saw occultism not as deviant, but as deeply familiar.
See also
Esotericism in Germany and Austria
Pneumatosophy
Spiritual but not religious
References
Notes
Citations
External links
Rudolf Steiner Archive (Steiner's works online)
Steiner's complete works in German
Rudolf Steiner Handbook (PDF; 56 MB)
Goetheanum
Societies
General Anthroposophical Society
Anthroposophical Society in America
Anthroposophical Society in Great Britain
Anthroposophical Initiatives in India
Anthroposophical Society in Australia
Anthroposophical Society in New Zealand
Esoteric Christianity
Rudolf Steiner
Spirituality
New religious movements
Social engineering (security)

In the context of information security, social engineering is the psychological manipulation of people into performing actions or divulging confidential information. A type of confidence trick for the purpose of information gathering, fraud, or system access, it differs from a traditional "con" in the sense that it is often one of the many steps in a more complex fraud scheme. It has also been defined as "any act that influences a person to take an action that may or may not be in their best interests."
Research done in 2020 has indicated that social engineering will be one of the most prominent challenges of the upcoming decade. Having proficiency in social engineering will be increasingly important for organizations and countries, due to the impact on geopolitics as well. Social engineering raises the question of whether our decisions will be accurately informed if our primary information is engineered and biased.
Social engineering attacks have been increasing in intensity and number, cementing the need for novel detection techniques and cyber security educational programs.
Techniques and terms
All social engineering techniques are based on attributes of human decision-making known as cognitive biases.
One example of social engineering is an individual who walks into a building and posts an official-looking announcement to the company bulletin that says the number for the help desk has changed. So, when employees call for help the individual asks them for their passwords and IDs thereby gaining the ability to access the company's private information.
In another example of social engineering, the hacker contacts the target on a social networking site and starts a conversation. Gradually the hacker gains the target's trust and then uses that trust to get access to sensitive information such as passwords or bank account details.
Pretexting
Pretexting (adj. pretextual), also known in the UK as blagging, is the act of creating and using an invented scenario (the pretext) to engage a targeted victim in a manner that increases the chance the victim will divulge information or perform actions that would be unlikely in ordinary circumstances. An elaborate lie, it most often involves some prior research or setup and the use of this information for impersonation (e.g., date of birth, Social Security number, last bill amount) to establish legitimacy in the mind of the target.
Water holing
Water holing is a targeted social engineering strategy that capitalizes on the trust users have in websites they regularly visit. The victim feels safe to do things they would not do in a different situation. A wary person might, for example, purposefully avoid clicking a link in an unsolicited email, but the same person would not hesitate to follow a link on a website they often visit. So, the attacker prepares a trap for the unwary prey at a favored watering hole. This strategy has been successfully used to gain access to some (supposedly) very secure systems.
Baiting
Baiting is like the real-world Trojan horse that uses physical media and relies on the curiosity or greed of the victim. In this attack, attackers leave malware-infected floppy disks, CD-ROMs, or USB flash drives in locations people will find them (bathrooms, elevators, sidewalks, parking lots, etc.), give them legitimate and curiosity-piquing labels, and wait for victims.
Unless computer controls block the infection, merely inserting the media into a PC set to "auto-run" it compromises the machine. Hostile devices can also be used. For instance, a "lucky winner" is sent a free digital audio player that compromises any computer it is plugged into. A "road apple" (the colloquial term for horse manure, suggesting the device's undesirable nature) is any removable media with malicious software left in opportunistic or conspicuous places. It may be a CD, DVD, or USB flash drive, among other media. Curious people take it and plug it into a computer, infecting the host and any attached networks. Again, hackers may give them enticing labels, such as "Employee Salaries" or "Confidential".
One study published in 2016 had researchers drop 297 USB drives around the campus of the University of Illinois. The drives contained files on them that linked to webpages owned by the researchers. The researchers were able to see how many of the drives had files on them opened, but not how many were inserted into a computer without having a file opened. Of the 297 drives that were dropped, 290 (98%) of them were picked up and 135 (45%) of them "called home".
Law
In common law, pretexting is an invasion of privacy tort of appropriation.
Pretexting of telephone records
In December 2006, the United States Congress approved a Senate-sponsored bill making the pretexting of telephone records a federal felony with fines of up to $250,000 and ten years in prison for individuals (or fines of up to $500,000 for companies). It was signed by President George W. Bush on 12 January 2007.
Federal legislation
The 1999 Gramm-Leach-Bliley Act (GLBA) is a U.S. Federal law that specifically addresses pretexting of banking records as an illegal act punishable under federal statutes. When a business entity such as a private investigator, SIU insurance investigator, or an adjuster conducts any type of deception, it falls under the authority of the Federal Trade Commission (FTC). This federal agency has the obligation and authority to ensure that consumers are not subjected to any unfair or deceptive business practices. US Federal Trade Commission Act, Section 5 of the FTCA states, in part:
"Whenever the Commission shall have reason to believe that any such person, partnership, or corporation has been or is using any unfair method of competition or unfair or deceptive act or practice in or affecting commerce, and if it shall appear to the Commission that a proceeding by it in respect thereof would be to the interest of the public, it shall issue and serve upon such person, partnership, or corporation a complaint stating its charges in that respect."
The statute states that when someone obtains any personal, non-public information from a financial institution or the consumer, their action is subject to the statute. It relates to the consumer's relationship with the financial institution. For example, a pretexter using false pretenses either to get a consumer's address from the consumer's bank, or to get a consumer to disclose the name of their bank, would be covered. The determining principle is that pretexting only occurs when information is obtained through false pretenses.
While the sale of cell telephone records has gained significant media attention, and telecommunications records are the focus of the two bills currently before the United States Senate, many other types of private records are being bought and sold in the public market. Alongside many advertisements for cell phone records, wireline records and the records associated with calling cards are advertised. As individuals shift to VoIP telephones, it is safe to assume that those records will be offered for sale as well. Currently, it is legal to sell telephone records, but illegal to obtain them.
1st Source Information Specialists
U.S. Rep. Fred Upton (R-Kalamazoo, Michigan), chairman of the Energy and Commerce Subcommittee on Telecommunications and the Internet, expressed concern over the easy access to personal mobile phone records on the Internet during a House Energy & Commerce Committee hearing on "Phone Records For Sale: Why Aren't Phone Records Safe From Pretexting?" Illinois became the first state to sue an online records broker when Attorney General Lisa Madigan sued 1st Source Information Specialists, Inc., according to a spokeswoman for Madigan's office. The Florida-based company operates several Web sites that sell mobile telephone records, according to a copy of the suit. The attorneys general of Florida and Missouri quickly followed Madigan's lead, filing suits respectively against 1st Source Information Specialists and, in Missouri's case, one other records broker – First Data Solutions, Inc.
Several wireless providers, including T-Mobile, Verizon, and Cingular filed earlier lawsuits against records brokers, with Cingular winning an injunction against First Data Solutions and 1st Source Information Specialists. U.S. Senator Charles Schumer (D-New York) introduced legislation in February 2006 aimed at curbing the practice. The Consumer Telephone Records Protection Act of 2006 would create felony criminal penalties for stealing and selling the records of mobile phone, landline, and Voice over Internet Protocol (VoIP) subscribers.
Hewlett Packard
Patricia Dunn, former chairwoman of Hewlett Packard, reported that the HP board hired a private investigation company to delve into who was responsible for leaks within the board. Dunn acknowledged that the company used the practice of pretexting to solicit the telephone records of board members and journalists. Chairman Dunn later apologized for this act and offered to step down from the board if it was desired by board members. Unlike federal law, California law specifically forbids such pretexting. The four felony charges brought against Dunn were dismissed.
Notable social engineering incidents
Equifax breach help websites
Following the 2017 Equifax data breach, in which over 150 million private records were leaked (including Social Security numbers, driver's license numbers, birthdates, etc.), warnings were sent out regarding the dangers of impending security risks. In the days after the establishment of a legitimate help website (equifaxsecurity2017.com) dedicated to people potentially victimized by the breach, 194 malicious domains were reserved from small variations on the URL, capitalizing on the likelihood of people mistyping.
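As a minimal sketch of how such typosquatting works (the two edit rules below are illustrative; this is not a list of the actual 194 registered domains), lookalike names one edit away from a legitimate domain label can be generated programmatically:

# Sketch: generate one-edit typo variants of a domain label, the kind
# of lookalike names typosquatters register to catch mistyped URLs.
def typo_variants(label):
    variants = set()
    for i in range(len(label)):
        variants.add(label[:i] + label[i + 1:])  # drop one character
        if i < len(label) - 1:
            # swap two adjacent characters
            variants.add(label[:i] + label[i + 1] + label[i] + label[i + 2:])
    variants.discard(label)
    return variants

print(sorted(typo_variants("equifaxsecurity2017"))[:5])

Even these two simple edit rules already yield dozens of plausible-looking variants for a label of this length, which is why defenders often pre-register or monitor such names.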
2016 United States Elections Leaks
During the 2016 United States elections, hackers associated with Russian military intelligence (GRU) sent phishing emails to members of Hillary Clinton's campaign, disguised as Google alerts. Many members, including the chairman of the campaign, John Podesta, entered their passwords thinking their password was being reset, causing their personal information and thousands of private emails and documents to be leaked. With this information, the hackers broke into other computers in the Democratic Congressional Campaign Committee, implanting malware that allowed those computers' activities to be monitored and leaked.
Google and Facebook phishing emails
Two tech giants—Google and Facebook—were phished out of $100 million by a Lithuanian fraudster. He impersonated a hardware supplier to falsely invoice both companies over two years. Despite their technological sophistication, the companies lost the money.
Notable social engineers
Susan Headley
Susan Headley became involved in phreaking with Kevin Mitnick and Lewis de Payne in Los Angeles, but later framed them for erasing the system files at US Leasing after a falling out, leading to Mitnick's first conviction. She retired to professional poker.
Mike Ridpath
Mike Ridpath is a security consultant, published author, speaker and former member of w00w00. He is well known for developing techniques and tactics for social engineering through cold calling, and became known for live demonstrations as well as playing recorded calls after talks, where he explained his thought process on what he was doing to get passwords through the phone. As a child, Ridpath was connected with the Badir Brothers and was widely known within the phreaking and hacking community for his articles in popular underground ezines such as Phrack, B4B0 and 9x on modifying Oki 900s, blueboxing, satellite hacking and RCMAC.
Badir Brothers
Brothers Ramy, Muzher, and Shadde Badir—all of whom were blind from birth—managed to set up an extensive phone and computer fraud scheme in Israel in the 1990s using social engineering, voice impersonation, and Braille-display computers.
Christopher J. Hadnagy
Christopher J. Hadnagy is an American social engineer and information technology security consultant. He is best known as the author of four books on social engineering and cyber security and as the founder of the Innocent Lives Foundation, an organization that helps track and identify child trafficking by seeking the assistance of information security specialists, using data from open-source intelligence (OSINT), and collaborating with law enforcement.
References
Further reading
Boyington, Gregory. (1990). 'Baa Baa Black Sheep' Published by Gregory Boyington
Harley, David. 1998 Re-Floating the Titanic: Dealing with Social Engineering Attacks EICAR Conference.
Laribee, Lena. June 2006 Development of methodical social engineering taxonomy project Master's Thesis, Naval Postgraduate School.
Leyden, John. 18 April 2003. Office workers give away passwords for a cheap pen. The Register. Retrieved 2004-09-09.
Mann, Ian. (2008). Hacking the Human: Social Engineering Techniques and Security Countermeasures. Published by Gower Publishing Ltd.
Mitnick, Kevin, Kasperavičius, Alexis. (2004). CSEPS Course Workbook. Mitnick Security Publishing.
Mitnick, Kevin; Simon, William L.; Wozniak, Steve. (2002). The Art of Deception: Controlling the Human Element of Security. Published by Wiley.
Hadnagy, Christopher, (2011) Social Engineering: The Art of Human Hacking Published by Wiley.
N.J. Evans. (2009). "Information Technology Social Engineering: An Academic Definition and Study of Social Engineering-Analyzing the Human Firewall." Graduate Theses and Dissertations. 10709. https://lib.dr.iastate.edu/etd/10709
Z. Wang, L. Sun and H. Zhu. (2020) "Defining Social Engineering in Cybersecurity," in IEEE Access, vol. 8, pp. 85094-85115, doi:10.1109/ACCESS.2020.2992807.
External links
Social Engineering Fundamentals – Securityfocus.com. Retrieved 3 August 2009.
Should Social Engineering be a part of Penetration Testing? – Darknet.org.uk. Retrieved 3 August 2009.
"Protecting Consumers' Phone Records", Electronic Privacy Information Center US Committee on Commerce, Science, and Transportation. Retrieved 8 February 2006.
Plotkin, Hal. Memo to the Press: Pretexting is Already Illegal. Retrieved 9 September 2006.
Cybercrime
Deception
Preference

In psychology, economics and philosophy, preference is a technical term usually used in relation to choosing between alternatives. For example, someone prefers A over B if they would rather choose A than B. Preferences are central to decision theory because of this relation to behavior. Some methods, such as the Ordinal Priority Approach, use preference relations for decision-making. As conative states, they are closely related to desires. The difference between the two is that desires are directed at one object while preferences concern a comparison between two alternatives, of which one is preferred to the other.
In insolvency, the term is used to determine which outstanding obligation the insolvent party has to settle first.
Psychology
In psychology, preferences refer to an individual's attitude towards a set of objects, typically reflected in an explicit decision-making process. The term is also used to mean evaluative judgment in the sense of liking or disliking an object, as in Scherer (2005), which is the most typical definition employed in psychology. It does not mean that a preference is necessarily stable over time. Preference can be notably modified by decision-making processes, such as choices, even unconsciously. Consequently, preference can be affected by a person's surroundings and upbringing in terms of geographical location, cultural background, religious beliefs, and education. These factors are found to affect preference as repeated exposure to a certain idea or concept correlates with a positive preference.
Economics
In economics and other social sciences, preference refers to the set of assumptions related to ordering some alternatives, based on the degree of happiness, satisfaction, gratification, morality, enjoyment, or utility they provide. The concept of preferences is used in post-World War II neoclassical economics to provide observable evidence in relation to people's actions. These actions can be described by Rational Choice Theory, where individuals make decisions based on rational preferences which are aligned with their self-interests in order to achieve an optimal outcome.
Consumer preference, or consumers' preference for particular brands over identical products and services, is an important notion in the psychological influence of consumption. Consumer preferences have three properties: completeness, transitivity and non-satiation. For a preference to be rational, it must satisfy the axioms of transitivity and completeness. The first axiom, transitivity, requires consistency between preferences, such that if x is preferred to y and y is preferred to z, then x has to be preferred to z. The second axiom, completeness, requires that a relationship exist between any two options, such that x must be preferred to y, or y must be preferred to x, or the individual is indifferent between them. For example, if I prefer sugar to honey and honey to sweetener, then I must prefer sugar to sweetener to satisfy transitivity, and I must have a preference between the items to satisfy completeness. Under the axiom of completeness, an individual cannot lack a preference between any two options.
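A minimal sketch of what the two axioms require, as a Python check over a finite set of options (the options and the weak-preference relation here are invented for illustration):

# Check completeness and transitivity of a weak-preference relation
# over a finite set of options; a pair (a, b) means "a is at least
# as good as b".
from itertools import product

options = {"sugar", "honey", "sweetener"}
prefers = {("sugar", "honey"), ("honey", "sweetener"), ("sugar", "sweetener"),
           ("sugar", "sugar"), ("honey", "honey"), ("sweetener", "sweetener")}

complete = all((a, b) in prefers or (b, a) in prefers
               for a, b in product(options, repeat=2))
transitive = all((a, c) in prefers
                 for a, b in product(options, repeat=2) if (a, b) in prefers
                 for c in options if (b, c) in prefers)

print(complete, transitive)  # True True

Both checks pass for this relation, which is exactly the condition under which the preferences can be ordered by a single utility function, as discussed below.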
If preferences are both transitive and complete, the preference relation can be described by a utility function. This is because the axioms allow preferences to be ordered into one equivalent ordering with no preference cycles. Maximising utility does not mean maximising happiness; rather, it is an optimisation of the available options based on an individual's preferences. The so-called Expected Utility Theory (EUT), introduced by John von Neumann and Oskar Morgenstern in 1944, holds that so long as an agent's preferences over risky options follow a set of axioms, the agent is maximizing the expected value of a utility function. In utility theory, preference relates to decision makers' attitudes towards rewards and hazards. The specific varieties are classified into three categories: 1) risk-averse, meaning that, for equal gains and losses, the investor participates only when the probability of loss is less than 50%; 2) risk-seeking, the polar opposite of type 1; and 3) risk-neutral, in the sense that the introduction of risk has no clear association with the decision maker's choice.
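As a sketch of the standard von Neumann–Morgenstern formulation (the notation is ours): for a lottery L that yields outcome x_i with probability p_i, the agent's preferences over lotteries are represented by

\[ EU(L) = \sum_i p_i \, u(x_i), \]

so that lottery L is preferred to lottery M exactly when EU(L) > EU(M); a risk-averse agent corresponds to a concave utility function u.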
The mathematical foundations of the most common types of preferences, those representable by quadratic or additive utility functions, were laid down by Gérard Debreu and enabled Andranik Tangian to develop methods for their elicitation. In particular, additive and quadratic preference functions in several variables can be constructed from interviews, where questions are aimed at tracing two-dimensional indifference curves in coordinate planes without referring to cardinal utility estimates.
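A sketch of the two functional forms in question (the notation is ours, not Debreu's or Tangian's): for a bundle x = (x_1, ..., x_n), an additive preference function has the form

\[ u(x) = \sum_{i=1}^{n} u_i(x_i), \]

while a quadratic one has the form

\[ u(x) = \sum_{i=1}^{n} b_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} q_{ij}\, x_i x_j . \]

Interview questions that trace indifference curves pin down the shape of such functions without ever asking the respondent for cardinal utility numbers.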
Empirical evidence has shown that the usage of rational preferences (and Rational Choice Theory) does not always accurately predict human behaviour because it makes unrealistic assumptions. In response to this, neoclassical economists argue that it provides a normative model for people to adjust and optimise their actions. Behavioural economics describes an alternative approach to predicting human behaviour by using psychological theory which explores deviations from rational preferences and the standard economic model. It also recognises that rational preferences and choices are limited by heuristics and biases. Heuristics are rules of thumb such as elimination by aspects which are used to make decisions rather than maximising the utility function. Economic biases such as reference points and loss aversion also violate the assumption of rational preferences by causing individuals to act irrationally.
Individual preferences can be represented as an indifference curve, given the underlying assumptions. Indifference curves graphically depict all product combinations that yield the same amount of utility, allowing all possible combinations of two commodities to be defined and ranked graphically (a worked example follows the list below).
The graph's three main points are:
If more is better, the indifference curve slopes downward.
Transitivity implies that the indifference curves do not cross.
A preference for variety causes indifference curves to bow inward (convex to the origin).
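A worked example under an assumed Cobb–Douglas utility function (the specific function is illustrative, not taken from any source above): for u(x, y) = x^{1/2} y^{1/2}, the indifference curve at utility level \bar{u} is

\[ y = \frac{\bar{u}^2}{x}, \]

which slopes downward (more is better), bows in toward the origin (a preference for variety), and never crosses the curve for any other level \bar{u} (consistent with transitivity).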
Risk preference
Risk preference is defined as how much risk a person is prepared to accept based on the expected utility or pleasure of the outcome.
Risk tolerance, that is, risk preference, is a critical component of personal financial planning.
In psychology, risk preference is occasionally characterised as the proclivity to engage in a behaviour or activity that is advantageous but may involve some potential loss, such as substance abuse or criminal action that may bring significant bodily and mental harm to the individual.
In economics, risk preference refers to a proclivity to engage in behaviours or activities that entail greater variance returns, regardless of whether they be gains or losses, and are frequently associated with monetary rewards involving lotteries.
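A minimal numerical sketch of these attitudes, assuming a square-root utility function (risk-averse because it is concave); the lottery and the numbers are invented for illustration:

# Certainty equivalent of a 50/50 lottery paying 0 or 100,
# under the risk-averse utility u(x) = sqrt(x).
import math

outcomes, probs = [0.0, 100.0], [0.5, 0.5]
expected_value = sum(p * x for p, x in zip(probs, outcomes))               # 50.0
expected_utility = sum(p * math.sqrt(x) for p, x in zip(probs, outcomes))  # 5.0
certainty_equivalent = expected_utility ** 2  # u(CE) = EU  =>  CE = 25.0

print(expected_value, certainty_equivalent)

This risk-averse agent accepts as little as 25 for certain in place of a lottery worth 50 on average; a risk-neutral agent would demand the full 50, and a risk-seeking agent more than 50.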
There are two different traditions of measuring preference for risk, the revealed and stated preference traditions, which coexist in psychology, and to some extent in economics as well.
Risk preference evaluated from stated preferences emerges as a concept with significant temporal stability, but revealed preference measures do not.
Relation to desires
Preferences and desires are two closely related notions: they are both conative states that determine our behavior. The difference between the two is that desires are directed at one object while preferences concern a comparison between two alternatives, of which one is preferred to the other. The focus on preferences instead of desires is very common in the field of decision theory. It has been argued that desire is the more fundamental notion and that preferences are to be defined in terms of desires. For this to work, desire has to be understood as involving a degree or intensity. Given this assumption, a preference can be defined as a comparison of two desires. That Nadia prefers tea over coffee, for example, just means that her desire for tea is stronger than her desire for coffee. One argument for this approach is due to considerations of parsimony: a great number of preferences can be derived from a very small number of desires. One objection to this theory is that our introspective access is much more immediate in cases of preferences than in cases of desires. So it is usually much easier for us to know which of two options we prefer than to know the degree with which we desire a particular object. This consideration has been used to suggest that maybe preference, and not desire, is the more fundamental notion.
Insolvency
In insolvency, the term can be used to describe a situation where a company pays a specific creditor or group of creditors before entering formal insolvency, such as administration or liquidation, making that creditor better off than the other creditors. For the payment to be a preference, there must be a desire to make the creditor better off. If a preference is proven, legal action can occur; it is a wrongful act of trading, and director disqualification is a risk. Preference arises within the context of the principle that one of the main objectives in the winding up of an insolvent company is to ensure the equal treatment of creditors. The rules on preferences allow a company to pay its creditors as insolvency looms, provided it can prove that the transaction was the result of ordinary commercial considerations. Also, under the English Insolvency Act 1986, if a creditor was proven to have forced the company to pay, the resulting payment would not be considered a preference since it would not constitute unfairness. It is the decision to give a preference, rather than the giving of the preference pursuant to that decision, which must be influenced by the desire to produce the effect of the preference. For these purposes, therefore, the relevant time is the date of the decision, not the date of giving the preference.
See also
Motivation
Ordinal Priority Approach
Preference-based planning (in artificial intelligence)
Preference revelation
Choice
Pairwise comparison
References
External links
Stanford Encyclopedia of Philosophy article on 'Preferences'
(white paper from International Communications Research)
Psychological attitude
Utility
Free will
Decision-making
Concepts in ethics
Scatology

In medicine and biology, scatology or coprology is the study of faeces.
Scatological studies allow one to determine a wide range of biological information about a creature, including its diet (and thus where it has been), health and diseases such as tapeworms.
A comprehensive study of scatology was documented by John Gregory Bourke under the title Scatalogic Rites of All Nations (1891), with a 1913 German translation including a foreword by Sigmund Freud. An abbreviated version of the work was published as The Portable Scatalog in 1994.
Etymology
The word derives from the Greek word for "dung, feces"; coprology derives from a Greek word of similar meaning.
Psychology
In psychology, a scatology is an obsession with excretion or excrement, or the study of such obsessions.
In sexual fetishism, scatology (usually abbreviated scat) refers to coprophilia, when someone is sexually aroused by fecal matter, whether in the use of feces in various sexual acts, watching someone defecating, or simply seeing the feces. Entire subcultures in sexuality are devoted to this fetish.
Literature
In literature, "scatological" is a term to denote the literary trope of the grotesque body. It is used to describe works that make particular reference to excretion or excrement, as well as to toilet humor. Well known for his scatological tropes is the late medieval fictional character of Till Eulenspiegel. Another common example is John Dryden's Mac Flecknoe, a poem that employs extensive scatological imagery to ridicule Dryden's contemporary Thomas Shadwell. German literature is particularly rich in scatological texts and references, including such books as Collofino's Non Olet. A case which has provoked an unusual amount of comment in the academic literature is Mozart's scatological humour. Smith, in his review of English literature's representations of scatology from the Middle Ages to the 18th century, notes two attitudes towards scatology. One of these emphasises the merry and the carnivalesque. This is found in Chaucer and Shakespeare. The other attitude is one of self-disgust and misanthropy. This is found in the works of the Earl of Rochester and Jonathan Swift.
See also
Coprolite – fossilized faeces
Coprophilia – faeces fetish
Stool sample – sample of faeces for studying
Urolagnia – urination fetish
Sources
Bakhtin, Mikhail, Rabelais and His World.
Lewin, Ralph, Merde: excursions in scientific, cultural and socio-historical coprology. Random House, 1999.
Susan Gubar, "The Female Monster in Augustan Satire." Signs 3.2 (Winter, 1977): 380–394.
Jae Num Lee, Swift and Scatological Satire. University of New Mexico Press, 1971.
Smith, Peter J. (2012) Between Two Stools: Scatology and its Representation in English Literature, Chaucer to Swift, Manchester University Press
References
Feces
Quarter-life crisis

In popular psychology, a quarter-life crisis is an existential crisis involving anxiety and sorrow over the direction and quality of one's life which is most commonly experienced in a period ranging from a person's early twenties up to their mid-thirties, although it can begin as early as eighteen. It is defined by clinical psychologist Alex Fowke as "a period of insecurity, doubt and disappointment surrounding your career, relationships and financial situation".
Aspects
According to Meredith Goldstein of The Boston Globe, the quarter-life crisis occurs in one's twenties, usually after entering the "real world" (i.e., after graduating from college, after moving out of the family home, or both). The German-American psychologist Erik Erikson, who proposed eight crises that humans face during their development, posited a life crisis occurring at this age. The conflict he associated with young adulthood is the Intimacy vs. Isolation crisis. According to Erikson, after establishing a personal identity in adolescence, young adults seek to form intense, usually romantic relationships with other people.
Common symptoms of a quarter-life crisis are often feelings of being "lost, scared, lonely or confused" about what steps to take in early adulthood. Studies have shown that unemployment and choosing a career path is a major cause of stress and anxiety in young adults. Early stages of one living on their own for the first time and learning to cope without parental help can also induce feelings of isolation and loneliness. Re-evaluation of one's close personal relationships can also be a factor, with sufferers feeling they have outgrown their partner or believing others may be more suitable for them.
Recently, millennials have occasionally been referred to as the Boomerang Generation or Peter Pan Generation, because of the members' perceived penchant for delaying some rites of passage into adulthood for longer periods than previous generations. These labels were also a reference to a trend toward members returning home after college or living with their parents for longer periods than previous generations. These tendencies can be explained by changes in external social factors rather than characteristics intrinsic to millennials (e.g., higher cost of living and higher levels of student loan debt in the US among millennials when compared to earlier generations can make it more difficult for young adults to achieve traditional markers of independence such as marriage, home ownership or investing).
In film
The notion of the quarter-life crisis is explored by the 1967 film The Graduate, one of the first film depictions of this issue. Other notable films that also do so are Bright Lights, Big City; The Paper Chase; St. Elmo's Fire; How to Be; Reality Bites; Garden State; Accepted; Ghost World; High Fidelity; (500) Days of Summer; Lost in Translation; Silver Linings Playbook; Vicky Cristina Barcelona; Amélie; and Shaun of the Dead; as well as the musical Avenue Q, the television show The Office, and the HBO television series Girls. The 2008 web series Quarterlife was so named for the phenomenon. Other movies exploring the quarter-life crisis include Tiny Furniture, The Puffy Chair, Fight Club, Stranger than Fiction, Greenberg, Frances Ha and Eternal Sunshine of the Spotless Mind. A 2014 comedy directed by Lynn Shelton titled Laggies delves into the complexities of a quarter-life crisis. The second season of the 2021 series Cubicles revolves largely around the psychology of the quarter-life crisis, its symptoms, its effects and ways of dealing with them.
In music
The 2003 John Mayer single "Why Georgia" explores the concept of a quarter-life crisis. The song was based upon John Mayer's experiences during this age period, when he moved to Georgia.
The 1975 Fleetwood Mac song "Landslide", written by Stevie Nicks in her late twenties, explores many of the self-doubts and fears of the quarter-life crisis, at a time when Nicks professed to be uncertain about her musical career and her romantic life.
English indie rock band Spector's song "True Love (For Now)", the opening track to their 2012 album Enjoy It While It Lasts, references a quarter-life crisis.
"20 Something", the final track on SZAs 2017 album Ctrl, delves into the many insecurities she experienced in her twenties, both personal and professional, and the urgency she felt to make the most of her life before entering into mature adulthood.
On the album Pep Talks by Judah & the Lion, the lead single "Quarter-Life Crisis" is about the rootlessness and insecurity that Judah Akers, the band's lead vocalist, felt during his twenties, brought on by the loss of his aunt and his parents' divorce.
UPSAHL's 2020 EP Young Life Crisis is about a breakup, lost friendships and a canceled tour, all during the coronavirus pandemic and the uncertainty surrounding her life.
The 2022 song "Quarter Life Crisis" (stylized in all caps in the track listing as "QUARTER LIFE CRISIS"), by singer-songwriter Taylor Bickett, addresses this topic through the eyes of its twenty-three-year-old subject, who lists many aspects common to the central motif of young adult anxiety from a female perspective.
On 10 August 2023, British singer Baby Queen announced her debut album, titled Quarter Life Crisis, which was released on 6 October 2023. The album's lead singles are "Dream Girl", "We Can Be Anything" and, for the deluxe edition, "All The Things", alongside six of her other songs featured in Heartstopper, the Netflix series based on the graphic novel. Bella stated in an interview, "This album tells the story of my journey through my early 20s – leaving my childhood and my adolescence behind but never really losing my childlike wonder and never quite growing up. The songs are all facets of what early adulthood has been like for me while discovering new parts of myself, my sexuality, my past and my place in this world." She also added, "I really want this album to leave people feeling hopeful, because there is so much beauty to live through and look forward to and it truly is magical and extraordinary to be alive and to have the very short opportunity to experience every emotion imaginable." To support the album's release, Baby Queen also announced a headline tour, "The Quarter Life Crisis Tour", throughout November 2023.
In 2023, singer-songwriter Wallice released her EP Mr Big Shot, with "Quarterlife" as the fourth track. The song is about herself as she looks back on her life and the choices she has made on the verge of turning 25. It explores subjects many people in their twenties experience, such as regret, growing up and losing things along the way, but it also carries a message of hope, as she is "ready for a second fight". She stated that the whole EP was originally to be titled Quarterlife as well, as she saw it as more personal and authentic.
See also
Angst
Disenchantment
Existential crisis
Fear
Midlife crisis
Panic
Status attainment
References
Further reading
Barr, Damian. Get It Together: A Guide to Surviving Your Quarterlife Crisis. Hodder & Stoughton Paperbacks, 2004. .
Hassler, Christine. 20-Something, 20-Everything: A Quarter-life Woman's Guide to Balance and Direction. New World Library, 2005. .
Hassler, Christine. 20-Something Manifesto: Quarter-Lifers Speak Out About Who They Are, What They Want, and How to Get It. New World Library, 2008. .
Pollak, Lindsey. Getting from College to Career: 90 Things to Do Before You Join the Real World. Collins Business, 2007. .
Robbins, Alexandra. Conquering Your Quarterlife Crisis: Advice from Twentysomethings Who Have Been There and Survived. Perigee, 2004. .
Robbins, Alexandra; Wilner, Abby. Quarterlife Crisis: The Unique Challenges of Life in Your Twenties. Tarcher, 2001. .
Wilner, Abby; Stocker, Catherine. Quarterlifer's Companion: How to Get on the Right Career Path, Control Your Finances, and Find the Support Network You Need to Thrive. McGraw-Hill, 2004. .
External links
Does graduation mark the start of your quarter-life crisis?
Life is hard when you're in your 20s – BBC News
Human development
Popular psychology
Young adult
Environmental resource management

Environmental resource management or environmental management is the management of the interaction and impact of human societies on the environment. It is not, as the phrase might suggest, the management of the environment itself. Environmental resources management aims to ensure that ecosystem services are protected and maintained for future human generations, and also maintain ecosystem integrity through considering ethical, economic, and scientific (ecological) variables. Environmental resource management tries to identify factors between meeting needs and protecting resources. It is thus linked to environmental protection, resource management, sustainability, integrated landscape management, natural resource management, fisheries management, forest management, wildlife management, environmental management systems, and others.
Significance
Environmental resource management is an issue of increasing concern, as reflected in its prevalence in several texts influencing global sociopolitical frameworks such as the Brundtland Commission's Our Common Future, which highlighted the integrated nature of the environment and international development, and the Worldwatch Institute's annual State of the World reports.
The environment determines the nature of people, animals, plants, and places around the Earth, affecting behaviour, religion, culture and economic practices.
Scope
Environmental resource management can be viewed from a variety of perspectives. It involves the management of all components of the biophysical environment, both living (biotic) and non-living (abiotic), and the relationships among all living species and their habitats. The environment also involves the relationships of the human environment, such as the social, cultural, and economic environment, with the biophysical environment. The essential aspects of environmental resource management are ethical, economical, social, and technological. These underlie principles and help make decisions.
The concept of environmental determinism, probabilism, and possibilism are significant in the concept of environmental resource management.
Environmental resource management covers many areas in science, including geography, biology, social sciences, political sciences, public policy, ecology, physics, chemistry, sociology, psychology, and physiology. Environmental resource management as a practice and discourse (across these areas) is also the object of study in the social sciences.
Aspects
Ethical
Environmental resource management strategies are intrinsically driven by conceptions of human-nature relationships. Ethical aspects involve the cultural and social issues relating to the environment, and dealing with changes to it. "All human activities take place in the context of certain types of relationships between society and the bio-physical world (the rest of nature)," and so, there is a great significance in understanding the ethical values of different groups around the world. Broadly speaking, two schools of thought exist in environmental ethics: Anthropocentrism and Ecocentrism, each influencing a broad spectrum of environmental resource management styles along a continuum. These styles perceive "...different evidence, imperatives, and problems, and prescribe different solutions, strategies, technologies, roles for economic sectors, culture, governments, and ethics, etc."
Anthropocentrism
Anthropocentrism, "an inclination to evaluate reality exclusively in terms of human values," is an ethic reflected in the major interpretations of Western religions and the dominant economic paradigms of the industrialised world. Anthropocentrism looks at nature as existing solely for the benefit of humans, and as a commodity to use for the good of humanity and to improve human quality of life. Anthropocentric environmental resource management is therefore not the conservation of the environment solely for the environment's sake, but rather the conservation of the environment, and ecosystem structure, for humans' sake.
Ecocentrism
Ecocentrists believe in the intrinsic value of nature while maintaining that human beings must use and even exploit nature to survive and live. It is this fine ethical line that ecocentrists navigate between fair use and abuse. At an extreme of the ethical scale, ecocentrism includes philosophies such as ecofeminism and deep ecology, which evolved as a reaction to dominant anthropocentric paradigms. "In its current form, it is an attempt to synthesize many old and some new philosophical attitudes about the relationship between nature and human activity, with particular emphasis on ethical, social, and spiritual aspects that have been downplayed in the dominant economic worldview."
Economics
The economy functions within and is dependent upon goods and services provided by natural ecosystems. The role of the environment is recognized in both classical economics and neoclassical economics theories, yet the environment was a lower priority in economic policies from 1950 to 1980 due to emphasis from policy makers on economic growth. With the prevalence of environmental problems, many economists embraced the notion that, "If environmental sustainability must coexist for economic sustainability, then the overall system must [permit] identification of an equilibrium between the environment and the economy." As such, economic policy makers began to incorporate the functions of the natural environment – or natural capital – particularly as a sink for wastes and for the provision of raw materials and amenities.
Debate continues among economists as to how to account for natural capital, specifically whether resources can be replaced through knowledge and technology, or whether the environment is a closed system that cannot be replenished and is finite. Economic models influence environmental resource management, in that management policies reflect beliefs about natural capital scarcity. For someone who believes natural capital is infinite and easily substituted, environmental management is irrelevant to the economy. For example, economic paradigms based on neoclassical models of closed economic systems are primarily concerned with resource scarcity and thus prescribe legalizing the environment as an economic externality for an environmental resource management strategy. This approach has often been termed 'Command-and-control'. Colby has identified trends in the development of economic paradigms, among them, a shift towards more ecological economics since the 1990s.
Ecology
There are many definitions of the field of science commonly called ecology. A typical one is "the branch of biology dealing with the relations and interactions between organisms and their environment, including other organisms." "The pairing of significant uncertainty about the behaviour and response of ecological systems with urgent calls for near-term action constitutes a difficult reality, and a common lament" for many environmental resource managers. Scientific analysis of the environment deals with several dimensions of ecological uncertainty. These include: structural uncertainty resulting from the misidentification, or lack of information pertaining to the relationships between ecological variables; parameter uncertainty referring to "uncertainty associated with parameter values that are not known precisely but can be assessed and reported in terms of the likelihood…of experiencing a defined range of outcomes"; and stochastic uncertainty stemming from chance or unrelated factors. Adaptive management is considered a useful framework for dealing with situations of high levels of uncertainty though it is not without its detractors.
A common scientific concept and impetus behind environmental resource management is carrying capacity. Simply put, carrying capacity refers to the maximum number of organisms a particular resource can sustain. The concept of carrying capacity, whilst understood by many cultures over history, has its roots in Malthusian theory. An example is visible in the EU Water Framework Directive. However, "it is argued that Western scientific knowledge ... is often insufficient to deal with the full complexity of the interplay of variables in environmental resource management". These concerns have been recently addressed by a shift in environmental resource management approaches to incorporate different knowledge systems, including traditional knowledge, reflected in approaches such as adaptive co-management, community-based natural resource management, and transitions management, among others.
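Carrying capacity is commonly formalized by the logistic growth model (a standard textbook formulation, not specific to any source cited here), in which a population N grows at intrinsic rate r but levels off at the carrying capacity K:

\[ \frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right). \]

Growth is nearly exponential when N is far below K and falls to zero as N approaches K, which is the formal sense in which a resource "sustains" only a bounded population.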
Sustainability
Sustainability in environmental resource management involves managing economic, social, and ecological systems both within and outside an organizational entity so that it can sustain itself and the system it exists in. In this context, sustainability implies that rather than competing for endless growth on a finite planet, development improves quality of life without necessarily consuming more resources. Sustainably managing environmental resources requires organizational change that instills sustainability values, projects these values outwardly from all levels, and reinforces them among surrounding stakeholders. The result should be a symbiotic relationship between the sustaining organization, community, and environment.
Many drivers compel environmental resource management to take sustainability issues into account. Today's economic paradigms do not protect the natural environment, even as they deepen human dependency on biodiversity and ecosystem services. Ecologically, massive environmental degradation and climate change threaten the stability of the ecological systems that humanity depends on. Socially, an increasing gap between rich and poor and the global North–South divide denies many access to basic human needs, rights, and education, leading to further environmental destruction. The planet's unstable condition is caused by many anthropogenic sources. As an exceptionally powerful contributing factor to social and environmental change, the modern organisation has the potential to apply environmental resource management with sustainability principles to achieve highly effective outcomes. To achieve sustainable development with environmental resource management, an organisation should work within sustainability principles, including social and environmental accountability; long-term planning; a strong, shared vision; a holistic focus; devolved and consensus decision making; broad stakeholder engagement and justice; transparency measures; trust; and flexibility.
Current paradigm shifts
To adjust to today's environment of rapid social and ecological change, some organizations have begun to experiment with new tools and concepts. Those that are more traditional and stick to hierarchical decision making have difficulty dealing with the demand for lateral decision making that supports effective participation. Whether as a matter of ethics or of strategic advantage, organizations are internalizing sustainability principles. Some of the world's largest and most profitable corporations are shifting to sustainable environmental resource management: Ford, Toyota, BMW, Honda, Shell, DuPont, Statoil, Swiss Re, Hewlett-Packard, and Unilever, among others. An extensive study by the Boston Consulting Group, surveying 1,560 business leaders from diverse regions, job positions, levels of expertise in sustainability, industries, and sizes of organizations, revealed the many benefits of sustainable practice as well as its viability.
Although the sustainability of environmental resource management has improved, corporate sustainability, for one, has yet to reach the majority of global companies operating in the markets. The three major barriers preventing organizations from shifting towards sustainable practice with environmental resource management are not understanding what sustainability is; having difficulty modeling an economically viable case for the switch; and having a flawed execution plan, or a lack thereof. Therefore, the most important part of shifting an organization towards sustainability in environmental resource management is to create a shared vision and understanding of what sustainability means for that particular organization and to clarify the business case.
Stakeholders
Public sector
The public sector comprises the general government sector plus all public corporations including the central bank. In environmental resource management the public sector is responsible for administering natural resource management and implementing environmental protection legislation. The traditional role of the public sector in environmental resource management is to provide professional judgement through skilled technicians on behalf of the public. With the increase of intractable environmental problems, the public sector has been led to examine alternative paradigms for managing environmental resources. This has resulted in the public sector working collaboratively with other sectors (including other governments, private and civil) to encourage sustainable natural resource management behaviours.
Private sector
The private sector comprises private corporations and non-profit institutions serving households. The private sector's traditional role in environmental resource management is the recovery of natural resources. Such private sector recovery groups include mining (minerals and petroleum), forestry, and fishery organisations. Environmental resource management undertaken by the private sector varies depending on the resource type: renewable or non-renewable, and private or common-pool resources (see also Tragedy of the Commons). Environmental managers from the private sector also need skills to manage collaboration within a dynamic social and political environment.
Civil society
Civil society comprises associations in which societies voluntarily organise themselves, representing a wide range of interests and ties. These can include community-based organisations, indigenous peoples' organisations and non-government organisations (NGOs). Functioning through strong public pressure, civil society groups can exercise their legal rights against the implementation of resource management plans, particularly land management plans. The aim of civil society in environmental resource management is to be included in the decision-making process by means of public participation. Public participation can be an effective strategy for invoking a sense of social responsibility for natural resources.
Tools
As with all management functions, effective management tools, standards, and systems are required. An environmental management standard, system, or protocol attempts to reduce environmental impact as measured by some objective criteria. The ISO 14001 standard is the most widely used standard for environmental risk management and is closely aligned to the European Eco-Management and Audit Scheme (EMAS). The ISO 19011 standard, a common auditing standard, explains how to audit such systems together with quality management systems.
Other environmental management systems (EMS) tend to be based on the ISO 14001 standard and many extend it in various ways:
The Green Dragon Environmental Management Standard is a five-level EMS designed for smaller organisations for whom ISO 14001 may be too onerous, and for larger organisations that wish to implement ISO 14001 in a more manageable, step-by-step approach,
BS 8555 is a phased standard that can help smaller companies move to ISO 14001 in six manageable steps,
The Natural Step focuses on basic sustainability criteria and helps focus engineering on reducing use of materials or energy use that is unsustainable in the long term,
Natural Capitalism advises using accounting reform and a general biomimicry and industrial ecology approach to do the same thing,
The US Environmental Protection Agency has many further terms and standards that it defines as appropriate to large-scale EMS,
The UN and the World Bank have encouraged adopting a "natural capital" measurement and management framework.
Other strategies exist that rely on making simple distinctions rather than building top-down management "systems" using performance audits and full cost accounting. For instance, Ecological Intelligent Design divides products into consumables, service products or durables, and unsaleables – toxic products that no one should buy or, in many cases, realizes they are buying. By eliminating the unsaleables from the comprehensive outcome of any purchase, better environmental resource management is achieved without systems.
Another example that diverges from top-down management is the implementation of community-based co-management systems of governance, such as the community-based subsistence fishing areas implemented in Ha'ena, Hawaii. Community-based systems of governance allow the communities that interact most directly with a resource, and that are most deeply affected by its overexploitation, to make the decisions regarding its management, thus empowering local communities and managing resources more effectively.
Recent successful cases have put forward the notion of integrated management. It takes a wider approach and stresses the importance of interdisciplinary assessment. It is an interesting notion that may not be adaptable to all cases.
Case Study: Kissidougou, Guinea (Fairhead, Leach)
During Kissidougou, Guinea's dry season, fires sweep the open grassland and defoliate the few trees in the savanna. Villages within this savanna are surrounded by "islands" of forest, which allow for forts, hiding, rituals, protection from wind and fire, and shade for crops. According to scholars and researchers in the region during the late 19th and 20th centuries, there was a steady decline in tree cover. This led colonial Guinea to implement policies including the switch from upland to swamp farming; bush-fire control; protection of certain species and land; and tree planting in villages. These policies were carried out in the form of permits, fines, and military repression.
But Kissidougou villagers claim their ancestors established these islands. Many maps and letters from France's occupation of Guinea document Kissidougou's past landscape: during the 1780s to 1860s, "the whole country [was] prairie." James Fairhead and Melissa Leach, both environmental anthropologists at the University of Sussex, claim the state's environmental analyses "casts into question the relationships between society, demography, and environment." With this, they reformed the state's narratives: local land use can be both vegetation enriching and degrading; the combined effects of local practices on resource management are greater than the sum of their parts; and there is evidence of increased population correlating with an increase in forest cover. Fairhead and Leach support enabling policy and socioeconomic conditions in which local resource management groups can act effectively. In Kissidougou, there is evidence that local powers and community efforts shaped the island forests that define the savanna's landscape.
See also
Citizen science – cleanup projects that people can take part in
Cleaner production
Environmental impact assessment
Environmental management scheme
Environmental manager
Integrated landscape management
ISO 14000
Natural resource management
Planetary management
Political ecology
Resource justice
Stakeholder analysis
Sustainable management
References
Further reading
External links
Economic Costs & Benefits of Environmental Management NOAA Economics
business.gov – provides businesses with environmental management tips, as well as tips for green business owners (United States)
Nonprofit research on managing the environment
Resource economics
Natural resource management
Systems ecology
Human-Environment interaction
Field theory (psychology)

In topological and vector psychology, field theory is a psychological theory that examines patterns of interaction between the individual and the total field, or environment. The concept first made its appearance in psychology with roots in the holistic perspective of Gestalt theories. It was developed by Kurt Lewin, a Gestalt psychologist, in the 1940s.
Lewin's field theory can be expressed by a formula: B = f(p,e), meaning that behavior (B) is a function of the person (p) and their cultural environment (e).
History
Early philosophers believed the body to have a rational, inner nature that helped guide our thoughts and actions. This intuitive force, the soul, was viewed as having supreme control over our entire being. This view changed, however, during the intellectual revolution of the 17th century. The relationship between mind and body was an ever-evolving concept that received great attention from the likes of Descartes, Locke and Kant. Views shifted from holding that mind and body interact to holding that the mind is completely separate from the body, with rationalist and empiricist positions deeply rooted in the understanding of this phenomenon. Field theory emerged when Lewin considered a person's behavior to consist of many different interactions. He believed people to have dynamic thoughts, forces, and emotions that shifted their behavior to reflect their present state.
Kurt Lewin's influence
Kurt Lewin was born in Germany in 1890. He originally wanted to pursue behaviorism, but found an interest in Gestalt psychology while volunteering in the German army in 1914. He went on to work at the Psychological Institute of the University of Berlin after World War I. There he worked with two of the founders of Gestalt psychology, Max Wertheimer and Wolfgang Köhler. After Lewin moved to the USA, he became more involved with real-world issues and the need to understand and change human behavior. His interest in and personal involvement with Gestalt psychology led to the development of his field theory. Lewin's field theory emphasized interpersonal conflict, individual personalities, and situational variables. He proposed that behavior is the result of the individual and their environment. In viewing a person's social environment and its effect on their dynamic field, Lewin also found that a person's psychological state influences their social field.
Wanting to shift the focus of psychology away from Aristotelian views and towards Galileo's approach, Lewin believed psychology needed to follow physics. Drawing on both mathematics and physics, he took the concept of the field to represent the focus of one's experiences and personal needs, and used topology to map spatial relationships. Lewin formulated a field theory rule that analysis can only start with the situation represented as a whole; in order for change to take place, the entire situation must be considered. People often repeat the same unsuccessful attempts to grow and develop themselves, and field theory concludes that this repetition comes from forces within our fields. To display this psychological field, Lewin constructed "topological maps" that showed inter-related areas and indicated the directions of people's goals.
Main principles
The life space
An individual's behavior, at any time, is manifested only within the coexisting factors of the current "life space" or "psychological field." A life space is the combination of all the factors that influence a person's behavior at any time. Therefore, behavior can be expressed as a function of the life space: B = f(LS). Furthermore, the interaction of the person (P) and the environment (E) produces this life space; in symbolic expression, B = f(LS) = f(P, E). An example of a more complex life-space concept is the idea that two people's experience of a situation can become one when they converse together. This does not happen if the two people do not interact with each other, such as being in the same room but not talking to each other. This combined space can be "built" up as the two people share more ideas and create a more complex life-space together.
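The definitions above can be collected into a single chain of expressions; this is a restatement of the formulas already given, with consistent notation, not an addition to Lewin's theory:

B = f(LS) = f(P, E)

where B is behavior, LS is the life space, P is the person, and E is the psychological environment. In Lewin's treatment, P and E are interdependent, which is why behavior is a function of the life space as a whole rather than of the person or the environment taken separately.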
Environment
The environment, as represented in the life space, refers to the objective situation in which the person perceives and acts. The life space environment (E) is entirely subjective within each context, as it depends not only on the objective situation but also on the characteristics of the person (P). It is necessary to consider all aspects of a person's conscious and unconscious environment in order to map out the person's life space. The combined state, influenced by the environment as well as by the person's perspective, conscious and unconscious, must be viewed as a whole. While each part can be viewed as a separate entity, to observe the totality of the situation one must take all inputs into consideration.
Person
Lewin applied the term person in three different ways.
Properties/characteristics of the individual. (needs, beliefs, values, abilities)
A way of representing essentially the same psychological facts of "life space" itself.
"The behaving self".
"The behaving self may be seen as the individual's perception of his relations to the environment he perceives."
The development of the person inevitably affects the life space. As a person's body changes, or their image of themselves changes, this can cause instability in regions of the life space. Likewise, instability in the psychological environment or life space can lead to instability in the person.
Behavior
Behavior is any change within the life space that is subject to psychological laws. Accordingly, an action of the person (P), or a change in the environment (E) resulting from that action, can be considered behavior (B). Behaviors can exert large or small influences on the totality of the life space; regardless, they must be taken into consideration. Field theory holds that behavior must be derived from a totality of coexisting facts. These coexisting facts make up a "dynamic field", which means that the state of any part of the field depends on every other part of it. This includes not only mental and physical fields, but also unseen forces such as magnetism and gravity. This can be elaborated by imagining the difference a force can make by acting from a distance: when considering something such as the Moon's influence on the Earth, it is clear that there is an effect even though it acts from far away. Behavior depends on the present field rather than on the past or the future.
Development also plays a major role in life-space behavior. From the beginning of life, behavior is molded in all respects by the person's social situation. This of course raises the sociological discussion of nature versus nurture. Experimental psychology studies have shown that the formation of aspiration, the driving factor of actions and expressions (behavior), is directly influenced by the presence or absence of certain individuals within one's life space. A child's development naturally leads to the opening up of new, unknown life-space regions. Transitional periods such as adolescence are characterized by a greater effect of these new regions. An adolescent entering a new social group or life space can therefore be seen, psychologically, as entering a cognitively unstructured field. This new field makes it difficult for the individual to know what behavior is appropriate within it, which is believed to be one possible explanation for changes in child and adolescent behavior.
Theory and experimental evidence
According to field theory, a person's life is made up of multiple distinct spaces. Image 1 is an example of the total field, or environment. Image 2 shows a person and a goal they have, with forces pushing the person toward that goal. The dotted line represents everything one must go through to reach the goal, passing through many different spaces. Individuals may have the same goal, yet the field to get there may differ. One's field may be adjusted in order to gain the most in life: some fields may be deleted and some added, depending on the events that occur in a person's lifetime.
Field theory also includes the idea that every person holds a different experience of a situation. This is not to say that two people's experience of an event will not be similar, but that there will be some difference. This leads to the idea that no two experiences are the same for a person either, as the dynamic field is constantly changing; the dynamic field is like a stream, constantly flowing while changing slightly. Another piece of field theory is the idea that no part of a person's field can be viewed as pointless. Every part of a total field must be viewed as having possible meaning and importance, regardless of how pointless or unimportant it may seem. The totality of an individual's field seems to have no bounds, as research has shown that even an infant's experience of World War II could possibly affect life later on, due to the change in field. This is a good example of how broadly field theory can span, as a person's preconsciousness may be altered by field changes that occurred before any major development.
Reception and implications
Field theory is an important aspect of Gestalt theory, a doctrine that includes many important methods and discoveries. It is a crucial building block in the foundation of Gestalt psychologists' concepts and applications.
Field theory is also a cornerstone of Gestalt therapy, together with phenomenology and existentialist dialogue.
See also
Force-field analysis
Humanistic psychology
Major publications
Lewin, K. (1935). A dynamic theory of personality. New York: McGraw-Hill.
Lewin, K. (1936). Principles of topological psychology. New York: McGraw-Hill.
Lewin, K. (1938). The conceptual representation and measurement of psychological forces. Durham, NC: Duke University Press.
Lewin, K. (1951). Field theory in social science. New York: Harper.
References
Citations
Sources
Psychological theories
Psychological fiction

In literature, psychological fiction (also psychological realism) is a narrative genre that emphasizes interior characterization and motivation to explore the spiritual, emotional, and mental lives of its characters. The mode of narration examines the reasons for the behaviours of the characters, which propel the plot and explain the story. Psychological realism is achieved with deep explorations and explanations of the mental states of the character's inner person, usually through narrative modes such as stream of consciousness and flashbacks.
Early examples
The Tale of Genji by Lady Murasaki, written in 11th-century Japan, was considered by Jorge Luis Borges to be a psychological novel. French theorists Gilles Deleuze and Félix Guattari, in A Thousand Plateaus, evaluated the 12th-century Arthurian author Chrétien de Troyes' Lancelot, the Knight of the Cart and Perceval, the Story of the Grail as early examples of the style of the psychological novel.
Stendhal's The Red and the Black and Madame de La Fayette's The Princess of Cleves are considered the first precursors of the psychological novel. The modern psychological novel originated, according to The Encyclopedia of the Novel, primarily in the works of Nobel laureate Knut Hamsun – in particular, Hunger (1890), Mysteries (1892), Pan (1894) and Victoria (1898).
Notable examples
One of the greatest writers of the genre was Fyodor Dostoyevsky. His novels deal strongly with ideas and with characters who embody those ideas, exploring how they play out in real-world circumstances and what value they hold, most notably in The Brothers Karamazov and Crime and Punishment.
In the literature of the United States, Henry James, Patrick McGrath, Arthur Miller, and Edith Wharton are considered "major contributor[s] to the practice of psychological realism."
Subgenres
Psychological thriller
A subgenre of the thriller and psychological novel genres, emphasizing the inner mind and mentality of characters in a creative work. Because of its complexity, the genre often overlaps with and/or incorporates elements of mystery, drama, action, slasher, and horror – often psychological horror. It bears similarities to the Gothic and detective fiction genres.
Psychological horror
A subgenre of the horror and psychological novel genres that relies on the psychological, emotional and mental states of characters to generate horror. On occasion, it overlaps with the psychological thriller subgenre to heighten a story's suspense.
Psychological drama
A subgenre of the drama and psychological novel genres that focuses on the emotional, mental, and psychological development of characters in a dramatic work. One Flew Over the Cuckoo's Nest (1975) and Requiem for a Dream (2000), both based on novels, are notable examples of this subgenre.
Psychological science fiction
Psychological science fiction refers to works whose focus is on the character's inner struggle with political or technological forces. A Clockwork Orange (1971) is a notable example of this genre.
References
Further reading
George M. Johnson. Dynamic Psychology in Modernist British Fiction. Palgrave Macmillan, U.K., 2006.
Literary genres
Loneliness

Loneliness is an unpleasant emotional response to perceived isolation. Loneliness is also described as social pain – a psychological mechanism which motivates individuals to seek social connections. It is often associated with a perceived lack of connection and intimacy. Loneliness overlaps with, and yet is distinct from, solitude. Solitude is simply the state of being apart from others; not everyone who experiences solitude feels lonely. As a subjective emotion, loneliness can be felt even when a person is surrounded by other people.
The causes of loneliness are varied. Loneliness can be a result of genetic inheritance, cultural factors, a lack of meaningful relationships, a significant loss, an excessive reliance on passive technologies (notably the Internet in the 21st century), or a self-perpetuating mindset. Research has shown that loneliness is found throughout society, including among people in marriages along with other strong relationships, and those with successful careers. Most people experience loneliness at some points in their lives, and some feel it often.
Loneliness is found to be highest among younger people: according to the BBC Loneliness Experiment, 40% of people aged 16–24 admit to feeling lonely, while around 27% of people above age 75 do.
The effects of loneliness are also varied. Transient loneliness (loneliness which exists for a short period of time) is related to positive effects, including an increased focus on the strength of one's relationships. Chronic loneliness (loneliness which exists for a significant amount of time in one's life) is generally correlated with negative effects, including increased obesity, substance use disorder, risk of depression, cardiovascular disease, risk of high blood pressure, and high cholesterol. Chronic loneliness is also correlated with an increased risk of death and suicidal thoughts.
Medical treatments for loneliness include beginning therapy and taking antidepressants. Social treatments for loneliness generally include an increase in interaction with others, such as group activities (such as exercise or religious activities), re-engaging with old friends or colleagues, owning pets, and becoming more connected with one's community.
Loneliness has long been a theme in literature, going back to the Epic of Gilgamesh. However, academic coverage of loneliness was sparse until recent decades. In the 21st century, some academics and professionals have claimed that loneliness has become an epidemic, including Vivek Murthy, the Surgeon General of the United States.
Causes
Existential
Loneliness has long been viewed as a universal condition which, at least to a moderate extent, is felt by everyone. From this perspective, some degree of loneliness is inevitable as the limitations of human life mean it is impossible for anyone to continually satisfy their inherent need for connection. Professors including Michele A. Carter and Ben Lazare Mijuskovic have written books and essays tracking the existential perspective and the many writers who have talked about it throughout history. Thomas Wolfe's 1930s essay God's Lonely Man is frequently discussed in this regard; Wolfe makes the case that everyone imagines they are lonely in a special way unique to themselves, whereas really every single person sometimes experiences loneliness. While agreeing that loneliness alleviation can be a good thing, those who take the existential view tend to doubt such efforts can ever be fully successful, seeing some level of loneliness as both unavoidable and even beneficial, as it can help people appreciate the joy of living.
Cultural
Culture is discussed as a cause of loneliness in two senses. Migrants can experience loneliness due to missing their home culture. Studies have found this effect can be especially strong for students from countries in Asia with a collective culture, when they go to study at universities in more individualist English speaking countries. Culture is also seen as a cause of loneliness in the sense that western culture may have been contributing to loneliness, ever since the Enlightenment began to favour individualism over older communal values.
Lack of meaningful relationships
For many people, the family of origin did not offer the trust-building relationships needed to form a reference point that lasts a lifetime, and even endures in memory after the passing of a loved one. This can be due to parenting style, traditions, mental health issues (including personality disorders), and abusive family environments. Sometimes religious shunning is also present.
This impairs individuals' ability to know themselves, to value themselves, and to relate to others, or leaves them able to do so only with great difficulty.
All these factors, and many others, are often overlooked by the standard medical or psychological advice to go and meet friends and family and to socialise. This is not always possible when there is no one available to relate to, or when a person lacks the skills and knowledge needed to connect. With time, a person might become discouraged or develop apathy after numerous trials, failures or rejections brought on by the lack of interpersonal skills.
As the rate of loneliness increases yearly among people of every age group, and more so among the elderly, with known detrimental physical and psychological effects, there is a need to find new ways to connect people with each other. This is a particular challenge at a time when so much human attention is focused on electronic devices.
Relationship loss
Loneliness is a very common, though often temporary, consequence of a relationship breakup or bereavement. The loss of a significant person in one's life will typically initiate a grief response; in this situation, one might feel lonely, even while in the company of others. Loneliness can occur due to the disruption to one's social circle, sometimes combined with homesickness, which results from people moving away for work or education.
Situational
All sorts of situations and events can cause loneliness, especially in combination with certain personality traits for susceptible individuals. For example, an extroverted person who is highly social is more likely to feel lonely if they are living somewhere with a low population density, with fewer people to interact with. Loneliness can sometimes even be caused by events that might normally be expected to alleviate it: for example, the birth of a child (if there is significant postpartum depression) or getting married (especially if the marriage turns out to be unstable, overly disruptive to previous relationships, or emotionally cold). In addition to being impacted by external events, loneliness can be aggravated by pre-existing mental health conditions like chronic depression and anxiety.
Self-perpetuating
Long term loneliness can cause various types of maladaptive social cognition, such as hypervigilance and social awkwardness, which can make it harder for an individual to maintain existing relationships or establish new ones. Various studies have found that therapy targeted at addressing this maladaptive cognition is the single most effective way of intervening to reduce loneliness, though it does not always work for everyone.
Social contagion
Loneliness can spread through social groups like a disease. The mechanism for this involves the maladaptive cognition that often results from chronic loneliness. If a man loses a friend for whatever reason, this may increase his loneliness, resulting in maladaptive cognition such as excessive neediness or suspicion of other friends, and hence a further loss of human connection if he then splits up with his remaining friends. Those other friends now become lonelier too, leading to a ripple effect of loneliness. Studies have, however, found that this contagion effect is not consistent: a small increase in loneliness does not always cause the maladaptive cognition. Also, when someone loses a friend, they will sometimes form new friendships or deepen other existing relationships.
Internet
Studies have tended to find a moderate correlation between extensive internet use and loneliness, especially studies that draw on data from the 1990s, before internet use became widespread. Studies investigating whether the association is simply a result of lonely people being more attracted to the internet, or whether the internet can actually cause loneliness, have found contradictory results. The displacement hypothesis holds that some people choose to withdraw from real-world social interactions so they can have more time for the internet. Excessive internet use can directly cause anxiety and depression, conditions which can contribute to loneliness; yet these factors may be offset by the internet's ability to facilitate interaction and to empower people. Some studies found that internet use is a cause of loneliness, at least for some types of people. Others have found that internet use can have a significant positive effect in reducing loneliness. The authors of meta-studies and reviews from about 2015 and later have tended to argue that there is a bidirectional causal relationship between loneliness and internet use: excessive use, especially if passive, can increase loneliness, while moderate use, especially by users who engage with others rather than just passively consuming content, can increase social connection and reduce loneliness.
Genetics
Smaller early studies had estimated that loneliness may be between 37–55% heritable. However, in 2016, the first genome-wide association study of loneliness found that the heritability of loneliness is much lower, at about 14–27%. This suggests that while genes play a role in determining how much loneliness a person may feel, they are less of a factor than individual experiences and the environment.
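Heritability estimates of this kind generally derive from twin studies. A standard first approximation, given here as general background rather than as the method of the specific studies cited above, is Falconer's formula:

h² = 2(rMZ − rDZ)

where rMZ and rDZ are the correlations of the trait between identical (monozygotic) and fraternal (dizygotic) twin pairs. The genome-wide association approach mentioned above instead estimates heritability from measured genetic variants, which commonly yields lower figures than twin-based estimates.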
Other
People making long driving commutes have reported dramatically higher feelings of loneliness (as well as other negative health impacts).
Typology
Two principal types of loneliness are social and emotional loneliness. This delineation was made in 1973 by Robert S. Weiss, in his seminal work: Loneliness: The Experience of Emotional and Social Isolation. Based on Weiss's view that "both types of loneliness have to be examined independently, because the satisfaction for the need of emotional loneliness cannot act as a counterbalance for social loneliness, and vice versa", people working to treat or better understand loneliness have tended to treat these two types of loneliness separately, though this is far from always the case.
Social loneliness
Social loneliness is the loneliness people experience because of the lack of a wider social network. They may not feel they are members of a community, or that they have friends or allies whom they can rely on in times of distress.
Emotional loneliness
Emotional loneliness results from the lack of deep, nurturing relationships with other people. Weiss tied his concept of emotional loneliness to attachment theory. People have a need for deep attachments, which can be fulfilled by close friends, though more often by close family members such as parents, and later in life by romantic partners. In 1997, Enrico DiTommaso and Barry Spinner separated emotional loneliness into Romantic and Family loneliness.
A 2019 study found that emotional loneliness significantly increased the likelihood of death for older adults living alone (whereas there was no increase in mortality found with social loneliness).
Family loneliness
Family loneliness results when individuals feel they lack close ties with family members. A 2010 study of 1,009 students found that only family loneliness was associated with increased frequency of self-harm, not romantic or social loneliness.
Romantic loneliness
Romantic loneliness can be experienced by adolescents and adults who lack a close bond with a romantic partner. Psychologists have asserted that the formation of a committed romantic relationship is a critical development task for young adults but is also one that many are delaying into their late 20s or beyond. People in romantic relationships tend to report less loneliness than single people, provided their relationship provides them with emotional intimacy. People in unstable or emotionally cold romantic partnerships can still feel romantic loneliness.
Other
Several other typologies and types of loneliness exist. Further types include existential loneliness, cosmic loneliness (feeling alone in a hostile universe), and cultural loneliness (typically found among immigrants who miss their home culture). These types are less well studied than the threefold separation into social, romantic and family loneliness, yet can be valuable in understanding the experience of particular subgroups.
Lockdown loneliness
Lockdown loneliness refers to "loneliness resulting because of social disconnection due to enforced social distancing and lockdowns during the COVID-19 pandemic and similar emergency situations."
Demarcation
Differences between feeling lonely and being socially isolated
There is a clear distinction between feeling lonely and being socially isolated (for example, a loner). In particular, one way of thinking about loneliness is as a discrepancy between one's necessary and achieved levels of social interaction, while solitude is simply the lack of contact with people. Loneliness is therefore a subjective yet multidimensional experience; if a person thinks they are lonely, then they are lonely. People can be lonely while in solitude, or in the middle of a crowd. What makes a person lonely is their perceived need for more social interaction or a certain type or quality of social interaction that is not currently available. A person can be in the middle of a party and feel lonely due to not talking to enough people. Conversely, one can be alone and not feel lonely; even though there is no one around, that person is not lonely because there is no desire for social interaction. There have also been suggestions that each person has their own optimal level of social interaction. If a person gets too little or too much social interaction, this could lead to feelings of loneliness or over-stimulation.
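The discrepancy view of loneliness sketched above can be written informally as follows; this is an illustrative formalisation, not an equation from the sources discussed here:

loneliness ∝ max(0, desired social interaction − achieved social interaction)

On this reading, loneliness arises only when achieved interaction falls short of what is desired, which accounts for both feeling lonely in a crowd (interaction of the wrong type or quality) and feeling content in solitude (little desired interaction).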
Solitude can have positive effects on individuals. One study found that, although time spent alone tended to depress a person's mood and increase feelings of loneliness, it also helped to improve their cognitive state, such as by improving concentration. It can be argued that some individuals seek solitude in order to discover a more meaningful and vital existence. Furthermore, once the alone time was over, people's moods tended to increase significantly. Solitude is also associated with other positive growth experiences, religious experiences, and identity building, such as the solitary quests used in rites of passage for adolescents.
Transient vs. chronic loneliness
Another important typology of loneliness focuses on the time perspective. In this respect, loneliness can be viewed as either transient or chronic.
Transient loneliness is temporary in nature and generally easily relieved. Chronic loneliness is more permanent and not easily relieved. For example, when a person is sick and cannot socialize with friends, this is a case of transient loneliness: once the person recovers, it is easy for them to alleviate their loneliness. A person with long-term feelings of loneliness, regardless of whether they are at a family gathering or with friends, is experiencing chronic loneliness.
Loneliness as a human condition
The existentialist school of thought views individuality as the essence of being human. Each human being comes into the world alone, travels through life as a separate person, and ultimately dies alone. Coping with this, accepting it, and learning how to direct our own lives with some degree of grace and satisfaction is the human condition.
Some philosophers, such as Sartre, believe in an epistemic loneliness in which loneliness is a fundamental part of the human condition because of the paradox between people's consciousness desiring meaning in life and the isolation and nothingness of the universe. Conversely, other existentialist thinkers argue that human beings might be said to actively engage each other and the universe as they communicate and create, and loneliness is merely the feeling of being cut off from this process.
In his 2019 text, Evidence of Being: The Black Gay Cultural Renaissance and the Politics of Violence, Darius Bost draws from Heather Love's theorization of loneliness to delineate the ways in which loneliness structures black gay feeling and literary, cultural productions. Bost limns, "As a form of negative affect, loneliness shores up the alienation, isolation, and pathologization of black gay men during the 1980s and early 1990s. But loneliness is also a form of bodily desire, a yearning for an attachment to the social and for a future beyond the forces that create someone's alienation and isolation."
Prevalence
Possibly over 5% of the population of industrialized countries experience loneliness at levels which are harmful to physical and mental health, though scientists have expressed caution over making such claims with high confidence. Thousands of studies and surveys have been undertaken to assess the prevalence of loneliness, yet it remains challenging for scientists to make accurate generalisations and comparisons. Reasons for this include the various loneliness measurement scales used by different studies, differences in how even the same scale is implemented from study to study, and the fact that cultural variations across time and space may affect how people report the largely subjective phenomenon of loneliness.
One consistent finding has been that loneliness is not evenly distributed across a nation's population. It tends to be concentrated among vulnerable sub groups; for example the poor, the unemployed, immigrants and mothers. Some of the most severe loneliness tends to be found among international students from countries in Asia with a collective culture, when they come to study in countries with a more individualist culture, such as Australia. In New Zealand, the fourteen surveyed groups with the highest prevalence of loneliness most/all of the time in descending order are: disabled people, recent migrants, low income households, unemployed, single parents, rural (rest of South Island), seniors aged 75+, not in the labour force, youth aged 15–24, no qualifications, not housing owner-occupier, not in a family nucleus, Māori, and low personal income.
Studies have found inconsistent results concerning the effect of age, gender and culture on loneliness.
Much 20th century and early 21st century writing on loneliness assumed it typically increases with age. In high-income countries, on average, one in four people over 60 and one in three over 75 feels lonely. Yet as of 2020, with some exceptions, recent studies have tended to find that it is young people who report the most loneliness (though loneliness is still found to be a severe problem for the very old).
There have been contradictory results concerning how the prevalence of loneliness varies with gender. A 2020 analysis based on a worldwide dataset gathered by the BBC found greater loneliness among men, though some earlier work had found the opposite, or that gender made no difference.
While cross-cultural comparisons are difficult to interpret with high confidence, the 2020 analyses based on the BBC dataset found the more individualist countries like the UK tended to have higher levels of loneliness. However, previous empirical work had often found that people living in more collectivist cultures tended to report greater loneliness, possibly due to less freedom to choose the sort of relationships that suit them best.
Increasing prevalence
In the 21st century, loneliness has been widely reported as an increasing worldwide problem. A 2010 systematic review and meta-analysis stated that the "modern way of life in industrialized countries" is greatly reducing the quality of social relationships, partly due to people no longer living in close proximity to their extended families. The review notes that from 1990 to 2010, the number of Americans reporting no close confidants tripled.
In 2017, Vivek Murthy, the Surgeon General of the United States, argued that there was a loneliness epidemic. It has since been described as an epidemic thousands of times, by reporters, academics and other public officials.
Professors such as Claude S. Fischer and Eric Klinenberg opined in 2018 that while the data does not support describing loneliness as an "epidemic" or even as a clearly growing problem, loneliness is indeed a serious issue, with a severe health impact on millions of people. However, a 2021 study found that adolescent loneliness at school and depression increased substantially and consistently worldwide after 2012.
A comparative overview of the prevalence and determinants of loneliness and social isolation in Europe in the pre-COVID period was conducted by Joint Research Centre of the European Commission within the project Loneliness in Europe. The empirical results indicate that 8.6% of the adult population in Europe experience frequent loneliness and 20.8% experience social isolation, with eastern Europe recording the highest prevalence of both phenomena.
In Australia, the annual national Household, Income and Labour Dynamics in Australia (HILDA) Survey has reported a steady 8% rise in agreement with the statement "I often feel very lonely" between 2009 and 2021, responses indicating "strongly agree" rose steadily by over 20% in that same time period. This is a reversal of the trend seen from the start of the survey in 2001 until 2009 where these figures had both been steadily decreasing.
Loneliness was exacerbated by the isolating effects of social distancing, stay-at-home orders, and deaths during the COVID-19 pandemic.
In May 2023, Murthy published a United States Department of Health and Human Services advisory on the impact of the epidemic of loneliness and isolation in the United States. The report likened the dangers of loneliness to other public health threats such as smoking and obesity. In November 2023, the World Health Organization declared loneliness a "global public health concern" and launched an international commission to study the problem.
Effects
Transient
While unpleasant, temporary feelings of loneliness are sometimes experienced by almost everyone, they are not thought to cause long term harm. Early 20th century work sometimes treated loneliness as a wholly negative phenomenon. Yet transient loneliness is now generally considered beneficial. The capacity to feel it may have been evolutionarily selected for, a healthy aversive emotion that motivates individuals to strengthen social connections. Transient loneliness is sometimes compared to short-term hunger, which is unpleasant but ultimately useful as it motivates us to eat.
Chronic
Long-term loneliness is widely considered a close to entirely harmful condition. Whereas transient loneliness typically motivates us to improve relationships with others, chronic loneliness can have the opposite effect, because long-term social isolation can cause hypervigilance. While enhanced vigilance may have been evolutionarily adaptive for individuals who went long periods without others watching their backs, it can lead to excessive cynicism and suspicion of other people, which in turn can be detrimental to interpersonal relationships. So without intervention, chronic loneliness can be self-reinforcing.
Benefits
Much has been written about the benefits of being alone, yet often, even when authors use the word "loneliness", they are referring to what could be more precisely described as voluntary solitude. Yet some assert that even long-term involuntary loneliness can have beneficial effects.
Chronic loneliness is often seen as a purely negative phenomenon through the lens of social and medical science. Yet in spiritual and artistic traditions, it has been viewed as having mixed effects. Even within these traditions, there can be warnings not to intentionally seek out chronic loneliness or other such conditions – just advice that if one falls into them, there can be benefits. In western arts, there is a long-held belief that psychological hardship, including loneliness, can be a source of creativity. In spiritual traditions, perhaps the most obvious benefit of loneliness is that it can increase the desire for a union with the divine. More esoterically, the psychic wound opened up by loneliness or other conditions has been said, e.g. by Simone Weil, to open up space for God to manifest within the soul. In Christianity, spiritual dryness has been seen as advantageous as part of the "dark night of the soul", an ordeal that, while painful, can result in spiritual transformation. From a secular perspective, while the vast majority of empirical studies focus on the negative effects of long-term loneliness, a few studies have found there can also be benefits, such as an enhanced perceptiveness of social situations.
Brain
Studies have found mostly negative effects from chronic loneliness on brain functioning and structure. However, certain parts of the brain and specific functions, like the ability to detect social threat, appear to be strengthened. A 2020 population-genetics study looked for signatures of loneliness in grey matter morphology, intrinsic functional coupling, and fiber tract microstructure. The loneliness-linked neurobiological profiles converged on a collection of brain regions known as the default mode network. This higher associative network shows more consistent loneliness associations in grey matter volume than other cortical brain networks. Lonely individuals display stronger functional communication in the default network, and greater microstructural integrity of its fornix pathway. The findings fit with the possibility that the up-regulation of these neural circuits supports mentalizing, reminiscence and imagination to fill the social void.
Physical health
Chronic loneliness can be a serious, life-threatening health condition. It has been found to be strongly associated with an increased risk of cardiovascular disease, though direct causal links have yet to be firmly identified. People experiencing loneliness tend to have an increased incidence of high blood pressure, high cholesterol, and obesity.
Loneliness has been shown to increase the concentration of cortisol levels in the body and weaken the effects of dopamine, the hormone that makes people enjoy things. Prolonged, high cortisol levels can cause anxiety, depression, digestive problems, heart disease, sleep problems, and weight gain.
Associational studies on loneliness and the immune system have found mixed results, with lower natural killer (NK) cell activity or a dampened antibody response to viruses such as Epstein–Barr, herpes, and influenza, but either slower or no change to the progression of AIDS.
Based on the ELSA, a study found that loneliness increased the risk of dementia by one-third. Not having a partner (being single, divorced, or widowed) doubled the risk of dementia, whereas having two or three closer relationships reduced the risk by three-fifths. Based on the large UK Biobank cohort, another study found that individuals who reported feeling lonely had a higher risk of developing Parkinson's disease.
Death
A 2010 systematic review and meta-analysis found a significant association between loneliness and increased mortality. People with good social relationships were found to have a 50% greater chance of survival compared to lonely people (odds ratio = 1.5). In other words, chronic loneliness seems to be a risk factor for death comparable to smoking, and greater than obesity or lack of exercise. A 2017 overview of systematic reviews found other meta-studies with similar findings. However, clear causative links between loneliness and early death have not been firmly established.
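For readers unfamiliar with the statistic, the odds ratio reported above compares the odds of survival in the two groups; this is the standard definition rather than a calculation taken from the review itself:

OR = [p1/(1 − p1)] / [p2/(1 − p2)] = 1.5

where p1 and p2 are the proportions surviving over the study period among people with good social relationships and among lonely people, respectively. An odds ratio of 1.5 means the odds of survival were 50% greater for the socially connected group.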
Mental health
Loneliness has been linked with depression, and is thus a risk factor for suicide. A study based on more than 4,000 adults aged over 50 in the English Longitudinal Study of Ageing (ELSA), looked at loneliness. Nearly one in five of those who reported being lonely had developed signs of depression within a year. Émile Durkheim has described loneliness, specifically the inability or unwillingness to live for others, i.e. for friendships or altruistic ideas, as the main reason for what he called egoistic suicide. In adults, loneliness is a major precipitant of depression and alcoholism. People who are socially isolated may report poor sleep quality, and thus have diminished restorative processes. Loneliness has also been linked with a schizoid character type in which one may see the world differently and experience social alienation, described as the self in exile.
While the long-term effects of extended periods of loneliness are little understood, it has been noted that people who are isolated or experience loneliness for a long period of time fall into an "ontological crisis" or "ontological insecurity", in which they are not sure if they or their surroundings exist, and if they do, exactly who or what they are, creating torment, suffering, and despair to the point of palpability within the thoughts of the person.
In children, a lack of social connections is directly linked to several forms of antisocial and self-destructive behavior, most notably hostile and delinquent behavior. In both children and adults, loneliness often has a negative impact on learning and memory. Its disruption of sleep patterns can have a significant impact on the ability to function in everyday life.
Research from a large-scale study published in the journal Psychological Medicine, showed that "lonely millennials are more likely to have mental health problems, be out of work and feel pessimistic about their ability to succeed in life than their peers who feel connected to others, regardless of gender or wealth".
In 2004, the United States Department of Justice published a study indicating that loneliness increases suicide rates profoundly among juveniles: 62% of all suicides that occurred within juvenile facilities were among those who either were in solitary confinement at the time of the suicide or had a history of being housed there.
Pain, depression, and fatigue function as a symptom cluster and thus may share common risk factors. Two longitudinal studies with different populations demonstrated that loneliness was a risk factor for the development of the pain, depression, and fatigue symptom cluster over time. These data also highlight the health risks of loneliness; pain, depression, and fatigue often accompany serious illness and place people at risk for poor health and mortality.
The psychiatrist George Vaillant and Robert J. Waldinger, director of the longitudinal Study of Adult Development at Harvard University, found that those who were happiest and healthiest reported strong interpersonal relationships.
Suicide
Loneliness can cause suicidal thoughts (suicidal ideation), attempts at suicide, and actual suicide. The extent to which suicides result from loneliness is difficult to determine, however, as there are typically several potential causes involved. In an article written for the American Foundation for Suicide Prevention, Dr. Jeremy Noble writes, "You don't have to be a doctor to recognize the connection between loneliness and suicide". As feelings of loneliness intensify, so do thoughts of suicide and attempts at suicide. The loneliness that triggers suicidal tendencies affects all facets of society.
The Samaritans, a nonprofit charity in England which works with people going through crises, say there is a definite correlation between feelings of loneliness and suicide among juveniles and young adults. The Office for National Statistics in England found that loneliness is one of the top ten reasons young people have suicidal ideations and attempt suicide. College students who are lonely, away from home, living in new and unfamiliar surroundings, and separated from friends can feel isolated, and without proper coping skills may turn to suicide as a way to end the pain of loneliness. A common theme among children and young adults dealing with feelings of loneliness is that they did not know help was available, or where to get it. Loneliness, to them, is a source of shame.
Older people can also struggle with feelings of severe loneliness that lead them to consider acting on thoughts of suicide or self-harm. In some countries senior citizens appear to account for a high proportion of suicides, though in other countries the rate is significantly higher for middle-aged men. Retirement, poor health, and the loss of a significant other or of other family or friends all contribute to loneliness. Suicides caused by loneliness in older people can be difficult to identify: they often have no one to whom they can disclose their feelings of loneliness and the despair it brings. They may stop eating, alter the doses of medications, or choose not to treat an illness as a way to expedite death so they no longer have to deal with feeling lonely.
Cultural influences can also cause loneliness leading to suicidal thoughts or actions. For example, Hispanic and Japanese cultures value interdependence; when a person from one of these cultures feels removed, or feels unable to sustain relationships within their family or society, they may begin to think negatively or act self-destructively. Other cultures, such as many in Europe, are more independent. While the loneliness a person feels may stem from different circumstances or cultural norms, the impact can lead to the same result: a desire to end one's life.
Society level
High levels of chronic loneliness can also have society-wide effects. Noreena Hertz writes that Hannah Arendt was the first to discuss the link between loneliness and the politics of intolerance. In her book The Origins of Totalitarianism, Arendt argues that loneliness is an essential prerequisite for a totalitarian movement to gain power. Hertz states that the link between an individual's loneliness and their likelihood to vote for a populist political party or candidate has since been supported by several empirical studies. In addition to increasing support for populist policies, Hertz argues that a society with high levels of loneliness risks eroding its ability to conduct effective, mutually beneficial politics, partly because loneliness tends to make people more suspicious of each other, and partly because some of the ways individuals alleviate loneliness, such as technological or transactional substitutes for human companionship, can erode people's political and social skills, such as their ability to compromise and to see other points of view.
However, the link between loneliness and political attitudes remains underexplored and ambiguous. Studies investigating the relationship between loneliness and voter orientation directly have found that lonely individuals tend to abstain from elections rather than support populist parties. This inconsistency might stem from differences in the definition and operationalization of loneliness: while Hertz applies a broader definition, the empirical studies that contradict her point use self-reported, directly measured loneliness as the predictor of voting behavior.
Physiological mechanisms linked to poor health
There are a number of potential physiological mechanisms linking loneliness to poor health outcomes. In 2005, results from the American Framingham Heart Study demonstrated that lonely men had raised levels of interleukin-6 (IL-6), a blood chemical linked to heart disease. A 2006 study conducted by the Center for Cognitive and Social Neuroscience at the University of Chicago found that loneliness can add thirty points to a blood pressure reading for adults over the age of fifty. Another finding, from a survey conducted by John Cacioppo of the University of Chicago, is that doctors report providing better medical care to patients who have a strong network of family and friends than to patients who are alone. Cacioppo states that loneliness impairs cognition and willpower, alters DNA transcription in immune cells, and leads over time to high blood pressure. Lonelier people are more likely to show evidence of viral reactivation than less lonely people. Lonelier people also have stronger inflammatory responses to acute stress compared with less lonely people; inflammation is a well-known risk factor for age-related diseases.
When someone feels left out of a situation, they feel excluded, and one possible side effect is a decrease in body temperature. When people feel excluded, blood vessels at the periphery of the body may narrow, preserving core body heat. This protective mechanism is known as vasoconstriction.
Relief
The reduction of loneliness in oneself and others has long been a motive for human activity and social organization. For some commentators, such as professor Ben Lazare Mijuskovic, it has been the single strongest motivator for human activity, after essential physical needs are satisfied, ever since the dawn of civilization. Loneliness is the first negative condition identified in the Bible, with the Book of Genesis showing God creating a companion for man to relieve loneliness. Nevertheless, there is relatively little direct record of explicit loneliness relief efforts prior to the 20th century. Some commentators, including professor Rubin Gotesky, have argued that the sense of aloneness was rarely felt until older communal ways of living began to be disrupted by the Enlightenment.
Starting in the 1900s, and especially in the 21st century, efforts explicitly aiming to alleviate loneliness became much more common. Loneliness reduction efforts occur across multiple disciplines, often undertaken by actors for whom loneliness relief is not the primary concern, such as commercial firms, civic planners, designers of new housing developments, and university administrations. Across the world, many departments, NGOs and even umbrella groups entirely dedicated to loneliness relief have been established, such as the Campaign to End Loneliness in the UK. As loneliness is a complex condition, no single method can consistently alleviate it for different individuals, and many different approaches are used.
Medical treatment
Therapy is a common way of treating loneliness. For individuals whose loneliness is caused by factors that respond well to medical intervention, it is often successful. Short-term therapy, the most common form for lonely or depressed patients, typically occurs over a period of ten to twenty weeks. During therapy, emphasis is put on understanding the cause of the problem, reversing the negative thoughts, feelings, and attitudes resulting from the problem, and exploring ways to help the patient feel connected. Some doctors also recommend group therapy as a means to connect with other patients and establish a support system. Doctors also frequently prescribe anti-depressants to patients as a stand-alone treatment, or in conjunction with therapy. It may take several attempts before a suitable anti-depressant medication is found.
Doctors often see a high proportion of patients experiencing loneliness; a UK survey found that three-quarters of doctors believed that between one and five patients visited them each day mainly out of loneliness. There are not always sufficient funds to pay for therapy, which has led to the rise of "social prescription", where doctors can refer patients to NGO- and community-led solutions such as group activities. While preliminary findings suggest social prescription has good results for some people, early evidence for its effectiveness was not strong, with commentators advising that for some people it is not a good alternative to medical therapy. Formal social prescribing programmes have since been launched in 17 different countries around the world, with improved evidence for their effectiveness when the prescriptions are carefully targeted, such as helping the lonely person get closer to nature or participate in group activities they enjoy.
NGO and community led
Along with growing awareness of the problem of loneliness, community-led projects explicitly aiming for its relief became more common in the latter half of the 20th century, with still more starting up in the 21st. There have been many thousands of such projects across North and South America, Europe, Asia and Africa. Some campaigns are run nationally under the control of charities dedicated to loneliness relief, while other efforts may be local projects, sometimes run by a group for which loneliness relief is not the primary objective, such as housing associations that aim to ensure multi-generational living, with social interaction between younger and older people encouraged and in some cases even contractually required. Projects range from befriending schemes that facilitate just two people meeting up to large group activities, which often have other objectives in addition to loneliness relief, such as having fun, improving physical health through exercise, or participating in conservation efforts. In New Zealand, for example, the NGO Age Concern began an Accredited Visiting Service which was found to be an effective counter to loneliness and isolation.
Government
In 2010, François Fillon announced that the fight against loneliness would be France's great national cause for 2011. In the UK, the Jo Cox Commission on Loneliness began pushing to make tackling loneliness a government priority from 2016. In 2018, this led to Great Britain becoming the first country in the world to appoint a ministerial lead for loneliness and to publish an official loneliness reduction strategy. There have since been calls for other countries, such as Sweden and Germany, to appoint their own minister for loneliness. Various other countries had seen government-led anti-loneliness efforts even before 2018, however: in 2017, the government of Singapore started a scheme to provide allotments to its citizens so they could socialise while working together on them, while the Netherlands government set up a telephone line for lonely older people. While governments sometimes directly control loneliness relief efforts, they typically fund or work in partnership with educational institutions, companies and NGOs.
Pets
Pet therapy, or animal-assisted therapy, can be used to treat both loneliness and depression. The presence of animal companions, especially dogs, but also others like cats, rabbits, and guinea pigs, can ease feelings of depression and loneliness among some patients. Beyond the companionship the animal itself provides there may also be increased opportunities for socializing with other pet owners. According to the Centers for Disease Control and Prevention there are a number of other health benefits associated with pet ownership, including lowered blood pressure and decreased levels of cholesterol and triglycerides.
Technology
Technology companies have been advertising their products as helpful for reducing loneliness at least as far back as 1905; records exist of early telephones being presented as a way for isolated farmers to reduce loneliness. Technological solutions for loneliness have been suggested much more frequently since the development of the internet, and especially since loneliness became a more prominent public health issue around 2017. Solutions have been proposed by existing tech companies and by start-ups dedicated to loneliness reduction.
Solutions that have become available since 2017 tend to fall under four approaches: (1) mindfulness apps that aim to change an individual's attitude towards loneliness, emphasising possible benefits and trying to shift the experience towards something closer to voluntary solitude; (2) apps that warn users when they are starting to spend too much time online, based on research findings that moderate use of digital technology can be beneficial but that excessive time online can increase loneliness; (3) apps that help people connect with others, including to arrange real-life meetups; and (4) AI-related technologies that provide digital companionship. Such companions can be conventionally virtual (existing only when their application is switched on), can have an independent digital life (their program may run continuously in the cloud, allowing them to interact with the user across different platforms like Instagram and Twitter in ways similar to how a human friend might behave), or can have a physical presence, like a Pepper robot. As far back as the 1960s, some individuals stated that they preferred communicating with the ELIZA computer program to talking with other people. AI-driven applications available in the 2020s are considerably more advanced: they can remember previous conversations, and have some ability to sense emotional states and tailor their interaction accordingly. An example of a start-up working on such technology is Edward Saatchi's Fable Studio. Inspired by the Joi character in Blade Runner 2049, Saatchi seeks to create digital friends that can help alleviate loneliness. Because they would in some senses be beyond human, untainted by negative motivators like greed or envy, and with enhanced powers of attention, they might help people be kinder and gentler to others, and so assist with loneliness relief at a society-wide level as well as for individuals.
Effectiveness of digital technology interventions
A 2021 systematic review and meta-analysis on the effectiveness of digital technology interventions (DTIs) in reducing loneliness in older adults found no evidence supporting that DTIs reduce loneliness in older adults with an average age from 73 to 78 years (SD 6–11). DTIs studied included social internet-based activities, that is, social activities via social websites, videoconferencing, customized computer platforms with simplified touch-screen interfaces, personal reminder information and social management systems, WhatsApp groups, and video or voice networks.
Religion
Studies have found an association between religion and the reduction of loneliness, especially among the elderly. These studies sometimes include caveats, such as that religions with strong behavioural prescriptions can have isolating effects. In the 21st century, numerous religious organisations have begun efforts explicitly focussed on loneliness reduction. Religious figures have also played a role in raising awareness of the problem; for example, Pope Francis said in 2013 that the loneliness of the old, along with youth unemployment, were the most serious evils of the age.
Others
Nostalgia has also been found to have a restorative effect, counteracting loneliness by increasing perceived social support. Vivek Murthy has stated that the most generally available cure for loneliness is human connection. Murthy argues that regular people have a vital role to play as individuals in reducing loneliness for themselves and others, in part by greater emphases on kindness and on nurturing relationships with others.
Effectiveness
Professor Stella Mills has suggested that while social loneliness can be relatively easy to address with group activities and other measures that help build connections between people, effective intervention against emotional loneliness can be more challenging. Mills argues that such intervention is more likely to succeed for individuals who are in the early stages of loneliness, before the effects caused by chronic loneliness are deeply engrained.
A 2010 meta-study compared the effectiveness of four interventions: improving social skills, enhancing social support, increasing opportunities for social interaction, and addressing abnormal social cognition (faulty patterns of thought, such as the hyper-vigilance often caused by chronic loneliness). The results indicated that all interventions were effective in reducing loneliness, with the possible exception of social skills training, and suggest that correcting maladaptive social cognition offers the best chance of reducing loneliness. A 2019 umbrella review of systematic reviews focussing on the effectiveness of loneliness relief efforts aimed specifically at older people also found that those targeting social cognition were the most effective.
A 2018 overview of systematic reviews concerning the effectiveness of loneliness interventions found that, generally, there is little solid evidence that interventions are effective. The reviewers also found no reason to believe the various types of intervention did any harm, though they cautioned against the excessive use of digital technology. The authors called for more rigorous, best-practice-compliant research in future studies, with more attention to the cost of interventions.
History
Loneliness has been a theme in literature throughout the ages, as far back as the Epic of Gilgamesh. Yet according to Fay Bound Alberti, it was only around the year 1800 that the word began to widely denote a negative condition. With some exceptions, earlier writings and dictionary definitions of loneliness tended to equate it with solitude – a state that was often seen as positive, unless taken to excess. From about 1800, the word loneliness began to acquire its modern definition as a painful subjective condition. This may be due to the economic and social changes arising out of the Enlightenment, such as alienation and increased interpersonal competition, along with a reduction in the proportion of people enjoying close and enduring connections with others living nearby, as may have been the case, for example, in modernising pastoral villages. Despite growing awareness of the problem of loneliness, widespread social recognition remained limited, and scientific study was sparse, until the last quarter of the twentieth century. One of the earliest studies of loneliness was published by Joseph Harold Sheldon in 1948. The 1950 book The Lonely Crowd helped further raise the profile of loneliness among academics. For the general public, awareness was raised by the 1966 Beatles song "Eleanor Rigby".
According to Eugene Garfield, it was Robert S. Weiss who brought the attention of scientists to the topic of loneliness with his 1973 publication of Loneliness: The experience of emotional and social isolation. Before Weiss's publication, the few existing studies of loneliness were mostly focussed on older adults. Following Weiss's work, and especially after the 1978 publication of the UCLA Loneliness Scale, scientific interest in the topic has broadened and deepened considerably: tens of thousands of academic studies have been carried out to investigate loneliness among students alone, with many more focussed on other subgroups and on whole populations.
Concern among the general public over loneliness has increased in the decades since "Eleanor Rigby"'s release; by 2018, government-backed anti-loneliness campaigns had been launched in countries including the UK, Denmark and Australia.
See also
Solitude
Shyness
Social anxiety
Social anxiety disorder
Social isolation
Autophobia
Individualism
Interpersonal relationship
Loner
Pit of despair (animal experiments on isolation)
Schizoid personality disorder
Friendship recession
Metacognition

Metacognition is an awareness of one's thought processes and an understanding of the patterns behind them. The term comes from the root word meta, meaning "beyond", or "on top of". Metacognition can take many forms, such as reflecting on one's ways of thinking, and knowing when and how oneself and others use particular strategies for problem-solving. There are generally two components of metacognition: (1) cognitive conceptions and (2) a cognitive regulation system. Research has shown that both components of metacognition play key roles in metaconceptual knowledge and learning. Metamemory, defined as knowing about memory and mnemonic strategies, is an important aspect of metacognition.
Writings on metacognition date back at least as far as two works by the Greek philosopher Aristotle (384–322 BC): On the Soul and the Parva Naturalia.
Definitions
This higher-level cognition was given the label metacognition by the American developmental psychologist John H. Flavell (1976).
The term metacognition literally means 'above cognition', and is used to indicate cognition about cognition, or more informally, thinking about thinking. Flavell defined metacognition as knowledge about cognition and control of cognition. For example, a person is engaging in metacognition if they notice that they are having more trouble learning A than B, or if it strikes them that they should double-check C before accepting it as fact (J. H. Flavell, 1976, p. 232). Andreas Demetriou's theory (one of the neo-Piagetian theories of cognitive development) used the term hyper-cognition to refer to self-monitoring, self-representation, and self-regulation processes, which are regarded as integral components of the human mind. Moreover, with his colleagues, he showed that these processes participate in general intelligence, together with processing efficiency and reasoning, which have traditionally been considered to compose fluid intelligence.
Metacognition also involves thinking about one's own thinking processes, such as study skills, memory capabilities, and the ability to monitor learning. This concept needs to be explicitly taught along with content instruction. A pithy statement from M. D. Gall et al. is often cited in this respect: "Learning how to learn cannot be left to students. It must be taught."
Metacognition is a general term encompassing the study of memory-monitoring and self-regulation, meta-reasoning, consciousness/awareness and autonoetic consciousness/self-awareness. In practice these capacities are used to regulate one's own cognition, to maximize one's potential to think and learn, and to evaluate proper ethical/moral rules. It can also lead to a reduction in response time for a given situation as a result of heightened awareness, and potentially reduce the time needed to complete problems or tasks.
In the context of student metacognition, D. N. Perkins and Gavriel Salomon observe that metacognition concerns students' ability to monitor their progress. During this process, students ask questions like "What am I doing now?", "Is it getting me anywhere?", and "What else could I be doing instead?". Perkins and Salomon argue that such metacognitive practices help students to avoid unproductive approaches.
In the domain of experimental psychology, an influential distinction in metacognition (proposed by T. O. Nelson & L. Narens) is between Monitoring—making judgments about the strength of one's memories—and Control—using those judgments to guide behavior (in particular, to guide study choices). Dunlosky, Serra, and Baker (2007) covered this distinction in a review of metamemory research that focused on how findings from this domain can be applied to other areas of applied research.
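This monitoring–control loop can be made concrete with a small simulation. The sketch below is illustrative only, not a model taken from Nelson and Narens or from the metamemory literature; the item names, noise level, and restudy gain are hypothetical. Monitoring is modelled as a noisy judgment of learning (JOL) for each study item, and control as the decision to restudy the items judged least well learned:

```python
import random

random.seed(1)

# Hypothetical study items with latent memory strengths in [0, 1].
items = {f"item_{i}": random.random() for i in range(10)}

def monitor(strengths, noise=0.15):
    """Monitoring: a noisy judgment of learning (JOL) for each item."""
    return {k: min(1.0, max(0.0, s + random.gauss(0, noise)))
            for k, s in strengths.items()}

def control(jols, budget=3):
    """Control: pick the `budget` items with the lowest JOLs to restudy."""
    return sorted(jols, key=jols.get)[:budget]

def restudy(strengths, chosen, gain=0.2):
    """Restudying raises the latent strength of the chosen items."""
    for k in chosen:
        strengths[k] = min(1.0, strengths[k] + gain)

jols = monitor(items)        # monitoring observes cognition
to_restudy = control(jols)   # judgments guide study choices
restudy(items, to_restudy)   # control feeds back into cognition
print("restudied:", to_restudy)
```

Because the judgments are noisy, control occasionally allocates restudy to an item that was already well learned, which is one way imperfect monitoring can propagate into suboptimal study choices.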
In the domain of cognitive neuroscience, metacognitive monitoring and control has been viewed as a function of the prefrontal cortex, which receives (monitors) sensory signals from other cortical regions and implements control using feedback loops (see chapters by Schwartz & Bacon and Shimamura, in Dunlosky & Bjork, 2008).
Metacognition is also studied in the domain of artificial intelligence and modelling, and it is therefore a domain of interest for emergent systemics.
Concepts and models
Metacognition involves two interacting phenomena, guided by a person's cognitive regulation:
Metacognitive knowledge (also called metacognitive awareness) is what individuals know about themselves and others as cognitive processors, such as beliefs about thinking.
Metacognitive experiences are those experiences that have something to do with the current, ongoing cognitive endeavor.
Metacognitive regulation is the regulation of cognition and of subsequent learning experiences through a set of activities that help people enhance their learning. It involves active metacognitive control, or attention, over the process in learning situations. The skills that aid in regulation involve planning the way to approach a learning task, monitoring comprehension, and evaluating progress towards the completion of a task.
Metacognition includes at least three different types of metacognitive awareness when considering metacognitive knowledge:
Declarative knowledge: refers to knowledge about oneself as a learner and about what factors can influence one's performance. Declarative knowledge can also be referred to as "world knowledge".
Procedural knowledge: refers to knowledge about doing things. This type of knowledge is displayed as heuristics and strategies. A high degree of procedural knowledge can allow individuals to perform tasks more automatically. This is achieved through a large variety of strategies that can be accessed more efficiently.
Conditional knowledge: refers to knowing when and why to use declarative and procedural knowledge. It allows students to allocate their resources when using strategies. This in turn allows the strategies to become more effective.
These types of metacognitive knowledge also include:
Content knowledge (declarative knowledge), which is understanding one's own capabilities, such as a student evaluating their own knowledge of a subject in a class. Notably, not all metacognition is accurate: studies have shown that students often mistake a lack of effort for understanding when evaluating themselves and their overall knowledge of a concept, and greater confidence in having performed well is associated with less accurate metacognitive judgment of the performance; one common way of quantifying such judgment accuracy is sketched after this list.
Task knowledge (procedural knowledge), which is how one perceives the difficulty of a task, encompassing the content, length, and type of the assignment. The study mentioned under content knowledge also dealt with a person's ability to evaluate the difficulty of a task relative to their overall performance on it. Again, the accuracy of this knowledge was skewed: students who thought their way was better or easier also tended to perform worse on evaluations, while students who were rigorously and continually evaluated reported being less confident but still did better on initial evaluations.
Strategic knowledge (conditional knowledge), which is one's own capability for using strategies to learn information. Young children are not particularly good at this; it is not until students are in upper elementary school that they begin to develop an understanding of effective strategies.
In short, strategic knowledge involves knowing what (factual or declarative knowledge), knowing when and why (conditional or contextual knowledge) and knowing how (procedural or methodological knowledge).
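The accuracy of such metacognitive judgments is often quantified by measuring how well confidence tracks actual performance. The sketch below is an illustrative computation, not taken from the studies cited above: it computes the Goodman–Kruskal gamma correlation, a measure commonly used in metamemory research, over hypothetical confidence ratings and test outcomes:

```python
def gamma(confidences, correct):
    """Goodman-Kruskal gamma between confidence ratings and accuracy.

    Counts concordant pairs (the correct answer of the pair received
    the higher confidence) and discordant pairs (the incorrect answer
    received the higher confidence); gamma = (C - D) / (C + D),
    ranging from -1 to 1.
    """
    concordant = discordant = 0
    for i in range(len(confidences)):
        for j in range(i + 1, len(confidences)):
            if correct[i] == correct[j] or confidences[i] == confidences[j]:
                continue  # ties carry no ordinal information
            if (confidences[i] > confidences[j]) == (correct[i] > correct[j]):
                concordant += 1
            else:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical data: confidence ratings (1-5) and answer correctness (1/0).
conf = [5, 4, 2, 1, 3, 5, 2]
acc = [1, 1, 0, 0, 1, 0, 0]
print(f"gamma = {gamma(conf, acc):.2f}")  # positive: confidence tracks accuracy
```

A gamma near zero would indicate the dissociation described above, in which confidence carries little information about actual performance.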
Similar to metacognitive knowledge, metacognitive regulation, or the "regulation of cognition", contains three essential skills:
Planning: refers to the appropriate selection of strategies and the correct allocation of resources that affect task performance.
Monitoring: refers to one's awareness of comprehension and task performance.
Evaluating: refers to appraising the final product of a task and the efficiency at which the task was performed. This can include re-evaluating strategies that were used.
Metacognitive control is an important skill in cognitive regulation: it is about focusing cognitive resources on relevant information. Similarly, maintaining motivation to see a task to completion is a metacognitive skill closely associated with attentional control. The ability to become aware of distracting stimuli – both internal and external – and sustain effort over time also involves metacognitive or executive functions. Swanson (1990) found that metacognitive knowledge can compensate for IQ and a lack of prior knowledge, in a comparison of fifth- and sixth-grade students' problem solving. Students with better metacognition were reported to have used fewer strategies but solved problems more effectively than students with poor metacognition, regardless of IQ or prior knowledge.
A lack of awareness of one's own knowledge, thoughts, feelings, and adaptive strategies leads to inefficient control over them. Hence, metacognition is a necessary life skill that needs nurturing to improve one's quality of life. Maladaptive use of metacognitive skills in response to stress can strengthen negative psychological states and social responses, potentially leading to psychosocial dysfunction. Examples of maladaptive metacognitive skills include worry based on inaccurate cognitive conceptions, rumination, and hypervigilance. Continuous cycles of negative cognitive conceptions and the associated emotional burden often lead to negative coping strategies such as avoidance and suppression. These can foster pervasive learned helplessness and impair the formation of executive functions, negatively affecting an individual's quality of life.
The theory of metacognition plays a critical role in successful learning, and it is important for both students and teachers to demonstrate understanding of it. Students who underwent metacognitive training, including pretesting, self-evaluation, and creating study plans, performed better on exams. They are self-regulated learners who use the "right tool for the job" and modify learning strategies and skills based on their awareness of effectiveness. Individuals with a high level of metacognitive knowledge and skill identify blocks to learning as early as possible and change "tools" or strategies to ensure goal attainment. A broader repertoire of "tools" also assists in goal attainment. When "tools" are general, generic, and context-independent, they are more likely to be useful across different types of learning needs. In one study examining students who received text messages during college lectures, it was suggested that students with higher metacognitive self-regulation were less likely than other students to have their learning affected by keeping mobile phones switched on in class.
Finally, there is no distinction between domain-general and domain-specific metacognitive skills. This means that metacognitive skills are domain-general in nature and there are no specific skills for certain subject areas. The metacognitive skills that are used to review an essay are the same as those that are used to verify an answer to a math question.
Related concepts
A number of theorists have proposed a common mechanism behind theory of mind, the ability to model and understand the mental states of others, and metacognition, which involves a theory of one's own mind's function. Direct evidence for this link is limited.
Several researchers have related mindfulness to metacognition. Mindfulness includes at least two mental processes: a stream of mental events and a higher level awareness of the flow of events. Mindfulness can be distinguished from some metacognition processes in that it is a conscious process.
Social metacognition
Although metacognition has thus far been discussed in relation to the self, recent research in the field has suggested that this view is overly restrictive. Instead, it is argued that metacognition research should also include beliefs about others' mental processes, the influence of culture on those beliefs, and on beliefs about ourselves. This "expansionist view" proposes that it is impossible to fully understand metacognition without considering the situational norms and cultural expectations that influence those same conceptions. This combination of social psychology and metacognition is referred to as social metacognition.
Social metacognition can include ideas and perceptions that relate to social cognition. Additionally, social metacognition can include judging the cognition of others, such as judging the perceptions and emotional states of others. This is in part because the process of judging others is similar to judging the self. However, individuals have less information about the people they are judging; therefore, judging others tends to be more inaccurate, an effect called the fundamental attribution error. Having similar cognitions can buffer against this inaccuracy and can be helpful for teams or organizations, as well as interpersonal relationships.
Social metacognition and the self-concept
An example of the interaction between social metacognition and self-concept can be found in examining implicit theories about the self. Implicit theories can cover a wide range of constructs about how the self operates, but two are especially relevant here: entity theory and incrementalist theory. Entity theory proposes that an individual's self-attributes and abilities are fixed and stable, while incrementalist theory proposes that these same constructs can be changed through effort and experience. Entity theorists are susceptible to learned helplessness because they may feel that circumstances are outside their control (i.e. there is nothing that could have been done to make things better), and thus they may give up easily. Incremental theorists react differently when faced with failure: they desire to master challenges, and therefore adopt a mastery-oriented pattern. They immediately begin to consider various ways to approach the task differently, and they increase their efforts. Cultural beliefs can act on this as well. For example, a person who has accepted a cultural belief that memory loss is an unavoidable consequence of old age may avoid cognitively demanding tasks as they age, thus accelerating cognitive decline. Similarly, a woman who is aware of the stereotype that women are not good at mathematics may perform worse on tests of mathematical ability or avoid mathematics altogether. These examples demonstrate that the metacognitive beliefs people hold about the self - which may be socially or culturally transmitted - can have important effects on persistence, performance, and motivation.
Attitudes as a function of social metacognition
The way that individuals think about attitudes greatly affects the way that they behave. Metacognitions about attitudes influence how individuals act, and especially how they interact with others.
Some metacognitive characteristics of attitudes include importance, certainty, and perceived knowledge, and they influence behavior in different ways. Attitude importance is the strongest predictor of behavior and can predict information-seeking behaviors in individuals. Attitude importance is also more likely to influence behavior than certainty about the attitude. When considering a social behavior like voting, a person may hold high importance but low certainty: they will likely vote even if they are unsure whom to vote for. Meanwhile, a person who is very certain of whom they want to vote for may not actually vote if it is of low importance to them. This also applies to interpersonal relationships: a person might hold much favorable knowledge about their family, but may not maintain close relations with them if the relationship is of low importance.
Metacognitive characteristics of attitudes may be key to understanding how attitudes change. Research shows that the frequency of positive or negative thoughts is the biggest factor in attitude change. A person may believe that climate change is occurring but have negative thoughts toward it, such as "If I accept the responsibilities of climate change, I must change my lifestyle". Such individuals are less likely to change their behavior than someone who thinks positively about the same issue, for example "By using less electricity, I will be helping the planet".
Another way to increase the likelihood of behavior change is by influencing the source of the attitude. An individual's personal thoughts and ideas have a much greater impact on the attitude than the ideas of others. Therefore, when people view lifestyle changes as coming from themselves, the effects are more powerful than if the changes were coming from a friend or family member. These thoughts can be re-framed in a way that emphasizes personal importance, such as "I want to stop smoking because it is important to me" rather than "quitting smoking is important to my family". More research needs to be conducted on cultural differences and the importance of group ideology, which may alter these results.
Social metacognition and stereotypes
People have secondary cognitions about the appropriateness, justifiability, and social judgability of their own stereotypic beliefs. People know that it is typically unacceptable to make stereotypical judgments and make conscious efforts not to do so. Subtle social cues can influence these conscious efforts. For example, when given a false sense of confidence about their ability to judge others, people will return to relying on social stereotypes. Cultural backgrounds influence social metacognitive assumptions, including stereotypes. For example, cultures without the stereotype that memory declines with old age display no age differences in memory performance.
When it comes to making judgments about other people, implicit theories about the stability versus malleability of human characteristics also predict differences in social stereotyping. Holding an entity theory of traits increases the tendency for people to see similarity among group members and to use stereotyped judgments. For example, compared to those holding incremental beliefs, people who hold entity beliefs of traits use more stereotypical trait judgments of ethnic and occupational groups and form more extreme trait judgments of new groups. When an individual's assumptions about a group combine with their implicit theories, more stereotypical judgments may be formed. Stereotypes that one believes others hold about oneself are called metastereotypes.
Animal metacognition
In nonhuman primates
Chimpanzees
Beran, Smith, and Perdue (2013) found that chimpanzees showed metacognitive monitoring in an information-seeking task. In their studies, three language-trained chimpanzees were asked to use a keyboard to name a food item in order to obtain it. The food in a container was either visible to them, or they had to move toward the container to see its contents. The studies showed that the chimpanzees more often checked what was in the container first when the food was hidden. When the food was visible, the chimpanzees were more likely to approach the keyboard directly and report the identity of the food without looking in the container again. The results suggest that chimpanzees know what they have seen and show effective information-seeking behavior when information is incomplete.
Rhesus macaques (Macaca mulatta)
Morgan et al. (2014) investigated whether rhesus macaques can make both retrospective and prospective metacognitive judgments in the same memory task, introducing risk choices to assess the monkeys' confidence in their memories. Two male rhesus monkeys (Macaca mulatta) were first trained in a computerized token-economy task in which they could accumulate tokens to exchange for food rewards. Monkeys were presented with multiple images of common objects simultaneously, and a moving border then appeared on the screen indicating the target. Immediately following the presentation, the target image and some distractors were shown in the test. During the training phase, monkeys received immediate feedback after their responses: they earned two tokens for correct choices but lost two tokens for incorrect ones.
In Experiment 1, a confidence rating was introduced after the monkeys completed their responses, in order to test retrospective metamemory judgments. After each response, a high-risk and a low-risk choice were offered. Choosing the low-risk option earned one token regardless of accuracy; choosing the high-risk option earned three tokens if the memory response on that trial had been correct, but lost three tokens if it had been incorrect. Morgan and colleagues (2014) found a significant positive correlation between memory accuracy and risk choice in the two rhesus monkeys: they were more likely to select the high-risk option after answering correctly in the working memory task, and the low-risk option after failing.
Morgan et al. (2014) then examined the monkeys' prospective metacognitive monitoring skills in Experiment 2. This study employed the same design, except that the two monkeys were asked to make a low-risk or high-risk confidence judgment before making their actual responses, to measure judgments about future events. Similarly, the monkeys more often chose the high-risk confidence judgment before answering correctly in the working memory task, and tended to choose the low-risk option before providing an incorrect response. These two studies indicate that rhesus monkeys can accurately monitor their performance, providing evidence of metacognitive abilities in monkeys.
In rats
In addition to nonhuman primates, other animals have also shown metacognition. Foote and Crystal (2007) provided the first evidence that rats know when they do not know, in a perceptual discrimination task. Rats were required to classify brief noises as short or long; some noises of intermediate duration were difficult to classify. Rats were given the option to decline the test on some trials but were forced to respond on others. If they chose to take the test and responded correctly, they received a large reward, but no reward if their classification was incorrect. If they declined the test, they were guaranteed a smaller reward. The results showed that rats were more likely to decline the test as the difficulty of the discrimination increased, suggesting that the rats knew when they lacked the correct answer and declined the test to secure the smaller reward. A further finding was that performance was better when the rats had chosen to take the test than when they were forced to respond, indicating that declining some uncertain trials improved accuracy.
This response pattern might be attributed to rats actively monitoring their own mental states. Alternatively, external cues, such as environmental cue associations, could explain their behavior in the discrimination task: rats might have learned an association between intermediate stimuli and the decline option over time, and longer response latencies or features inherent to the stimuli could serve as discriminative cues for declining tests. Therefore, Templer, Lee, and Preston (2017) used an olfactory-based delayed match-to-sample (DMTS) memory task to assess whether rats were capable of adaptive metacognitive responding. Rats were exposed to a sample odor first and, after a delay, chose either to decline or to take a four-choice memory test. Correct odor choices yielded a large reward, incorrect choices no reward, and the decline option a small reward.
In Experiment 2, some "no-sample" trials, in which no odor was provided before the test, were added. The researchers hypothesized that if rats could internally assess their memory strength, they would decline more often when no sample odor had been presented. Alternatively, if the decline option was motivated by external environmental cues, the rats would be less likely to decline the test, because no external cues were available. The results showed that rats were more likely to decline the test in no-sample trials than in normal sample trials, supporting the notion that rats can track their internal memory strength.
To rule out other potential explanations, they also manipulated memory strength by presenting the sample odor twice and by varying the retention interval between learning and test. Templer and colleagues (2017) found that rats were less likely to decline the test if they had been exposed to the sample twice, suggesting that their memory strength for these samples had increased. Tests after a long delay were declined more often than tests after a short delay, because memory was better after the short delay. Overall, this series of studies demonstrated that rats can distinguish between remembering and forgetting, and ruled out the possibility that use of the decline option was modulated by external cues such as environmental cue associations.
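The logic shared by these decline-option paradigms can be illustrated with a small signal-detection simulation. The sketch below is an illustrative model, not code or data from any study cited here, and the noise level and decline threshold are arbitrary: each trial yields noisy internal evidence, the simulated animal declines when the evidence is weak, and accuracy on freely chosen tests therefore exceeds accuracy on forced tests, as reported for rats:

```python
import random

random.seed(0)

def trial(difficulty):
    """One discrimination trial: returns (answer_correct, confidence).

    The true category is +1 or -1; internal evidence is the category
    signal shrunk by difficulty, plus Gaussian noise. The simulated
    animal answers by the sign of the evidence, and its confidence is
    the magnitude of the evidence.
    """
    category = random.choice([1, -1])
    evidence = category * (1.0 - difficulty) + random.gauss(0, 1)
    return (evidence > 0) == (category > 0), abs(evidence)

THRESHOLD = 0.75  # decline the test when confidence falls below this

chosen, forced = [], []
for _ in range(20000):
    difficulty = random.random()      # a mix of easy and hard trials
    correct, confidence = trial(difficulty)
    forced.append(correct)            # forced trials: must answer
    if confidence >= THRESHOLD:       # choice trials: weak evidence is declined
        chosen.append(correct)

print(f"accuracy when forced: {sum(forced) / len(forced):.2f}")
print(f"accuracy when chosen: {sum(chosen) / len(chosen):.2f}")
```

The simulation reproduces the signature finding without assuming anything richer than a readable internal confidence signal; the debate summarised below is over whether such a signal amounts to metacognition or can itself be reduced to learned cues.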
In pigeons
Research on metacognition in pigeons has shown limited success. Inman and Shettleworth (1999) employed the delayed match-to-sample (DMTS) procedure to test pigeons' metacognition. Pigeons were presented with one of three sample shapes (a triangle, a square, or a star), and at the end of the retention interval were required to peck the matching sample when three stimuli appeared simultaneously on the screen. On some trials a safe key was presented next to the three sample stimuli, allowing pigeons to decline the trial. Pigeons received a large reward for pecking the correct stimulus, a middle-level reward for pecking the safe key, and nothing for pecking the wrong stimulus. In Inman and Shettleworth's (1999) first experiment, pigeons' accuracy was lower and they were more likely to choose the safe key as the retention interval between presentation of the stimuli and the test increased. However, in Experiment 2, when pigeons were offered the option to escape or take the test before the test phase, there was no relationship between choosing the safe key and longer retention intervals. Adams and Santi (2011) also employed the DMTS procedure in a perceptual discrimination task in which pigeons were trained to discriminate between durations of illumination. Pigeons did not choose the escape option more often as the retention interval increased during initial testing. After extended training, they learned to escape the difficult trials, but this pattern might be attributed to the pigeons having learned an association between escape responses and longer retention delays.
In addition to the DMTS paradigm, Castro and Wasserman (2013) showed that pigeons can exhibit adaptive and efficient information-seeking behavior in a same-different discrimination task. Two arrays of items were presented simultaneously; the two sets of items were either identical or different from one another. Pigeons were required to distinguish between the two arrays, with the level of difficulty varied. On some trials pigeons were provided with an "Information" button and a "Go" button: they could either increase the number of items in the arrays to make the discrimination easier, or proceed to respond by pecking the Go button. Castro and Wasserman found that the more difficult the task, the more often pigeons chose the Information button to solve the discrimination task. This behavioral pattern indicates that pigeons can evaluate the difficulty of a task internally and actively search for information when necessary.
In dogs
Dogs have shown a certain level of metacognition, in that they are sensitive to what information they have or have not acquired. Belger and Bräuer (2018) examined whether dogs would seek additional information when facing uncertain situations. The experimenter put a reward behind one of two fences, with the dogs either able or unable to see where it was hidden. The dogs were then encouraged to find the reward by walking around one fence. The dogs checked more frequently before selecting a fence when they had not seen the baiting process than when they had seen where the reward was hidden. However, contrary to apes, dogs did not show more checking behavior when the delay between baiting the reward and selecting the fence was longer. These findings suggest that dogs have some information-seeking behaviors, though less flexible ones than apes.
In dolphins
Smith et al. (1995) evaluated whether dolphins have the ability of metacognitive monitoring in an auditory threshold paradigm. A bottlenosed dolphin was trained to discriminate between high-frequency and low-frequency tones. An escape option, associated with a small reward, was available on some trials. The study showed that the dolphin appropriately used the uncertainty response when trials were difficult to discriminate.
Debate
There is consensus that nonhuman primates, especially great apes and rhesus monkeys, exhibit metacognitive control and monitoring behaviors, but less convergent evidence has been found in other animals such as rats and pigeons. Some researchers have criticized these methods and posited that the performances might be accounted for by low-level conditioning mechanisms, with animals learning associations between reward and external stimuli through simple reinforcement. However, many studies have demonstrated that reinforcement models alone cannot explain the animals' behavioral patterns: animals have shown adaptive metacognitive behavior even in the absence of concrete reward.
Strategies
Metacognitive-like processes are especially ubiquitous in discussions of self-regulated learning. Self-regulation requires metacognition: awareness of one's own learning and planning of further learning methodology. Attentive metacognition is a salient feature of good self-regulated learners, but it does not guarantee automatic application. Reinforcing collective discussion of metacognition is a salient feature of self-critical and self-regulating social groups. The activities of strategy selection and application include those concerned with an ongoing attempt to plan, check, monitor, select, revise and evaluate.
Metacognition is 'stable' in that learners' initial decisions derive from the pertinent facts about their cognition through years of learning experience. Simultaneously, it is also 'situated' in the sense that it depends on learners' familiarity with the task, motivation, emotion, and so forth. Individuals need to regulate their thoughts about the strategy they are using and adjust it based on the situation to which the strategy is being applied. At a professional level, this has led to emphasis on the development of reflective practice, particularly in the education and health-care professions.
Recently, the notion has been applied to the study of second language learners in the field of TESOL and applied linguistics in general (e.g., Wenden, 1987; Zhang, 2001, 2010). This new development has been much related to Flavell (1979), where the notion of metacognition is elaborated within a tripartite theoretical framework. Learner metacognition is defined and investigated by examining their person knowledge, task knowledge and strategy knowledge.
Wenden (1991) has proposed and used this framework and Zhang (2001) has adopted this approach and investigated second language learners' metacognition or metacognitive knowledge. In addition to exploring the relationships between learner metacognition and performance, researchers are also interested in the effects of metacognitively-oriented strategic instruction on reading comprehension (e.g., Garner, 1994, in first language contexts, and Chamot, 2005; Zhang, 2010). The efforts are aimed at developing learner autonomy, interdependence and self-regulation.
Metacognition helps people to perform many cognitive tasks more effectively. Strategies for promoting metacognition include self-questioning (e.g. "What do I already know about this topic? How have I solved problems like this before?"), thinking aloud while performing a task, and making graphic representations (e.g. concept maps, flow charts, semantic webs) of one's thoughts and knowledge. Carr, 2002, argues that the physical act of writing plays a large part in the development of metacognitive skills.
Strategy evaluation matrices (SEM) can help to improve the knowledge-of-cognition component of metacognition. The SEM works by identifying the declarative (column 1), procedural (column 2) and conditional (columns 3 and 4) knowledge about specific strategies. The SEM can help individuals identify the strengths and weaknesses of certain strategies, as well as introduce them to new strategies that they can add to their repertoire; a hypothetical example is sketched below.
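For illustration, an SEM can be represented as structured data, one row per strategy with its declarative, procedural, and conditional entries. The example below is hypothetical; the two strategies and their entries are common study-skills advice, not rows from a published SEM:

```python
# A hypothetical strategy evaluation matrix: each row records what the
# strategy is (declarative), how to use it (procedural), and when and
# why to use it (conditional).
sem = [
    {
        "strategy": "Skimming",
        "how": "Read headings, first sentences, and summaries first",
        "when": "Before reading a long or unfamiliar text",
        "why": "Builds an overview that guides deeper reading",
    },
    {
        "strategy": "Self-questioning",
        "how": "Pause and ask what the passage means and how it connects",
        "when": "During and after reading difficult material",
        "why": "Exposes gaps in comprehension early",
    },
]

for row in sem:
    print(f"{row['strategy']} -- when: {row['when']}; why: {row['why']}")
```

Filling in such a matrix for one's own strategies is itself an exercise in declarative, procedural, and conditional knowledge.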
A regulation checklist (RC) is a useful strategy for improving the regulation-of-cognition aspect of one's metacognition. RCs help individuals to implement a sequence of thoughts that allow them to review their own metacognition. King (1991) found that fifth-grade students who used a regulation checklist outperformed control students on a variety of questions, including written problem solving, asking strategic questions, and elaborating on information.
Examples of strategies that can be taught to students are word analysis skills, active reading strategies, listening skills, organizational skills and creating mnemonic devices.
Walker and Walker have developed a model of metacognition in school learning termed Steering Cognition, which describes the capacity of the mind to exert conscious control over its reasoning and processing strategies in relation to the external learning task. Studies have shown that pupils who can exert metacognitive regulation over the attentional and reasoning strategies they use in mathematics, and then shift those strategies when engaged in science or English literature learning, achieve higher academic outcomes at secondary school.
Metastrategic knowledge
"Metastrategic knowledge" (MSK) is a sub-component of metacognition that is defined as general knowledge about higher order thinking strategies. MSK had been defined as "general knowledge about the cognitive procedures that are being manipulated". The knowledge involved in MSK consists of "making generalizations and drawing rules regarding a thinking strategy" and of "naming" the thinking strategy.
The defining act of a metastrategic strategy is the conscious awareness that one is performing a form of higher order thinking. MSK is an awareness of the type of thinking strategies being used in specific instances, and it consists of the following abilities: making generalizations and drawing rules regarding a thinking strategy; naming the thinking strategy; explaining when, why and how such a thinking strategy should be used and when it should not be used; and identifying the disadvantages of not using appropriate strategies and the task characteristics that call for the use of the strategy.
MSK deals with the broader picture of the conceptual problem. It creates rules to describe and understand the physical world around the people who utilize these processes, called higher-order thinking. This is the capability of the individual to take apart complex problems in order to understand their components – the building blocks for understanding the "big picture" (the main problem) through reflection and problem solving.
Action
According to recent research, both the social and cognitive dimensions of sporting expertise can be adequately explained from a metacognitive perspective. The potential of metacognitive inferences and domain-general skills, including psychological skills training, is integral to the genesis of expert performance. Moreover, both mental imagery (e.g., mental practice) and attentional strategies (e.g., routines) contribute to our understanding of expertise and metacognition.
The potential of metacognition to illuminate our understanding of action was first highlighted by Aidan Moran who discussed the role of meta-attention in 1996. A recent research initiative, a research seminar series called META funded by the BPS, is exploring the role of the related constructs of meta-motivation, meta-emotion, and thinking and action (metacognition).
Mental illness
Sparks of interest
In the context of mental health, metacognition can be loosely defined as the process that "reinforces one's subjective sense of being a self and allows for becoming aware that some of one's thoughts and feelings are symptoms of an illness". The interest in metacognition emerged from a concern for an individual's ability to understand their own mental status compared to others as well as the ability to cope with the source of their distress. These insights into an individual's mental health status can have a profound effect on overall prognosis and recovery.
Metacognition brings many unique insights into the normal daily functioning of a human being. It also demonstrates that a lack of these insights compromises 'normal' functioning, leading to less healthy functioning. In the autism spectrum, it is speculated that there is a profound deficit in theory of mind.
In people who identify as alcoholics, the belief that one needs to control one's cognition is an independent predictor of alcohol use, over and above anxiety. Alcohol may be used as a coping strategy for controlling unwanted thoughts and emotions formed by negative perceptions; this is sometimes referred to as self-medication.
Implications
Adrian Wells' and Gerald Matthews' theory proposes that when faced with an undesired choice, an individual can operate in two distinct modes: "object" and "metacognitive". Object mode interprets perceived stimuli as truth, whereas metacognitive mode understands thoughts as cues that have to be weighed and evaluated, and that are not as easily trusted. Targeted interventions unique to each patient give rise to the belief that assistance in increasing metacognition in people diagnosed with schizophrenia is possible through tailored psychotherapy. With a customized therapy in place, clients have the potential to develop a greater ability to engage in complex self-reflection, which can ultimately be pivotal in their recovery process. In the obsessive–compulsive spectrum, cognitive formulations give greater attention to intrusive thoughts related to the disorder. "Cognitive self-consciousness" is the tendency to focus attention on thought; patients with OCD exemplify varying degrees of these "intrusive thoughts". Patients with generalized anxiety disorder also show negative thought processes in their cognition.
Cognitive-attentional syndrome (CAS) characterizes a metacognitive model of emotional disorder (CAS is consistent with the attention strategy of excessively focusing on the source of a threat), which ultimately develops through the client's own beliefs. Metacognitive therapy attempts to correct this change in the CAS. One of the techniques in this model is called attention training (ATT), which was designed to diminish worry and anxiety by fostering a sense of control and cognitive awareness. ATT also trains clients to detect threats and test how controllable reality appears to be.
Following the work of Asher Koriat, who regards confidence as a central aspect of metacognition, metacognitive training for psychosis aims at decreasing overconfidence in patients with schizophrenia and raising awareness of cognitive biases. According to a meta-analysis, this type of intervention improves delusions and hallucinations.
Works of art as metacognitive artifacts
The concept of metacognition has also been applied to reader-response criticism. Narrative works of art, including novels, movies and musical compositions, can be characterized as metacognitive artifacts which are designed by the artist to anticipate and regulate the beliefs and cognitive processes of the recipient, for instance, how and in which order events and their causes and identities are revealed to the reader of a detective story. As Menakhem Perry has pointed out, mere order has profound effects on the aesthetical meaning of a text. Narrative works of art contain a representation of their own ideal reception process. They are something of a tool with which the creators of the work wish to attain certain aesthetical and even moral effects.
Mind wandering
There is an intimate, dynamic interplay between mind wandering and metacognition. Metacognition serves to correct the wandering mind, suppressing spontaneous thoughts and bringing attention back to more "worthwhile" tasks.
Organizational metacognition
The concept of metacognition has also been applied to collective teams and organizations in general, termed organizational metacognition.
References
Further reading
External links
Cognitive psychology
Educational technology
Educational psychology
Mind–body interventions
Psychological warfare
Psychological warfare (PSYWAR), or the basic aspects of modern psychological operations (PsyOp), has been known by many other names or terms, including Military Information Support Operations (MISO), Psy Ops, political warfare, "Hearts and Minds", and propaganda. The term is used "to denote any action which is practiced mainly by psychological methods with the aim of evoking a planned psychological reaction in other people".
Various techniques are used, aimed at influencing a target audience's value system, belief system, emotions, motives, reasoning, or behavior. Psychological warfare is used to induce confessions or reinforce attitudes and behaviors favorable to the originator's objectives, and is sometimes combined with black operations or false flag tactics. It is also used to destroy the morale of enemies through tactics that aim to depress troops' psychological states.
Target audiences can be governments, organizations, groups, and individuals, and are not limited to soldiers. Civilians of foreign territories can also be targeted by technology and media so as to cause an effect on the government of their country.
Mass communication such as radio allows for direct communication with an enemy populace, and therefore has been used in many efforts. Social media channels and the internet allow for campaigns of disinformation and misinformation performed by agents anywhere in the world.
History
Early
Since prehistoric times, warlords and chiefs have recognized the importance of weakening the morale of their opponents. According to Polyaenus, in the Battle of Pelusium (525 BC) between the Persian Empire and ancient Egypt, the Persian forces used cats and other animals as a psychological tactic against the Egyptians, who avoided harming cats due to religious belief and superstitions.
Currying favor with supporters was the other side of psychological warfare, and an early practitioner of this was Alexander the Great, who successfully conquered large parts of Europe and the Middle East and held on to his territorial gains by co-opting local elites into the Greek administration and culture. Alexander left some of his men behind in each conquered city to introduce Greek culture and oppress dissident views. His soldiers were paid dowries to marry locals in an effort to encourage assimilation.
Genghis Khan, leader of the Mongolian Empire in the 13th century AD employed less subtle techniques. Defeating the will of the enemy before having to attack and reaching a consented settlement was preferable to facing his wrath. The Mongol generals demanded submission to the Khan and threatened the initially captured villages with complete destruction if they refused to surrender. If they had to fight to take the settlement, the Mongol generals fulfilled their threats and massacred the survivors. Tales of the encroaching horde spread to the next villages and created an aura of insecurity that undermined the possibility of future resistance.
Genghis Khan also employed tactics that made his numbers seem greater than they actually were. During night operations he ordered each soldier to light three torches at dusk to give the illusion of an overwhelming army and deceive and intimidate enemy scouts. He also sometimes had objects tied to the tails of his horses, so that riding on open and dry fields raised a cloud of dust that gave the enemy the impression of great numbers. His soldiers used arrows specially notched to whistle as they flew through the air, creating a terrifying noise.
Another tactic favored by the Mongols was catapulting severed human heads over city walls to frighten the inhabitants and spread disease in the besieged city's closed confines. This tactic was especially used by the later Turko-Mongol chieftain Tamerlane.
The Muslim caliph Omar, in his battles against the Byzantine Empire, sent small reinforcements in the form of a continuous stream, giving the impression that a large force would accumulate eventually if not swiftly dealt with.
During the late Eastern Zhou and early Qin dynasties in 3rd-century BC China, the Empty Fort Strategy was used to trick the enemy into believing that an empty location was an ambush, in order to prevent them from attacking it using reverse psychology. The tactic also relied on luck, since it worked only if the enemy believed that the location concealed a threat.
In the 6th century BC, the Greek Bias of Priene successfully resisted the Lydian king Alyattes by fattening up a pair of mules and driving them out of the besieged city. When Alyattes' envoy was then sent to Priene, Bias had piles of sand covered with wheat to give the impression of plentiful resources.
This ruse appears to have been well known in medieval Europe: defenders in castles or towns under siege would throw food from the walls to show besiegers that provisions were plentiful. A famous example occurs in the 8th-century legend of Lady Carcas, who supposedly persuaded the Franks to abandon a five-year siege by this means and gave her name to Carcassonne as a result.
During the Granada War, Spanish captain Hernán Pérez del Pulgar routinely employed psychological tactics as part of his guerrilla actions against the Emirate of Granada. In 1490, infiltrating the city by night with a small retinue of soldiers, he nailed a letter of challenge on the main mosque and set fire to the alcaicería before withdrawing.
In 1574, having been informed about the pirate attacks previous to the Battle of Manila, Spanish captain Juan de Salcedo had his relief force return to the city by night while playing marching music and carrying torches in loose formations, so they would appear to be a much larger army to any nearby enemy. They reached the city unopposed.
During the Attack on Marstrand in 1719, Peter Tordenskjold carried out military deception against the Swedes. Although the story is probably apocryphal, he apparently succeeded in making his small force appear larger and in feeding disinformation to his opponents, much like Operations Fortitude and Titanic in World War II.
World War I
The start of modern psychological operations in war is generally dated to World War I. By that point, Western societies were increasingly educated and urbanized, and mass media was available in the form of large circulation newspapers and posters. It was also possible to transmit propaganda to the enemy via the use of airborne leaflets or through explosive delivery systems like modified artillery or mortar rounds.
At the start of the war, the belligerents, especially the British and Germans, began distributing propaganda, both domestically and on the Western front. The British had several advantages that allowed them to succeed in the battle for world opinion; they had one of the world's most reputable news systems, with much experience in international and cross-cultural communication, and they controlled much of the undersea communications cable system then in operation. These capabilities were easily transitioned to the task of warfare.
The British also had a diplomatic service that maintained good relations with many nations around the world, in contrast to the reputation of the German services. While German attempts to foment revolution in parts of the British Empire, such as Ireland and India, were ineffective, extensive experience in the Middle East allowed the British to successfully induce the Arabs to revolt against the Ottoman Empire.
In August 1914, David Lloyd George appointed a Member of Parliament (MP), Charles Masterman, to head a Propaganda Agency at Wellington House. A distinguished body of literary talent was enlisted for the task, with its members including Arthur Conan Doyle, Ford Madox Ford, G. K. Chesterton, Thomas Hardy, Rudyard Kipling and H. G. Wells. Over 1,160 pamphlets were published during the war and distributed to neutral countries, and eventually, to Germany. One of the first significant publications, the Report on Alleged German Outrages of 1915, had a great effect on general opinion across the world. The pamphlet documented atrocities, both actual and alleged, committed by the German army against Belgian civilians. A Dutch illustrator, Louis Raemaekers, provided the highly emotional drawings which appeared in the pamphlet.
In 1917, the bureau was subsumed into the new Department of Information and branched out into telegraph communications, radio, newspapers, magazines and the cinema. In 1918, Viscount Northcliffe was appointed Director of Propaganda in Enemy Countries. The department was split between propaganda against Germany, organized by H. G. Wells, and propaganda against the Austro-Hungarian Empire, supervised by Wickham Steed and Robert William Seton-Watson; the attempts of the latter focused on the lack of ethnic cohesion in the Empire and stoked the grievances of minorities such as the Croats and Slovenes. It had a significant effect on the final collapse of the Austro-Hungarian Army at the Battle of Vittorio Veneto.
Aerial leaflets were dropped over German trenches containing postcards from prisoners of war detailing their humane conditions, surrender notices and general propaganda against the Kaiser and the German generals. By the end of the war, MI7b had distributed almost 26 million leaflets. The Germans began shooting the leaflet-dropping pilots, prompting the British to develop unmanned leaflet balloons that drifted across no-man's land. At least one in seven of these leaflets were not handed in by the soldiers to their superiors, despite severe penalties for that offence. Even General Hindenburg admitted that "Unsuspectingly, many thousands consumed the poison", and POWs admitted to being disillusioned by the propaganda leaflets that depicted the use of German troops as mere cannon fodder. In 1915, the British began airdropping a regular leaflet newspaper Le Courrier de l'Air for civilians in German-occupied France and Belgium.
At the start of the war, the French government took control of the media to suppress negative coverage. Only in 1916, with the establishment of the Maison de la Presse, did they begin to use similar tactics for the purpose of psychological warfare. One of its sections was the "Service de la Propagande aérienne" (Aerial Propaganda Service), headed by Professor Tonnelat and Jean-Jacques Waltz, an Alsatian artist code-named "Hansi". The French tended to distribute leaflets of images only, although the full publication of US President Woodrow Wilson's Fourteen Points, which had been heavily edited in the German newspapers, was distributed via airborne leaflets by the French.
The Central Powers were slow to use these techniques; however, at the start of the war the Germans succeeded in inducing the Sultan of the Ottoman Empire to declare 'holy war', or Jihad, against the Western infidels. They also attempted to foment rebellion against the British Empire in places as far afield as Ireland, Afghanistan, and India. The Germans' greatest success was in giving the Russian revolutionary, Lenin, free transit on a sealed train from Switzerland to Finland after the overthrow of the Tsar. This soon paid off when the Bolshevik Revolution took Russia out of the war.
World War II
Adolf Hitler was greatly influenced by the psychological tactics of warfare the British had employed during World War I, and attributed the defeat of Germany to the effects this propaganda had on the soldiers. He became committed to the use of mass propaganda to influence the minds of the German population in the decades to come. By calling his movement The Third Reich, he was able to convince many civilians that his cause was not just a fad, but the way of their future. Joseph Goebbels was appointed as Propaganda Minister when Hitler came to power in 1933, and he portrayed Hitler as a messianic figure for the redemption of Germany. Hitler also coupled this with the resonating projections of his orations for effect.
Germany's Fall Grün plan for the invasion of Czechoslovakia had a large part dealing with psychological warfare aimed both at the Czechoslovak civilians and government and, crucially, at Czechoslovakia's allies. It was successful to the point that Germany gained the support of the UK and France through appeasement to occupy Czechoslovakia without having to fight an all-out war, sustaining only minimal losses in covert war before the Munich Agreement.
At the start of the Second World War, the British set up the Political Warfare Executive to produce and distribute propaganda. Through the use of powerful transmitters, broadcasts could be made across Europe. Sefton Delmer managed a successful black propaganda campaign through several radio stations which were designed to be popular with German troops while at the same time introducing news material that would weaken their morale under a veneer of authenticity. British Prime Minister Winston Churchill made use of radio broadcasts for propaganda against the Germans. Churchill favoured deception; he said, "In wartime, truth is so precious that she should always be attended by a bodyguard of lies."
During World War II, the British made extensive use of deception – developing many new techniques and theories. The main protagonists at this time were 'A' Force, set up in 1940 under Dudley Clarke, and the London Controlling Section, chartered in 1942 under the control of John Bevan. Clarke pioneered many of the strategies of military deception. His ideas for combining fictional orders of battle, visual deception and double agents helped define Allied deception strategy during the war, for which he has been referred to as "the greatest British deceiver of WW2".
During the lead-up to the Allied invasion of Normandy, many new tactics in psychological warfare were devised. The plan for Operation Bodyguard set out a general strategy to mislead German high command as to the date and location of an invasion that could not itself be concealed. Planning began in 1943 under the auspices of the London Controlling Section (LCS). A draft strategy, referred to as Plan Jael, was presented to Allied high command at the Tehran Conference. Operation Fortitude was intended to convince the Germans of a greater Allied military strength than was the case, through fictional field armies, faked operations to prepare the ground for invasion, and "leaked" misinformation about the Allied order of battle and war plans.
Elaborate naval deceptions (Operations Glimmer, Taxable and Big Drum) were undertaken in the English Channel. Small ships and aircraft simulated invasion fleets lying off Pas de Calais, Cap d'Antifer and the western flank of the real invasion force. At the same time Operation Titanic involved the RAF dropping fake paratroopers to the east and west of the Normandy landings.
The deceptions were implemented with the use of double agents, radio traffic and visual deception. The British "Double Cross" anti-espionage operation had proven very successful from the outset of the war, and the LCS was able to use double agents to send back misleading information about Allied invasion plans. The use of visual deception, including mock tanks and other military hardware had been developed during the North Africa campaign. Mock hardware was created for Bodyguard; in particular, dummy landing craft were stockpiled to give the impression that the invasion would take place near Calais.
The operation was a strategic success and the Normandy landings caught German defences unaware. Continuing deception, portraying the landings as a diversion from a forthcoming main invasion in the Calais region, led Hitler to delay transferring forces from Calais to the real battleground for nearly seven weeks.
Vietnam War
The United States ran an extensive program of psychological warfare during the Vietnam War. The Phoenix Program had the dual aim of assassinating National Liberation Front of South Vietnam (NLF or Viet Cong) personnel and terrorizing any potential sympathizers or passive supporters. During the Phoenix Program, over 19,000 NLF supporters were killed. In Operation Wandering Soul, the United States also used tapes of distorted human sounds, playing them during the night to make Vietnamese soldiers think that the dead had come back for revenge.
North Vietnamese forces and the Viet Cong also used a program of psychological warfare during the war. Trịnh Thị Ngọ, also known as Thu Hương and Hanoi Hannah, was a Vietnamese radio personality who made English-language broadcasts for North Vietnam directed at United States troops. During the Vietnam War, Ngọ became famous among US soldiers for her propaganda broadcasts on Radio Hanoi. Her scripts were written by the North Vietnamese Army and were intended to frighten and shame the soldiers into leaving their posts. She made three broadcasts a day, reading a list of newly killed or imprisoned Americans and playing popular US anti-war songs in an effort to incite feelings of nostalgia and homesickness, attempting to persuade US GIs that American involvement in the Vietnam War was unjust and immoral. A typical broadcast began as follows:
How are you, GI Joe? It seems to me that most of you are poorly informed about the going of the war, to say nothing about a correct explanation of your presence over here. Nothing is more confused than to be ordered into a war to die or to be maimed for life without the faintest idea of what's going on.
Late 20th and 21st centuries
The CIA made extensive use of Contra soldiers to destabilize the Sandinista government in Nicaragua. The CIA used psychological warfare techniques against the Panamanians by delivering unlicensed TV broadcasts. The United States government has used propaganda broadcasts against the Cuban government through TV Marti, based in Miami, Florida. However, the Cuban government has been successful at jamming the signal of TV Marti.
In the Iraq War, the United States used the shock and awe campaign to psychologically maim and break the will of the Iraqi Army to fight.
In cyberspace, social media has enabled the use of disinformation on a wide scale. Analysts have found evidence of doctored or misleading photographs spread by social media in the Syrian Civil War and the 2014 Russian military intervention in Ukraine, possibly with state involvement. Militaries and governments, including those of the US, Russia, and China, have engaged in psychological operations (PSYOP) and information warfare (IW) on social networking platforms to regulate foreign propaganda.
In 2022, Meta and the Stanford Internet Observatory found that over five years people associated with the U.S. military, who tried to conceal their identities, created fake accounts on social media systems including Balatarin, Facebook, Instagram, Odnoklassniki, Telegram, Twitter, VKontakte and YouTube in an influence operation in Central Asia and the Middle East. Their posts, primarily in Arabic, Farsi and Russian, criticized Iran, China and Russia and gave pro-Western narratives. Data suggested the activity was a series of covert campaigns rather than a single operation.
In operations in the South and East China Seas, both the United States and China have engaged in "cognitive warfare", which involves displays of force, staged photographs and the sharing of disinformation. Public use of "cognitive warfare" as a distinct concept began in 2013 with China's political rhetoric.
Methods
Most modern uses of the term psychological warfare refer to the following military methods:
Demoralization:
Distributing pamphlets that encourage desertion or supply instructions on how to surrender.
Shock and awe military strategy.
Projecting repetitive and disturbing noises and music for long periods at high volume towards groups under siege like during Operation Nifty Package.
Propaganda radio stations, such as Lord Haw-Haw in World War II on the "Germany calling" station.
False flag events.
Terrorism.
The threat of chemical weapons.
Information warfare.
Most of these techniques were developed during World War II or earlier, and have been used to some degree in every conflict since. Daniel Lerner served in the OSS (the predecessor to the American CIA), and in his book he attempts to analyze how effective the various strategies were. He concludes that there is little evidence that any of them were dramatically successful, except perhaps surrender instructions over loudspeakers when victory was imminent. Measuring the success or failure of psychological warfare is very hard, as the conditions are very far from being a controlled experiment.
Lerner also divides psychological warfare operations into three categories:
White propaganda (omissions and emphasis): Truthful and not strongly biased, where the source of information is acknowledged.
Grey propaganda (omissions, emphasis and racial/ethnic/religious bias): Largely truthful, containing no information that can be proven wrong; the source is not identified.
Black propaganda (commissions of falsification): Inherently deceitful, information given in the product is attributed to a source that was not responsible for its creation.
Lerner says grey and black operations ultimately have a heavy cost, in that the target population sooner or later recognizes them as propaganda and discredits the source. He writes, "This is one of the few dogmas advanced by Sykewarriors that is likely to endure as an axiom of propaganda: Credibility is a condition of persuasion. Before you can make a man do as you say, you must make him believe what you say." Consistent with this idea, the Allied strategy in World War II was predominantly one of truth (with certain exceptions).
In Propaganda: The Formation of Men's Attitudes, Jacques Ellul discusses psychological warfare as a common peacetime practice between nations and a form of indirect aggression. This type of propaganda drains the public opinion of an opposing regime by stripping away its power over public opinion. This form of aggression is hard to defend against because no international court of justice is capable of protecting against psychological aggression, since it cannot be legally adjudicated. "Here the propagandists is [sic] dealing with a foreign adversary whose morale he seeks to destroy by psychological means so that the opponent begins to doubt the validity of his beliefs and actions."
Terrorism
According to Boaz Ganor, terrorism weakens the sense of security and disturbs daily life, damaging the target country's capability to function. Terrorism is a strategy that aims to influence public opinion into pressuring leaders to give in to the terrorists' demands, and the population becomes a tool to advance the political agenda.
By country
China
According to U.S. military analysts, attacking the enemy's mind is an important element of the People's Republic of China's military strategy. This type of warfare is rooted in the Chinese stratagems outlined by Sun Tzu in The Art of War and in the Thirty-Six Stratagems. In its dealings with its rivals, China is expected to utilize Marxism to mobilize communist loyalists, as well as to flex its economic and military muscle to persuade other nations to act in the Chinese government's interests. The Chinese government also tries to control the media to keep a tight hold on propaganda efforts for its people, and it utilizes cognitive warfare against Taiwan.
France
The Centre interarmées des actions sur l'environnement is an organization of 300 soldiers whose mission is to provide the four service arms of the French Armed Forces with psychological warfare capabilities. Deployed in particular to Mali and Afghanistan, its missions "consist in better explaining and gaining acceptance for the action of French forces in operation among local actors and thus gaining their trust: direct aid to the populations, management of reconstruction sites, actions of communication of influence with the population, elites and local elected officials". The center has capacities for analysis, influence, expertise and instruction.
Germany
In the German Bundeswehr, the Zentrum Operative Kommunikation is responsible for PSYOP efforts. The center is subordinate to the Cyber and Information Domain Service branch, alongside multiple IT and electronic warfare battalions, and consists of around 1,000 soldiers. One project of the German PSYOP forces is the radio station Stimme der Freiheit (Sada-e Azadi, Voice of Freedom), heard by thousands of Afghans. Another is the publication of various newspapers and magazines in Kosovo and Afghanistan, where German soldiers serve with NATO.
Iran
The Iranian government ran a program to use the 2022 FIFA World Cup as a psyop against the concurrent popular protests.
Israel
The Israeli government and its military make use of psychological warfare. In 2021, Israeli newspaper Haaretz revealed that "Abu Ali Express", a popular news page on Telegram and Twitter purportedly dedicated to "Arab affairs", was actually run by a Jewish Israeli paid consultant to the Israel Defense Forces (IDF). The IDF's psyops account had been the source of a number of noteworthy reports that were afterwards cited by the Israeli and international media.
Russia
Soviet Union
United Kingdom
The British were one of the first major military powers to use psychological warfare in the First and Second World Wars. In the current British Armed Forces, PsyOps are handled by the tri-service 15 Psychological Operations Group. (See also MI5 and Secret Intelligence Service). The Psychological Operations Group comprises over 150 personnel, approximately 75 from the regular Armed Services and 75 from the Reserves. The Group supports deployed commanders in the provision of psychological operations in operational and tactical environments.
The Group was established immediately after the 1991 Gulf War, has since grown significantly in size to meet operational requirements, and since 2015 has been one of the sub-units of the 77th Brigade, formerly called the Security Assistance Group.
In June 2015, NSA files published by Glenn Greenwald revealed details of the JTRIG group at British intelligence agency GCHQ covertly manipulating online communities. This is in line with JTRIG's goal: to "destroy, deny, degrade [and] disrupt" enemies by "discrediting" them, planting misinformation and shutting down their communications.
In March 2019, it emerged that the Defence Science and Technology Laboratory (DSTL) of the UK's Ministry of Defence (MoD) is tendering to arms companies and universities for £70M worth of assistance under a project to develop new methods of psychological warfare. The project is known as the human and social sciences research capability (HSSRC).
United States
The term psychological warfare is believed to have migrated from Germany to the United States in 1941. During World War II, the United States Joint Chiefs of Staff defined psychological warfare broadly, stating "Psychological warfare employs any weapon to influence the mind of the enemy. The weapons are psychological only in the effect they produce and not because of the weapons themselves." The U.S. Department of Defense (DoD) currently defines psychological warfare as:
"The planned use of propaganda and other psychological actions having the primary purpose of influencing the opinions, emotions, attitudes, and behavior of hostile foreign groups in such a way as to support the achievement of national objectives."
This definition indicates that a critical element of the U.S. psychological operations capabilities includes propaganda and by extension counterpropaganda. Joint Publication 3–53 establishes specific policy to use public affairs mediums to counter propaganda from foreign origins.
The purpose of United States psychological operations is to induce or reinforce attitudes and behaviors favorable to US objectives. The Special Activities Center (SAC) is a division of the Central Intelligence Agency's Directorate of Operations, responsible for Covert Action and "Special Activities". These special activities include covert political influence (which includes psychological operations) and paramilitary operations. SAC's political influence group is the only US unit allowed to conduct these operations covertly and is considered the primary unit in this area.
Dedicated psychological operations units exist in the United States Army and United States Marine Corps. The United States Navy and the 193rd Special Operations Wing of the United States Air Force also plan and execute limited PSYOP missions. United States PSYOP units and soldiers of all branches of the military are prohibited by law from targeting U.S. citizens with PSYOP within the borders of the United States (Executive Order S-1233, DOD Directive S-3321.1, and National Security Decision Directive 130). While United States Army PSYOP units may offer non-PSYOP support to domestic military missions, they can only target foreign audiences.
A U.S. Army field manual released in January 2013 states that "Inform and Influence Activities" are critical for describing, directing, and leading military operations. Several Army division leadership staff are assigned to "planning, integration and synchronization of designated information-related capabilities."
In September 2022, the DoD launched an audit of covert information warfare after social media companies identified a suspected U.S. military operation.
See also
Active measures
Brainwashing
Character assassination
Charles Douglas Jackson
Cognitive dissonance
Cordwainer Smith
Demonizing the enemy
Directed-energy weapon
Fear mongering
Information warfare
Lawfare
Media manipulation
Military psychology
Mind games
Minor sabotage
Moral panic
Noisy investigation
Orwellian
Psychological manipulation
Special Operations
Strategy of tension
Taliban propaganda
The Shock Doctrine
Unconventional Warfare
Peter Watson (intellectual historian)
Zersetzung
NATO
Able Archer 83
UK
Briggs Plan
Information Research Department
US specific:
Information Operations Roadmap
NLF and PAVN battle tactics
Zarqawi PSYOP program
World War II:
Psychological Warfare Division
USSR
Active measures
Related:
Asymmetric warfare
Fourth generation warfare
The Gospel of Afranius
References
Bibliography
Abner, Alan K. Psywarriors: Psychological Warfare During the Korean War (1951).
Cohen, Fred. Frauds, Spies, and Lies – and How to Defeat Them. (2006). ASP Press.
Cohen, Fred. World War 3 ... Information Warfare Basics. (2006). ASP Press.
Holzmann, Ashley F. "Artists of War: A History of United States Propaganda, Psychological Warfare, Psychological Operations and a Proposal for Its Ever-Changing Future." (US Army Command and General Staff College, 2020).
Linebarger, Paul M. A. Psychological Warfare: International Propaganda and Communications. (1948). Revised second edition, Duell, Sloan and Pearce (1954).
Pease, Stephen E. Psywar: Psychological Warfare in Korea, 1950–1953 (1992).
Roberts III, Mervyn Edwin. The Psychological War for Vietnam, 1960–1968 (2018)
Roetter, Charles. The Art of Psychological Warfare, 1914–1945 (1974).
Simpson, Christopher. Science of Coercion: Communication Research & Psychological Warfare, 1945–1960 (1994).
Song, Tae Eun. "Information/Psychological Warfare in the Russia-Ukraine War: Overview and Implications." IFANS FOCUS 2022.9 (May 2022): 1–4.
Voloshin, Nikolay, and Leyla Garaybeli. "Putin's Psychological Warfare in Ukraine and Syria." Insights of Pakistan, Iran and the Caucasus Studies 2.3 (2023): 50–54.
External links
Movie: Psywar: The Real Battlefield is the Mind by Metanoia films
The history of psychological warfare
IWS Psychological Operations (PsyOps) / Influence Operations
"Pentagon psychological warfare operation", USA Today, 15 December 2005
"U.S. Adapts Cold-War Idea to Fight Terrorists", New York Times, 18 March 2008
US Army PSYOPS Info – Detailed information about the US Army Psychological Operation Soldiers
IWS — The Information Warfare Site
U.S. — PSYOP producing mid-eastern kids comic book
The Institute of Heraldry — Psychological Operations
Psychological warfare
The Nature of Psychological Warfare (CIA 1958) Original
Aggression
Crowd psychology
Information operations and warfare
Mind control
Propaganda techniques
Psychological warfare techniques
Warfare by type
Warfare of the late modern period
Holland Codes
The Holland Codes or the Holland Occupational Themes (RIASEC) refer to a taxonomy of interests based on a theory of careers and vocational choice that was initially developed by American psychologist John L. Holland.
The Holland Codes serve as a component of the interests assessment, the Strong Interest Inventory. In addition, the US Department of Labor's Employment and Training Administration has been using an updated and expanded version of the RIASEC model in the "Interests" section of its free online database O*NET (Occupational Information Network) since its inception during the late 1990s.
Overview
Holland's theories of vocational choice, The Holland Occupational Themes, "now pervades career counseling research and practice". Its origins "can be traced to an article in the Journal of Applied Psychology in 1958 and a subsequent article in 1959 that set out his theory of vocational choices. ... The basic premise was that one's occupational preferences were in a sense a veiled expression of underlying character." The 1959 article in particular ("A Theory of Vocational Choice", published in the Journal of Counseling Psychology) is considered the first major introduction of Holland's "theory of vocational personalities and work environments".
Holland originally labeled his six types as "motoric, intellectual, esthetic, supportive, persuasive, and conforming". He later developed and renamed them: "Realistic (Doers), Investigative (Thinkers), Artistic (Creators), Social (Helpers), Enterprising (Persuaders), and Conventional (Organizers)". Holland's six categories show some correlation with each other; the scheme is called the RIASEC model, or the hexagonal model, because when the types are arranged in a circle connecting the most highly correlated regions, their initial letters read R-I-A-S-E-C. Professor John Johnson of Penn State suggested that an alternative way of categorizing the six types would be through ancient social roles: "hunters (Realistic), shamans (Investigative), artisans (Artistic), healers (Social), leaders (Enterprising), and lorekeepers (Conventional)". Holland offers full definitions of each type in his book Making Vocational Choices: A Theory of Vocational Personalities and Work Environments (Third Edition) (1997).
According to the Committee on Scientific Awards, Holland's "research shows that personalities seek out and flourish in career environments they fit and that jobs and career environments are classifiable by the personalities that flourish in them". Holland also wrote of his theory that "the choice of a vocation is an expression of personality". Furthermore, while Holland suggested that people can be "categorized as one of six types", he also argued that "a six-category scheme built on the assumption that there are only six kinds of people in the world is unacceptable on the strength of common sense alone. But a six category scheme that allows a simple ordering of a person's resemblance to each of the six models provides the possibility of 720 different personality patterns."
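Holland's count of 720 personality patterns is simply the number of possible rank orderings of a person's resemblance to all six types, as a quick check of the arithmetic shows:

```latex
% Orderings of the six RIASEC types = permutations of six elements
6! = 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 720
```

(Restricting attention to the three strongest types, as in the familiar three-letter Holland code, gives 6 × 5 × 4 = 120 possible codes.)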
Related models
Prediger's two-dimensional model
Prediger constructed scales of "work tasks" and "work-relevant abilities" based on Holland's model, and carried out factor analysis and multidimensional scaling to clarify the basic structure. As a result, two axes were extracted: Data/Ideas and Things/People. Although Prediger's inquiry did not start from interest per se, it eventually led to the birth of models other than RIASEC, suggesting that these basic dimensions may underlie the structure of occupational interest.
Tracey and Rounds's octagonal model
In the United States, energetic attempts were made in the 1990s to develop new models that surpass Holland's hexagonal model; Tracey and Rounds's octagonal model is one such example. Based on empirical data, they argue that occupational interests can be placed circularly in a two-dimensional plane consisting of People/Things and Data/Ideas axes, and that the number of regions can be determined arbitrarily. According to their model, Holland's hexagonal model is not the only adequate representation of the structure of occupational interest, and an octagonal or 16-region model can be just as valid if necessary.
Tracey, Watanabe, and Schneider conducted an international comparative study of occupational interests among Japanese and U.S. university students, and the results suggest that Tracey and Rounds's octagonal model fits Japanese students better than Holland's hexagonal model.
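The circular-arrangement idea is easy to make concrete. The sketch below, in Python, places an arbitrary number of interest regions at even intervals on the Data/Ideas and People/Things plane; the even spacing and the use of RIASEC labels here are illustrative assumptions, not coordinates taken from Tracey and Rounds's published analyses:

```python
import math

def circumplex(labels):
    """Place interest regions evenly around a unit circle whose axes
    are read as Data/Ideas (x) and People/Things (y)."""
    n = len(labels)
    # Evenly spaced angles: the region count is arbitrary, so the same
    # function yields a hexagonal, octagonal, or 16-region arrangement.
    return {
        label: (math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
        for i, label in enumerate(labels)
    }

# Six labels reproduce Holland's hexagon; eight or sixteen labels
# would give the octagonal or 16-region variants.
riasec = ["Realistic", "Investigative", "Artistic",
          "Social", "Enterprising", "Conventional"]
for name, (x, y) in circumplex(riasec).items():
    print(f"{name:13s} x={x:+.2f} y={y:+.2f}")
```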
Tracey and Rounds's spherical model
Tracey and Rounds criticize the conventional models of occupational interest structure for failing to depict the positional relationships of occupations correctly because they neglect occupational prestige, i.e., "social prestige" or "high socioeconomic status", and they propose a spherical model that assigns occupations to a three-dimensional space incorporating occupational prestige. In this model, 18 regions of interest are displayed on a spherical surface. The left hemisphere is a high-status area, with Health Sciences at the top; the right hemisphere is a low-status area, with Service Provision at the bottom.
Though this model describes the relations between various occupations more accurately, it makes the structure of occupational interest more complicated, and it has the weakness of being difficult to fit to data from outside the U.S.
List of types
R: Realistic (Doers)
Holland defines the "Realistic Type" as a person who has “a preference for activities that entail the explicit, ordered, or systematic manipulation of objects, tools, machines, and animals…these behavioral tendencies lead in turn to the acquisition of manual, mechanical, agricultural, electrical, and technical competencies.” Sample majors and careers include:
Agriculture
Architect (with Artistic and Enterprising)
Athletics
Carpenter (with Conventional and Investigative)
Culinary arts/Chef (with Artistic and Enterprising)
Chemistry/Chemist (with Investigative and Conventional)
Computer engineering/Computer science/Information technology/Computer programmer (with Investigative and Conventional)
Dentist (with Investigative and Social)
Engineer (with Investigative and Conventional)
Fashion design (with Artistic and Enterprising)
Firefighter (with Social and Enterprising)
Graphic designer (with Artistic and Enterprising)
Model (people) (with Artistic and Enterprising)
Musician (with Artistic and Enterprising)
Nurse (with Social, Conventional, and Investigative)
Outdoor recreation
Park Naturalist (with Social and Artistic)
Personal trainer (with Enterprising and Social)
Photographer (with Artistic and Enterprising)
Physical therapy (with Social and Investigative)
Driver
Sports medicine/Wilderness medicine (with Social and Investigative)
Surgeon (with Investigative and Social)
Veterinarian (with Investigative and Social)
Web developer (with Conventional, Artistic, and Investigative)
Zoologists and Wildlife Biologists (with Investigative)
I: Investigative (Thinkers)
Holland defines the "Investigative Type" as a person who has "a preference for activities that entail the observational, symbolic, systematic and creative investigation of physical, biological, and cultural phenomena (in order to understand and control such phenomena)... these behavioral tendencies lead in turn to an acquisition of scientific and mathematical competencies." Sample majors and careers include:
Actuary (with Conventional and Enterprising)
Archivist/Librarian (with Social and Conventional)
Biostatistics/Masters in Public Health (with Conventional)
Carpenter (with Conventional and Realistic)
CPA (Certified Public Accountant) (with Conventional and Enterprising)
Chemistry/Chemist (with Realistic and Conventional)
Community Health Workers/Masters in Public Health (with Social and Enterprising)
Computer engineering/Computer science/Information technology/Computer programmer (with Realistic and Conventional)
Counselor (with Social and Artistic)
Dentist (with Realistic and Social)
Dietitian/Nutritionist (with Social and Enterprising)
Doctor (Medical school/Medical research) (with Social)
Engineer (with Realistic and Conventional)
Financial analyst (with Conventional and Enterprising)
Epidemiology/Masters in Public Health (with Social)
Lawyer (with Enterprising and Social)
Nurse (with Realistic, Conventional, and Social)
Paralegal (with Conventional and Enterprising)
Pharmacist (with Social and Conventional)
Physical therapy (with Social and Realistic)
Physics
Poets, Lyricists and Creative Writers (with Artistic)
Professor/Research – PhD
Psychology/Psychologist (with Social and Artistic); Art therapist/Dance therapy/Drama therapy/Music therapy/Narrative therapy/Culinary therapy
Social Work (with Social)
Speech-language pathology/Myofunctional therapist (With Social and Artistic)
Sports medicine/Wilderness medicine (with Social and Realistic)
Surgeon (with Realistic and Social)
Technical writer, Proofreader, Copy Editor (with Artistic and Conventional)
Tutor (with Social)
Veterinarian (with Realistic and Social)
Web developer (with Conventional, Realistic, and Artistic)
Zoologists and Wildlife Biologists (with Realistic)
A: Artistic (Creators)
Holland defines the "Artistic Type" as a person who has "a preference for ambiguous, free, unsystematized activities that entail the manipulation of physical, verbal, or human materials to create art forms or products...these behavioral tendencies lead in turn to the acquisition of artistic competencies." Sample majors and careers include:
Architect (with Realistic and Enterprising)
Broadcast journalism (with Enterprising)
Culinary arts (with Realistic and Enterprising)
Entrepreneur (with Social and Enterprising)
Fashion design (with Realistic and Enterprising)
Graphic designer (with Enterprising and Realistic)
Model (people) (with Realistic and Enterprising)
Musician (with Enterprising and Realistic)
Park Naturalist (with Social and Realistic)
Poets, Lyricists and Creative Writers (with Investigative)
Psychology/Psychologist (with Social and Investigative); Art therapist/Dance therapy/Drama therapy/Music therapy/Narrative therapy/Culinary therapy
Photographer (with Realistic and Enterprising)
Speech-language pathology/Myofunctional therapist (With Social and Investigative)
Technical writer, Proofreader, Copy Editor (with Investigative and Conventional)
Trainer (business) (with Social and Conventional)
Translator (with Social)
Web developer (with Conventional, Realistic, and Investigative)
S: Social (Helpers)
Holland defines the "Social Type" as a person who has "a preference for activities that entail the manipulation of others to inform, train, develop, cure, or enlighten...these behavioral tendencies lead in turn to an acquisition of human relations competencies." Sample majors and careers include:
Archivist/Librarian (with Conventional and Investigative)
CFP (Certified Financial Planner)/Personal Financial Planner (with Conventional and Enterprising)
Clergy (with Artistic and Enterprising)
Community Organizer
Community Health Workers/Masters in Public Health (with Investigative and Enterprising)
Counselors (various)/Advisers (with Investigative, Enterprising, Conventional, and Artistic)
Guidance/School Counselors, Academic Advisors, Career Counselors (see also: List of psychotherapies)
Customer service (with Conventional and Enterprising)
Dentist (with Investigative and Realistic)
Dietitian/Nutritionist (with Investigative and Enterprising)
Doctor (Medical school/Medical research) (with Investigative)
Educational administration (with Enterprising and Conventional)
Educational consultant (with Conventional and Enterprising)
Entrepreneur (with Enterprising and Artistic)
Epidemiology/Masters in Public Health (with Investigative)
Personal Financial Planner/Certified Financial Planner (with Enterprising and Conventional)
Firefighter (with Realistic and Enterprising)
Fitness Trainer and Aerobics Teacher (with Enterprising and Realistic)
Foreign Service/Diplomacy (with Enterprising and Artistic)
Human Resources (with Conventional and Enterprising)
Lawyer (with Investigative and Enterprising)
Nurse (with Realistic, Conventional, and Investigative)
Park Naturalist (with Realistic and Artistic)
Pharmacist (with Investigative and Conventional)
Physical therapy (with Realistic and Investigative)
Psychology/Psychologist (with Artistic and Investigative); Art therapist/Dance therapy/Drama therapy/Music therapy/Narrative therapy/Culinary therapy
Social Advocate
Sociology
Social Work
Speech-language pathology/Myofunctional therapist (With Investigative and Artistic)
Surgeon (with Realistic and Investigative)
Teacher (Early childhood education, Primary school, Secondary school, Teaching English as a second language, Special Ed, and Substitute teaching) (with Artistic)
Sports medicine/Wilderness medicine (with Investigative and Realistic)
Trainer (business) (with Artistic and Conventional)
Translator (with Artistic)
Tutor (with Investigative)
Veterinarian (with Investigative and Realistic)
E: Enterprising (Persuaders)
Holland defines the "Enterprising Type" as a person who has "a preference for actives that entail the manipulation of others to attain organization goals or economic gain...these behavioral tendencies lead in turn to an acquisition of leadership, interpersonal, and persuasive competences." Sample majors and careers include:
Actuary (with Investigative and Conventional)
Architect (with Artistic and Realistic)
Business (with Social and Conventional)
Broker or Agent (i.e. Automobile broker, Real Estate broker etc.)
Buyer
CPA (Certified Public Accountant) (with Investigative and Conventional)
CFP (Certified Financial Planner)/Personal Financial Planner (with Social and Conventional)
Community Health Workers/Masters in Public Health (with Investigative and Social)
Culinary arts (with Artistic and Realistic)
Clergy (with Artistic and Social)
Customer service (with Conventional and Social)
Dietitian/Nutritionist (with Social and Investigative)
Educational administration (with Social and Conventional)
Educational consultant (with Conventional and Social)
Entrepreneur (with Social and Artistic)
Fashion design (with Artistic and Realistic)
Financial analyst (with Investigative and Conventional)
Foreign Service/Diplomacy (with Social and Artistic)
Firefighter (with Social and Realistic)
Fitness Trainer and Aerobics Teacher (with Realistic and Social)
Fundraising
Graphic designer (with Artistic and Realistic)
Human Resources (with Conventional and Social)
Broadcast journalism (with Artistic)
Lawyer (with Investigative and Social)
Management/Management Consultant
Market Research Analyst (with Investigative)
Model (people) (with Artistic and Realistic)
Musician (with Artistic and Realistic)
Paralegal (with Conventional and Investigative)
Photographer (with Artistic and Realistic)
Public Health Educator/Masters in Public Health (with Social)
Property manager/Community association manager (with Conventional)
Public relations/Publicity/Advertising/Marketing (with Artistic)
Sales (with Conventional and Social)
C: Conventional (Organizers)
Holland defines the "Conventional Type" as a person who has "a preference for actives that entail the explicit, ordered, systematic manipulation of data (keeping records, filing materials, reproducing materials, organizing business machines and data processing equipment to attain organizational or economic goals)...these behavioral tendencies lead in turn to an acquisition of clerical, computational, and business system competencies." Sample majors and careers include:
Actuary (with Investigative and Enterprising)
Archivist/Librarian (with Social and Investigative)
Biostatistics/Masters in Public Health (with Investigative)
Carpenter (with Realistic and Investigative)
Chemistry/Chemist (with Investigative and Realistic)
CFP (Certified Financial Planner)/Personal Financial Planner (with Social and Enterprising)
CPA (Certified Public Accountant) (with Investigative and Enterprising)
Computer engineering/Computer science/Information technology/Computer programmer (with Investigative and Realistic)
Customer service (with Enterprising and Social)
Educational administration (with Social and Enterprising)
Educational consultant (with Social and Enterprising)
Engineer (with Investigative and Realistic)
Financial analyst (with Investigative and Enterprising)
Personal Financial Planner/Certified Financial Planner (with Social and Enterprising)
Human Resources (HR) (with Enterprising and Social)
Maths teacher (with Social)
Nurse (with Realistic, Social, and Investigative)
Office administration (with Enterprising)
Paralegal (with Enterprising and Investigative)
Pharmacist (with Social and Investigative)
Property manager/Community association manager (with Enterprising)
Real Estate Agent (with Enterprising)
Statistician (with Realistic and Investigative)
Technical writer, Proofreader, Copy Editor (with Artistic and Investigative)
Trainer (business) (with Social and Artistic)
Web developer (with Artistic, Realistic, and Investigative)
Notes
Further reading
Eikleberry, Carol; Pinsky, Carrie. The Career Guide for Creative and Unconventional People (Fourth Edition). Ten Speed Press, 2015.
Holland, John L. Making Vocational Choices: A Theory of Vocational Personalities and Work Environments (Third Edition). PAR Psychological Assessment Resources Inc., 1997.
Streufert, Billie. "How Facebook can help you select a major or career", USA Today, September 26, 2015.
"Find Your Field", New York Times'', April 7, 2016
External links
Free tests
O*NET Interest Profiler (Holland Codes Quiz) – Occupational Information Network (O*NET): US Department of Labor/Employment and Training Administration
Student Services: Holland Codes Quiz – Rogue Community College
Free career databases
O*NET Holland Codes Interests Matched to Careers – Occupational Information Network (O*NET): US Department of Labor/Employment and Training Administration
Delaware Department of Labor @ Delaware Career Compass – State of Delaware
Free college majors databases
Holland Code and College Majors: College majors classified by Holland Themes – Central Oregon Community College
College Majors and Holland Codes – Central Oregon Community College
Holland Code Handout - Central Washington University
Major and Career Exploration - University of Oklahoma
Majors With Interest Codes- Washburn University
Motivational theories
Career development
Higher education
Personality typologies
Personality tests
The Principles of Psychology
The Principles of Psychology is an 1890 book about psychology by William James, an American philosopher and psychologist who trained to be a physician before going into psychology. The four key concepts in James' book are: stream of consciousness (his most famous psychological metaphor); emotion (later known as the James–Lange theory); habit (human habits are constantly formed to achieve certain results); and will (explored through James' personal experiences in life).
Origins
The opening chapters of The Principles of Psychology presented what was known at the time of writing about the localization of functions in the brain: how each sense seemed to have a neural center to which it reported, and how varied bodily motions have their sources in other centers.
The particular hypotheses and observations on which James relied are now very dated, but the broadest conclusion to which his material leads is still valid, which was that the functions of the "lower centers" (beneath the cerebrum) become increasingly specialized as one moves from reptiles, through ever more intelligent mammals, to humans while the functions of the cerebrum itself become increasingly flexible and less localized as one moves along the same continuum.
James also discussed experiments on illusions (optical, auditory, etc.) and offered a physiological explanation for many of them, including that "the brain reacts by paths which previous experiences have worn, and makes us usually perceive the probable thing, i.e. the thing by which on previous occasions the reaction was most frequently aroused." Illusions are thus a special case of the phenomenon of habit.
Key features
Stream of consciousness
Stream of consciousness is arguably James' most famous psychological metaphor. He argued that human thought can be characterized as a flowing stream, an innovative concept at the time, when the prevailing view was that human thought more closely resembled a distinct chain of links. He also believed that humans can never experience exactly the same thought or idea more than once, and he viewed consciousness as completely continuous.
Emotion
James introduced a new theory of emotion (later known as the James–Lange theory), which argued that an emotion is the consequence rather than the cause of the bodily experiences associated with its expression. In other words, a stimulus causes a physical response and an emotion follows the response. This theory has received criticism in the years since its introduction.
Habit
Human habits are continually formed to achieve certain results, driven by strong feelings of wanting or wishing for something. James emphasized the importance and power of human habit and drew a sober conclusion: the laws of habit formation are unbiased, so habits are capable of producing either good or bad actions; and once a good or bad habit has begun to be established, it is very difficult to change.
Will
Will is the final chapter of The Principles of Psychology, and it grew out of James' own personal experiences, in particular the question that troubled him during his crisis: whether or not free will exists. "The most essential achievement of the will, ... when it is most 'voluntary', is to attend to a difficult object and hold it fast before the mind. ... Effort of attention is thus the essential phenomenon of will."
Use of comparative psychology
In the use of the comparative method, James wrote, "instincts of animals are ransacked to throw light on our own...." By this light, James dismisses the platitude that "man differs from lower creatures by the almost total absence of instincts". There is no such absence, so the difference must be found elsewhere.
James believed that humans wield far more impulses than other creatures, impulses which, when observed out of their greater context, may appear just as automatic as the most basic of animal instincts. However, as a person experiences the results of these impulses, and those experiences evoke memories and expectations, the impulses themselves become gradually refined.
By this reasoning, William James arrived at the conclusion that in any animal with the capacity for memory, association, and expectation, behavior is ultimately expressed as a synthesis of instinct and experience, rather than just blind instinct alone.
Influence and reception
The Principles of Psychology was a vastly influential textbook which summarized the field of psychology up to the time of its publication. Psychology was beginning to gain popularity and acclaim in the United States at this time, and the compilation of this textbook further solidified psychology's credibility as a science. Philosopher Helmut R. Wagner writes that most of the book's contents are now outdated, but that it still contains insights of interest.
In 2002, James was listed as the 14th most eminent psychology author of the 20th century, with his theory of emotion (the James–Lange theory) presented in this book being a contributing factor in that ranking.
The book also had a major impact in areas outside psychology. The philosopher Edmund Husserl engaged specifically with James' work in many areas and, following Husserl, the work influenced many other phenomenologists. The Anglo-Austrian philosopher Ludwig Wittgenstein read James' work and used it in his coursework for students, though he held philosophical disagreements with many of James' points; Wittgenstein critiques James, for instance, in section 342 of the Philosophical Investigations.
Editions
James, W. (1890). The Principles of Psychology, in two volumes. New York: Henry Holt and Company.
James, W. (1950). The Principles of Psychology, 2 volumes in 1. New York: Dover Publications.
James, W. (1983). The Principles of Psychology, Volumes I and II. Cambridge, MA: Harvard University Press (with introduction by George A. Miller).
See also
American philosophy
References
External links
The entire text
The Principles of Psychology, vol. 1 – digitized copy
The Principles of Psychology, vol. 2 – digitized copy
Psychology: Briefer Course – digitized copy of James' abridgment of Principles
1890 non-fiction books
Books by William James
Cognitive science literature
History of psychology
Works about philosophy of psychology | 0.766883 | 0.989874 | 0.759118 |
Grief | Grief is the response to the loss of something deemed important, particularly to the loss of someone or some living thing that has died, to which a bond or affection was formed. Although conventionally focused on the emotional response to loss, grief also has physical, cognitive, behavioral, social, cultural, spiritual and philosophical dimensions. While the terms are often used interchangeably, bereavement refers to the state of loss, while grief is the reaction to that loss.
The grief associated with death is familiar to most people, but individuals grieve in connection with a variety of losses throughout their lives, such as unemployment, ill health or the end of a relationship. Loss can be categorized as either physical or abstract; physical loss is related to something that the individual can touch or measure, such as losing a spouse through death, while other types of loss are more abstract, possibly relating to aspects of a person's social interactions.
Grieving process
Between 1996 and 2006, there was extensive skepticism about a universal and predictable "emotional pathway" leading from distress to "recovery", along with a growing appreciation that grief is a more complex process of adapting to loss than stage and phase models had previously suggested. The two-track model of bereavement, created by Simon Shimshon Rubin in 1981, provided a deeper focus on the grieving process. The model examines the long-term effects of bereavement by measuring how well the person is adapting to the loss of a significant person in their life. The main objective of the two-track model of bereavement is for the individual to "manage and live in reality in which the deceased is absent" as well as to return to normal biological functioning.
Track One is focused on the biopsychosocial functioning of grief: anxiety, depression, somatic concerns, traumatic responses, familial relationships, interpersonal relationships, self-esteem, meaning structure, work, and investment in life tasks. Rubin (2010) points out, "Track 1, the range of aspects of the individual's functioning across affective, interpersonal, somatic and classical psychiatric indicators is considered". All of these domains are noted for their importance in people's responses to grief and loss.
The closeness between the bereaved and the deceased is important to Track 1 because it can determine the severity of the mourning and grief the bereaved will endure. This first track is the response to an extremely stressful life event and requires adaptation along with change and integration. The second track focuses on the ongoing relationship between the griever and the deceased: how the bereaved was connected to the deceased, and what level of closeness was shared. The two main components considered are memories, both positive and negative, and the emotional involvement shared with the decedent. The stronger the relationship to the deceased, the more extensive the re-evaluation of the relationship and the more heightened the shock.
Any memory can be a trigger for the bereaved, as can the way the bereaved choose to remember their loved ones and how they integrate the memory of their loved ones into their daily lives.
Ten main attributes of this track include: imagery/memory, emotional distance, positive affect, negative affect, preoccupation with the loss, conflict, idealization, memorialization/transformation of the loss, impact on self-perception, and the loss process (shock, searching, disorganization). An outcome of this track is being able to recognize how transformation has occurred beyond grief and mourning. By outlining the main aspects of the bereavement process in two interactive tracks, individuals can examine and understand how grief has affected their life following loss and begin to adapt to this post-loss life. The model offers a better understanding of how grief unfolds over time in the wake of a loss and of the outcomes that follow a death. By using this model, researchers can effectively examine the response to an individual's loss by assessing behavioral-psychological functioning and the relationship with the deceased.
The authors from What's Your Grief?, Litza Williams and Eleanor Haley, state in their understanding of the clinical and therapeutic uses of the model:
"The Two-Track Model of Bereavement can help specify areas of mutuality (how people respond affectivity to trauma and change) and also difference (how bereaved people may be preoccupied with the deceased following loss compared to how they may be preoccupied with trauma following the exposure to it)" (Rubin, S.S, 1999).
While the grief response is considered a natural way of dealing with loss, prolonged, highly intense grief may at times become debilitating enough to be considered a disorder.
Reactions
Crying is a normal and natural part of grieving. It has also been found, however, that crying and talking about the loss is not the only healthy response and, if forced or excessive, can be harmful. Responses or actions in the affected person, called "coping ugly" by researcher George Bonanno, may seem counter-intuitive or even appear dysfunctional, e.g., celebratory responses, laughter, or self-serving bias in interpreting events. Lack of crying is also a natural, healthy reaction, potentially protective of the individual, and may also be seen as a sign of resilience.
Science has found that some healthy people who are grieving do not spontaneously talk about the loss. Pressing people to cry or retell the experience of a loss can be damaging. Genuine laughter is healthy. When a loved one dies, it is not unusual for the bereaved to report that they have "seen" or "heard" the person they have lost. Most people who have experienced this report feeling comforted. In a 2008 survey conducted by Amanda Barusch, 27% of respondents who had lost a loved one reported having had this kind of "contact" experience.
Bereavement science
Bonanno's four trajectories of grief
George Bonanno, a professor of clinical psychology at Columbia University, conducted more than two decades of scientific studies on grief and trauma, published in several papers in the most respected peer-reviewed journals in the field of psychology, such as Psychological Science and The Journal of Abnormal Psychology. Subjects of his studies number in the thousands and include people who suffered losses in the U.S. as well as participants in cross-cultural studies in various countries around the world, such as Israel, Bosnia-Herzegovina, and China. His subjects suffered losses through war, terrorism, deaths of children, premature deaths of spouses, sexual abuse, childhood diagnoses of AIDS, and other potentially devastating loss or trauma events.
In his book The Other Side of Sadness: What the New Science of Bereavement Tells Us About Life After a Loss, Bonanno summarizes his research. His findings include that natural resilience is the main component of grief and trauma reactions. The first researcher to use pre-loss data, he outlined four trajectories of grief. His work has also demonstrated that the absence of grief or trauma symptoms is a healthy outcome, rather than something to be feared, as had been the prevailing thought and practice before his research. Because grief responses can take many forms, including laughter, celebration, and bawdiness in addition to sadness, Bonanno coined the phrase "coping ugly" to describe the idea that some forms of coping may seem counter-intuitive. Bonanno has found that resilience is natural to humans, suggesting that it cannot be "taught" through specialized programs and that there is virtually no existing research with which to design resilience training, nor research to support major investment in such things as military resilience training programs.
The four trajectories are as follows (a schematic classification sketch follows the list):
Resilience: "The ability of adults in otherwise normal circumstances who are exposed to an isolated and potentially highly disruptive event, such as the death of a close relation or a violent or life-threatening situation, to maintain relatively stable, healthy levels of psychological and physical functioning" as well as "the capacity for generative experiences and positive emotions".
Recovery: When "normal functioning temporarily gives way to threshold or sub-threshold psychopathology (e.g., symptoms of depression or post-traumatic stress disorder, or PTSD), usually for a period of at least several months, and then gradually returns to pre-event levels".
Chronic dysfunction: Prolonged suffering and inability to function, usually lasting several years or longer.
Delayed grief or trauma: When adjustment seems normal but then distress and symptoms increase months later. Researchers have not found evidence of delayed grief, but delayed trauma appears to be a genuine phenomenon.
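To make the four labels concrete, the following toy sketch classifies longitudinal symptom scores (e.g., depression ratings at successive waves after a loss) into the trajectories above. The cut-off value and wave structure are invented for illustration; actual trajectory research, including Bonanno's, relies on pre-loss data and formal statistical models (such as latent growth mixture models) rather than simple thresholds.

```python
# Toy classifier mapping symptom scores at successive waves after a loss
# to Bonanno's four trajectory labels. HIGH is a hypothetical clinical
# cut-off, not a value from the research literature.

HIGH = 10  # hypothetical cut-off on some symptom scale

def classify_trajectory(scores: list[float]) -> str:
    """Label a series of post-loss symptom scores with a trajectory."""
    early, late = scores[0], scores[-1]
    if early < HIGH and late < HIGH:
        return "resilience"           # stable, healthy functioning
    if early >= HIGH and late < HIGH:
        return "recovery"             # elevated distress, gradual return
    if early >= HIGH and late >= HIGH:
        return "chronic dysfunction"  # prolonged elevated distress
    return "delayed"                  # low early, elevated later

for scores in ([4, 5, 3], [14, 9, 4], [15, 14, 13], [3, 8, 12]):
    print(scores, "->", classify_trajectory(scores))
```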
"Five stages" model
The Kübler-Ross model, commonly known as the five stages of grief, describes a hypothesis first introduced by Elisabeth Kübler-Ross in her 1969 book, On Death and Dying. Based on the uncredited earlier work of John Bowlby and Colin Murray Parkes, Kübler-Ross actually applied the stages to people who were dying, not people who were grieving.
The five stages are:
denial
anger
bargaining
depression
acceptance
This model found limited empirical support in a study by Maciejewski et al.: the proposed sequence was correct, although acceptance was the highest-rated response at all points throughout the person's experience. The research of George Bonanno, however, is acknowledged as debunking the five stages of grief, because his large body of peer-reviewed studies shows that the vast majority of people who have experienced a loss are resilient and that there are multiple trajectories following loss.
Physiological and neurological processes
fMRI studies of women in whom grief was elicited over the death of a mother or a sister within the past five years concluded that grief produced a local inflammation response, as measured by salivary concentrations of pro-inflammatory cytokines. These responses were correlated with activation in the anterior cingulate cortex and orbitofrontal cortex, and this activation also correlated with the free recall of grief-related word stimuli. This suggests that grief can cause stress, and that this reaction is linked to the emotional-processing parts of the frontal lobe. Activation of the anterior cingulate cortex and vagus nerve is similarly implicated in the experience of heartbreak, whether due to social rejection or bereavement.
Among persons bereaved within the three months before a given report, those who report many intrusive thoughts about the deceased show ventral amygdala and rostral anterior cingulate cortex hyperactivity to reminders of their loss; in the case of the amygdala, this activity is linked to sadness intensity. In individuals who avoid such thoughts, there is an opposite pattern, with decreased activation of the dorsal amygdala and the dorsolateral prefrontal cortex.
In those less emotionally affected by reminders of their loss, fMRI studies indicate high functional connectivity between dorsolateral prefrontal cortex and amygdala activity, suggesting that the former regulates activity in the latter. In people with greater sadness intensity, there was low functional connectivity between the rostral anterior cingulate cortex and the amygdala, suggesting a lack of regulation of the amygdala by that region.
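The "functional connectivity" reported in these studies is typically quantified as the correlation between two regions' BOLD time series. The minimal sketch below simulates two region-of-interest signals and computes their connectivity; the signals and region names are placeholders, not data from the studies described, and real pipelines add preprocessing, nuisance regression, and group statistics.

```python
# Minimal sketch: functional connectivity as the Pearson correlation
# between two simulated regional BOLD time series.
import numpy as np

rng = np.random.default_rng(0)
n_volumes = 200                              # fMRI time points
shared = rng.standard_normal(n_volumes)      # common underlying signal

# Simulated ROI signals; sharing a component yields high connectivity.
dlpfc = shared + 0.5 * rng.standard_normal(n_volumes)
amygdala = shared + 0.5 * rng.standard_normal(n_volumes)

connectivity = np.corrcoef(dlpfc, amygdala)[0, 1]
print(f"DLPFC-amygdala functional connectivity: r = {connectivity:.2f}")
```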
Evolutionary hypotheses
From an evolutionary perspective, grief is perplexing because it appears costly, and it is not clear what benefits it provides the sufferer. Several researchers have proposed functional explanations for grief, attempting to solve this puzzle. Sigmund Freud argued that grief is a process of libidinal reinvestment. The griever must, Freud argued, disinvest from the deceased, which is a painful process. But this disinvestment allows the griever to use libidinal energies on other, possibly new attachments, so it provides a valuable function. John Archer, approaching grief from an attachment theory perspective, argued that grief is a byproduct of the human attachment system. Generally, a grief-type response is adaptive because it compels a social organism to search for a lost individual (e.g., a mother or a child). However, in the case of death, the response is maladaptive because the individual is not simply lost and the griever cannot reunite with the deceased. Grief, from this perspective, is a painful cost of the human capacity to form commitments.
Other researchers such as Randolph Nesse have proposed that grief is a kind of psychological pain that orients the sufferer to a new existence without the deceased and creates a painful but instructive memory. If, for example, leaving an offspring alone at a watering hole led to the offspring's death, grief creates an intensively painful memory of the event, dissuading a parent from ever again leaving an offspring alone at a watering hole. More recently, Bo Winegard and colleagues argued that grief might be a socially selected signal of an individual's propensity for forming strong, committed relationships. From this social signaling perspective, grief targets old and new social partners, informing them that the griever is capable of forming strong social commitments. That is, because grief signals a person's capacity to form strong and faithful social bonds, those who displayed prolonged grief responses were preferentially chosen by alliance partners. The authors argue that throughout human evolution, grief was therefore shaped and elaborated by the social decisions of selective alliance partners.
Risks
Bereavement, while a normal part of life, carries a degree of risk when severe. Severe reactions affect approximately 10% to 15% of people. Severe reactions mainly occur in people with depression present before the loss event. Severe grief reactions may carry over into family relations. Some researchers have found an increased risk of marital breakup following the death of a child, for example. Others have found no increase. John James, author of the Grief Recovery Handbook and founder of the Grief Recovery Institute, reported that his marriage broke up after the death of his infant son.
Health risks
Many studies have looked at the bereaved in terms of increased risks for stress-related illnesses. Colin Murray Parkes in the 1960s and 1970s in England noted increased doctor visits, with symptoms such as abdominal pain, breathing difficulties, and so forth in the first six months following a death. Others have noted increased mortality rates (Ward, A.W. 1976) and Bunch et al. found a five times greater risk of suicide in teens following the death of a parent. Bereavement also increases the risk of heart attack.
Complicated grief
Prolonged grief disorder (PGD), formerly known as complicated grief disorder (CGD), is a pathological reaction to loss representing a cluster of empirically derived symptoms that have been associated with long-term physical and psycho-social dysfunction. Individuals with PGD experience severe grief symptoms for at least six months and are stuck in a maladaptive state. An attempt is being made to create a diagnosis category for complicated grief in the DSM-5. It is currently an "area for further study" in the DSM, under the name Persistent Complex Bereavement Disorder. Critics of including the diagnosis of complicated grief in the DSM-5 say that doing so will constitute characterizing a natural response as a pathology, and will result in wholesale medicating of people who are essentially normal.
Shear and colleagues found an effective treatment for complicated grief, by treating the reactions in the same way as trauma reactions.
Complicated grief is not synonymous with grief. Complicated grief is characterised by an extended grieving period and other criteria, including mental and physical impairments. An important part of understanding complicated grief is understanding how its symptoms differ from normal grief. The Mayo Clinic states that with normal grief the feelings of loss are evident; when the reaction turns into complicated grief, however, the feelings of loss become incapacitating and continue even as time passes. The signs and symptoms characteristic of complicated grief are listed as "extreme focus on the loss and reminders of the loved one, intense longing or pining for the deceased, problems accepting the death, numbness or detachment ... bitterness about your loss, inability to enjoy life, depression or deep sadness, trouble carrying out normal routines, withdrawing from social activities, feeling that life holds no meaning or purpose, irritability or agitation, lack of trust in others". The symptoms of complicated grief are specific in that they appear to combine symptoms of separation distress and traumatic distress. They are also considered complicated because, unlike normal grief, these symptoms continue regardless of the amount of time that has passed and despite treatment with tricyclic antidepressants. Individuals with complicated grief symptoms are likely to have other mental disorders such as PTSD (post-traumatic stress disorder), depression, and anxiety.
An article in The New England Journal of Medicine (NEJM) states that complicated grief is multifactorial and distinguishes it from major depression and post-traumatic stress disorder. Evidence suggests that complicated grief is a more severe and prolonged version of acute grief rather than a categorically different type of grief. While affecting only 2 to 3% of people worldwide, complicated grief usually develops when a loved one dies suddenly and violently.
In the study "Bereavement and Late-Life Depression: Grief and its Complications in the Elderly" six subjects with symptoms of complicated grief were given a dose of Paroxetine, a selective serotonin re-uptake inhibitor, and showed a 50% decrease in their symptoms within a three-month period. The Mental Health Clinical Research team theorizes that the symptoms of complicated grief in bereaved elderly are an alternative of post-traumatic stress. These symptoms were correlated with cancer, hypertension, anxiety, depression, suicidal ideation, increased smoking, and sleep impairments at around six months after spousal death.
A treatment that has been found beneficial in dealing with the symptoms associated with complicated grief is the use of serotonin-specific reuptake inhibitors such as paroxetine. These inhibitors have been found to reduce the intrusive thoughts, avoidant behaviors, and hyperarousal associated with complicated grief. In addition, psychotherapy techniques targeting complicated grief are being developed.
Disenfranchised grief
Disenfranchised grief is a term describing grief that is not acknowledged by society. Examples of events that can lead to disenfranchised grief include: the death of a friend; the loss of a pet; a trauma in the family a generation prior; the loss of a home or place of residence, particularly in the case of children, who generally have little or no control in such situations and whose grief may not be noticed or understood by caregivers (American military children and teens, who move a great deal while growing up, are a particular example); an aborted or miscarried pregnancy; a parent's loss or surrender of a child to adoption; a child's loss of their birth parent to adoption; the death of a loved one due to a socially unacceptable cause such as suicide; or the death of a celebrity.
There are fewer support systems available for people who experience disenfranchised grief compared to those who are going through a widely recognized form of grief. Therefore, people who suffer disenfranchised grief undergo a more complicated grieving process. They may feel angry and depressed due to the lack of public validation which leads to the inability to fully express their sorrow. Moreover, they may not receive sufficient social support and feel isolated.
Examples of bereavement
Death of a child
Death of a child can take the form of a loss in infancy such as miscarriage, stillbirth, neonatal death, SIDS, or the death of an older child. Among adults over the age of 50, approximately 11% have been predeceased by at least one of their offspring.
In most cases, parents find the grief almost unbearably devastating, and it tends to hold greater risk factors than any other loss. This loss also bears a lifelong process: one does not get 'over' the death but instead must assimilate and live with it. Intervention and comforting support can make all the difference to the survival of a parent in this type of grief but the risk factors are great and may include family breakup or suicide.
Feelings of guilt, whether legitimate or not, are pervasive, and the dependent nature of the relationship disposes parents to a variety of problems as they seek to cope with this great loss. Parents who suffer miscarriage or a regretful or coerced abortion may experience resentment towards others who experience successful pregnancies.
Suicide
Parents may feel they cannot openly discuss their grief and feel their emotions because of how their child died and how the people around them may perceive the situation. Parents, family members and service providers have all confirmed the unique nature of suicide-related bereavement following the loss of a child. What distinguishes suicide-related bereavement is the particular set of reactions to the loss of a loved one, examples being post-traumatic stress and family and relationship tensions. Post-traumatic stress can affect a person severely after witnessing a death, producing trauma and nightmares that disturb sleep. Family and relationship tensions are another common reaction: having loved ones close by can be a support, but some families lack connection or communication with one another, and bereaved parents may feel that they would only burden others and so keep their feelings to themselves, protecting their inner feelings out of a fear of sharing them.
Death of a spouse
Many widows and widowers describe losing 'half' of themselves. A factor is the manner in which the spouse died. The survivor of a spouse who died of an illness has a different experience of such loss than a survivor of a spouse who died by an act of violence. Often, the spouse who is "left behind" may suffer from depression and loneliness, and may feel it necessary to seek professional help in dealing with their new life.
Furthermore, most couples have a division of tasks or labor (e.g., the husband mows the yard, the wife pays the bills), which, in addition to great grief and life changes, means added responsibilities for the bereaved. Planning and financing a funeral can be very difficult if pre-planning was not completed, and changes in insurance, bank accounts, claiming of life insurance, and securing childcare can also be intimidating to someone who is grieving. Social isolation may also follow, as many groups composed of couples find it difficult to adjust to the new identity of the bereaved, and the bereaved themselves face great challenges in reconnecting with others. Widows of many cultures, for instance, wear black for the rest of their lives to signify the loss of their spouse and their grief. Only in more recent decades has this tradition been reduced to a period of two years, though in some religions, such as Orthodox Christianity, many widows continue to wear black for the remainder of their lives.
Death of a sibling
Grieving siblings are often referred to as the 'forgotten mourners' who are made to feel as if their grief is not as severe as their parents' grief. However, the sibling relationship tends to be the longest significant relationship of the lifespan and siblings who have been part of each other's lives since birth, such as twins, help form and sustain each other's identities; with the death of one sibling comes the loss of that part of the survivor's identity because "your identity is based on having them there".
If siblings were not on good terms or close with each other, intense feelings of guilt may ensue on the part of the surviving sibling (guilt may also ensue for having survived, for not being able to prevent the death, for having argued with the sibling, and so on).
Death of a parent
For an adult
When an adult child loses a parent in later adulthood, it is considered to be "timely" and to be a normative life course event. This allows the adult children to feel a permitted level of grief. However, research shows that the death of a parent in an adult's midlife is not a normative event by any measure, but is a major life transition causing an evaluation of one's own life or mortality. Others may shut out friends and family in processing the loss of someone with whom they have had the longest relationship.
In developed countries, people typically lose parents after the age of 50.
For a child
For a child, the death of a parent, without support to manage the effects of the grief, may result in long-term psychological harm. This is more likely if the adult carers are struggling with their own grief and are psychologically unavailable to the child. There is a critical role for the surviving parent or caregiver in helping the children adapt to a parent's death. However, losing a parent at a young age has also been associated with some positive effects: some children showed increased maturity, better coping skills, and improved communication, and adolescents who had lost a parent valued other people more than those who had not experienced such a close loss.
Loss during childhood
When a parent or caregiver dies or leaves, children may have symptoms of psychopathology, but they are less severe than in children with major depression. The loss of a parent, grandparent or sibling can be very troubling in childhood, but even in childhood there are age differences in relation to the loss. A very young child, under one or two, may be found to have no reaction if a carer dies, but other children may be affected by the loss.
At a time when trust and dependency are formed, even mere separation can cause problems in well-being. This is especially true if the loss is around critical periods such as 8–12 months, when attachment and separation are at their height and even a brief separation from a parent or other caregiver can cause distress.
Even as a child grows older, death is still difficult to fathom and this affects how a child responds. For example, younger children see death more as a separation, and may believe death is curable or temporary. Reactions can manifest themselves in "acting out" behaviors, a return to earlier behaviors such as thumb sucking, clinging to a toy or angry behavior. Though they do not have the maturity to mourn as an adult, they feel the same intensity. As children enter pre-teen and teen years, there is a more mature understanding.
Adolescents may respond by delinquency, or oppositely become "over-achievers". Repetitive actions are not uncommon such as washing a car repeatedly or taking up repetitive tasks such as sewing, computer games, etc. It is an effort to stay above the grief. Childhood loss can predispose a child not only to physical illness but to emotional problems and an increased risk for suicide, especially in the adolescent period.
Grief can be experienced as a result of losses due to causes other than death. For example, women who have been physically, psychologically or sexually abused often grieve over the damage to or the loss of their ability to trust. This is likely to be experienced as disenfranchised grief.
In relation to the specific issue of child sexual abuse, it has been argued by some commentators that the concepts of loss and grief offer particularly useful analytical frames for understanding both the impact of child sexual abuse and therapeutic ways to respond to it. From this perspective, child sexual abuse may represent for many children multiple forms of loss: not only of trust but also loss of control over their bodies, loss of innocence and indeed loss of their very childhoods.
Relocations can cause children significant grief particularly if they are combined with other difficult circumstances such as neglectful or abusive parental behaviors, other significant losses, etc.
Loss of a friend or classmate
Children may experience the death of a friend or a classmate through illness, accidents, suicide, or violence. Initial support involves reassuring children that their emotional and physical feelings are normal.
Survivor guilt (or survivor's guilt; also called survivor syndrome or survivor's syndrome) is a mental condition that occurs when a person perceives themselves to have done wrong by surviving a traumatic event when others did not. It may be found among survivors of combat, natural disasters, epidemics, among the friends and family of those who have died by suicide, and in non-mortal situations such as among those whose colleagues are laid off.
Other losses
Parents may grieve due to loss of children through means other than death, for example through loss of custody in divorce proceedings; legal termination of parental rights by the government, such as in cases of child abuse; through kidnapping; because the child voluntarily left home (either as a runaway or, for overage children, by leaving home legally); or because an adult refuses or is unable to have contact with a parent. This loss differs from the death of a child in that the grief process is prolonged or denied because of hope that the relationship will be restored.
Grief may occur after the loss of a romantic relationship (i.e. divorce or break up), a vocation, a pet (animal loss), a home, children leaving home (empty nest syndrome), sibling(s) leaving home, a friend, a faith in one's religion, etc. A person who strongly identifies with their occupation may feel a sense of grief if they have to stop their job due to retirement, being laid off, injury, or loss of certification. Those who have experienced a loss of trust will often also experience some form of grief.
Veteran bereavement
The grief of living veterans is often ignored. Psychological effects and post-traumatic stress disorder have been researched and studied, but few studies focus on grief and bereavement specifically. Additionally, many studies have been conducted about families who lose members serving in the military, but little about the soldiers themselves. The many monuments paying respect to those who were lost underscore how little attention living veterans and soldiers receive with regard to grief.
Gradual bereavement
Many of the above examples of bereavement happen abruptly, but there are also cases of being gradually bereft of something or someone. For example, the gradual loss of a loved one by Alzheimer's produces a "gradual grief".
The author Kara Tippetts described her dying of cancer, as dying "by degrees": her "body failing" and her "abilities vanishing". Milton Crum, writing about gradual bereavement says that "every degree of death, every death of a person's characteristics, every death of a person's abilities, is a bereavement".
Sensory experiences of the deceased
Bereaved people often report sensory and quasi-sensory experiences of the deceased (SED), which have been correlated with pathology such as grief complications.
Support
Professional support
Many people who grieve do not need professional help. Some, however, may seek additional support from licensed psychologists or psychiatrists. Support resources available to the bereaved may include grief counseling, professional support-groups or educational classes, and peer-led support groups. In the United States of America, local hospice agencies may provide a first contact for those seeking bereavement support.
It is important to recognize when grief has turned into something more serious that warrants contacting a medical professional. Grief can result in depression or alcohol and drug abuse and, if left untreated, can become severe enough to impact daily living. One guideline recommends contacting a medical professional if "you can't deal with grief, you are using excessive amounts of drugs or alcohol, you become very depressed, or you have prolonged depression that interferes with your daily life". Other reasons to seek medical attention may include: "Can focus on little else but your loved one's death, have persistent pining or longing for the deceased person, have thoughts of guilt or self-blame, believe that you did something wrong or could have prevented the death, feel as if life is not worth living, have lost your sense of purpose in life, wish you had died along with your loved one".
Professionals can use multiple ways to help someone cope and move through their grief. Hypnosis is sometimes used as an adjunct therapy in helping patients experiencing grief. Hypnosis enhances and facilitates mourning and helps patients to resolve traumatic grief. Art therapy may also be used to allow the bereaved to process their grief in a non-verbal way.
Lichtenthal and Cruess studied how bereavement-specific written disclosure helped people adjust to loss and improved the effects of post-traumatic stress disorder (PTSD), prolonged grief disorder, and depression. Directed writing helped many of the individuals who had experienced the loss of a significant relationship. It involved individuals trying to make meaning out of the loss through meaning-making (making sense of what happened and the cause of the death) or through benefit finding (considering the global significance of the loss for one's goals and helping the family develop a greater appreciation of life). This meaning-making can come naturally for some, but many need direct intervention to "move on".
Support groups
Support groups for bereaved individuals follow a diversity of patterns. Many are organized purely as peer-to-peer groups, such as local chapters of the Compassionate Friends, an international group for bereaved parents. Other grief support groups are led by professionals, perhaps with the assistance of peers. Some support groups deal with specific problems, such as learning to plan meals and cook for only one person.
Cultural differences in grieving
Each culture specifies manners such as rituals, styles of dress, or other habits, as well as attitudes, in which the bereaved are encouraged or expected to take part. An analysis of non-Western cultures suggests that beliefs about continuing ties with the deceased vary. In Japan, maintenance of ties with the deceased is accepted and carried out through religious rituals. Among the Hopi of Arizona, women go into self-induced hallucinations in which they conjure images of the deceased loved one to mourn and process their grief.
Different cultures grieve in different ways, but all have ways that are vital in healthy coping with the death of a loved one. The American family's approach to grieving was depicted in "The Grief Committee", by T. Glen Coughlin. The short story gives an inside look at how the American culture has learned to cope with the tribulations and difficulties of grief. The story is taught in the course, "The Politics of Mourning: Grief Management in a Cross-Cultural Fiction" at Columbia University.
In those with neurodevelopmental disorders
Contrary to popular belief, people with neurodevelopmental disorders, such as autistic individuals and those with an intellectual disability, are able to process grief in a similar manner to neurotypical individuals. However, the ways in which others interact with individuals with neurodevelopmental disorders may impact the ways in which they perceive, process, and express their grief; this is typically seen in association with the double taboo of death and disability, which leads to those with neurodevelopmental disorders often not being appropriately informed of a loss or its significance and excluded or discouraged from attending events related to the loss (e.g. funerals).
Moreover, one of the main differences between those with an intellectual disability and those without is typically the ability to verbalize their feelings about the loss, which is why non-verbal cues and changes in behavior become so important: these are usually signs of distress and expressions of grief among this population. This difficulty expressing the emotional impact of a loss in a neuronormative way is seen across neurodevelopmental disorders and often leads to grief reactions going unrecognized or misunderstood by those around them; for example, authentic grief reactions in autistic individuals and/or individuals with an intellectual disability may simply be labelled as challenging behavior by those supporting and caring for them. As such, it is important when working with individuals with neurodevelopmental disorders to remember that they may express and understand their grief in non-neuronormative ways, such as through perseveration and repeating words related to death (a form of echophenomena known as echothanatologia). It is likewise important that caregivers and family members meet them at their level of understanding, allow them to process the loss and grief with assistance where needed, and not ignore the grief these individuals undergo and the unique ways in which they may express it.
An important aspect of supporting the processing of grief for those with neurodevelopmental disorders is narrative and storytelling, as this can help individuals understand death and loss and express their grief at a level appropriate to their own understanding. Another important aspect of support is family involvement where possible, which should focus on promoting inclusion in events before and after the loss (e.g. visiting hospital to see a dying relative, attending the funeral, being able to visit the grave) and ensuring individuals have information about these events provided at their level of understanding and their choices respected, such as whether or not to attend a funeral service. Involving family and friends in an open and supportive dialogue with the individual, while being mindful of the double taboo of death and disability, helps individuals with neurodevelopmental disorders to process, understand, and feel included. However, if those supporting the individual are not properly educated on how people with different neurodevelopmental profiles process, understand, and express grief, their involvement may not be as beneficial as that of those who are aware of the potential differences, and may ultimately prove harmful in areas beyond practical support. Furthermore, the family unit is crucial in a socio-cognitive approach to bereavement counseling; in this approach the neurodivergent individual has the opportunity to see how those around them handle the loss and to act accordingly by modeling and mirroring behavior. This approach also helps the individual know that their emotions are acceptable, valid, and normal.
In animals
It was previously believed that grief was a uniquely human emotion, but studies have shown that other animals display grief or grief-like states after the death of another animal, most notably elephants, wolves, apes, and goats. This can occur between bonded animals, that is, animals that attempt to survive together (e.g. a pack of wolves or mated prairie voles). There is evidence that animals experience grief at the loss of a group member, a mate, or their owner for many days, and some animals show grief for their loss for many years. When animals are grieving, their routines change much as humans' do: they may stop eating, isolate themselves, or change their sleeping pattern by taking naps instead of sleeping through the night. After the death of a group member or a mate, some animals become depressed, while others, like the bonobo, keep the dead bodies of their babies for a long time. Cats search for a dead companion with a mourning cry, and dogs and horses become depressed.
Because animals lack clear communication, emotion is more difficult to study in them, so grief research in animals has examined hormone levels. One study found that "females [baboons] showed significant increases in stress hormones called glucocorticoids". The female baboons then increased grooming, promoting physical touch, which releases "oxytocin, which inhibits glucocorticoid release".
Mammals
Mammals have demonstrated grief-like states, especially between a mother and her offspring. A mother will often stay close to her dead offspring for short periods of time and may investigate the reasons for the baby's non-response. For example, a deer will often sniff, poke, and look at her lifeless fawn before realizing it is dead and leaving it to rejoin the herd shortly afterwards. Other animals respond differently; a lioness may pick up her cub in her mouth and place it somewhere else before abandoning it.
When a baby chimpanzee or gorilla dies, the mother will carry the body around for several days before she may finally be able to move on without it; this behavior has been observed in other primates as well. The Royal Society suggests that "Such interactions have been proposed to be related to maternal condition, attachment, environmental conditions or reflect a lack of awareness that the infant has died." Jane Goodall has described chimpanzees as exhibiting mournful behavior toward the loss of a group member, falling silent and paying more attention to it, often continuing to groom the body and staying close to it until the group must move on. One example Goodall observed involved a chimpanzee mother of three who had died. The siblings stayed by their mother's body the whole day. The youngest of the three showed the most agitation, screaming and becoming depressed, but recovered through the care of the two older siblings, although it refused behavior from the siblings that resembled the mother's. Another notable example is Koko, a gorilla who was taught sign language and who expressed and described sadness about the death of her pet cat, All Ball.
Elephants have shown unusual behavior upon encountering the remains of another deceased elephant. They will often investigate it by touching and grabbing it with their trunks and have the whole herd stand around it for long periods of time until they must leave it behind. It is unknown whether they are mourning over it and showing sympathy, or are just curious and investigating the dead body. Elephants are thought to be able to discern relatives even from their remains. When encountering the body of a deceased elephant or human, elephants have been witnessed covering the body with vegetation and soil in what seems to be burial behavior. An episode of the seminal BBC documentary series Life on Earth shows this in detail – the elephants, upon finding a dead herd member, pause for several minutes at a time, and carefully touch and hold the dead creature's bones.
Birds
Some birds seem to lack the perception of grief or quickly accept it; mallard hens, although shocked for a moment when losing one of their young to a predator, will soon return to doing what they were doing before the predator attacked. However, some other waterbirds, such as mute swans are known to grieve for the loss of a partner or cygnet, and are known to engage in pining for days, weeks or even months at a time. Other species of swans such as the black swan have also been observed mourning the loss of a close relative.
See also
References
Further reading
Hoy, William G. (2016). Bereavement Groups and the Role of Social Support: Integrating Theory, Research, and Practice. New York: Routledge.
Schmid, Wilhelm. (2016). What We Gain as We Grow Older: On Gelassenheit. New York: Upper West Side Philosophers, Inc. (Living Now Gold Award)
Smith, Melinda; Robinson, Lawrence; Segal, Jeanne. (1997). Depression in Older Adults and the Elderly. Helpguide. Retrieved 8 February 2012.
Span, Paula. (29 December 2011). The unspoken diagnosis: Old age. The New York Times. Retrieved 8 February 2012.
Stengel, Kathrin. (2007). November Rose: A Speech on Death. New York: Upper West Side Philosophers, Inc. (Independent Publisher Book Award for Aging/Death & Dying)
External links
"Grieving: A study of bereavement" by Megan O'Rourke at Slate.com
"Grief & Bereavement – An Overview by Associated Counsellors & Psychologists
Counseling
Funeral-related industry
Emotions | 0.761311 | 0.997118 | 0.759117 |
Grandiosity | In psychology, grandiosity is a sense of superiority, uniqueness, or invulnerability that is unrealistic and not based on personal capability. It may be expressed by exaggerated beliefs regarding one's abilities, the belief that few other people have anything in common with oneself, and that one can only be understood by a few, very special people. The personality trait of grandiosity is principally associated with narcissistic personality disorder (NPD), but also is a feature in the occurrence and expression of antisocial personality disorder, and the manic and hypomanic episodes of bipolar disorder.
Measurement
Few scales exist for the sole purpose of measuring grandiosity, though one recent attempt is the Narcissistic Grandiosity Scale (NGS), an adjective rating scale in which one indicates how well each word (e.g. superior, glorious) applies to oneself.
Grandiosity is also measured as part of other tests, including the Personality Assessment for DSM-5 (PID-5), the Psychopathy Checklist-Revised, and diagnostic interviews for NPD. The Grandiosity section of the Diagnostic Interview for Narcissism (DIN), for instance, describes (a toy scoring sketch follows the list):
The person exaggerates talents, capacity, and achievements in an unrealistic way.
The person believes in their invulnerability or does not recognize their limitations.
The person has grandiose fantasies.
The person believes that they do not need other people.
The person overexamines and downgrades other people's projects, statements, or dreams in an unrealistic manner.
The person regards themself as unique or special when compared to other people.
The person regards themself as generally superior to other people.
The person behaves self-centeredly and/or self-referentially.
The person behaves in a boastful or pretentious way.
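As a rough illustration of how structured ratings like these might be tallied, the sketch below averages hypothetical interviewer ratings across the nine criteria. The 0–2 rating scale, the item paraphrases, and the averaging rule are assumptions for illustration only, not the DIN's actual scoring procedure.

```python
# Hypothetical tally of interviewer ratings for the nine DIN grandiosity
# criteria. The 0 (absent) to 2 (clearly present) scale and the averaging
# rule are illustrative assumptions, not the instrument's real scoring.

DIN_GRANDIOSITY_ITEMS = [
    "exaggerates talents, capacity, and achievements",
    "believes in own invulnerability / ignores limitations",
    "has grandiose fantasies",
    "believes they do not need other people",
    "unrealistically downgrades others' projects or dreams",
    "regards self as unique or special",
    "regards self as generally superior",
    "behaves self-centredly or self-referentially",
    "behaves boastfully or pretentiously",
]

def grandiosity_score(ratings: list[int]) -> float:
    """Average one 0-2 rating per criterion into a single score."""
    if len(ratings) != len(DIN_GRANDIOSITY_ITEMS):
        raise ValueError("one rating per criterion is required")
    if any(r not in (0, 1, 2) for r in ratings):
        raise ValueError("ratings must be 0, 1, or 2")
    return sum(ratings) / len(ratings)

print(f"mean rating: {grandiosity_score([2, 1, 2, 0, 1, 2, 2, 1, 2]):.2f}")
```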
In narcissism
Grandiose narcissism is a subtype of narcissism with grandiosity as its central feature, in addition to other agentic and antagonistic traits (e.g., dominance, attention-seeking, entitlement, manipulation). Confusingly, the term "narcissistic grandiosity" is sometimes used as a synonym for grandiose narcissism and other times used to refer to the subject of this article (superiority feelings).
In mania
In mania, grandiosity is typically more pro-active and aggressive than in narcissism. The manic character may boast of future achievements or exaggerate their personal qualities.
They may also begin unrealistically ambitious undertakings, before being cut down, or cutting themselves back down, to size.
In psychopathy
Grandiosity features in Factor 1, Facet 1 (Interpersonal) in the Hare Psychopathy Checklist-Revised (PCL-R) test. Individuals endorsing this criterion appear arrogant and boastful, and may be unrealistically optimistic about their future. The American Psychiatric Association's DSM-5 also notes that persons with antisocial personality disorder often display an inflated self-image, and can appear excessively self-important, opinionated and cocky, and often hold others in contempt.
Relationship with other variables
Grandiosity is well documented to have associations with both positive/adaptive and negative/maladaptive outcomes, leading some researchers to question whether it is necessarily pathological.
Positive/Adaptive
Grandiosity demonstrates moderate-to-strong positive correlations with self-esteem, typically becoming larger in size when controlling for confounding variables. It relates positively to self-rated superiority and is inversely associated with self-rated worthlessness. It is also associated with a host of other variables (often even when controlling for self-esteem), including positive affect, optimism, life satisfaction, behavioural activation system functioning, and all forms of emotional resilience. It also correlates positively with adaptive narcissism, namely authoritativeness, charisma, self-assurance and ambitiousness. Moreover, it exhibits negative associations with depression, anxiety, pessimism and shame. Grandiosity has a small positive relationship with intelligence and achievement.
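The recurring qualifier "when controlling for" in findings like these usually refers to partial correlation. The sketch below computes a first-order partial correlation from three pairwise correlations; the numeric values are invented for illustration and are not results from the cited studies.

```python
# First-order partial correlation: the association of x and y with a
# third variable z partialled out. All correlations below are invented.
from math import sqrt

def partial_corr(r_xy: float, r_xz: float, r_yz: float) -> float:
    """r(x, y | z) from the three pairwise Pearson correlations."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_grandiosity_optimism = 0.40     # hypothetical
r_grandiosity_self_esteem = 0.50  # hypothetical
r_optimism_self_esteem = 0.45     # hypothetical

r_partial = partial_corr(r_grandiosity_optimism,
                         r_grandiosity_self_esteem,
                         r_optimism_self_esteem)
print(f"r(grandiosity, optimism | self-esteem) = {r_partial:.2f}")
```

With these invented inputs the partialled correlation remains positive (about 0.23), mirroring the claim that grandiosity's associations often survive controlling for self-esteem.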
Negative/Maladaptive
Grandiosity has a well-studied association with aggression (both physical and verbal), risk-taking (e.g. financial, social, sexual) and competitiveness. It also has reliable associations with maladaptive narcissistic traits like entitlement and interpersonal exploitativeness. Even when controlling for exploitativeness, however, grandiosity still predicts unethical behaviours like lying, cheating and stealing. Grandiosity seems to be specifically related to rationalised cheating (i.e. opportunistic cheating behaviour whose context allows the behaviour to be construed as something other than cheating), but not deliberative cheating (i.e. conscious premeditation to violate rules and cheat).
Mechanisms
Despite the prominence of grandiosity in the research literature, few theories or even studies of its underlying mechanisms exist. Approximately 23% of the variance in grandiosity is explained by genetics, with the majority of remaining variance attributable to non-shared environmental factors.
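As a back-of-the-envelope illustration of how twin studies apportion variance, the sketch below applies Falconer's classical formula, h² = 2(r_MZ − r_DZ). The twin correlations are hypothetical, chosen so the formula reproduces the roughly 23% figure above; actual behaviour-genetic estimates come from formal biometric model fitting rather than this shortcut.

```python
# Falconer's formula for heritability from twin correlations:
# h^2 = 2 * (r_MZ - r_DZ). Both correlations below are hypothetical.

r_mz = 0.350   # identical (monozygotic) twin correlation, assumed
r_dz = 0.235   # fraternal (dizygotic) twin correlation, assumed

heritability = 2 * (r_mz - r_dz)   # additive genetic variance share
non_genetic = 1 - heritability     # environment plus measurement error

print(f"h^2 = {heritability:.2f}; non-genetic variance = {non_genetic:.2f}")
```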
Cognitive
Research has consistently indicated a role for positive rumination (repetitive positive self-focused thoughts). Recently, an experimental study found that having neurotypical participants engage in overly-positive rumination (i.e. thinking about times when they felt special, unique, important or superior) led to increases in state grandiosity, whereas a control distraction condition conferred no such increment. Another study confirmed that positive ruminations confer grandiose self-perceptions in the moment, and found that (grandiosity-prone) patients with bipolar disorder, compared with healthy controls, exhibited heightened connectivity during this task between brain regions associated with self-relevant information-processing (medial prefrontal and anterior cingulate cortices). Further, experimental studies suggest that grandiose narcissists maintain their inflated self-esteem following criticism by recalling self-aggrandizing memories.
Correlational designs further confirm the associations of mania/hypomania and grandiose narcissism with positive self-rumination, and to specific expressions of positive rumination after success (e.g. believing that success in one domain indicates likely success in another). Grandiose fantasies, conceptually similar to positive rumination, also feature in narcissism. While grandiose narcissism has been associated with attentional and mnemonic biases to positive self-related words, it remains to be seen whether this reflects grandiosity or some other trait specific to narcissism (e.g. entitlement).
Other theories
A common characteristic of disorders and traits associated with grandiosity is heightened positive affect and potential dysregulation thereof. This is true of mania/hypomania in bipolar disorder, grandiose narcissism, and the interpersonal facet of psychopathy. Such associations partially inspired the Narcissism Spectrum Model, which posits that grandiosity reflects the combination of self-preoccupation and "boldness" (exaggerated positive emotionality, self-confidence, and reward-seeking), which is ostensibly linked with neurobiological systems mediating behavioural approach motivation.
While no neuroimaging studies have specifically assessed the association between grandiosity and the reward system (or any other system), some neuroimaging studies using composite scales of grandiosity with other traits offer tentative support of these assertions, while others using the same measure suggest no association.
Contrary to frequent assertions by narcissism researchers, and despite much study of the matter, there is only weak and inconsistent evidence that grandiosity (when specifically and reliably measured) and grandiose narcissism have any association with parental overvaluation. The largest study on the matter found no association whatsoever.
Reality-testing
A distinction is made between individuals exhibiting grandiosity which includes a degree of insight into their unrealistic thoughts (they are aware that their behavior is considered unusual), and those experiencing grandiose delusions who lack this capability for reality-testing. Some individuals may transition between these two states, with grandiose ideas initially developing as "daydreams" that the patient recognises as untrue, but which can subsequently turn into full delusions that the patient becomes convinced reflect reality.
Psychoanalysis and the grandiose self
Otto Kernberg saw the unhealthily grandiose self as merging childhood feelings of specialness, personal ideals, and fantasies of an ideal parent.
Heinz Kohut saw the grandiose self as a normal part of the developmental process, only pathological when the grand and humble parts of the self became decisively divided. Kohut's recommendations for dealing with the patient with a disordered grandiose self were to tolerate and so re-integrate the grandiosity with the realistic self.
Reactive attachment disorder
The personality trait of grandiosity also is a component of the reactive attachment disorder (RAD), a severe and relatively uncommon attachment disorder that affects children. The expression of RAD is characterized by markedly disturbed and developmentally inappropriate ways of relating to other people in most social contexts, such as the persistent failure to initiate or to respond to most social interactions in a developmentally appropriate way, known as the "inhibited form" of reactive attachment disorder.
Related traits
Grandiosity is associated and often confused with other personality traits, including self-esteem, entitlement, and contemptuousness.
Self-esteem
While the exact difference between high self-esteem and grandiosity has yet to be fully elucidated, research suggests that, while strongly correlated, they predict different outcomes. While both predict positive outcomes such as optimism, life and job satisfaction, extraversion, and positive affect, grandiosity uniquely predicts entitlement, exploitativeness, and aggression.
Entitlement
Entitlement is regularly confused with grandiosity, even in peer-reviewed articles, but the literature nevertheless offers a clear discrimination of the two. Psychological entitlement is a sense of deservingness of positive outcomes, and can be founded on either grandiosity or feelings of deprivation. Like self-esteem, grandiosity and entitlement are well documented to predict different outcomes. Entitlement appears to be associated with more maladaptive outcomes, including low empathy, antisocial behaviour, and poor mental health, whereas grandiosity predicts better mental health.
Devaluation/contempt
Surprisingly, and quite counterintuitively, grandiosity is only weakly related to regarding others as worthless (devaluation or contemptuousness). Moreover, grandiosity should not be conflated with arrogant social behaviour.
See also
References
Symptoms and signs of mental disorders
Mathematical model
A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in non-physical systems such as the social sciences (such as economics, psychology, sociology, political science). It can also be taught as a subject in its own right.
The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, linguistics, and philosophy (for example, intensively in analytic philosophy). A model may help to explain a system and to study the effects of different components, and to make predictions about behavior.
Elements of a mathematical model
Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, a traditional mathematical model contains most of the following elements:
Governing equations
Supplementary sub-models
Defining equations
Constitutive equations
Assumptions and constraints
Initial and boundary conditions
Classical constraints and kinematic equations
Classifications
Mathematical models are of different types:
Linear vs. nonlinear. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model.
Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale, and the results obtained will remain valid for the initial problem when recomposed and rescaled.
Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
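As a minimal illustration of the "linear in parameters, nonlinear in predictors" point above, the following Python sketch fits a quadratic trend by ordinary least squares; the data values are invented for illustration.

```python
import numpy as np

# Hypothetical data showing a roughly quadratic trend (illustrative values only).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 9.2, 18.8, 33.1, 51.0])

# Design matrix [1, x, x^2]: the columns are nonlinear in the predictor x,
# but the model y = b0 + b1*x + b2*x^2 is linear in the parameters
# (b0, b1, b2), so ordinary least squares applies directly.
X = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", coef)
```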
Static vs. dynamic. A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations.
Explicit vs. implicit. If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method. In such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties.
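The explicit/implicit distinction can be made concrete with a toy model (unrelated to the jet-engine example): in the explicit direction the output is computed from the input directly, while the inverse question is solved implicitly by iteration, here with a hand-rolled Newton's method.

```python
# Explicit direction: the output is computed from the input in a
# finite series of operations.
def model(u):
    return u**3 + 2.0 * u  # a simple monotone input-output relation

# Implicit direction: given a target output, solve model(u) = y_target
# for u iteratively with Newton's method (the derivative 3u^2 + 2 is
# always positive, so the iteration is well behaved).
def solve_input(y_target, u0=1.0, tol=1e-10, max_iter=50):
    u = u0
    for _ in range(max_iter):
        residual = model(u) - y_target
        if abs(residual) < tol:
            break
        u -= residual / (3.0 * u**2 + 2.0)  # Newton step
    return u

print(solve_input(12.0))  # approximately 2.0, since 2**3 + 2*2 = 12
```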
Discrete vs. continuous. A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model; while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, temperatures and stresses in a solid, and electric field that applies continuously over the entire model due to a point charge.
Deterministic vs. probabilistic (stochastic). A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions.
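A sketch of the contrast, using two versions of the same toy growth model: the deterministic version always produces the same trajectory from a given initial condition, while the stochastic version draws random shocks, so its outcome is described by a distribution. The growth rate and noise level are arbitrary illustrative choices.

```python
import numpy as np

def deterministic_growth(x0=1.0, r=0.1, steps=10):
    # Identical initial conditions always yield an identical result.
    x = x0
    for _ in range(steps):
        x *= 1.0 + r
    return x

def stochastic_growth(x0=1.0, r=0.1, sigma=0.05, steps=10, rng=None):
    # Random shocks make each run a draw from a distribution of outcomes.
    if rng is None:
        rng = np.random.default_rng()
    x = x0
    for _ in range(steps):
        x *= 1.0 + r + sigma * rng.standard_normal()
    return x

print(deterministic_growth(), deterministic_growth())  # identical values
print(stochastic_growth(), stochastic_growth())        # differ run to run
```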
Deductive, inductive, or floating. A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model.
Strategic vs. non-strategic. Models used in game theory are different in a sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as Nash equilibrium. An interesting property of strategic models is that they separate reasoning about rules of the game from reasoning about behavior of the players.
Construction
In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example, economists often apply linear algebra when using input–output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables.
A priori information
Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take.
Usually, it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, white-box models are usually considered easier, because if you have used the information correctly, then the model will behave correctly. Often the a priori information comes in the form of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function, but we are still left with several unknown parameters: how rapidly does the medicine amount decay, and what is the initial amount of medicine in blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.
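A sketch of how the two unknown parameters in the medicine example (the initial amount and the decay rate) might be estimated from measurements; the blood-concentration data below are invented for illustration, and the fit uses SciPy's curve_fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# Model form known a priori: exponential decay with unknown c0 and k.
def decay(t, c0, k):
    return c0 * np.exp(-k * t)

# Hypothetical measured drug concentrations at several times (hours).
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
c = np.array([9.0, 8.1, 6.7, 4.4, 1.9])

# Estimate the unknown parameters: initial amount c0 and decay rate k.
(c0_hat, k_hat), _ = curve_fit(decay, t, c, p0=(10.0, 0.2))
print(f"estimated c0 = {c0_hat:.2f}, k = {k_hat:.3f}")
```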
In black-box models, one tries to estimate both the functional form of relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information we would try to use functions as general as possible to cover all different models. An often-used approach for black-box models is neural networks, which usually do not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification, can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque.
Subjective information
Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data.
An example of when such approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.
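A minimal sketch of this Bayesian updating for the bent-coin example, using a conjugate Beta prior over the probability of heads. The prior parameters below are an arbitrary stand-in for the experimenter's subjective judgment after inspecting the coin.

```python
# Beta(a, b) prior over the probability of heads; the Beta family is
# conjugate to coin flips, so updating reduces to incrementing a counter.
a, b = 4.0, 2.0           # subjective prior: coin judged biased toward heads
print("prior mean P(heads):", a / (a + b))

observed_heads = True      # outcome of the single observed flip
if observed_heads:
    a += 1.0               # posterior becomes Beta(a + 1, b)
else:
    b += 1.0               # posterior becomes Beta(a, b + 1)

# Posterior predictive probability that the next flip comes up heads.
print("updated P(heads):", a / (a + b))
```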
Complexity
In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.
For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximate model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting, which means that a model is fitted to data too closely and has lost its ability to generalize to new events that were not observed before.
Training, tuning, and fitting
Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting.
Evaluation and assessment
A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation.
Prediction of empirical data
Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics.
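A sketch of that split-and-verify procedure on synthetic data: a straight line is fitted to a training subset only, and its error is then measured on the held-out verification subset.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y = 3.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.size)  # synthetic data

# Split the data into two disjoint subsets: training and verification.
idx = rng.permutation(x.size)
train, verify = idx[:30], idx[30:]

# Estimate the model parameters from the training data only.
slope, intercept = np.polyfit(x[train], y[train], deg=1)

# An accurate model closely matches the verification data it never saw.
pred = slope * x[verify] + intercept
rmse = np.sqrt(np.mean((pred - y[verify]) ** 2))
print(f"verification RMSE: {rmse:.2f}")
```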
Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form.
Scope of the model
Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation.
As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics.
Philosophical considerations
Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied.
An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. It should also be noted that while mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to any mathematical logic, but is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation.
Significance in the natural sciences
Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits theory of relativity and quantum mechanics must be used.
It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and thus are modeled approximately on a computer: a model that is computationally feasible to compute is made from the basic laws or from approximate models derived from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.
Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean.
Some applications
Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.
A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types; real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables.
Examples
One of the popular examples in computer science is the mathematical models of various machines; an example is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but, due to the deterministic nature of a DFA, is implementable in hardware and software for solving various specific problems. For example, the following is a DFA M with a binary alphabet, which requires that the input contains an even number of 0s:
M = (Q, Σ, δ, q0, F)
where Q = {S1, S2} is the set of states, Σ = {0, 1} is the input alphabet, q0 = S1 is the start state, and F = {S1} is the set of accepting states,
and the transition function δ is defined by the following state-transition table:
        0    1
S1      S2   S1
S2      S1   S2
The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted.
The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star; e.g., 1* denotes any non-negative number (possibly zero) of symbols "1".
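The automaton M is small enough to execute directly. The following sketch encodes its state-transition table as a dictionary and checks whether input strings contain an even number of 0s.

```python
# Transition table of the DFA M: delta[state][symbol] -> next state.
delta = {
    "S1": {"0": "S2", "1": "S1"},
    "S2": {"0": "S1", "1": "S2"},
}
START, ACCEPTING = "S1", {"S1"}

def accepts(word):
    state = START
    for symbol in word:
        state = delta[state][symbol]
    return state in ACCEPTING

for w in ["", "1", "0", "00", "010", "0101"]:
    print(repr(w), accepts(w))  # True exactly when w has an even number of 0s
```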
Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel.
Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning.
Population growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and widely used population growth model is the logistic function, and its extensions.
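A sketch comparing the two models just mentioned: Malthusian (exponential) growth is unbounded, while the logistic model saturates at a carrying capacity. All parameter values are illustrative.

```python
import math

r, K, p0 = 0.5, 1000.0, 10.0  # growth rate, carrying capacity, initial size

def malthusian(t):
    # Solution of dP/dt = r * P: unbounded exponential growth.
    return p0 * math.exp(r * t)

def logistic(t):
    # Closed-form solution of dP/dt = r * P * (1 - P / K).
    return K / (1.0 + (K / p0 - 1.0) * math.exp(-r * t))

for t in (0, 5, 10, 20):
    print(t, round(malthusian(t)), round(logistic(t)))  # logistic levels off at K
```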
Model of a particle in a potential field. In this model we consider a particle as a point of mass m which describes a trajectory in space, modeled by a function x(t) giving its coordinates in space as a function of time. The potential field is given by a function V(x), and the trajectory is the solution of the differential equation m d²x(t)/dt² = −∇V(x(t)), which states that the mass times the acceleration equals the force: the negative gradient of the potential.
Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion.
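A numerical sketch of this model for one concrete, illustrative choice of potential, the one-dimensional harmonic well V(x) = ½kx², integrated with a simple semi-implicit Euler scheme; all constants are arbitrary.

```python
# Integrate m * x'' = -dV/dx for V(x) = 0.5 * k * x**2 (harmonic potential).
m, k = 1.0, 4.0
x, v = 1.0, 0.0            # initial position and velocity
dt, steps = 0.001, 5000    # integrate up to t = 5

for _ in range(steps):
    a = -k * x / m         # acceleration from the force -dV/dx
    v += a * dt            # semi-implicit Euler: update velocity first,
    x += v * dt            # then position (good long-run energy behaviour)

# The exact solution is x(t) = cos(2t), so x(5) should be close to cos(10).
print(f"x(5) ≈ {x:.3f}")
```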
Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of n commodities, labeled 1, 2, ..., n, each with a market price p1, p2, ..., pn. The consumer is assumed to have an ordinal utility function U (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x1, x2, ..., xn consumed. The model further assumes that the consumer has a budget M which is used to purchase a vector x1, x2, ..., xn in such a way as to maximize U(x1, x2, ..., xn). The problem of rational behavior in this model then becomes a mathematical optimization problem: maximize U(x1, x2, ..., xn) subject to p1x1 + p2x2 + ... + pnxn ≤ M. This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria.
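For one concrete, standard choice of utility, the Cobb–Douglas function U(x1, x2) = x1^a · x2^(1−a), the optimization has a closed-form solution: the consumer spends the fraction a of the budget on good 1. The sketch below checks that answer against a brute-force search along the budget line; prices, budget, and the preference weight are illustrative.

```python
import numpy as np

p1, p2, M, a = 2.0, 5.0, 100.0, 0.3  # prices, budget, preference weight

def utility(x1, x2):
    return x1**a * x2**(1.0 - a)      # Cobb-Douglas utility

# Closed-form demand: spend the fraction a of the budget on good 1.
x1_star, x2_star = a * M / p1, (1.0 - a) * M / p2
print("analytic optimum:", x1_star, x2_star)

# Brute-force check over bundles on the budget line p1*x1 + p2*x2 = M.
x1 = np.linspace(0.01, M / p1 - 0.01, 10000)
x2 = (M - p1 * x1) / p2
best = np.argmax(utility(x1, x2))
print("grid-search optimum:", round(x1[best], 2), round(x2[best], 2))
```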
Neighbour-sensing model is a model that explains the mushroom formation from the initially chaotic fungal network.
In computer science, mathematical models may be used to simulate computer networks.
In mechanics, mathematical models may be used to analyze the movement of a rocket model.
See also
Agent-based model
All models are wrong
Cliodynamics
Computer simulation
Conceptual model
Decision engineering
Grey box model
International Mathematical Modeling Challenge
Mathematical biology
Mathematical diagram
Mathematical economics
Mathematical modelling of infectious disease
Mathematical finance
Mathematical psychology
Mathematical sociology
Microscale and macroscale models
Model inversion
Resilience (mathematics)
Scientific model
Sensitivity analysis
Statistical model
Surrogate model
System identification
References
Further reading
Books
Aris, Rutherford [1978] (1994). Mathematical Modelling Techniques, New York: Dover.
Bender, E.A. [1978] (2000). An Introduction to Mathematical Modeling, New York: Dover.
Gary Chartrand (1977) Graphs as Mathematical Models, Prindle, Webber & Schmidt
Dubois, G. (2018) "Modeling and Simulation", Taylor & Francis, CRC Press.
Gershenfeld, N. (1998) The Nature of Mathematical Modeling, Cambridge University Press.
Lin, C.C. & Segel, L.A. (1988). Mathematics Applied to Deterministic Problems in the Natural Sciences, Philadelphia: SIAM.
Specific applications
Papadimitriou, Fivos. (2010). Mathematical Modelling of Spatial-Ecological Complex Systems: an Evaluation. Geography, Environment, Sustainability 1(3), 67-80.
An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White.
External links
General reference
Patrone, F. Introduction to modeling via differential equations, with critical remarks.
Plus teacher and student package: Mathematical Modelling. Brings together all articles on mathematical modelling from Plus Magazine, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge.
Philosophical
Frigg, R. and S. Hartmann, Models in Science, in: The Stanford Encyclopedia of Philosophy, (Spring 2006 Edition)
Griffiths, E. C. (2010) What is a model?
Applied mathematics
Conceptual modelling
Knowledge representation
Mathematical terminology
Mathematical and quantitative methods (economics)
Enculturation
Enculturation is the process by which people learn the dynamics of their surrounding culture and acquire values and norms appropriate or necessary to that culture and its worldviews.
Definition and history of research
The term enculturation was used first by sociologist of science Harry Collins to describe one of the models whereby scientific knowledge is communicated among scientists, and is contrasted with the 'algorithmic' mode of communication.
The ingredients discussed by Collins for enculturation are
Learning by Immersion: whereby aspiring scientists learn by engaging in the daily activities of the laboratory, interacting with other scientists, and participating in experiments and discussions.
Tacit Knowledge: highlighting the importance of tacit knowledge—knowledge that is not easily codified or written down but is acquired through experience and practice.
Socialization: where individuals learn the social norms, values, and behaviours expected within the scientific community.
Language and Discourse: Scientists must become fluent in the terminology, theoretical frameworks, and modes of argumentation specific to their discipline.
Community Membership: recognition of the individual as a legitimate member of the scientific community.
The problem tackled in the article of Harry Collins was the early experiments for the detection of gravitational waves.
Enculturation is mostly studied in sociology and anthropology. The influences that limit, direct, or shape the individual (whether deliberately or not) include parents, other adults, and peers. If successful, enculturation results in competence in the language, values, and rituals of the culture. Growing up, everyone goes through their own version of enculturation. Enculturation helps form an individual into an acceptable citizen. Culture impacts everything that an individual does, regardless of whether they know about it. Enculturation is a deep-rooted process that binds individuals together. Even as a culture undergoes changes, elements such as core beliefs, values, perspectives, and child-rearing practices remain similar. Enculturation also paves the way for tolerance, which is essential for peaceful coexistence.
The process of enculturation, most commonly discussed in the field of anthropology, is closely related to socialization, a concept central to the field of sociology. Both roughly describe the adaptation of an individual into social groups by absorbing the ideas, beliefs and practices surrounding them. In some disciplines, socialization refers to the deliberate shaping of the individual. As such, the term may cover both deliberate and informal enculturation.
The process of learning and absorbing culture need not be social, direct or conscious. Cultural transmission can occur in various forms, though the most common social methods include observing other individuals, being taught or being instructed. Less obvious mechanisms include learning one's culture from the media, the information environment and various social technologies, which can lead to cultural transmission and adaptation across societies. A good example of this is the diffusion of hip-hop culture into states and communities beyond its American origins.
Enculturation has often been studied in the context of non-immigrant African Americans.
Conrad Phillip Kottak (in Window on Humanity) writes:
Enculturation is referred to as acculturation in some academic literature. However, more recent literature has signalled a difference in meaning between the two. Whereas enculturation describes the process of learning one's own culture, acculturation denotes learning a different culture, for example, that of a host. The latter can be linked to ideas of a culture shock, which describes an emotionally-jarring disconnect between one's old and new culture cues.
Famously, the sociologist Talcott Parsons once described children as "barbarians" of a sort, since they are fundamentally uncultured.
How enculturation occurs
When minorities come into the U.S., they might identify fully with their ethnic heritage before undergoing enculturation. Enculturation can happen in several ways. Direct teaching means that one's family, teachers, or other members of society explicitly instruct a person in certain beliefs, values, or expected norms of behavior. Parents may play a vital role in teaching their children standard behavior for their culture, including table manners and some aspects of polite social interactions. Strict familial and societal teaching, which often uses different forms of positive and negative reinforcement to shape behavior, can lead a person to adhere closely to their religious convictions and customs. Schools also provide a formal setting to learn national values, such as honoring a country's flag, national anthem, and other significant patriotic symbols.
Participatory learning occurs as individuals take an active role in interacting with their environment and culture. Through their own engagement in meaningful activities, they learn the socio-cultural norms of their community and may adopt its associated qualities and values. For example, if a school organizes an outing to collect trash at a public park, the activity helps to instill the values of respect for nature and environmental protection. Religious traditions frequently emphasize participatory learning; for example, children who take part in singing carols during Christmas will absorb the values and practices of the occasion.
Observational learning occurs when knowledge is gained primarily by observing and imitating others. As long as an individual identifies with a model, believes that imitating the model will lead to favorable outcomes, and feels capable of reproducing the behavior, learning can occur without any explicit instruction. For example, a child fortunate enough to be born to parents in a caring relationship will learn to be affectionate and attentive in their future relationships.
See also
Civil society
Dual inheritance theory
Education
Educational anthropology
Ethnocentrism
Indoctrination
Intercultural competence
Mores
Norm (philosophy)
Norm (sociology)
Peer pressure
Transculturation
References
Bibliography
Further reading
External links
Enculturation and Acculturation
Community empowerment
Concepts of moral character, historical and contemporary (Stanford Encyclopedia of Philosophy)
Cultural concepts
Cultural studies
Interculturalism
Psychomotor retardation
Psychomotor retardation involves a slowing down of thought and a reduction of physical movements in an individual. It can cause a visible slowing of physical and emotional reactions, including speech and affect.
Psychomotor retardation is most commonly seen in people with major depression and in the depressed phase of bipolar disorder; it is also associated with the adverse effects of certain drugs, such as benzodiazepines. Particularly in an inpatient setting, psychomotor retardation may require increased nursing care to ensure adequate food and fluid intake and sufficient personal care. Informed consent for treatment is more difficult to achieve in the presence of this condition.
Causes
Psychiatric disorders: anxiety disorders, bipolar disorder, eating disorders, schizophrenia, severe depression, etc.
Psychiatric medicines (if taken as prescribed or improperly, overdosed, or mixed with alcohol)
Parkinson's disease
Genetic disorders: Qazi–Markouizos syndrome, Say–Meyer syndrome, Tranebjaerg-Svejgaard syndrome, Wiedemann–Steiner syndrome, Wilson's disease, etc.
Examples
Examples of psychomotor retardation include the following:
Unaccountable difficulty in carrying out what are usually considered "automatic" or "mundane" self care tasks for healthy people (i.e., without depressive illness) such as taking a shower, dressing, grooming, cooking, brushing teeth, and exercising.
Physical difficulty performing activities that normally require little thought or effort, such as walking up stairs, getting out of bed, preparing meals, and clearing dishes from the table, household chores, and returning phone calls.
Tasks requiring mobility suddenly (or gradually) may inexplicably seem "impossible". Activities such as shopping, getting groceries, taking care of daily needs, and meeting the demands of employment or school are commonly affected.
Activities usually requiring little mental effort can become challenging. Balancing a checkbook, making a shopping list, and making decisions about mundane tasks (such as deciding what errands need to be done) are often difficult.
In schizophrenia, activity level may vary from psychomotor retardation to agitation; the patient experiences periods of listlessness and may be unresponsive, and at the next moment be active and energetic.
See also
Psychomotor learning
Psychomotor agitation
Disorders of diminished motivation
References
External links
Symptoms and signs of mental disorders
Motor control
Mood disorders
Disorders of diminished motivation
Mind–body problem
The mind–body problem is a philosophical problem concerning the relationship between thought and consciousness in the human mind and body.
It is not obvious how the concept of the mind and the concept of the body relate. For example, feelings of sadness (which are mental events) cause people to cry (which is a physical state of the body). Finding a joke funny (a mental event) causes one to laugh (another bodily state). Feelings of pain (in the mind) cause avoidance behaviours (in the body), and so on.
Similarly, changing the chemistry of the body (and the brain especially) via drugs (such as antipsychotics, SSRIs, or alcohol) can change one's state of mind in nontrivial ways. Alternatively, therapeutic interventions like cognitive behavioral therapy can change cognition in ways that have downstream effects on bodily health.
In general, the existence of these mind–body connections seems unproblematic. Issues arise, however, once one considers what exactly we should make of these relations from a metaphysical or scientific perspective. Such reflections quickly raise a number of questions like:
Are the mind and body two distinct entities, or a single entity?
If the mind and body are two distinct entities, do the two of them causally interact?
Is it possible for these two distinct entities to causally interact?
What is the nature of this interaction?
Can this interaction ever be an object of empirical study?
If the mind and body are a single entity, then are mental events explicable in terms of physical events, or vice versa?
Is the relation between mental and physical events something that arises de novo at a certain point in development?
These and other questions that discuss the relation between mind and body are questions that all fall under the banner of the 'mind–body problem'.
Mind–body interaction and mental causation
Philosophers David L. Robb and John F. Heil introduce mental causation in terms of the mind–body problem of interaction:
Contemporary neurophilosopher Georg Northoff suggests that mental causation is compatible with classical formal and final causality.
Biologist, theoretical neuroscientist and philosopher, Walter J. Freeman, suggests that explaining mind–body interaction in terms of "circular causation" is more relevant than linear causation.
In neuroscience, much has been learned about correlations between brain activity and subjective, conscious experiences. Many suggest that neuroscience will ultimately explain consciousness:
"...consciousness is a biological process that will eventually be explained in terms of molecular signaling pathways used by interacting populations of nerve cells..." However, this view has been criticized because consciousness has yet to be shown to be a process, and the "hard problem" of relating consciousness directly to brain activity remains elusive.
At the 1927 Solvay Conference in Brussels, European physicists of the late 19th and early 20th centuries recognized that the interpretations of their experiments with light and electricity required a different theory to explain why light behaves both as a wave and as a particle. The implications were profound: the usual empirical model of explaining natural phenomena could not account for this duality of matter and non-matter. In a significant way, this has brought back the conversation on mind–body duality.
Neural correlates
The neural correlates of consciousness "are the smallest set of brain mechanisms and events sufficient for some specific conscious feeling, as elemental as the color red or as complex as the sensual, mysterious, and primeval sensation evoked when looking at [a] jungle scene..." Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena.
Neurobiology and neurophilosophy
A science of consciousness must explain the exact relationship between subjective conscious mental states and brain states formed by electrochemical interactions in the body, the so-called hard problem of consciousness. Neurobiology studies the connection scientifically, as do neuropsychology and neuropsychiatry. Neurophilosophy is the interdisciplinary study of neuroscience and philosophy of mind. In this pursuit, neurophilosophers, such as Patricia Churchland, Paul Churchland and Daniel Dennett, have focused primarily on the body rather than the mind. In this context, neuronal correlates may be viewed as causing consciousness, where consciousness can be thought of as an undefined property that depends upon this complex, adaptive, and highly interconnected biological system. However, it's unknown if discovering and characterizing neural correlates may eventually provide a theory of consciousness that can explain the first-person experience of these "systems", and determine whether other systems of equal complexity lack such features.
The massive parallelism of neural networks allows redundant populations of neurons to mediate the same or similar percepts. Nonetheless, it is assumed that every subjective state will have associated neural correlates, which can be manipulated to artificially inhibit or induce the subject's experience of that conscious state. The growing ability of neuroscientists to manipulate neurons using methods from molecular biology in combination with optical tools was achieved by the development of behavioral and organic models that are amenable to large-scale genomic analysis and manipulation. Non-human analysis such as this, in combination with imaging of the human brain, have contributed to a robust and increasingly predictive theoretical framework.
Arousal and content
There are two common but distinct dimensions of the term consciousness, one involving arousal and states of consciousness and the other involving content of consciousness and conscious states. To be conscious of something, the brain must be in a relatively high state of arousal (sometimes called vigilance), whether awake or in REM sleep. Brain arousal level fluctuates in a circadian rhythm but these natural cycles may be influenced by lack of sleep, alcohol and other drugs, physical exertion, etc. Arousal can be measured behaviorally by the signal amplitude required to trigger a given reaction (for example, the sound level that causes a subject to turn and look toward the source). High arousal states involve conscious states that feature specific perceptual content, planning and recollection or even fantasy. Clinicians use scoring systems such as the Glasgow Coma Scale to assess the level of arousal in patients with impaired states of consciousness such as the comatose state, the persistent vegetative state, and the minimally conscious state. Here, "state" refers to different amounts of externalized, physical consciousness: ranging from a total absence in coma, persistent vegetative state and general anesthesia, to a fluctuating, minimally conscious state, such as sleep walking and epileptic seizure.
Many nuclei with distinct chemical signatures in the thalamus, midbrain and pons must function for a subject to be in a sufficient state of brain arousal to experience anything at all. These nuclei therefore belong to the enabling factors for consciousness. Conversely it is likely that the specific content of any particular conscious sensation is mediated by particular neurons in the cortex and their associated satellite structures, including the amygdala, thalamus, claustrum and the basal ganglia.
Theoretical frameworks
A variety of approaches have been proposed. Most are either dualist or monist. Dualism maintains a rigid distinction between the realms of mind and matter. Monism maintains that there is only one unifying reality, substance, or essence, in terms of which everything can be explained.
Each of these categories contains numerous variants. The two main forms of dualism are substance dualism, which holds that the mind is formed of a distinct type of substance not governed by the laws of physics, and property dualism, which holds that mental properties involving conscious experience are fundamental properties, alongside the fundamental properties identified by a completed physics. The three main forms of monism are physicalism, which holds that the mind consists of matter organized in a particular way; idealism, which holds that only thought truly exists and matter is merely a representation of mental processes; and neutral monism, which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them. Psychophysical parallelism is a third possible alternative regarding the relation between mind and body, between interaction (dualism) and one-sided action (monism).
Several philosophical perspectives that have sought to escape the problem by rejecting the mind–body dichotomy have been developed. The historical materialism of Karl Marx and subsequent writers, itself a form of physicalism, held that consciousness was engendered by the material contingencies of one's environment. An explicit rejection of the dichotomy is found in French structuralism, and is a position that generally characterized post-war Continental philosophy.
An ancient model of the mind known as the Five-Aggregate Model, described in the Buddhist teachings, explains the mind as continuously changing sense impressions and mental phenomena. Considering this model, it is possible to understand that it is the constantly changing sense impressions and mental phenomena (i.e., the mind) that experience/analyze all external phenomena in the world as well as all internal phenomena including the body anatomy, the nervous system as well as the organ brain. This conceptualization leads to two levels of analyses: (i) analyses conducted from a third-person perspective on how the brain works, and (ii) analyzing the moment-to-moment manifestation of an individual's mind-stream (analyses conducted from a first-person perspective). Considering the latter, the manifestation of the mind-stream is described as happening in every person all the time, even in a scientist who analyzes various phenomena in the world, including analyzing and hypothesizing about the organ brain.
Dualism
The following is a very brief account of some contributions to the mind–body problem.
Interactionism
The viewpoint of interactionism suggests that the mind and body are two separate substances, but that each can affect the other. This interaction between the mind and body was first put forward by the philosopher René Descartes. Descartes believed that the mind was non-physical and permeated the entire body, but that the mind and body interacted via the pineal gland. This theory has changed throughout the years, and in the 20th century its main adherents were the philosopher of science Karl Popper and the neurophysiologist John Carew Eccles. A more recent and popular version of Interactionism is the viewpoint of emergentism. This perspective states that mental states are a result of the brain states, and that the mental events can then influence the brain, resulting in a two way communication between the mind and body.
The absence of an empirically identifiable meeting point between the non-physical mind (if there is such a thing) and its physical extension (if there is such a thing) has been raised as a criticism of interactionalist dualism. This criticism has led many modern philosophers of mind to maintain that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, particularly in the fields of sociobiology, computer science, evolutionary psychology, and the neurosciences.
Epiphenomenalism
The viewpoint of epiphenomenalism suggests that the physical brain can cause mental events in the mind, but that the mind cannot interact with the brain at all; mental occurrences are simply a side effect of the brain's processes. On this view, while one's body may react when one feels joy, fear, or sadness, the emotion itself does not cause the physical response. Rather, joy, fear, sadness, and all bodily reactions are caused by chemicals and their interaction with the body.
Psychophysical parallelism
The viewpoint of psychophysical parallelism suggests that the mind and body are entirely independent of one another. Furthermore, this viewpoint states that mental and physical stimuli and reactions are experienced simultaneously by both the mind and body; however, there is no interaction or communication between the two.
Double aspectism
Double aspectism is an extension of psychophysical parallelism which also suggests that the mind and body cannot interact, nor can they be separated. Baruch Spinoza and Gustav Fechner were two notable proponents of double aspectism; Fechner later expanded upon it to form the branch of psychophysics in an attempt to prove the relationship of the mind and body.
Pre-established harmony
The viewpoint of pre-established harmony is another offshoot of psychophysical parallelism which suggests that mental events and bodily events are separate and distinct, but that they are both coordinated by an external agent, an example of such an agent could be God. A notable adherent to the idea of pre-established harmony is Gottfried Wilhelm von Leibniz in his theory of Monadology. His explanation of pre-established harmony relied heavily upon God as the external agent who coordinated the mental and bodily events of all things in the beginning.
Gottfried Wilhelm Leibniz's theory of pre-established harmony is a philosophical theory about causation under which every "substance" affects only itself, but all the substances (both bodies and minds) in the world nevertheless seem to causally interact with each other because they have been programmed by God in advance to "harmonize" with each other. Leibniz's term for these substances was "monads", which he described in a popular work (Monadology §7) as "windowless".
The concept of pre-established harmony can be understood by considering an event with both seemingly mental and physical aspects. For example, consider saying 'ouch' after stubbing one's toe. There are two general ways to describe this event: in terms of mental events (where the conscious sensation of pain caused one to say 'ouch') and in terms of physical events (where neural firings in one's toe, carried to the brain, are what caused one to say 'ouch'). The main task of the mind–body problem is figuring out how these mental events (the feeling of pain) and physical events (the nerve firings) relate. Leibniz's pre-established harmony attempts to answer this puzzle, by saying that mental and physical events are not genuinely related in any causal sense, but only seem to interact due to psycho-physical fine-tuning.
Leibniz's theory is best known as a solution to the mind–body problem of how mind can interact with the body. Leibniz rejected the idea of physical bodies affecting each other, and explained all physical causation in this way.
Under pre-established harmony, the preprogramming of each mind must be extremely complex, since only it causes its own thoughts or actions, for as long as it exists. To appear to interact, each substance's "program" must contain a description of either the entire universe, or of how the object behaves at all times during all interactions that appear to occur.
An example:
An apple falls on Alice's head, apparently causing the experience of pain in her mind. In fact, the apple does not cause the pain—the pain is caused by some previous state of Alice's mind. If Alice then seems to shake her hand in anger, it is not actually her mind that causes this, but some previous state of her hand.
Note that if a mind behaves as a windowless monad, there is no need for any other object to exist to create that mind's sense perceptions, leading to a solipsistic universe that consists only of that mind. Leibniz seems to admit this in his Discourse on Metaphysics, section 14. However, he claims that his principle of harmony, according to which God creates the best and most harmonious world possible, dictates that the perceptions (internal states) of each monad "expresses" the world in its entirety, and the world expressed by the monad actually exists. Although Leibniz says that each monad is "windowless", he also claims that it functions as a "mirror" of the entire created universe.
On occasion, Leibniz styled himself as "the author of the system of pre-established harmony".
Immanuel Kant's professor Martin Knutzen regarded pre-established harmony as "the pillow for the lazy mind".
In his sixth Metaphysical Meditation, Descartes talked about a "coordinated disposition of created things set up by God", shortly after having identified "nature in its general aspect" with God himself. His conception of the relationship between God and his normative nature actualized in the existing world recalls both the pre-established harmony of Leibniz and the Deus sive Natura of Baruch Spinoza.
Occasionalism
The viewpoint of occasionalism is another offshoot of psychophysical parallelism; however, the major difference is that the mind and body have some indirect interaction. Occasionalism suggests that the mind and body are separate and distinct, but that they interact through divine intervention. Nicolas Malebranche was one of the main contributors to this idea, using it as a way to address his disagreements with Descartes' view of the mind–body problem. In Malebranche's occasionalism, he viewed thoughts as a wish for the body to move, which was then fulfilled by God causing the body to act.
Historical background
The problem was popularized by René Descartes in the 17th century, resulting in Cartesian dualism, and was also discussed by pre-Aristotelian philosophers, in Avicennian philosophy, and in earlier Asian traditions.
The Buddha
The Buddha (480–400 B.C.E.), founder of Buddhism, described the mind and the body as depending on each other in the way that two sheaves of reeds stand by leaning against one another, and taught that the world consists of mind and matter which work together, interdependently. Buddhist teachings describe the mind as manifesting from moment to moment, one thought moment at a time, like a fast-flowing stream. The components that make up the mind are known as the five aggregates (i.e., material form, feelings, perception, volition, and sensory consciousness), which arise and pass away continuously. The arising and passing of these aggregates in the present moment is described as being influenced by five causal laws: biological laws, psychological laws, physical laws, volitional laws, and universal laws. The Buddhist practice of mindfulness involves attending to this constantly changing mind-stream.
Ultimately, the Buddha's philosophy is that both mind and forms are conditionally arising qualities of an ever-changing universe in which, when nirvāna is attained, all phenomenal experience ceases to exist. According to the anattā doctrine of the Buddha, the conceptual self is a mere mental construct of an individual entity and is basically an impermanent illusion, sustained by form, sensation, perception, thought and consciousness. The Buddha argued that mentally clinging to any views will result in delusion and stress, since, according to the Buddha, a real self (conceptual self, being the basis of standpoints and views) cannot be found when the mind has clarity.
Plato
Plato (429–347 B.C.E.) believed that the material world is a shadow of a higher reality that consists of concepts he called Forms. According to Plato, objects in our everyday world "participate in" these Forms, which confer identity and meaning to material objects. For example, a circle drawn in the sand would be a circle only because it participates in the concept of an ideal circle that exists somewhere in the world of Forms. He argued that, as the body is from the material world, the soul is from the world of Forms and is thus immortal. He believed the soul was temporarily united with the body and would only be separated at death, when, if pure, it would return to the world of Forms; otherwise, reincarnation follows. Since the soul does not exist in time and space, as the body does, it can access universal truths. For Plato, ideas (or Forms) are the true reality, and are experienced by the soul. The body, by contrast, is empty in that it cannot access the abstract reality of the world; it can only experience shadows. This is determined by Plato's essentially rationalistic epistemology.
Aristotle
For Aristotle (384–322 BC), mind is a faculty of the soul.
In the end, Aristotle saw the relation between soul and body as uncomplicated, in the same way that it is uncomplicated that a cubical shape is a property of a toy building block. The soul is a property exhibited by the body, one among many. Moreover, Aristotle proposed that when the body perishes, so does the soul, just as the shape of a building block disappears with the destruction of the block.
Medieval Aristotelianism
Working in the Aristotelian-influenced tradition of Thomism, Thomas Aquinas (1225–1274), like Aristotle, believed that the mind and the body are one, like a seal and wax; therefore, it is pointless to ask whether or not they are one. However, referring to "mind" as "the soul", he asserted that the soul persists after the death of the body in spite of their unity, calling the soul "this particular thing". Since his view was primarily theological rather than philosophical, it is impossible to fit it neatly within the categories of either physicalism or dualism.
Influences of Eastern monotheistic religions
In the religious philosophy of Eastern monotheism, dualism denotes a binary opposition of an idea that contains two essential parts. The first formal concept of a "mind–body" split may be found in the divinity–secularity dualism of the ancient Persian religion of Zoroastrianism, around the mid-fifth century BC. Gnosticism is a modern name for a variety of ancient dualistic ideas inspired by Judaism that were popular in the first and second centuries AD. These ideas later seem to have been incorporated into Galen's "tripartite soul", which led into both the Christian sentiments expressed in the later Augustinian theodicy and Avicenna's Platonism in Islamic philosophy.
Descartes
René Descartes (1596–1650) believed that the mind exerted control over the brain via the pineal gland.
His posited relation between mind and body is called Cartesian dualism or substance dualism. He held that mind was distinct from matter, but could influence matter. How such an interaction could be exerted remains a contentious issue.
Kant
For Immanuel Kant (1724–1804), beyond mind and matter there exists a world of a priori forms, which are seen as necessary preconditions for understanding. Some of these forms, space and time being examples, today seem to be pre-programmed in the brain.
Kant views the mind–body interaction as taking place through forces that may be of different kinds for mind and body.
Huxley
For Thomas Henry Huxley (1825–1895), the conscious mind was a by-product of the brain that has no influence upon it, a so-called epiphenomenon.
Whitehead
Alfred North Whitehead (1861–1947) advocated a sophisticated form of panpsychism that David Ray Griffin has called panexperientialism.
Popper
For Karl Popper (1902–1994), there are three aspects of the mind–body problem: the worlds of matter, of mind, and of the creations of the mind, such as mathematics. In his view, the third-world creations of the mind could be interpreted by the second-world mind and used to affect the first world of matter. Radio provides an example: Maxwell's electromagnetic theory, a third-world creation, was interpreted by second-world minds to suggest modifications of the external first world.
Ryle
With his 1949 book, The Concept of Mind, Gilbert Ryle "was seen to have put the final nail in the coffin of Cartesian dualism".
In the chapter "Descartes' Myth," Ryle introduces "the dogma of the Ghost in the machine" to describe the philosophical concept of the mind as an entity separate from the body:I hope to prove that it is entirely false, and false not in detail but in principle. It is not merely an assemblage of particular mistakes. It is one big mistake and a mistake of a special kind. It is, namely, a category mistake.
Searle
For John Searle (b. 1932), the mind–body problem is a false dichotomy; that is, mind is a perfectly ordinary aspect of the brain. Searle proposed biological naturalism in 1980.
See also
Binding problem
Bodymind
Chinese room
Cognitive closure (philosophy)
Cognitive neuroscience
Connectionism
Consciousness in animals
Downward causation
Descartes' Error
Embodied cognition
Existentialism
Explanatory gap
Free will
Ideasthesia
Namarupa (Buddhist concept)
Neuroscience of free will
Philosophical zombie
Philosophy of artificial intelligence
Pluralism
Problem of mental causation
Problem of other minds
Qualia
Reductionism
Sacred–profane dichotomy
Sentience
Strange loop (self-reflective thoughts)
The Mind's I (book on the subject)
Turing test
Vertiginous question
William H. Poteat
External links
Plato's Middle Period Metaphysics and Epistemology - The Stanford Encyclopedia of Philosophy
The Mind/Body Problem, BBC Radio 4 discussion with Anthony Grayling, Julian Baggini & Sue James (In Our Time, Jan. 13, 2005)
Social media and the effects on American adolescents

Introduction
Social media has grown in popularity, and many people around the world now use it. People use social media to share information, ideas, personal messages, and other content (such as videos). Around 95% of young people between the ages of 13 and 17 use at least one social media platform, making it a major influence on young adolescents.
Research
Social media may positively affect adolescents by promoting a feeling of inclusion, providing greater access to more friends, and enhancing romantic relationships. Social media allows people to communicate with one another no matter the distance between them. Some adolescents with social and emotional issues feel more included through social media and online activities. Social media can give people a sense of belonging, which can lead to an increase in identity development. Adolescents who post pictures on social media can look back on their memories, and their positive emotions can be related to a sense of their true identity. Additionally, social media can provide a way to communicate with friends and family when alone.
Adolescents who use social media tend to be more outgoing and interact more with others online and in person. According to Newport Academy, teens who spend more time on non-screen activities, such as sports, exercise, in-person social interaction, or any other in-person activities, are less likely to report mental health issues such as anxiety or depression. Social media gives adolescents in the United States the ability to connect with people from other countries. Being involved in social media typically improves communication skills, social connections, and technical skills. Furthermore, adolescents who are students can use social media to seek academic help. Appropriate usage of social media has created favorable academic environments for both students and teaching faculty, offering potential benefits in the learning process.
According to research conducted by Dr. John Gilmour and his coauthors, social media exposure, specifically Facebook, has allowed the general population to have positive interactions and gain social support from their family and friends, which in turn benefits their overall well-being. Social support is defined as the extent to which an individual feels a sense of value and belongingness to a social group. Although several studies have found that general Facebook use has a negative impact on mental health, Facebook use has a variety of positive mental health outcomes when used to seek and provide social support. Gilmour and his research team used academic databases and located 27 articles related to individuals’ use of Facebook as a mechanism for social support. The articles did not consider adolescents and adults separately, but rather focused on the general population of Facebook users.
After analyzing all 27 articles, the researchers concluded that the more active a person is on Facebook, the greater the opportunities for receiving social support. Furthermore, higher levels of Facebook-based social support predicted greater positive mental health outcomes. These outcomes include, but are not limited to, a decrease in depression, anxiety, and loneliness, as well as an increase in general psychological well-being.
Focusing on adolescents, J. Pouwels and her coauthors conducted a 3-week study to determine whether social media has a positive impact on adolescents' close friendships, characterized by supportiveness, responsiveness, and accessibility. A total of 387 adolescents who were active on Instagram, WhatsApp, and Snapchat completed a 2-minute survey six times per day. They reported the amount of time they spent on these three social media platforms, as well as their momentary experiences of friendship closeness. Findings suggested that the more time adolescents spent on Instagram and WhatsApp, the higher the degree of friendship closeness they reported on the surveys.
Similar results were found in a study conducted by Dr. Lauren Shapiro and Dr. Gayla Margolin. Their study found that social media has a positive impact on the development of adolescents’ social relationships. The researchers administered self-report questionnaires to gather these findings. Their results suggest that social networking sites make it easier for adolescents to communicate and share feelings and experiences because it is less threatening than face-to-face interactions. In addition, online communications were found to lead to closer, high-quality friendships among adolescents.
The same study found that social networking sites can also foster identity development in adolescents. Specifically, social media provides many opportunities for self-disclosure, which researchers believe plays a role in identity development. Adolescents who participated in the study reported being able to learn more about themselves through using social media.
Although social media offers benefits to teenagers, it also has drawbacks. Harmful content may be even riskier for teens who already have a mental health condition. Being exposed to discrimination, hate, or cyberbullying on social media can also raise the risk of anxiety or depression. What teens share about themselves on social media also matters. With the teenage brain, it is common to make a choice before thinking it through, so teens might post something when they are angry or upset and regret it later. This is known as stress posting.
How addiction to technology begins
According to Living Skills in Schools,
Addiction is based on dopamine dependence. Dopamine is the brain's chemical signal for pleasure, excitement, and motivation. The addiction process begins through the hacking of the dopamine system by an outside source. The dopamine becomes spiked, dysregulated, and the brain is flooded with this chemical.
This outside source of technology addiction can impact dopamine receptors in the long term, affecting attention span, critical thinking, and problem solving.
In the article, "Unveiling the Dark Side of Social Networking Sites: Personal and Work-related Consequences of Social Networking Site Addiction" by Murad Moqbel and Ned Kock, they expand on Social Networking Sites (SNS), and the negative effects it causes among people have excessive use. Moqbel; et al. say that: "Although SNS addiction is not formally recognized as a diagnosis, it can be broadly defined as a psychological dependence on the use of SNSs that interferes with other important activities and yields negative consequences". The article discusses how excessive use of SNS causes people to perform poorly at work and how it causes distractions, like not doing their jobs correctly. This article is based on a survey of 276 corporate employees. The authors explain that: "However, excessive use of these SNSs may also promote negative outcomes, such as addiction, distraction, reduced positive emotions, low performance, and poor health". SNS can have positive effects on work such as communication, but excessive use makes it affect you at work and may cause different mental disorders such as anxiety and depression. Also, in this article, it is discussed who is more likely to suffer from SNS addiction and the reason. Anyone can suffer from SNS addiction, however, there is a specific group that is more vulnerable to suffering from SNS. The authors explain that: "One of the earliest studies on SNS addiction reported that young people are more vulnerable to falling into SNS addiction". Young people use SNS more because they want to see what other friends are doing in addition to following the public lives of celebrities, for example. The problem gets worse when they become obsessed and it becomes a competition to show "who has a better life". That is when depression begins and young people compare themselves to other people's lives. This is called social cognition, and that is when a person wants to imitate or follow what they learn from seeing another person. The study concludes by showing that, of the 276 people who participated in the survey, the majority suffer from depression and mental disorders due to excessive use of SNS since they were young, and that it affects them in their daily life and work. SNS causes them to have distractions and perform poorly. If they had positive emotions, they could have better performances and better health. Sadly, the results of this survey show negative effects such as bad performances at work due to distractions and depression.
Research on the negative impacts of excessive social media use by adolescents
However, many research studies have also analyzed the negative effects of social media on adolescents' mental health. In the same study, Dr. Shapiro and Dr. Margolin found that social networking sites, such as Facebook, make it easier for adolescents to compare themselves to their peers. Based on the results of this research study, social comparison can have a strong negative impact on adolescents' self-esteem. Self-report surveys revealed that the more time adolescents spent on Facebook, the more they believed others were better off or happier than themselves.
Along with accomplishments and happiness, physical attractiveness is also a significant aspect of social comparison. Preadolescence is a period when children start to become exposed to social media and is also a period when they start to develop body image concerns and depression. Since individuals posting on social media tend to only present the best version of themselves online, research has shown that this can cause adolescents to perceive others as more attractive than themselves. In the study administered by Dr. Shapiro and Dr. Margolin, female adolescents reported having a more negative body image after looking at beautiful photos of other women versus looking at less attractive photos on social media. While online, teens can be exposed to content revolving around self-harm, body shaming, bullying, unrealistic beauty standards and eating disorders.
Young adults also seem to experience heightened symptoms of anxiety from attempting to keep up with social media's warped beauty standards. Hawes et al. (2020) found that increased social media usage, along with trying to stay up to date with beauty and fashion trends, could be damaging to those who already struggle with body image issues. This study examined the relationship between social media use and maladjustment, focusing on appearance-related content and symptoms of anxiety. It tested two hypotheses: first, that appearance-related (AR) social media preoccupation would correlate with more symptoms of depression and social anxiety; and second, that AR social media preoccupation would intensify the association between social media use and appearance anxiety. The sample included 763 adolescents of mixed genders, ages 12–17, as well as college students ages 16–25. Participants completed surveys about social media use and symptoms of general anxiety, appearance anxiety, and depression. The researchers found that social media use can be associated with worse emotional adjustment in adolescents and young adults, and that appearance-related social media preoccupation elevated symptoms of appearance anxiety.
Further investigation has suggested that spending an excessive amount of time on social media can lead to depressive symptoms, which in turn may increase the risk for social isolation or even suicidal ideation. In a recent survey of teens, it was discovered that 35% of teens use at least one of five social media platforms multiple times throughout the day. Many policymakers have expressed concerns regarding the potential negative impact of social media on mental health because of its relation to suicidal thoughts and ideation. A study conducted by Dr. Chloe Berryman and her coauthors looked into the phenomenon called "vague booking," which refers to individuals intentionally wording their social media posts in a way that they believe will obtain concern from others. These posts may even function as a cry for help. This study found that young adults who partook in vague booking and relied on social media as their emotional outlet reported greater loneliness and suicidal thoughts than those who were not vague booking.
Social comparison theory examines how people establish their personal value by comparing themselves to others. These social comparisons, and the related feelings of jealousy, when made on social media platforms, can lead to the development of symptoms of depression in users. Depression is also common among children and adolescents who have been cyberbullied. According to the Youth Risk Behavior Surveillance – United States, 2015, 15.5% of students nationwide had been electronically bullied, including being bullied through e-mail, chat rooms, instant messaging, websites, or texting, during the 12 months before the survey. Using 7 or more social media platforms has been correlated with a higher risk of anxiety and depression in adolescents.
Social learning theory is another important factor in how teens react to media. Bandura's Bobo doll experiment demonstrated how children learn from their social environments. In the experiment, children who observed adults being exceedingly kind to the Bobo doll treated it nicely when left alone in the room with it; in contrast, children who observed adults punching and hurting the Bobo doll did the same when alone with it. As teens learn from their peers or idols online, they tend to duplicate that behavior just as the children did with the Bobo dolls in Bandura's experiment. If teens view people with a social media platform demonstrating inappropriate behaviors, they may learn from them and recreate the behavior.
Social media can significantly influence body image concerns in female adolescents. Young women who are easily influenced by the images of others on social media may hold themselves to an unrealistic standard for their bodies because of the prevalence of digital image alteration. Social media can be a gateway to body dysmorphic disorder. Dana Johns, MD, a plastic surgeon at the University of Utah Health, says, "'Selfie' or 'Snapchat' dysmorphia is essentially the new age social media upgrade to a long-standing disorder." According to the APA, these unrealistic beauty standards are detrimental to the developing mind and can cause serious mental health issues.
Engaging with social media platforms in the two hours before falling asleep can affect sleep quality, and a longer duration of digital media use is associated with reduced total sleep time. The phenomenon of "Facebook depression" is a condition that surfaces when young adults with heavy Facebook use begin to manifest actual symptoms of depression. Youths who frequently use social media increase their risk of depression by 27 percent, while those who dedicate themselves to outdoor activities do not face the same increase in risk. Sleep deprivation may also be a common factor among teens. According to the Mayo Clinic, a 2016 study of more than 450 teens found that greater social media use, nighttime social media use, and emotional investment in social media, such as feeling upset when prevented from logging on, were each linked with worse sleep quality that could increase levels of anxiety and depression.
In the article, "Adolescent Social Media Use and Mental Health from Adolescent and Parent Perspectives" by Christopher T. Barry, Chloe L. Sidoti, Shanelle M. Briggs, Shari R. Reiter, and Rebecca A. Lindsey, there is a sample survey conducted with 226 participants (113 parent-adolescent days) from throughout the United States, with adolescents (55 males, 51 females, 7 unreported) ranging from ages 14 to 17. In this study, both adolescents and parents provide social media use of adolescents' social networks. The hypothesis question of this survey is: "Are parent-reported symptoms of inattention, hyperactivity/impulsivity, conduct problems (i.e. symptoms of Oppositional Defiant Disorder [ODD] and Conduct Disorder [CD]), depression, and anxiety related to the reported number of adolescents' social media accounts and the frequency with which adolescents report checking their social media accounts?" Something very important that the authors of this survey say is that: "The present study represents an area of ever-growing importance, as approximately 24% of U.S. teens report being online ‘almost constantly’ with much of that time being spent on social media applications". Nowadays, young people spend a large part of their day on social networks in different applications. The study concluded by saying that due to young people's excessive use of social media, they have high levels of anxiety, stress, fear of missing out, and hyperactivity. The more time they spend on social media, the higher the levels. Furthermore, due to time on social media, teenagers tend to feel more lonely and sad. This may be due to comparing themselves with other people on social networks, or cyberbullying. The authors explain that: "These latter findings are particularly noteworthy in that they cannot be explained by shared source variance, as adolescent-reported social media activity was associated with externalizing and internalizing symptoms as reported by parents" The results show that these mental problems are individual, not hereditary. The authors conclude that there should be more limitations to protect young people from the excessive use of social media since, due to its excessive use, the levels of mental problems or depressive symptoms increase. Parents should guide young people on this complicated topic, and have limitations on the use of social media to prevent young people from having high levels of depression or other mental problems.
In the article, "Associations Between Time Spent Using Social Media and Internalizing and Externalizing Problems Among US Youth" by Kira E. Riehm, Kenneth A. Feder, Kayla N. Tormohlen, et al., they report the results of a cohort study of 6,595 US adolescents on the use of social networks. They divided the study into 3 waves; waves 1 (September 12, 2013, to December 14, 2014), 2 (October 23, 2014, to October 30, 2015), and 3 (October 18, 2015, to October 23, 2016). Young people today are using social networks intensely and much more frequently, causing depression and anxiety among them. The question for the Self-reported time spent on social media during a typical day was divided by (none, ≤30 minutes, >30 minutes to ≤3 hours, >3 hours to ≤6 hours, and >6 hours) during the waves. The study shows that “Adolescents who spend more than 3 hours per day on social media may be at heightened risk for mental health problems, including internalizing problems”. By using social media excessively they begin to compare themselves and create complexities and insecurities. Adolescents who use social media for more than 3 hours a day could suffer from insomnia or other mental disorders such as low self-esteem. The study shows that young people aged 12–15 tend to use their phones between 3 and 6 hours a day, although many of them spend the entire 6 hours. The authors believe that the use of social media could be limited and there could be more guidance to young people on this topic, as well as more research should be done on limiting social media.
Policymaking
Although a large aspect of policymaking is creating or changing laws, this is not always the case. Policymaking can also include other types of established standards, for example, parents' rules or policies restricting their child's social media exposure. Since social media is easily accessible to nearly everyone, there are few laws regarding adolescents' exposure to social media. However, there is substantial evidence that parents' policies regarding the time their child spends on social media have an impact on their child's mental health.
One particular study, conducted by Dr. Jasmine Fardouly and her coauthors, involved a sample of 528 preadolescent social media users between the ages of 10 and 12 and one of their parents. Both children and their parents completed online surveys. Some of the parents involved in the study enforced social media policies for their children, such as setting rules that limit the amount of time their child spent on social media. Results from this study showed that preadolescents whose parents had greater control over their child's time on social media reported better overall mental health. The researchers found that reducing the amount of time children spent on social media resulted in less exposure to content harmful to their emotional health. More parental control over time spent on social media was also found to be associated with preadolescents making fewer appearance comparisons online. The authors of this study concluded that fewer social media appearance comparisons were associated with higher adolescent appearance satisfaction and life satisfaction, as well as lower depressive symptoms.
The article, "No More FOMO: Limiting Social Media Decreases Loneliness and Depression" by Melissa G. Hunt, Rachel Marx, Courtney Lipson, and Jordyn Young, reports a research study of 143 undegraded students at the University of Pennsylvania who were randomly assigned to limit Facebook, Instagram and Snapchat use to 10 minutes a day per app. The results are incredibly positive. The authors explain that: "As of March 2018, 68% of adults in the United States had a Facebook account, and 75% of these people reported using Facebook on a daily basis. Furthermore, 78% of young adults (ages 18– 24) used Snapchat, while 71% of young adults used Instagram" Here we can see a large number of young people between 18 and 24 years old use social networks. The survey also served to see the levels of anxiety, depression, and loneliness of the participants. The authors explain that: "Both loneliness and depressive symptoms declined in the experimental group". Studies show that participants lowered their levels of depression and anxiety due to limiting their time on social media. One of the excuses that young people use is that they use social media to connect and talk to their family or friends, but the authors explain that: "It is ironic, but perhaps not surprising, that reducing social media, which promised to help us connect with others, actually helps people feel less lonely and depressed" The authors conclude by saying that this survey was a success by limiting social media use to only 30 minutes a day. The level of depression and loneliness in the participants decreased and they were able to communicate better in person, something they had not done at all before. This article because it proves my argument that if there were a social media limit, people's self-esteem would improve.
In June 2024, Surgeon General Vivek Murthy called for social media platforms to contain a warning about the impact they have on the mental health of young people.
Conclusion
In conclusion, the excessive use of social networks is a social problem that affects everyone, but young Americans most of all. Without measures or laws, this problem will continue to grow. One step that would help is to set limits on time spent on social media, which individuals can do themselves; over time, such efforts may encourage governments to adopt measures or laws limiting time on social media. Excessive use of social media is associated with mental health problems and elevated levels of depression, loneliness, anxiety, and stress among young people. These effects reach into daily life, limiting ordinary activities and harming academic or work performance. Educating young people and raising awareness about this problem can help protect and strengthen their mental health.
See also
Instagram's impact on people
Temperament

In psychology, temperament broadly refers to consistent individual differences in behavior that are biologically based and relatively independent of learning, systems of values, and attitudes.
Some researchers point to the association of temperament with formal dynamic features of behavior, such as energetic aspects, plasticity, sensitivity to specific reinforcers, and emotionality. Temperament traits (such as neuroticism, sociability, impulsivity, etc.) are distinct patterns in behavior throughout a lifetime, but they are most noticeable and most studied in children. Babies are typically described by temperament, but longitudinal research in the 1920s began to establish temperament as something stable across the lifespan.
Definition
Temperament has been defined as "the constellation of inborn traits that determine a child's unique behavioral style and the way he or she experiences and reacts to the world."
Classification schemes
Many classification schemes for temperament have been developed, and there is no consensus. The Latin word temperamentum means 'mixture'.
Temperament vs personality
Some commentators see temperament as one factor underlying personality.
Main models
Four temperaments model
Historically, in the second century AD, the physician Galen described four classical temperaments (melancholic, phlegmatic, sanguine and choleric), corresponding to the four humors or bodily fluids. This historical concept was explored by philosophers, psychologists, psychiatrists and psycho-physiologists from the earliest days of psychological science, with theories proposed by Immanuel Kant, Hermann Lotze, Ivan Pavlov, Carl Jung, and Gerardus Heymans, among others. In more recent history, Rudolf Steiner emphasized the importance of the four classical temperaments in elementary education, the time when he believed the influence of temperament on the personality to be at its strongest. Neither Galen nor Steiner is generally applied to the contemporary study of temperament in the approaches of modern medicine or contemporary psychology.
Rusalov-Trofimova neurophysiological model of temperament
This model is based on the longest tradition of neurophysiological experiments, which started within the investigations of types and properties of nervous systems by Ivan Pavlov's school. This experimental tradition began with studies of animals in the 1910s–20s but expanded its methodology to humans from the 1930s, and especially from the 1960s, onward, including EEG, caffeine tests, evoked potentials, behavioral tasks, and other psychophysiological methods.
The latest version of this model is based on the "activity-specific" approach in temperament research, on Alexander Luria's research in clinical neurophysiology, and on the neurochemical model Functional Ensemble of Temperament. At present the model is associated with the Structure of Temperament Questionnaire and has 12 scales:
Endurance-related scales
Motor-physical Endurance: the ability of an individual to sustain prolonged physical activity using well-defined behavioral elements
Social-verbal Endurance (sociability): the ability of an individual to sustain prolonged social-verbal activities using well-defined behavioral elements.
Mental Endurance, or Attention: the ability to stay focused on selected features of objects with suppression of behavioral reactivity to other features.
Scales related to speed of integration of behavior
Motor-physical Tempo: speed of integration of an action in physical manipulations with objects with well-defined scripts of actions
Plasticity: the ability to adapt quickly to changes in situations, to change the program of action, and to shift between different tasks
Social-verbal Tempo: the preferred speed of speech and ability to understand fast speech on well-known topics, reading and sorting of known verbal material
Scales related to type of orientation of behavior
Sensation Seeking (SS): behavioral orientation to well-defined and existing sensational objects and events, underestimation of outcomes of risky behavior.
Empathy: behavioral orientation to the emotional states/needs of others (ranging from empathic deafness in autism and schizophrenia disorders to social dependency).
Sensitivity to Probabilities: the drive to gather information about uniqueness, frequency and values of objects/events, to differentiate their specific features, to project these features in future actions.
Emotionality scales
Satisfaction (Self-Confidence): A disposition to be satisfied with the current state of events, a sense of security, and confidence in the future. In spite of optimism about the outcomes of his or her activities, the respondent might be negligent about details.
Impulsivity: Initiation of actions based on immediate emotional reactivity rather than by planning or rational reasoning.
Neuroticism: A tendency to avoid novelty, unpredictable situations and uncertainty. Preference of well-known settings and people over unknown ones and a need for approval and feedback from people around.
Kagan's research
Jerome Kagan and his colleagues have concentrated empirical research on a temperamental category termed "reactivity." Four-month-old infants who became "motorically aroused and distressed" in response to presentations of novel stimuli were termed highly reactive. Those who remained "motorically relaxed and did not cry or fret to the same set of unfamiliar events" were termed low reactive. These high and low reactive infants were tested again at 14 and 21 months "in a variety of unfamiliar laboratory situations." Highly reactive infants were predominantly characterized by a profile of high fear to unfamiliar events, which Kagan termed inhibited. In contrast, low reactive children were minimally fearful of novel situations and were characterized by an uninhibited profile (Kagan). However, when observed again at age 4.5, only a modest proportion of children maintained their expected profile, due to mediating factors such as intervening family experiences. Those who remained highly inhibited or uninhibited after age 4.5 were at higher risk for developing anxiety and conduct disorders, respectively.
Kagan also used two additional classifications, one for infants who were inactive but cried frequently (distressed) and one for those who showed vigorous activity but little crying (aroused). Followed to age 14–17 years, these groups of children showed differing outcomes, including some differences in central nervous system activity. Teenagers who had been classed as high reactives when they were babies were more likely to be "subdued in unfamiliar situations, to report a dour mood and anxiety over the future, [and] to be more religious."
Thomas and Chess's nine temperament characteristics
Alexander Thomas, Stella Chess, Herbert G. Birch, Margaret Hertzig and Sam Korn began the classic New York Longitudinal Study in the early 1950s regarding infant temperament (Thomas, Chess & Birch, 1968). The study focused on how temperamental qualities influence adjustment throughout life. Chess, Thomas et al. rated young infants on nine temperament characteristics, each of which, by itself or in connection with another, affects how well a child fits in at school, with friends, and at home. Behaviors for each one of these traits are on a continuum. If a child leans towards the high or low end of the scale, it could be a cause for concern. The specific behaviors are: activity level, regularity of sleeping and eating patterns, initial reaction, adaptability, intensity of emotion, mood, distractibility, persistence and attention span, and sensory sensitivity. Redundancies between the categories have been found, and a reduced list is normally used by psychologists today.
Research by Thomas and Chess used nine temperament traits in children, based on a classification scheme developed by Dr. Herbert Birch (the traits are described under "Observed traits" below). Thomas, Chess, Birch, Hertzig and Korn found that many babies could be categorized into one of three groups: easy, difficult, and slow-to-warm-up (Thomas & Chess 1977). Not all children can be placed in one of these groups. Approximately 65% of children fit one of the patterns: 40% fit the easy pattern, 10% fell into the difficult pattern, and 15% were slow to warm up. Each category has its own strengths and weaknesses, and one is not superior to another.
Thomas, Chess, Birch, Hertzig and Korn showed that easy babies readily adapt to new experiences, generally display positive moods and emotions and also have normal eating and sleeping patterns. Difficult babies tend to be very emotional, irritable and fussy, and cry a lot. They also tend to have irregular eating and sleeping patterns. Slow-to-warm-up babies have a low activity level, and tend to withdraw from new situations and people. They are slow to adapt to new experiences, but accept them after repeated exposure.
Thomas, Chess, Birch, Hertzig and Korn found that these broad patterns of temperamental qualities are remarkably stable through childhood. These traits are also found in children across all cultures.
Thomas and Chess also studied temperament and environment. One sample consisted of white middle-class families with high educational status and the other was of Puerto Rican working-class families. They found several differences. Among those were:
Parents of middle-class children were more likely to report behavior problems before the age of nine, and the children had sleep problems. This may be because children start preschool between the ages of three and four. Puerto Rican children under the age of five rarely showed signs of sleep problems; however, sleep problems became more common at the age of six.
Middle-class parents also placed great stress on the child's early development, believing that problems in early ages were indicative of later problems in psychological development, whereas Puerto Rican parents felt their children would outgrow any problems.
At the age of nine, the report of new problems dropped for middle class children but they rose in Puerto Rican children, possibly due to the demands of school.
Observed traits:
Activity: refers to the child's physical energy. Is the child constantly moving, or does the child have a relaxed approach? A high-energy child may have difficulty sitting still in class, whereas a child with low energy can tolerate a very structured environment. The former may use gross motor skills like running and jumping more frequently. Conversely, a child with a lower activity level may rely more on fine motor skills, such as drawing and putting puzzles together. This trait can also refer to mental activity, such as deep thinking or reading, activities which become more significant as the person matures.
Regularity: also known as rhythmicity, refers to the level of predictability in a child's biological functions, such as waking, becoming tired, hunger, and bowel movements. Does the child have a routine in eating and sleeping habits, or are these events more random? For example, a child with a high regularity rating may want to eat at 2 p.m. every day, whereas a child lower on the regularity scale may eat at sporadic times throughout the day.
Initial reaction: also known as approach or withdrawal. This refers to how the child responds (whether positively or negatively) to new people or environments. Does the child approach people or things in the environment without hesitation, or does the child shy away? A bold child tends to approach things quickly, as if without thinking, whereas a cautious child typically prefers to watch for a while before engaging in new experiences.
Adaptability: refers to how long it takes the child to adjust to change over time (as opposed to an initial reaction). Does the child adjust to the changes in their environment easily, or is the child resistant? A child who adjusts easily may be quick to settle into a new routine, whereas a resistant child may take a long time to adjust to the situation.
Intensity: refers to the energy level of a positive or negative response. Does the child react intensely to a situation, or does the child respond in a calm and quiet manner? A more intense child may jump up and down screaming with excitement, whereas a mild-mannered child may smile or show no emotion.
Mood: refers to the child's general tendency towards a happy or unhappy demeanor. All children have a variety of emotions and reactions, such as cheerful and stormy, happy and unhappy. Yet each child biologically tends to have a generally positive or negative outlook. A baby who frequently smiles and coos could be considered a cheerful baby, whereas a baby who frequently cries or fusses might be considered a stormy baby.
Distractibility: refers to the child's tendency to be sidetracked by other things going on around them. Does the child get easily distracted by what is happening in the environment, or can the child concentrate despite the interruptions? An easily distracted child is engaged by external events and has difficulty returning to the task at hand, whereas a rarely distracted child stays focused and completes the task at hand.
Persistence and attention span: refer to the child's length of time on a task and ability to stay with the task through frustrations—whether the child stays with an activity for a long period of time or loses interest quickly.
Sensitivity: refers to how easily a child is disturbed by changes in the environment. This is also called sensory threshold or threshold of responsiveness. Is the child bothered by external stimuli like noises, textures, or lights, or does the child seem to ignore them? A sensitive child may lose focus when a door slams, whereas a child less sensitive to external noises will be able to maintain focus.
Mary K. Rothbart's three dimensions of temperament
Mary K. Rothbart views temperament as the individual personality differences in infants and young children that are present prior to the development of higher cognitive and social aspects of personality. Rothbart further defines temperament as individual differences in reactivity and self-regulation that manifest in the domains of emotion, activity and attention. Moving away from classifying infants into categories, Mary Rothbart identified three underlying dimensions of temperament. Using factor analysis on data from 3- to 12-month-old children, three broad factors emerged and were labeled surgency/extraversion, negative affect, and effortful control.
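Rothbart's dimensions were extracted with factor analysis, a statistical technique that compresses many correlated item scores into a few latent factors. As a minimal, hypothetical sketch of the general technique (using scikit-learn in Python on invented stand-in data; the item count, sample size, and data are assumptions for illustration, not Rothbart's actual instrument or dataset):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    # Stand-in data: 200 infants rated on 9 hypothetical questionnaire items.
    ratings = rng.normal(size=(200, 9))

    # Extract three latent factors, echoing Rothbart's three broad dimensions.
    fa = FactorAnalysis(n_components=3, random_state=0)
    scores = fa.fit_transform(ratings)  # per-infant factor scores, shape (200, 3)
    loadings = fa.components_           # item loadings per factor, shape (3, 9)

    print(scores.shape, loadings.shape)

In practice, researchers inspect the loadings to see which items cluster together, then name each factor (e.g., surgency/extraversion) after the items that load most strongly on it.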
Surgency/extraversion
Surgency/extraversion includes positive anticipation, impulsivity, increased levels of activity and a desire for sensation seeking. This factor reflects the degree to which a child is generally happy, active, and enjoys vocalizing and seeking stimulation. Increased levels of smiling and laughter are observed in babies high in surgency/extraversion. 10- to 11-year-olds with higher levels of surgency/extraversion are more likely to develop externalizing problems like acting out; however, they are less likely to develop internalizing problems such as shyness and low self-esteem.
Negative affect
Negative affect includes fear, frustration, sadness, discomfort, and anger. This factor reflects the degree to which a child is shy and not easily calmed. Anger and frustration are seen as early as 2 to 3 months of age. Together, anger and frustration predict externalizing and internalizing difficulties. Anger alone is later related to externalizing problems, while fear is associated with internalizing difficulties. Fear, as evidenced by behavioral inhibition, is seen as early as 7–10 months of age, and later predicts children's fearfulness and lower levels of aggression.
Effortful control
Effortful control includes the focusing and shifting of attention, inhibitory control, perceptual sensitivity, and a low threshold for pleasure. This factor reflects the degree to which a child can focus attention, is not easily distracted, can restrain a dominant response in order to execute a non-dominant response, and can employ planning. When high in effortful control, six- to seven-year-olds tend to be more empathetic and lower in aggressiveness. Higher levels of effortful control at age seven also predict lower externalizing problems at age 11 years. Children high on negative affect show decreased internalizing and externalizing problems when they are also high on effortful control. Rothbart suggests that effortful control is dependent on the development of executive attention skills in the early years. In turn, executive attention skills allow greater self-control over reactive tendencies. Effortful control shows stability from infancy into the school years and also predicts conscience.
Others
Solomon Diamond described temperaments based upon characteristics found in the animal world: fearfulness, aggressiveness, affiliativeness, and impulsiveness. His work has been carried forward by Arnold Buss and Robert Plomin, who developed two measures of temperament: The Colorado Child Temperament Inventory, which includes aspects of Thomas and Chess's schema, and the EAS Survey for Children.
H. Hill Goldsmith and Joseph Campos used emotional characteristics to define temperament, originally analyzing five emotional qualities: motor activity, anger, fearfulness, pleasure/joy, and interest/persistence, but later expanding to include other emotions. They developed several measures of temperament: Lab-TAB and TBAQ.
Other temperament systems include those based upon theories of adult temperament (e.g., Gray and Martin's Temperament Assessment Battery for Children) or adult personality (e.g., the Big Five personality traits).
Causal and correlating factors
Biological basis for temperament
Scientists seeking evidence of a biological basis of personality have examined the relationship between temperament and neurotransmitter systems and character (defined in this context as developmental aspects of personality). Temperament is hypothesized to be associated with biological factors, but these have proven to be complex and diverse, and biological correlations have proven hard to confirm.
Temperament vs. psychiatric disorders
Several psychiatrists and differential psychologists have suggested that temperament and mental illness represent varying degrees along the same continuum of neurotransmitter imbalances in neurophysiological systems of behavioral regulation.
In fact, the original four types of temperament (choleric, melancholic, phlegmatic and sanguine) suggested by Hippocrates and Galen resemble mild forms of types of psychiatric disorders described in modern classifications. Moreover, Hippocrates-Galen hypothesis of chemical imbalances as factors of consistent individual differences has also been validated by research in neurochemistry and psychopharmacology, though modern studies attribute this to different compounds. Many studies have examined the relationships between temperament traits (such as impulsivity, sensation seeking, neuroticism, endurance, plasticity, sociability or extraversion) and various neurotransmitter and hormonal systems (i.e., the very same systems implicated in mental disorders).
Even though temperament and psychiatric disorders can be presented as, correspondingly, weak and strong imbalances within the same regulatory systems, it is incorrect to say that temperament is a weak degree of these disorders. Temperament might be a disposition to develop a mental disorder, but it should not be treated as a guaranteed marker for disorders.
Family life
Influences
Most experts agree that temperament has a genetic and biological basis, although environmental factors and maturation modify the ways a child's personality is expressed. The term "goodness of fit" refers to the match or mismatch between temperament and other personal characteristics and the specific features of the environment. Differences of temperament or behavior styles between individuals are important in family life. They affect the interactions among family members. While some children can adapt quickly and easily to family routines and get along with siblings, others who are more active or intense may have a difficult time adjusting. The interactions between these children and their parents or siblings are among a number of factors that can lead to stress and friction within the family.
The temperament mix between parents and children also affects family life. For example, a slow-paced parent may be irritated by a highly active child; or if both parent and child are highly active and intense, conflict could result. This knowledge can help parents figure out how temperaments affect family relationships. What may appear to be a behavioral problem may actually be a mismatch between the parent's temperament and their child's. By taking a closer look at the nine traits that Thomas and Chess revealed from their study, parents can gain a better understanding of their child's temperament and their own. Parents may also notice that situational factors cause a child's temperament to seem problematic; for example, a child with low rhythmicity can cause difficulties for a family with a highly scheduled life, and a child with a high activity level may be difficult to cope with if the family lives in a crowded apartment upstairs from sensitive neighbors.
Parents can encourage new behaviors in their children, and with enough support a slow-to-warm-up child can become less shy, or a difficult baby can become easier to handle. More recently, infants and children with temperament issues have been called "spirited" to avoid the negative connotations of "difficult" and "slow to warm up". Numerous books have been written advising parents how to raise their spirited youngsters.
Understanding for improvement
Understanding a child's temperament can help reframe how parents interpret children's behavior and how they think about the reasons for behaviors. Access to this knowledge helps parents guide their child in ways that respect the child's individual differences. Understanding children's temperaments, as well as their own, helps adults work with children rather than try to change them. It is an opportunity to anticipate and understand a child's reactions. It is also important to know that temperament does not excuse a child's unacceptable behavior, but it does provide direction for how parents can respond to it. Making small and reasonable accommodations to routines can reduce tension. For example, a child who is slow-paced in the mornings may need an extra half-hour to get ready. Knowing who or what may affect the child's behavior can help to alleviate potential problems. Although children exhibit their temperament behaviors innately, parents play a large part in determining a child's ability to develop and act in certain ways. When parents take the time to identify, and more importantly respond positively to, the temperaments they are faced with, they can better guide their child in figuring out the world.
Recognizing a child's temperament, and helping the child understand how it affects his or her life as well as the lives of others, is important. It is just as important for parents to recognize their own temperaments. Recognizing each individual's temperament will help to prevent and manage problems that may arise from the differences among family members.
Temperament continues into adulthood, and later studies by Chess and Thomas have shown that these characteristics continue to influence behavior and adjustment throughout the life-span.
In addition to the initial clinical studies, academic psychologists have developed an interest in the field and researchers such as Bates, Buss & Plomin, Kagan, Rusalov, Cloninger, Trofimova and Rothbart have generated large bodies of research in the areas of personality, neuroscience, and behavioral genetics.
Determination of temperament type
Temperament is determined through specific behavioral profiles, usually focusing on those that are both easily measurable and testable early in childhood. Commonly tested factors include traits related to energetic capacities (such as "Activity", "Endurance", and "Extraversion"), traits related to emotionality (such as irritability and frequency of smiling), and approach or avoidance of unfamiliar events. There is generally a low correlation between descriptions by teachers and behavioral observations by scientists of features used in determining temperament.
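Such agreement between raters is typically quantified with a correlation coefficient. As a minimal, hypothetical sketch of that computation (the ratings below are invented for illustration and are not taken from any particular study; Python with SciPy):

    from scipy.stats import pearsonr

    # Hypothetical activity-level ratings for the same ten children.
    teacher_ratings  = [3, 5, 2, 4, 5, 1, 3, 2, 4, 3]
    observer_ratings = [2, 3, 4, 3, 4, 2, 5, 1, 3, 2]

    r, p_value = pearsonr(teacher_ratings, observer_ratings)
    print(f"Pearson r = {r:.2f} (p = {p_value:.2f})")

A low r between teacher descriptions and laboratory observations reflects the weak agreement noted above.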
See also
Four temperaments
Functional Ensemble of Temperament
Keirsey Temperament Sorter
Socionics temperaments
Structure of Temperament Questionnaire
Big Five Personality Traits
Blood type personality theory
References
Additional References
Anschütz, Marieke. Children and Their Temperaments.
Carey, William B. Understanding Your Child's Temperament.
Diamond, S. (1957). Personality and Temperament. New York: Harper.
Kagan, J. (1994). Galen's Prophecy: Temperament in Human Nature. New York: Basic Books.
Kagan, J., & Snidman, N. C. (2004). The Long Shadow of Temperament. Cambridge, Mass.: Harvard University Press.
Kohnstamm, G. A., Bates, J. E., & Rothbart, M. K. (Eds.) (1989). Temperament in Childhood. Oxford: John Wiley and Sons, pp. 59–73.
Neville, Helen F., & Johnson, Diane Clark. Temperament Tools: Working with Your Child's Inborn Traits.
Shick, Lyndall. Understanding Temperament: Strategies for Creating Family Harmony.
Thomas, A., Chess, S., & Birch, H. (1968). Temperament and Behavior Disorders in Children. New York: New York University Press.
External links
Henig, Robin Marantz. "Understanding the Anxious Mind". New York Times Magazine, September 29, 2009. Retrieved October 3, 2009.
Personality
Histology
Histology, also known as microscopic anatomy or microanatomy, is the branch of biology that studies the microscopic anatomy of biological tissues. Histology is the microscopic counterpart to gross anatomy, which looks at larger structures visible without a microscope. Although one may divide microscopic anatomy into organology, the study of organs, histology, the study of tissues, and cytology, the study of cells, modern usage places all of these topics under the field of histology. In medicine, histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. In the field of paleontology, the term paleohistology refers to the histology of fossil organisms.
Biological tissues
Animal tissue classification
There are four basic types of animal tissues: muscle tissue, nervous tissue, connective tissue, and epithelial tissue. All animal tissues are considered to be subtypes of these four principal tissue types (for example, blood is classified as connective tissue, since the blood cells are suspended in an extracellular matrix, the plasma).
Plant tissue classification
For plants, the study of their tissues falls under the field of plant anatomy, with the following four main types:
Dermal tissue
Vascular tissue
Ground tissue
Meristematic tissue
Medical histology
Histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. It is an important part of anatomical pathology and surgical pathology, as accurate diagnosis of cancer and other diseases often requires histopathological examination of tissue samples. Trained physicians, frequently licensed pathologists, perform histopathological examination and provide diagnostic information based on their observations.
Occupations
The field of histology that includes the preparation of tissues for microscopic examination is known as histotechnology. Job titles for the trained personnel who prepare histological specimens for examination are numerous and include histotechnicians, histotechnologists, histology technicians and technologists, medical laboratory technicians, and biomedical scientists.
Sample preparation
Most histological samples need preparation before microscopic observation; these methods depend on the specimen and method of observation.
Fixation
Chemical fixatives are used to preserve and maintain the structure of tissues and cells; fixation also hardens tissues which aids in cutting the thin sections of tissue needed for observation under the microscope. Fixatives generally preserve tissues (and cells) by irreversibly cross-linking proteins. The most widely used fixative for light microscopy is 10% neutral buffered formalin, or NBF (4% formaldehyde in phosphate buffered saline).
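The nominal concentrations can be confusing: "10% formalin" refers to a one-in-ten dilution of stock formalin, not to a 10% formaldehyde solution. A minimal calculation, assuming the typical ~37% w/v formaldehyde content of commercial formalin stock (an assumption, not stated above), shows why 10% NBF corresponds to roughly 4% formaldehyde:

```python
# Illustrative arithmetic, not a laboratory protocol.
stock_formaldehyde = 0.37   # assumed w/v fraction of formaldehyde in formalin stock
formalin_fraction = 0.10    # "10% formalin": 1 part stock per 10 parts final solution

final = stock_formaldehyde * formalin_fraction
print(f"Final formaldehyde concentration: {final:.1%}")  # ~3.7%, nominally "4%"
```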
For electron microscopy, the most commonly used fixative is glutaraldehyde, usually as a 2.5% solution in phosphate buffered saline. Other fixatives used for electron microscopy are osmium tetroxide or uranyl acetate.
The main action of these aldehyde fixatives is to cross-link amino groups in proteins through the formation of methylene bridges (-CH2-), in the case of formaldehyde, or by C5H10 cross-links in the case of glutaraldehyde. This process, while preserving the structural integrity of the cells and tissue, can damage the biological functionality of proteins, particularly enzymes.
Formalin fixation leads to degradation of mRNA, miRNA, and DNA as well as denaturation and modification of proteins in tissues. However, extraction and analysis of nucleic acids and proteins from formalin-fixed, paraffin-embedded tissues is possible using appropriate protocols.
Selection and trimming
Selection is the choice of relevant tissue in cases where it is not necessary to put the entire original tissue mass through further processing. The remainder may remain fixed in case it needs to be examined at a later time.
Trimming is the cutting of tissue samples in order to expose the relevant surfaces for later sectioning. It also creates tissue samples of appropriate size to fit into cassettes.
Embedding
Tissues are embedded in a harder medium both as a support and to allow the cutting of thin tissue slices. In general, water must first be removed from tissues (dehydration) and replaced with a medium that either solidifies directly, or with an intermediary fluid (clearing) that is miscible with the embedding media.
Paraffin wax
For light microscopy, paraffin wax is the most frequently used embedding material. Paraffin is immiscible with water, the main constituent of biological tissue, so water must first be removed in a series of dehydration steps. Samples are transferred through a series of progressively more concentrated ethanol baths, up to 100% ethanol, to remove remaining traces of water. Dehydration is followed by a clearing agent (typically xylene, although environmentally safer substitutes are in use) which removes the alcohol and is miscible with the wax. Finally, melted paraffin wax is added to replace the xylene and infiltrate the tissue. In most histology or histopathology laboratories the dehydration, clearing, and wax infiltration are carried out in tissue processors, which automate the process. Once infiltrated with paraffin, tissues are oriented in molds which are filled with wax; once positioned, the wax is cooled, solidifying the block and tissue.
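As a rough illustration of how a tissue processor steps through this sequence, the following sketch encodes one possible overnight schedule; the station order follows the text above, but the specific concentrations and times are illustrative assumptions rather than a validated protocol:

```python
# Hypothetical paraffin-processing schedule: graded ethanols (dehydration),
# xylene (clearing), then molten wax (infiltration). Times in minutes.
schedule = [
    ("70% ethanol", 60),
    ("80% ethanol", 60),
    ("95% ethanol", 60),
    ("100% ethanol", 90),
    ("100% ethanol", 90),    # repeated to remove the last traces of water
    ("xylene", 60),          # miscible with both ethanol and paraffin
    ("xylene", 60),
    ("molten paraffin", 90),
    ("molten paraffin", 90),
]

total_minutes = sum(minutes for _, minutes in schedule)
print(f"{len(schedule)} stations, {total_minutes / 60:.1f} hours in total")
```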
Other materials
Paraffin wax does not always provide a sufficiently hard matrix for cutting very thin sections (which are especially important for electron microscopy). Paraffin wax may also be too soft in relation to the tissue, the heat of the melted wax may alter the tissue in undesirable ways, or the dehydrating or clearing chemicals may harm the tissue. Alternatives to paraffin wax include epoxy, acrylic, agar, gelatin, celloidin, and other types of waxes.
In electron microscopy epoxy resins are the most commonly employed embedding media, but acrylic resins are also used, particularly where immunohistochemistry is required.
For tissues to be cut in a frozen state, tissues are placed in a water-based embedding medium. Pre-frozen tissues are placed into molds with the liquid embedding material, usually a water-based glycol, OCT, TBS, Cryogen, or resin, which is then frozen to form hardened blocks.
Sectioning
For light microscopy, a knife mounted in a microtome is used to cut tissue sections (typically between 5 and 15 micrometers thick) which are mounted on a glass microscope slide. For transmission electron microscopy (TEM), a diamond or glass knife mounted in an ultramicrotome is used to cut tissue sections between 50 and 150 nanometers thick.
A limited number of manufacturers produce microtomes, including vibrating microtomes (commonly referred to as vibratomes), primarily for research and clinical use; Leica Biosystems, for example, manufactures microtomes and related light-microscopy equipment for research and clinical laboratories.
Staining
Biological tissue has little inherent contrast in either the light or electron microscope. Staining is employed both to give contrast to the tissue and to highlight particular features of interest. When the stain is used to target a specific chemical component of the tissue (and not the general structure), the term histochemistry is used.
Light microscopy
Hematoxylin and eosin (H&E stain) is one of the most commonly used stains in histology to show the general structure of the tissue. Hematoxylin stains cell nuclei blue; eosin, an acidic dye, stains the cytoplasm and other tissues in varying shades of pink.
In contrast to H&E, which is used as a general stain, there are many techniques that more selectively stain cells, cellular components, and specific substances. A commonly performed histochemical technique that targets a specific chemical is the Perls' Prussian blue reaction, used to demonstrate iron deposits in diseases like hemochromatosis. The Nissl method for Nissl substance and Golgi's method (and related silver stains), which are useful in identifying neurons, are other examples of more specific stains.
Historadiography
In historadiography, a slide (sometimes stained histochemically) is X-rayed. More commonly, autoradiography is used to visualize the locations to which a radioactive substance has been transported within the body, such as cells in S phase (undergoing DNA replication) which incorporate tritiated thymidine, or sites to which radiolabeled nucleic acid probes bind in in situ hybridization. For autoradiography on a microscopic level, the slide is typically dipped into liquid nuclear track emulsion, which dries to form the exposure film. Individual silver grains in the film are visualized with dark field microscopy.
Immunohistochemistry
Recently, antibodies have been used to specifically visualize proteins, carbohydrates, and lipids. This process is called immunohistochemistry, or when the stain is a fluorescent molecule, immunofluorescence. This technique has greatly increased the ability to identify categories of cells under a microscope. Other advanced techniques, such as nonradioactive in situ hybridization, can be combined with immunochemistry to identify specific DNA or RNA molecules with fluorescent probes or tags that can be used for immunofluorescence and enzyme-linked fluorescence amplification (especially alkaline phosphatase and tyramide signal amplification). Fluorescence microscopy and confocal microscopy are used to detect fluorescent signals with good intracellular detail.
Electron microscopy
For electron microscopy heavy metals are typically used to stain tissue sections. Uranyl acetate and lead citrate are commonly used to impart contrast to tissue in the electron microscope.
Specialized techniques
Cryosectioning
Similar to the frozen section procedure employed in medicine, cryosectioning is a method to rapidly freeze, cut, and mount sections of tissue for histology. The tissue is usually sectioned on a cryostat or freezing microtome. The frozen sections are mounted on a glass slide and may be stained to enhance the contrast between different tissues. Unfixed frozen sections can be used for studies requiring enzyme localization in tissues and cells. Tissue fixation is required for certain procedures such as antibody-linked immunofluorescence staining. Frozen sections are often prepared during surgical removal of tumors to allow rapid identification of tumor margins, as in Mohs surgery, or determination of tumor malignancy, when a tumor is discovered incidentally during surgery.
Ultramicrotomy
Ultramicrotomy is a method of preparing extremely thin sections for transmission electron microscope (TEM) analysis. Tissues are commonly embedded in epoxy or other plastic resin. Very thin sections (less than 0.1 micrometer in thickness) are cut using diamond or glass knives on an ultramicrotome.
Artifacts
Artifacts are structures or features in tissue that interfere with normal histological examination. Artifacts interfere with histology by changing the tissue's appearance and hiding structures. Tissue processing artifacts can include pigments formed by fixatives, shrinkage, washing out of cellular components, color changes in different tissue types, and alterations of the structures in the tissue. An example is mercury pigment left behind after using Zenker's fixative to fix a section. Formalin fixation can also leave a brown to black pigment under acidic conditions.
History
In the 17th century the Italian Marcello Malpighi used microscopes to study tiny biological entities; some regard him as the founder of the fields of histology and microscopic pathology. Malpighi analyzed several parts of the organs of bats, frogs and other animals under the microscope. While studying the structure of the lung, Malpighi noticed its membranous alveoli and the hair-like connections between veins and arteries, which he named capillaries. His discovery established how the oxygen breathed in enters the blood stream and serves the body.
In the 19th century histology became an academic discipline in its own right. The French anatomist Xavier Bichat introduced the concept of tissue in anatomy in 1801, and the term "histology", coined to denote the "study of tissues", first appeared in a book by Karl Meyer in 1819. Bichat described twenty-one human tissues, which can be subsumed under the four categories currently accepted by histologists. The use of illustrations in histology, deemed useless by Bichat, was promoted by Jean Cruveilhier.
In the early 1830s Purkyně invented a high-precision microtome.
During the 19th century many fixation techniques were developed by Adolph Hannover (solutions of chromates and chromic acid), Franz Schulze and Max Schultze (osmic acid), Alexander Butlerov (formaldehyde) and Benedikt Stilling (freezing).
Mounting techniques were developed by Rudolf Heidenhain (1824–1898), who introduced gum Arabic; Salomon Stricker (1834–1898), who advocated a mixture of wax and oil; and Andrew Pritchard (1804–1884) who, in 1832, used a gum/isinglass mixture. In the same year, Canada balsam appeared on the scene, and in 1869 Edwin Klebs (1834–1913) reported that he had for some years embedded his specimens in paraffin.
The 1906 Nobel Prize in Physiology or Medicine was awarded to histologists Camillo Golgi and Santiago Ramón y Cajal. They had conflicting interpretations of the neural structure of the brain based on differing interpretations of the same images. Ramón y Cajal won the prize for his correct theory, and Golgi for the silver-staining technique that he invented to make it possible.
Future directions
In vivo histology
Currently there is intense interest in developing techniques for in vivo histology (predominantly using MRI), which would enable doctors to non-invasively gather information about healthy and diseased tissues in living patients, rather than from fixed tissue samples.
See also
National Society for Histotechnology
Slice preparation
Notes
References
External links
Histotechnology
Staining
Histochemistry
Anatomy
Laboratory healthcare occupations
Pathophysiology
Pathophysiology (or physiopathology) is a branch of study, at the intersection of pathology and physiology, concerning disordered physiological processes that cause, result from, or are otherwise associated with a disease or injury. Pathology is the medical discipline that describes conditions typically observed during a disease state, whereas physiology is the biological discipline that describes processes or mechanisms operating within an organism. Pathology describes the abnormal or undesired condition (symptoms of a disease), whereas pathophysiology seeks to explain the functional changes that are occurring within an individual due to a disease or pathologic state.
Etymology
The term pathophysiology comes from the Ancient Greek πάθος (pathos), meaning "suffering", and φυσιολογία (physiologia), the study of nature.
History
Nineteenth century
Reductionism
In Germany in the 1830s, Johannes Müller led the establishment of physiology research autonomous from medical research. In 1843, the Berlin Physical Society was founded in part to purge biology and medicine of vitalism, and in 1847 Hermann von Helmholtz, who had joined the Society in 1845, published the paper "On the conservation of energy", which was highly influential in reducing physiology's research foundation to the physical sciences. In the late 1850s, the German anatomical pathologist Rudolf Virchow, a former student of Müller, directed focus to the cell, establishing cytology as the focus of physiological research, while Julius Cohnheim pioneered experimental pathology in medical schools' scientific laboratories.
Germ theory
By 1863, motivated by Louis Pasteur's report on fermentation to butyric acid, fellow Frenchman Casimir Davaine identified a microorganism as the crucial causal agent of the cattle disease anthrax, but because it routinely vanished from the blood, other scientists inferred it to be a mere byproduct of putrefaction. In 1876, upon Ferdinand Cohn's report of a tiny spore stage of a bacterial species, the German Robert Koch isolated Davaine's bacteridia in pure culture (a pivotal step that would establish bacteriology as a distinct discipline), identified a spore stage, applied Jakob Henle's postulates, and confirmed Davaine's conclusion, a major feat for experimental pathology. Pasteur and colleagues followed up with ecological investigations confirming its role in the natural environment via spores in soil.
Also, as to sepsis, Davaine had injected rabbits with a highly diluted, tiny amount of putrid blood, duplicated the disease, and used the term ferment of putrefaction, but it was unclear whether this referred, as did Pasteur's term ferment, to a microorganism or, as it did for many others, to a chemical. In 1878, Koch published Aetiology of Traumatic Infective Diseases, a work unlike any before it, in which, over 80 pages, Koch, as noted by a historian, "was able to show, in a manner practically conclusive, that a number of diseases, differing clinically, anatomically, and in aetiology, can be produced experimentally by the injection of putrid materials into animals." Koch used bacteriology and the new staining methods with aniline dyes to identify particular microorganisms for each. The germ theory of disease crystallized the concept of cause, presumably identifiable by scientific investigation.
Scientific medicine
The American physician William Welch trained in German pathology from 1876 to 1878, including under Cohnheim, and opened America's first scientific laboratory (a pathology laboratory) at Bellevue Hospital in New York City in 1878. Welch's course drew enrollment from students at other medical schools, which responded by opening their own pathology laboratories. Once appointed by Daniel Coit Gilman, on the advice of John Shaw Billings, as founding dean of the medical school of the newly forming Johns Hopkins University that Gilman, as its first president, was planning, Welch traveled again to Germany for training in Koch's bacteriology in 1883. Welch returned to America but moved to Baltimore, eager to overhaul American medicine, blending Virchow's anatomical pathology, Cohnheim's experimental pathology, and Koch's bacteriology. The Hopkins medical school, led by the "Four Horsemen" (Welch, William Osler, Howard Kelly, and William Halsted), opened at last in 1893 as America's first medical school devoted to teaching so-called German scientific medicine.
Twentieth century
Biomedicine
The first biomedical institutes, the Pasteur Institute and the Berlin Institute for Infectious Diseases, whose first directors were Pasteur and Koch, were founded in 1888 and 1891, respectively. America's first biomedical institute, The Rockefeller Institute for Medical Research, was founded in 1901 with Welch, nicknamed the "dean of American medicine", as its scientific director; he appointed his former Hopkins student Simon Flexner as director of the pathology and bacteriology laboratories. Through World War I and World War II, the Rockefeller Institute became the world's leader in biomedical research.
Molecular paradigm
The 1918 influenza pandemic triggered a frenzied search for its cause, although most deaths were via lobar pneumonia, already attributed to pneumococcal invasion. In London, Fred Griffith, a pathologist with the Ministry of Health, reported in 1928 pneumococcal transformation from virulent to avirulent and between antigenic types (nearly a switch in species), challenging pneumonia's specific causation. The laboratory of the Rockefeller Institute's Oswald Avery, America's leading pneumococcal expert, was so troubled by the report that its members refused to attempt a repetition.
When Avery was away on summer vacation, Martin Dawson, a British-Canadian convinced that anything from England must be correct, repeated Griffith's results, then achieved transformation in vitro as well, opening it to precise investigation. Having returned, Avery kept a photo of Griffith on his desk while his researchers followed the trail. In 1944, Avery, Colin MacLeod, and Maclyn McCarty reported the transformation factor as DNA, a finding widely doubted amid estimations that something else must act with it. At the time of Griffith's report, it was not recognized that bacteria even had genes.
The first genetics, Mendelian genetics, began in 1900, yet inheritance of Mendelian traits was localized to chromosomes by 1903, thus chromosomal genetics. Biochemistry emerged in the same decade. In the 1940s, most scientists viewed the cell as a "sack of chemicals" (a membrane containing only loose molecules in chaotic motion) and the only special cell structures as chromosomes, which bacteria lack as such. Chromosomal DNA was presumed too simple, so genes were sought in chromosomal proteins. Yet in 1953, the American biologist James Watson and the British physicist Francis Crick, drawing on X-ray diffraction data gathered by the British chemist Rosalind Franklin, inferred DNA's molecular structure (a double helix) and conjectured that it spelled a code. In the early 1960s, Crick helped crack a genetic code in DNA, thus establishing molecular genetics.
In the late 1930s, the Rockefeller Foundation had spearheaded and funded the molecular biology research program (seeking fundamental explanation of organisms and life) led largely by the physicist Max Delbrück at Caltech and Vanderbilt University. Yet the reality of organelles in cells was controversial amid unclear visualization with conventional light microscopy. Around 1940, largely via cancer research at the Rockefeller Institute, cell biology emerged as a new discipline, filling the vast gap between cytology and biochemistry by applying new technology (the ultracentrifuge and the electron microscope) to identify and deconstruct cell structures, functions, and mechanisms. The two new sciences, cell biology and molecular biology, became interlaced.
Mindful of Griffith and Avery, Joshua Lederberg confirmed bacterial conjugation (reported decades earlier but controversial) and was awarded the 1958 Nobel Prize in Physiology or Medicine. At Cold Spring Harbor Laboratory in Long Island, New York, Delbrück and Salvador Luria led the Phage Group, which hosted Watson, discovering details of cell physiology by tracking changes to bacteria upon infection with their viruses, a process called transduction. Lederberg led the opening of a genetics department at Stanford University's medical school and facilitated greater communication between biologists and medical departments.
Disease mechanisms
In the 1950s, research on rheumatic fever, a complication of streptococcal infections, revealed that it was mediated by the host's own immune response, stirring investigation by the pathologist Lewis Thomas that led to the identification of enzymes, released by macrophages (innate immune cells), that degrade host tissue. In the late 1970s, as president of Memorial Sloan–Kettering Cancer Center, Thomas collaborated with Lederberg, soon to become president of Rockefeller University, to redirect the funding focus of the US National Institutes of Health toward basic research into the mechanisms operating during disease processes, of which medical scientists were at the time all but wholly ignorant, as biologists had scarcely taken interest in disease mechanisms. Thomas became a patron saint for American basic researchers.
Examples
Parkinson's disease
The pathophysiology of Parkinson's disease (PD) is death of dopaminergic neurons as a result of changes in biological activity in the brain. There are several proposed mechanisms for neuronal death in PD; however, not all of them are well understood. Five proposed major mechanisms for neuronal death in Parkinson's disease include protein aggregation in Lewy bodies, disruption of autophagy, changes in cell metabolism or mitochondrial function, neuroinflammation, and blood–brain barrier (BBB) breakdown resulting in vascular leakiness.
Heart failure
The pathophysiology of heart failure is a reduction in the efficiency of the heart muscle, through damage or overloading. As such, it can be caused by a wide number of conditions, including myocardial infarction (in which the heart muscle is starved of oxygen and dies), hypertension (which increases the force of contraction needed to pump blood) and amyloidosis (in which misfolded proteins are deposited in the heart muscle, causing it to stiffen). Over time these increases in workload will produce changes to the heart itself.
Multiple sclerosis
The pathophysiology of multiple sclerosis (MS) is that of an inflammatory demyelinating disease of the central nervous system (CNS) in which activated immune cells invade the CNS and cause inflammation, neurodegeneration, and tissue damage. The underlying condition that produces this behaviour is currently unknown. Current research in neuropathology, neuroimmunology, neurobiology, and neuroimaging, together with clinical neurology, provides support for the notion that MS is not a single disease but rather a spectrum.
Hypertension
The pathophysiology of hypertension is that of a chronic disease characterized by elevation of blood pressure. Hypertension can be classified by cause as either essential (also known as primary or idiopathic) or secondary. About 90–95% of hypertension is essential hypertension.
HIV/AIDS
The pathophysiology of HIV/AIDS involves, upon acquisition of the virus, that the virus replicates inside and kills T helper cells, which are required for almost all adaptive immune responses. There is an initial period of influenza-like illness, and then a latent, asymptomatic phase. When the CD4 lymphocyte count falls below 200 cells per microliter (µL) of blood, the HIV host has progressed to AIDS, a condition characterized by deficiency in cell-mediated immunity and the resulting increased susceptibility to opportunistic infections and certain forms of cancer.
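The CD4 threshold mentioned above can be expressed as a one-line criterion. This is a sketch only: the function name is hypothetical, and real clinical staging uses further criteria beyond the CD4 count.

```python
def meets_aids_criterion(cd4_cells_per_ul: float) -> bool:
    """CD4-count criterion only; actual diagnosis considers much more."""
    return cd4_cells_per_ul < 200

print(meets_aids_criterion(450))  # False: above the threshold
print(meets_aids_criterion(150))  # True: progression to AIDS by this criterion
```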
Spider bites
The pathophysiology of spider bites is due to the effect of the venom. A spider envenomation occurs whenever a spider injects venom into the skin. Not all spider bites inject venom (a so-called dry bite), and the amount of venom injected can vary based on the type of spider and the circumstances of the encounter. The mechanical injury from a spider bite is not a serious concern for humans.
Obesity
The pathophysiology of obesity encompasses many possible mechanisms involved in its development and maintenance.
This field of research had been almost unexplored until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory. These investigators postulated that leptin was a satiety factor. In the ob/ob mouse, mutations in the leptin gene resulted in the obese phenotype, opening the possibility of leptin therapy for human obesity. However, soon thereafter J. F. Caro's laboratory could not detect any mutations in the leptin gene in humans with obesity. On the contrary, leptin expression was increased, suggesting the possibility of leptin resistance in human obesity.
See also
Pathogenesis
References
Pathology
Physiology
Rationality
Rationality is the quality of being guided by or based on reason. In this regard, a person acts rationally if they have a good reason for what they do, or a belief is rational if it is based on strong evidence. This quality can apply to an ability, as in a rational animal, to a psychological process, like reasoning, to mental states, such as beliefs and intentions, or to persons who possess these other forms of rationality. A thing that lacks rationality is either arational, if it is outside the domain of rational evaluation, or irrational, if it belongs to this domain but does not fulfill its standards.
There are many discussions about the essential features shared by all forms of rationality. According to reason-responsiveness accounts, to be rational is to be responsive to reasons. For example, dark clouds are a reason for taking an umbrella, which is why it is rational for an agent to do so in response. An important rival to this approach is the family of coherence-based accounts, which define rationality as internal coherence among the agent's mental states. Many rules of coherence have been suggested in this regard, for example, that one should not hold contradictory beliefs or that one should intend to do something if one believes that one should do it. Goal-based accounts characterize rationality in relation to goals, such as acquiring truth in the case of theoretical rationality. Internalists believe that rationality depends only on the person's mind. Externalists contend that external factors may also be relevant. Debates about the normativity of rationality concern the question of whether one should always be rational. A further discussion is whether rationality requires that all beliefs be reviewed from scratch rather than trusting pre-existing beliefs.
Various types of rationality are discussed in the academic literature. The most influential distinction is between theoretical and practical rationality. Theoretical rationality concerns the rationality of beliefs. Rational beliefs are based on evidence that supports them. Practical rationality pertains primarily to actions. This includes certain mental states and events preceding actions, like intentions and decisions. In some cases, the two can conflict, as when practical rationality requires that one adopt an irrational belief. Another distinction is between ideal rationality, which demands that rational agents obey all the laws and implications of logic, and bounded rationality, which takes into account that this is not always possible since the computational power of the human mind is too limited. Most academic discussions focus on the rationality of individuals. This contrasts with social or collective rationality, which pertains to collectives and their group beliefs and decisions.
Rationality is important for solving all kinds of problems in order to efficiently reach one's goal. It is relevant to and discussed in many disciplines. In ethics, one question is whether one can be rational without being moral at the same time. Psychology is interested in how psychological processes implement rationality. This also includes the study of failures to do so, as in the case of cognitive biases. Cognitive and behavioral sciences usually assume that people are rational enough to predict how they think and act. Logic studies the laws of correct arguments. These laws are highly relevant to the rationality of beliefs. A very influential conception of practical rationality is given in decision theory, which states that a decision is rational if the chosen option has the highest expected utility. Other relevant fields include game theory, Bayesianism, economics, and artificial intelligence.
Definition and semantic field
In its most common sense, rationality is the quality of being guided by reasons or being reasonable. For example, a person who acts rationally has good reasons for what they do. This usually implies that they reflected on the possible consequences of their action and the goal it is supposed to realize. In the case of beliefs, it is rational to believe something if the agent has good evidence for it and it is coherent with the agent's other beliefs. While actions and beliefs are the most paradigmatic forms of rationality, the term is used both in ordinary language and in many academic disciplines to describe a wide variety of things, such as persons, desires, intentions, decisions, policies, and institutions. Because of this variety in different contexts, it has proven difficult to give a unified definition covering all these fields and usages. In this regard, different fields often focus their investigation on one specific conception, type, or aspect of rationality without trying to cover it in its most general sense.
These different forms of rationality are sometimes divided into abilities, processes, mental states, and persons. For example, when it is claimed that humans are rational animals, this usually refers to the ability to think and act in reasonable ways. It does not imply that all humans are rational all the time: this ability is exercised in some cases but not in others. On the other hand, the term can also refer to the process of reasoning that results from exercising this ability. Often many additional activities of the higher cognitive faculties are included as well, such as acquiring concepts, judging, deliberating, planning, and deciding, as well as the formation of desires and intentions. These processes usually effect some kind of change in the thinker's mental states. In this regard, one can also talk of the rationality of mental states, like beliefs and intentions. A person who possesses these forms of rationality to a sufficiently high degree may themselves be called rational. In some cases, non-mental results of rational processes may also qualify as rational. For example, the arrangement of products in a supermarket can be rational if it is based on a rational plan.
The term "rational" has two opposites: irrational and arational. Arational things are outside the domain of rational evaluation, like digestive processes or the weather. Things within the domain of rationality are either rational or irrational depending on whether they fulfill the standards of rationality. For example, beliefs, actions, or general policies are rational if there is a good reason for them and irrational otherwise. It is not clear in all cases what belongs to the domain of rational assessment. For example, there are disagreements about whether desires and emotions can be evaluated as rational and irrational rather than arational. The term "irrational" is sometimes used in a wide sense to include cases of arationality.
The meaning of the terms "rational" and "irrational" in academic discourse often differs from how they are used in everyday language. Examples of behaviors considered irrational in ordinary discourse are giving in to temptations, going out late even though one has to get up early in the morning, smoking despite being aware of the health risks, or believing in astrology. In the academic discourse, on the other hand, rationality is usually identified with being guided by reasons or following norms of internal coherence. Some of the earlier examples may qualify as rational in the academic sense depending on the circumstances. Examples of irrationality in this sense include cognitive biases and violating the laws of probability theory when assessing the likelihood of future events. This article focuses mainly on rationality and irrationality in the academic sense.
The terms "rationality", "reason", and "reasoning" are frequently used as synonyms. But in technical contexts, their meanings are often distinguished. Reason is usually understood as the faculty responsible for the process of reasoning. This process aims at improving mental states. Reasoning tries to ensure that the norms of rationality obtain. It differs from rationality nonetheless since other psychological processes besides reasoning may have the same effect. Rationality derives etymologically from the Latin term .
Disputes about the concept of rationality
There are many disputes about the essential characteristics of rationality. It is often understood in relational terms: something, like a belief or an intention, is rational because of how it is related to something else. But there are disagreements as to what it has to be related to and in what way. For reason-based accounts, the relation to a reason that justifies or explains the rational state is central. For coherence-based accounts, the relation of coherence between mental states matters. There is a lively discussion in the contemporary literature on whether reason-based accounts or coherence-based accounts are superior. Some theorists also try to understand rationality in relation to the goals it tries to realize.
Other disputes in this field concern whether rationality depends only on the agent's mind or also on external factors, whether rationality requires a review of all one's beliefs from scratch, and whether we should always be rational.
Based on reason-responsiveness
A common idea of many theories of rationality is that it can be defined in terms of reasons. On this view, to be rational means to respond correctly to reasons. For example, the fact that a food is healthy is a reason to eat it. So this reason makes it rational for the agent to eat the food. An important aspect of this interpretation is that it is not sufficient to merely act accidentally in accordance with reasons. Instead, responding to reasons implies that one acts intentionally because of these reasons.
Some theorists understand reasons as external facts. This view has been criticized based on the claim that, in order to respond to reasons, people have to be aware of them, i.e. they have some form of epistemic access. But lacking this access is not automatically irrational. In one example by John Broome, the agent eats a fish contaminated with salmonella, which is a strong reason against eating the fish. But since the agent could not have known this fact, eating the fish is rational for them. Because of such problems, many theorists have opted for an internalist version of this account. This means that the agent does not need to respond to reasons in general, but only to reasons they have or possess. The success of such approaches depends a lot on what it means to have a reason and there are various disagreements on this issue. A common approach is to hold that this access is given through the possession of evidence in the form of cognitive mental states, like perceptions and knowledge. A similar version states that "rationality consists in responding correctly to beliefs about reasons". So it is rational to bring an umbrella if the agent has strong evidence that it is going to rain. But without this evidence, it would be rational to leave the umbrella at home, even if, unbeknownst to the agent, it is going to rain. These versions avoid the previous objection since rationality no longer requires the agent to respond to external factors of which they could not have been aware.
A problem faced by all forms of reason-responsiveness theories is that there are usually many reasons relevant and some of them may conflict with each other. So while salmonella contamination is a reason against eating the fish, its good taste and the desire not to offend the host are reasons in favor of eating it. This problem is usually approached by weighing all the different reasons. This way, one does not respond directly to each reason individually but instead to their weighted sum. Cases of conflict are thus solved since one side usually outweighs the other. So despite the reasons cited in favor of eating the fish, the balance of reasons stands against it, since avoiding a salmonella infection is a much weightier reason than the other reasons cited. This can be expressed by stating that rational agents pick the option favored by the balance of reasons.
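The weighing metaphor can be made concrete with a small sketch: each reason carries a signed weight (positive in favor, negative against), and the rational option is the one whose weighted sum is highest. The numeric weights are illustrative assumptions; the philosophical account supplies no numbers.

```python
from typing import Dict, List, Tuple

def balance(reasons: List[Tuple[str, float]]) -> float:
    """Sum the signed weights of all reasons bearing on an option."""
    return sum(weight for _, weight in reasons)

options: Dict[str, List[Tuple[str, float]]] = {
    "eat the fish": [
        ("good taste", 1.0),
        ("not offending the host", 2.0),
        ("risk of salmonella infection", -10.0),  # outweighs the rest
    ],
    "decline the fish": [
        ("avoiding salmonella infection", 10.0),
        ("mild social awkwardness", -2.0),
    ],
}

# Rational agents pick the option favored by the balance of reasons.
best = max(options, key=lambda option: balance(options[option]))
print(best)  # -> "decline the fish"
```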
However, other objections to the reason-responsiveness account are not so easily solved. They often focus on cases where reasons require the agent to be irrational, leading to a rational dilemma. For example, if terrorists threaten to blow up a city unless the agent forms an irrational belief, this is a very weighty reason to do all in one's power to violate the norms of rationality.
Based on rules of coherence
An influential rival to the reason-responsiveness account understands rationality as internal coherence. On this view, a person is rational to the extent that their mental states and actions are coherent with each other. Diverse versions of this approach exist that differ in how they understand coherence and what rules of coherence they propose. A general distinction in this regard is between negative and positive coherence. Negative coherence is an uncontroversial aspect of most such theories: it requires the absence of contradictions and inconsistencies. This means that the agent's mental states do not clash with each other. In some cases, inconsistencies are rather obvious, as when a person believes that it will rain tomorrow and that it will not rain tomorrow. In complex cases, inconsistencies may be difficult to detect, for example, when a person believes in the axioms of Euclidean geometry and is nonetheless convinced that it is possible to square the circle. Positive coherence refers to the support that different mental states provide for each other. For example, there is positive coherence between the belief that there are eight planets in the solar system and the belief that there are fewer than ten planets in the solar system: the former belief implies the latter. Other types of support through positive coherence include explanatory and causal connections.
Coherence-based accounts are also referred to as rule-based accounts since the different aspects of coherence are often expressed in precise rules. In this regard, to be rational means to follow the rules of rationality in thought and action. According to the enkratic rule, for example, rational agents are required to intend what they believe they ought to do. This requires coherence between beliefs and intentions. The norm of persistence states that agents should retain their intentions over time. This way, earlier mental states cohere with later ones. It is also possible to distinguish different types of rationality, such as theoretical or practical rationality, based on the different sets of rules they require.
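The two rules just mentioned can be encoded as simple set operations over an agent's mental states. The representation below is a toy assumption for illustration; formal theories state these norms with far more care.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs_ought: set = field(default_factory=set)  # things the agent believes they ought to do
    intentions: set = field(default_factory=set)
    past_intentions: set = field(default_factory=set)

def enkratic_violations(agent: Agent) -> set:
    """Enkratic rule: intend what you believe you ought to do."""
    return agent.beliefs_ought - agent.intentions

def persistence_violations(agent: Agent) -> set:
    """Persistence: retain earlier intentions (absent reasons to revise)."""
    return agent.past_intentions - agent.intentions

agent = Agent(
    beliefs_ought={"exercise"},
    intentions={"finish report"},
    past_intentions={"finish report", "call parents"},
)
print(enkratic_violations(agent))     # {'exercise'}: believed obligatory, not intended
print(persistence_violations(agent))  # {'call parents'}: intention dropped over time
```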
One problem with such coherence-based accounts of rationality is that the norms can enter into conflict with each other, so-called rational dilemmas. For example, if the agent has a pre-existing intention that turns out to conflict with their beliefs, then the enkratic norm requires them to change it, which is disallowed by the norm of persistence. This suggests that, in cases of rational dilemmas, it is impossible to be rational, no matter which norm is privileged. Some defenders of coherence theories of rationality have argued that, when formulated correctly, the norms of rationality cannot enter into conflict with each other. That means that rational dilemmas are impossible. This is sometimes tied to additional non-trivial assumptions, such as the assumption that ethical dilemmas do not exist either. A different response is to bite the bullet and allow that rational dilemmas exist. This has the consequence that, in such cases, rationality is not possible for the agent and theories of rationality cannot offer guidance to them. These problems are avoided by reason-responsiveness accounts of rationality since they "allow for rationality despite conflicting reasons but [coherence-based accounts] do not allow for rationality despite conflicting requirements". Some theorists suggest a weaker criterion of coherence to avoid cases of necessary irrationality: rationality requires not that agents obey all norms of coherence but that they obey as many norms as possible. So in rational dilemmas, agents can still be rational if they violate the minimal number of rational requirements.
Another criticism rests on the claim that coherence-based accounts are either redundant or false. On this view, either the rules recommend the same option as the balance of reasons or a different option. If they recommend the same option, they are redundant. If they recommend a different option, they are false since, according to its critics, there is no special value in sticking to rules against the balance of reasons.
Based on goals
A different approach characterizes rationality in relation to the goals it aims to achieve. In this regard, theoretical rationality aims at epistemic goals, like acquiring truth and avoiding falsehood. Practical rationality, on the other hand, aims at non-epistemic goals, like moral, prudential, political, economic, or aesthetic goals. This is usually understood in the sense that rationality follows these goals but does not set them. So rationality may be understood as a "minister without portfolio" since it serves goals external to itself. This issue has been the source of an important historical discussion between David Hume and Immanuel Kant. The slogan of Hume's position is that "reason is the slave of the passions". This is often understood as the claim that rationality concerns only how to reach a goal but not whether the goal should be pursued at all. So people with perverse or weird goals may still be perfectly rational. This position is opposed by Kant, who argues that rationality requires having the right goals and motives.
According to William Frankena, there are four conceptions of rationality based on the goals it tries to achieve. They correspond to egoism, utilitarianism, perfectionism, and intuitionism. According to the egoist perspective, rationality implies looking out for one's own happiness. This contrasts with the utilitarian point of view, which states that rationality entails trying to contribute to everyone's well-being or to the greatest general good. For perfectionism, a certain ideal of perfection, either moral or non-moral, is the goal of rationality. According to the intuitionist perspective, something is rational "if and only if [it] conforms to self-evident truths, intuited by reason". These different perspectives diverge considerably concerning the behavior they prescribe. One problem for all of them is that they ignore the role of the evidence or information possessed by the agent. In this regard, it matters for rationality not just whether the agent acts efficiently towards a certain goal but also what information they have and how their actions appear reasonable from this perspective. Richard Brandt responds to this idea by proposing a conception of rationality based on relevant information: "Rationality is a matter of what would survive scrutiny by all relevant information." This implies that the subject repeatedly reflects on all the relevant facts, including formal facts like the laws of logic.
Internalism and externalism
An important contemporary discussion in the field of rationality is between internalists and externalists. Both sides agree that rationality demands and depends in some sense on reasons. They disagree on what reasons are relevant or how to conceive those reasons. Internalists understand reasons as mental states, for example, as perceptions, beliefs, or desires. On this view, an action may be rational because it is in tune with the agent's beliefs and realizes their desires. Externalists, on the other hand, see reasons as external factors about what is good or right. They state that whether an action is rational also depends on its actual consequences. The difference between the two positions is that internalists affirm and externalists reject the claim that rationality supervenes on the mind. This claim means that it only depends on the person's mind whether they are rational and not on external factors. So for internalism, two persons with the same mental states would both have the same degree of rationality independent of how different their external situation is. Because of this limitation, rationality can diverge from actuality. So if the agent has a lot of misleading evidence, it may be rational for them to turn left even though the actually correct path goes right.
Bernard Williams has criticized externalist conceptions of rationality based on the claim that rationality should help explain what motivates the agent to act. This is easy for internalism but difficult for externalism since external reasons can be independent of the agent's motivation. Externalists have responded to this objection by distinguishing between motivational and normative reasons. Motivational reasons explain why someone acts the way they do while normative reasons explain why someone ought to act in a certain way. Ideally, the two overlap, but they can come apart. For example, liking chocolate cake is a motivational reason for eating it while having high blood pressure is a normative reason for not eating it. The problem of rationality is primarily concerned with normative reasons. This is especially true for various contemporary philosophers who hold that rationality can be reduced to normative reasons. The distinction between motivational and normative reasons is usually accepted, but many theorists have raised doubts that rationality can be identified with normativity. On this view, rationality may sometimes recommend suboptimal actions, for example, because the agent lacks important information or has false information. In this regard, discussions between internalism and externalism overlap with discussions of the normativity of rationality.
Relativity
An important implication of internalist conceptions is that rationality is relative to the person's perspective or mental states. Whether a belief or an action is rational usually depends on which mental states the person has. So carrying an umbrella for the walk to the supermarket is rational for a person believing that it will rain but irrational for another person who lacks this belief. According to Robert Audi, this can be explained in terms of experience: what is rational depends on the agent's experience. Since different people have different experiences, there are differences in what is rational for them.
Normativity
Rationality is normative in the sense that it sets up certain rules or standards of correctness: to be rational is to comply with certain requirements. For example, rationality requires that the agent does not have contradictory beliefs. Many discussions on this issue concern the question of what exactly these standards are. Some theorists characterize the normativity of rationality in the deontological terms of obligations and permissions. Others understand them from an evaluative perspective as good or valuable. A further approach is to talk of rationality based on what is praise- and blameworthy. It is important to distinguish the norms of rationality from other types of norms. For example, some forms of fashion prescribe that men do not wear bell-bottom trousers. Understood in the strongest sense, a norm prescribes what an agent ought to do or what they have most reason to do. The norms of fashion are not norms in this strong sense: that it is unfashionable does not mean that men ought not to wear bell-bottom trousers.
Most discussions of the normativity of rationality are interested in the strong sense, i.e. whether agents ought always to be rational. This is sometimes termed a substantive account of rationality in contrast to structural accounts. One important argument in favor of the normativity of rationality is based on considerations of praise- and blameworthiness. It states that we usually hold each other responsible for being rational and criticize each other when we fail to do so. This practice indicates that irrationality is some form of fault on the side of the subject that should not be the case. A strong counterexample to this position is due to John Broome, who considers the case of a fish an agent wants to eat. It contains salmonella, which is a decisive reason why the agent ought not to eat it. But the agent is unaware of this fact, which is why it is rational for them to eat the fish. So this would be a case where normativity and rationality come apart. This example can be generalized in the sense that rationality only depends on the reasons accessible to the agent or how things appear to them. What one ought to do, on the other hand, is determined by objectively existing reasons. In the ideal case, rationality and normativity may coincide, but they come apart either if the agent lacks access to a reason or if they have a mistaken belief about the presence of a reason. These considerations are summed up in the statement that rationality supervenes only on the agent's mind but normativity does not.
But there are also thought experiments in favor of the normativity of rationality. One, due to Frank Jackson, involves a doctor who receives a patient with a mild condition and has to prescribe one out of three drugs: drug A resulting in a partial cure, drug B resulting in a complete cure, or drug C resulting in the patient's death. The doctor's problem is that they cannot tell which of the drugs B and C results in a complete cure and which one in the patient's death. The objectively best case would be for the patient to get drug B, but it would be highly irresponsible for the doctor to prescribe it given the uncertainty about its effects. So the doctor ought to prescribe the less effective drug A, which is also the rational choice. This thought experiment indicates that rationality and normativity coincide since what is rational and what one ought to do depends on the agent's mind after all.
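The doctor's reasoning can be reconstructed as an expected-utility calculation. The utilities and the 50/50 probability below are illustrative numbers added here; the thought experiment itself supplies none.

```python
# Assumed utilities: partial cure 0.5, complete cure 1.0, death -10.0.
utility = {"partial cure": 0.5, "complete cure": 1.0, "death": -10.0}

expected_a = utility["partial cure"]  # drug A's outcome is certain

# Prescribing either unidentified drug is a gamble between cure and death.
expected_gamble = 0.5 * utility["complete cure"] + 0.5 * utility["death"]

print(f"E[drug A]      = {expected_a}")       # 0.5
print(f"E[drug B or C] = {expected_gamble}")  # -4.5: the gamble is irrational
```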
Some theorists have responded to these thought experiments by distinguishing between normativity and responsibility. On this view, critique of irrational behavior, like the doctor prescribing drug B, involves a negative evaluation of the agent in terms of responsibility but remains silent on normative issues. On a competence-based account, which defines rationality in terms of the competence of responding to reasons, such behavior can be understood as a failure to execute one's competence. But sometimes we are lucky and we succeed in the normative dimension despite failing to perform competently, i.e. rationally, due to being irresponsible. The opposite can also be the case: bad luck may result in failure despite a responsible, competent performance. This explains how rationality and normativity can come apart despite our practice of criticizing irrationality.
Normative and descriptive theories
The concept of normativity can also be used to distinguish different theories of rationality. Normative theories explore the normative nature of rationality. They are concerned with rules and ideals that govern how the mind should work. Descriptive theories, on the other hand, investigate how the mind actually works. This includes issues like under which circumstances the ideal rules are followed as well as studying the underlying psychological processes responsible for rational thought. Descriptive theories are often investigated in empirical psychology while philosophy tends to focus more on normative issues. This division also reflects how different these two types are investigated.
Descriptive and normative theorists usually employ different methodologies in their research. Descriptive issues are studied by empirical research. This can take the form of studies that present their participants with a cognitive problem. It is then observed how the participants solve the problem, possibly together with explanations of why they arrived at a specific solution. Normative issues, on the other hand, are usually investigated in similar ways to how the formal sciences conduct their inquiry. In the field of theoretical rationality, for example, it is accepted that deductive reasoning in the form of modus ponens leads to rational beliefs. This claim can be investigated using methods like rational intuition or careful deliberation toward a reflective equilibrium. These forms of investigation can arrive at conclusions about what forms of thought are rational and irrational without depending on empirical evidence.
An important question in this field concerns the relation between descriptive and normative approaches to rationality. One difficulty in this regard is that there is in many cases a huge gap between what the norms of ideal rationality prescribe and how people actually reason. Examples of normative systems of rationality are classical logic, probability theory, and decision theory. Actual reasoners often diverge from these standards because of cognitive biases, heuristics, or other mental limitations.
Traditionally, it was often assumed that actual human reasoning should follow the rules described in normative theories. On this view, any discrepancy is a form of irrationality that should be avoided. However, this usually ignores the limitations of the human mind. Given these limitations, various discrepancies may be necessary (and in this sense rational) to get the most useful results. For example, the ideal rational norms of decision theory demand that the agent always choose the option with the highest expected value. But calculating the expected value of each option may take a very long time in complex situations and may not be worth the trouble. This is reflected in the fact that actual reasoners often settle for an option that is good enough without making certain that it is really the best option available. A further difficulty in this regard is Hume's law, which states that one cannot deduce what ought to be from what is. So from the mere fact that a certain heuristic or cognitive bias is present in a specific case, it cannot be inferred that it ought to be present. One approach to these problems is to hold that descriptive and normative theories talk about different types of rationality. This way, there is no contradiction between the two and both can be correct in their own field. Similar problems are discussed in so-called naturalized epistemology.
Conservatism and foundationalism
Rationality is usually understood as conservative in the sense that rational agents do not start from zero but already possess many beliefs and intentions. Reasoning takes place on the background of these pre-existing mental states and tries to improve them. This way, the original beliefs and intentions are privileged: one keeps them unless a reason to doubt them is encountered. Some forms of epistemic foundationalism reject this approach. According to them, the whole system of beliefs is to be justified by self-evident beliefs. Examples of such self-evident beliefs may include immediate experiences as well as simple logical and mathematical axioms.
An important difference between conservatism and foundationalism concerns their differing conceptions of the burden of proof. According to conservatism, the burden of proof always favors already established beliefs: in the absence of new evidence, it is rational to keep the mental states one already has. According to foundationalism, the burden of proof always favors suspending mental states. For example, suppose the agent reflects on their pre-existing belief that the Taj Mahal is in Agra but is unable to access any reason for or against it. In this case, conservatives hold that it is rational to keep the belief, while foundationalists reject it as irrational due to the lack of reasons. In this regard, conservatism is much closer to the ordinary conception of rationality. One problem for foundationalism is that very few beliefs, if any, would remain if this approach were carried out meticulously. Another is that enormous mental resources would be required to constantly keep track of all the justificatory relations connecting non-fundamental beliefs to fundamental ones.
Types
Rationality is discussed in a great variety of fields, often in very different terms. While some theorists try to provide a unifying conception expressing the features shared by all forms of rationality, the more common approach is to articulate the different aspects of the individual forms of rationality. The most common distinction is between theoretical and practical rationality. Other classifications include categories for ideal and bounded rationality as well as for individual and social rationality.
Theoretical and practical
The most influential distinction contrasts theoretical or epistemic rationality with practical rationality. Its theoretical side concerns the rationality of beliefs: whether it is rational to hold a given belief and how certain one should be about it. Practical rationality, on the other hand, is about the rationality of actions, intentions, and decisions. This corresponds to the distinction between theoretical reasoning and practical reasoning: theoretical reasoning tries to assess whether the agent should change their beliefs while practical reasoning tries to assess whether the agent should change their plans and intentions.
Theoretical
Theoretical rationality concerns the rationality of cognitive mental states, in particular of beliefs. It is common to distinguish between two factors. The first factor concerns the fact that good reasons are necessary for a belief to be rational. This is usually understood in terms of evidence provided by the so-called sources of knowledge, i.e. faculties like perception, introspection, and memory. In this regard, it is often argued that to be rational, the believer has to respond to the impressions or reasons presented by these sources. For example, the visual impression of the sunlight on a tree makes it rational to believe that the sun is shining. In this regard, it may also be relevant whether the formed belief is involuntary and implicit.
The second factor pertains to the norms and procedures of rationality that govern how agents should form beliefs based on this evidence. These norms include the rules of inference discussed in regular logic as well as other norms of coherence between mental states. In the case of rules of inference, the premises of a valid argument offer support to the conclusion and therefore make belief in the conclusion rational. The support offered by the premises can be either deductive or non-deductive. In both cases, believing the premises of an argument makes it rational to also believe its conclusion. The difference between the two lies in how the premises support the conclusion. In deductive reasoning, the premises offer the strongest possible support: it is impossible for the conclusion to be false if the premises are true. The premises of non-deductive arguments also offer support for their conclusion, but this support is not absolute: the truth of the premises does not guarantee the truth of the conclusion. Instead, the premises make it more likely that the conclusion is true. In this case, it is usually demanded that the non-deductive support be sufficiently strong if belief in the conclusion is to be rational.
An important form of theoretical irrationality is motivationally biased belief, sometimes referred to as wishful thinking. In this case, beliefs are formed based on one's desires or what is pleasing to imagine without proper evidential support. Faulty reasoning in the form of formal and informal fallacies is another cause of theoretical irrationality.
Practical
All forms of practical rationality are concerned with how we act. It pertains both to actions directly as well as to mental states and events preceding actions, like intentions and decisions. There are various aspects of practical rationality, such as how to pick a goal to follow and how to choose the means for reaching this goal. Other issues include the coherence between different intentions as well as between beliefs and intentions.
Some theorists define the rationality of actions in terms of beliefs and desires. On this view, an action to bring about a certain goal is rational if the agent has the desire to bring about this goal and the belief that their action will realize it. A stronger version of this view requires that the responsible beliefs and desires are rational themselves. A very influential conception of the rationality of decisions comes from decision theory. In decisions, the agent is presented with a set of possible courses of action and has to choose one among them. Decision theory holds that the agent should choose the alternative that has the highest expected value. Practical rationality includes the field of actions but not of behavior in general. The difference between the two is that actions are intentional behavior, i.e. they are performed for a purpose and guided by it. In this regard, intentional behavior like driving a car is either rational or irrational while non-intentional behavior like sneezing is outside the domain of rationality.
For various other practical phenomena, there is no clear consensus on whether they belong to this domain or not. For example, concerning the rationality of desires, two important theories are proceduralism and substantivism. According to proceduralism, there is an important distinction between instrumental and noninstrumental desires. A desire is instrumental if its fulfillment serves as a means to the fulfillment of another desire. For example, Jack is sick and wants to take medicine to get healthy again. In this case, the desire to take the medicine is instrumental since it only serves as a means to Jack's noninstrumental desire to get healthy. Both proceduralism and substantivism usually agree that a person can be irrational if they lack an instrumental desire despite having the corresponding noninstrumental desire and being aware that it acts as a means. Proceduralists hold that this is the only way a desire can be irrational. Substantivists, on the other hand, allow that noninstrumental desires may also be irrational. In this regard, a substantivist could claim that it would be irrational for Jack to lack his noninstrumental desire to be healthy. Similar debates focus on the rationality of emotions.
Relation between the two
Theoretical and practical rationality are often discussed separately and there are many differences between them. In some cases, they even conflict with each other. However, there are also various ways in which they overlap and depend on each other.
It is sometimes claimed that theoretical rationality aims at truth while practical rationality aims at goodness. According to John Searle, the difference can be expressed in terms of "direction of fit". On this view, theoretical rationality is about how the mind corresponds to the world by representing it. Practical rationality, on the other hand, is about how the world corresponds to the ideal set up by the mind and how it should be changed. Another difference is that arbitrary choices are sometimes needed for practical rationality. For example, there may be two equally good routes available to reach a goal. On the practical level, one has to choose one of them if one wants to reach the goal. It would even be practically irrational to resist this arbitrary choice, as exemplified by Buridan's ass. But on the theoretical level, one does not have to form a belief about which route was taken upon hearing that someone reached the goal. In this case, the arbitrary choice for one belief rather than the other would be theoretically irrational. Instead, the agent should suspend their belief either way if they lack sufficient reasons. Another difference is that practical rationality is guided by specific goals and desires, in contrast to theoretical rationality. So it is practically rational to take medicine if one has the desire to cure a sickness. But it is theoretically irrational to adopt the belief that one is healthy just because one desires this. This is a form of wishful thinking.
In some cases, the demands of practical and theoretical rationality conflict with each other. For example, the practical reason of loyalty to one's child may demand the belief that they are innocent while the evidence linking them to the crime may demand a belief in their guilt on the theoretical level.
But the two domains also overlap in certain ways. For example, the norm of rationality known as enkrasia links beliefs and intentions. It states that "[r]ationality requires of you that you intend to F if you believe your reasons require you to F". Failing to fulfill this requirement results in cases of irrationality known as akrasia or weakness of the will. Another form of overlap is that the study of the rules governing practical rationality is a theoretical matter. And practical considerations may determine whether to pursue theoretical rationality on a certain issue as well as how much time and resources to invest in the inquiry. It is often held that practical rationality presupposes theoretical rationality. This is based on the idea that to decide what should be done, one needs to know what is the case. But one can assess what is the case independently of knowing what should be done. So in this regard, one can study theoretical rationality as a distinct discipline independent of practical rationality but not the other way round. However, this independence is rejected by some forms of doxastic voluntarism. They hold that theoretical rationality can be understood as one type of practical rationality. This is based on the controversial claim that we can decide what to believe. It can take the form of epistemic decision theory, which states that people try to fulfill epistemic aims when deciding what to believe. A similar idea is defended by Jesús Mosterín. He argues that the proper object of rationality is not belief but acceptance. He understands acceptance as a voluntary and context-dependent decision to affirm a proposition.
Ideal and bounded
Various theories of rationality assume some form of ideal rationality, for example, by demanding that rational agents obey all the laws and implications of logic. This can include the requirement that if the agent believes a proposition, they should also believe in everything that logically follows from this proposition. However, many theorists reject this form of logical omniscience as a requirement for rationality. They argue that, since the human mind is limited, rationality has to be defined accordingly to account for how actual finite humans possess some form of resource-limited rationality.
According to the position of bounded rationality, theories of rationality should take into account cognitive limitations, such as incomplete knowledge, imperfect memory, and limited capacities of computation and representation. An important research question in this field is about how cognitive agents use heuristics rather than brute calculations to solve problems and make decisions. According to the satisficing heuristic, for example, agents usually stop their search for the best option once an option is found that meets their desired achievement level. In this regard, people often do not continue to search for the best possible option, even though this is what theories of ideal rationality commonly demand. Using heuristics can be highly rational as a way to adapt to the limitations of the human mind, especially in complex cases where these limitations make brute calculations impossible or very time- and resource-intensive.
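The contrast between maximizing and satisficing can be made concrete in a short sketch. This is a hypothetical illustration, not a model from the bounded-rationality literature; the option values and the aspiration level are invented:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    value: float

def maximize(options):
    """Ideal rationality: evaluate every option and choose the best one."""
    return max(options, key=lambda o: o.value)

def satisfice(options, aspiration_level):
    """Bounded rationality: accept the first option that is good enough."""
    for o in options:
        if o.value >= aspiration_level:
            return o  # search stops here; possibly better options are never examined
    return None  # no option met the aspiration level

options = [Option("apartment A", 6.0), Option("apartment B", 8.5), Option("apartment C", 9.0)]
print(maximize(options).name)        # apartment C: requires inspecting every option
print(satisfice(options, 8.0).name)  # apartment B: accepted as soon as it clears the threshold
```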
Individual and social
Most discussions and research in the academic literature focus on individual rationality. This concerns the rationality of individual persons, for example, whether their beliefs and actions are rational. But the question of rationality can also be applied to groups as a whole on the social level. This form of social or collective rationality concerns both theoretical and practical issues like group beliefs and group decisions. And just as in the individual case, it is possible to study these phenomena as well as the processes and structures responsible for them. On the social level, there are various forms of cooperation to reach a shared goal. In a theoretical case, a group of jurors may first discuss and then vote to determine whether the defendant is guilty. In a practical case, politicians may cooperate to implement new regulations to combat climate change. These forms of cooperation can be judged on their social rationality depending on how they are implemented and on the quality of the results they produce. Some theorists try to reduce social rationality to individual rationality by holding that group processes are rational to the extent that the individuals participating in them are rational. But such a reduction is frequently rejected.
Various studies indicate that group rationality often outperforms individual rationality. For example, groups of people working together on the Wason selection task usually perform better than individuals by themselves. This form of group superiority is sometimes termed "wisdom of crowds" and may be explained based on the claim that competent individuals have a stronger impact on the group decision than others. However, this is not always the case and sometimes groups perform worse due to conformity or unwillingness to bring up controversial issues.
Others
Many other classifications are discussed in the academic literature. One important distinction is between approaches to rationality based on the output or on the process. Process-oriented theories of rationality are common in cognitive psychology and study how cognitive systems process inputs to generate outputs. Output-oriented approaches are more common in philosophy and investigate the rationality of the resulting states. Another distinction is between relative and categorical judgments of rationality. In the relative case, rationality is judged based on limited information or evidence while categorical judgments take all the evidence into account and are thus judgments all things considered. For example, believing that one's investments will multiply can be rational in a relative sense because it is based on one's astrological horoscope. But this belief is irrational in a categorical sense if the belief in astrology is itself irrational.
Importance
Rationality is central to solving many problems, both on the local and the global scale. This is often based on the idea that rationality is necessary to act efficiently and to reach all kinds of goals. This includes goals from diverse fields, such as ethical goals, humanist goals, scientific goals, and even religious goals. The study of rationality is very old and has occupied many of the greatest minds since ancient Greece. This interest is often motivated by the attempt to discover the potential and the limitations of our minds. Various theorists even see rationality as the essence of being human, often in an attempt to distinguish humans from other animals. However, this strong claim has been subjected to many criticisms, for example, that humans are not rational all the time and that non-human animals also show diverse forms of intelligence.
The topic of rationality is relevant to a variety of disciplines. It plays a central role in philosophy, psychology, Bayesianism, decision theory, and game theory. But it is also covered in other disciplines, such as artificial intelligence, behavioral economics, microeconomics, and neuroscience. Some forms of research restrict themselves to one specific domain while others investigate the topic in an interdisciplinary manner by drawing insights from different fields.
Paradoxes of rationality
The term paradox of rationality has a variety of meanings. It is often used for puzzles or unsolved problems of rationality. Some are just situations where it is not clear what the rational person should do. Others involve apparent faults within rationality itself, for example, where rationality seems to recommend a suboptimal course of action. A special case are so-called rational dilemmas, in which it is impossible to be rational since two norms of rationality conflict with each other. Examples of paradoxes of rationality include Pascal's Wager, the Prisoner's dilemma, Buridan's ass, and the St. Petersburg paradox.
History
Max Weber
The German scholar Max Weber proposed an interpretation of social action that distinguished between four different idealized types of rationality.
The first, which he called Zweckrational or purposive/instrumental rationality, is related to expectations about the behavior of other human beings or objects in the environment. These expectations serve as means for a particular actor to attain ends, ends which Weber noted were "rationally pursued and calculated." The second type Weber called Wertrational or value/belief-oriented. Here the action is undertaken for what one might call reasons intrinsic to the actor: some ethical, aesthetic, religious or other motive, independent of whether it will lead to success. The third type was affectual, determined by an actor's specific affect, feeling, or emotion, a kind of rationality that Weber himself said was on the borderline of what he considered "meaningfully oriented." The fourth was traditional or conventional, determined by ingrained habituation. Weber emphasized that it was very unusual to find only one of these orientations: combinations were the norm. His usage also makes clear that he considered the first two as more significant than the others, and it is arguable that the third and fourth are subtypes of the first two.
The advantage of Weber's interpretation of rationality is that it avoids a value-laden assessment, say, that certain kinds of beliefs are irrational. Instead, Weber suggests that a ground or motive can be given, for religious or affective reasons, for example, that may meet the criterion of explanation or justification even if it is not an explanation that fits the Zweckrational orientation of means and ends. The opposite is therefore also true: some means-ends explanations will not satisfy those whose grounds for action are Wertrational.
Weber's constructions of rationality have been critiqued both from a Habermasian (1984) perspective (as devoid of social context and under-theorised in terms of social power) and also from a feminist perspective (Eagleton, 2003) whereby Weber's rationality constructs are viewed as imbued with masculine values and oriented toward the maintenance of male power. An alternative position on rationality (which includes both bounded rationality, as well as the affective and value-based arguments of Weber) can be found in the critique of Etzioni (1988), who reframes thought on decision-making to argue for a reversal of the position put forward by Weber. Etzioni illustrates how purposive/instrumental reasoning is subordinated by normative considerations (ideas on how people 'ought' to behave) and affective considerations (as a support system for the development of human relationships).
Richard Brandt
Richard Brandt proposed a "reforming definition" of rationality, arguing that someone is rational if their notions survive a form of cognitive psychotherapy.
Robert Audi
Robert Audi developed a comprehensive account of rationality that covers both the theoretical and the practical side of rationality. This account centers on the notion of a ground: a mental state is rational if it is "well-grounded" in a source of justification. Irrational mental states, on the other hand, lack a sufficient ground. For example, the perceptual experience of a tree when looking outside the window can ground the rationality of the belief that there is a tree outside.
Audi is committed to a form of foundationalism: the idea that justified beliefs, or in his case, rational states in general, can be divided into two groups: the foundation and the superstructure. The mental states in the superstructure receive their justification from other rational mental states while the foundational mental states receive their justification from a more basic source. For example, the above-mentioned belief that there is a tree outside is foundational since it is based on a basic source: perception. Knowing that trees grow in soil, we may deduce that there is soil outside. This belief is equally rational, being supported by an adequate ground, but it belongs to the superstructure since its rationality is grounded in the rationality of another belief. Desires, like beliefs, form a hierarchy: intrinsic desires are at the foundation while instrumental desires belong to the superstructure. In order to link the instrumental desire to the intrinsic desire an extra element is needed: a belief that the fulfillment of the instrumental desire is a means to the fulfillment of the intrinsic desire.
Audi asserts that all the basic sources providing justification for the foundational mental states come from experience. As for beliefs, there are four types of experience that act as sources: perception, memory, introspection, and rational intuition. The main basic source of the rationality of desires, on the other hand, comes in the form of hedonic experience: the experience of pleasure and pain. So, for example, a desire to eat ice-cream is rational if it is based on experiences in which the agent enjoyed the taste of ice-cream, and irrational if it lacks such a support. Because of its dependence on experience, rationality can be defined as a kind of responsiveness to experience.
Actions, in contrast to beliefs and desires, do not have a source of justification of their own. Their rationality is grounded in the rationality of other states instead: in the rationality of beliefs and desires. Desires motivate actions. Beliefs are needed here, as in the case of instrumental desires, to bridge a gap and link two elements. Audi distinguishes the focal rationality of individual mental states from the global rationality of persons. Global rationality has a derivative status: it depends on the focal rationality. Or more precisely: "Global rationality is reached when a person has a sufficiently integrated system of sufficiently well-grounded propositional attitudes, emotions, and actions". Rationality is relative in the sense that it depends on the experience of the person in question. Since different people undergo different experiences, what is rational to believe for one person may be irrational to believe for another person. That a belief is rational does not entail that it is true.
In various fields
Ethics and morality
The problem of rationality is relevant to various issues in ethics and morality. Many debates center around the question of whether rationality implies morality or is possible without it. Some examples based on common sense suggest that the two can come apart. For example, some immoral psychopaths are highly intelligent in the pursuit of their schemes and may, therefore, be seen as rational. However, there are also considerations suggesting that the two are closely related to each other. For example, according to the principle of universality, "one's reasons for acting are acceptable only if it is acceptable that everyone acts on such reasons". A similar formulation is given in Immanuel Kant's categorical imperative: "act only according to that maxim whereby you can, at the same time, will that it should become a universal law". The principle of universality has been suggested as a basic principle both for morality and for rationality. This is closely related to the question of whether agents have a duty to be rational. Another issue concerns the value of rationality. In this regard, it is often held that human lives are more important than animal lives because humans are rational.
Psychology
Many psychological theories have been proposed to describe how reasoning happens and what underlying psychological processes are responsible. One of their goals is to explain how the different types of irrationality happen and why some types are more prevalent than others. They include mental logic theories, mental model theories, and dual process theories. An important psychological area of study focuses on cognitive biases. Cognitive biases are systematic tendencies to engage in erroneous or irrational forms of thinking, judging, and acting. Examples include the confirmation bias, the self-serving bias, the hindsight bias, and the Dunning–Kruger effect. Some empirical findings suggest that metacognition is an important aspect of rationality. The idea behind this claim is that reasoning is carried out more efficiently and reliably if the responsible thought processes are properly controlled and monitored.
The Wason selection task is an influential test for studying rationality and reasoning abilities. In it, four cards are placed before the participants. Each has a number on one side and a letter on the opposite side. In one case, the visible sides of the four cards are A, D, 4, and 7. The participant is then asked which cards need to be turned around in order to verify the conditional claim "if there is a vowel on one side of the card, then there is an even number on the other side of the card". The correct answer is A and 7, but it is given by only about 10% of participants. Many choose card 4 instead, even though there is no requirement on what letters may appear on its opposite side. An important insight from using these and similar tests is that the rational ability of the participants is usually significantly better for concrete and realistic cases than for abstract or implausible cases. Various contemporary studies in this field use Bayesian probability theory to study subjective degrees of belief, for example, how the believer's certainty in the premises is carried over to the conclusion through reasoning.
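The logic of the task can be made explicit: a card must be turned over exactly when its hidden side could falsify the conditional rule. The following sketch, using a hypothetical encoding of the cards, computes the correct answer:

```python
def is_vowel(symbol: str) -> bool:
    return symbol in "AEIOU"

def is_even_digit(symbol: str) -> bool:
    return symbol.isdigit() and int(symbol) % 2 == 0

def must_turn(visible: str) -> bool:
    """A card must be turned iff its hidden side could falsify the rule
    'if there is a vowel on one side, there is an even number on the other'."""
    if visible.isalpha():
        # A vowel could hide an odd number, so it must be checked;
        # a consonant cannot falsify the rule either way.
        return is_vowel(visible)
    # An even number cannot falsify the rule (it says nothing about what evens pair with);
    # an odd number could hide a vowel, so it must be checked.
    return not is_even_digit(visible)

cards = ["A", "D", "4", "7"]
print([card for card in cards if must_turn(card)])  # ['A', '7']
```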
In the psychology of reasoning, psychologists and cognitive scientists have defended different positions on human rationality. One prominent view, due to Philip Johnson-Laird and Ruth M. J. Byrne among others, is that humans are rational in principle but err in practice; that is, humans have the competence to be rational, but their performance is limited by various factors. However, it has been argued that many standard tests of reasoning, such as those on the conjunction fallacy, the Wason selection task, or the base rate fallacy, suffer from methodological and conceptual problems. This has led to disputes in psychology over whether researchers should (only) use standard rules of logic, probability theory and statistics, or rational choice theory as norms of good reasoning. Opponents of this view, such as Gerd Gigerenzer, favor a conception of bounded rationality, especially for tasks under high uncertainty. The concept of rationality continues to be debated by psychologists, economists and cognitive scientists.
The psychologist Jean Piaget gave an influential account of how the stages in human development from childhood to adulthood can be understood in terms of the increase of rational and logical abilities. He identified four stages associated with rough age groups: the sensorimotor stage below the age of two, the preoperational stage until the age of seven, the concrete operational stage until the age of eleven, and the formal operational stage afterward. Rational or logical reasoning only takes place in the last stage and is related to abstract thinking, concept formation, reasoning, planning, and problem-solving.
Emotions
According to A. C. Grayling, rationality "must be independent of emotions, personal feelings or any kind of instincts". Certain findings in cognitive science and neuroscience show that no human has ever satisfied this criterion, except perhaps a person with no affective feelings, for example, an individual with a massively damaged amygdala or severe psychopathy. Thus, such an idealized form of rationality is best exemplified by computers, and not people. However, scholars may productively appeal to the idealization as a point of reference. In his book, The Edge of Reason: A Rational Skeptic in an Irrational World, British philosopher Julian Baggini sets out to debunk myths about reason (e.g., that it is "purely objective and requires no subjective judgment").
Cognitive and behavioral sciences
Cognitive and behavioral sciences try to describe, explain, and predict how people think and act. Their models are often based on the assumption that people are rational. For example, classical economics is based on the assumption that people are rational agents that maximize expected utility. However, people often depart from the ideal standards of rationality in various ways. For example, they may only look for confirming evidence and ignore disconfirming evidence. Another factor studied in this regard are the limitations of human intellectual capacities. Many discrepancies from rationality are caused by limited time, memory, or attention. Often heuristics and rules of thumb are used to mitigate these limitations, but they may lead to new forms of irrationality.
Logic
Theoretical rationality is closely related to logic, but not identical to it. Logic is often defined as the study of correct arguments. This concerns the relation between the propositions used in the argument: whether its premises offer support to its conclusion. Theoretical rationality, on the other hand, is about what to believe or how to change one's beliefs. The laws of logic are relevant to rationality since the agent should change their beliefs if they violate these laws. But logic is not directly about what to believe. Additionally, there are also other factors and norms besides logic that determine whether it is rational to hold or change a belief. The study of rationality in logic is more concerned with epistemic rationality, that is, attaining beliefs in a rational manner, than instrumental rationality.
Decision theory
An influential account of practical rationality is given by decision theory. Decisions are situations where a number of possible courses of action are available to the agent, who has to choose one of them. Decision theory investigates the rules governing which action should be chosen. It assumes that each action may lead to a variety of outcomes. Each outcome is associated with a conditional probability and a utility. The expected gain of an outcome can be calculated by multiplying its conditional probability with its utility. The expected utility of an act is the sum of the expected gains of all the outcomes associated with it. From these basic ingredients, it is possible to define the rationality of decisions: a decision is rational if it selects the act with the highest expected utility. While decision theory gives a very precise formal treatment of this issue, it leaves open the empirical problem of how to assign utilities and probabilities. So decision theory can still lead to bad empirical decisions if it is based on poor assignments.
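A minimal sketch of this decision rule, with invented probabilities and utilities, might look as follows:

```python
def expected_utility(outcomes):
    """Sum the probability-weighted utilities over an act's possible outcomes."""
    return sum(probability * utility for probability, utility in outcomes)

# Each act maps to a list of (conditional probability, utility) pairs;
# here the two outcomes correspond to rain (0.3) and no rain (0.7).
acts = {
    "take umbrella":  [(0.3, 5), (0.7, 4)],
    "leave umbrella": [(0.3, -10), (0.7, 6)],
}

rational_choice = max(acts, key=lambda act: expected_utility(acts[act]))
print(rational_choice)  # 'take umbrella': expected utility 4.3 beats 1.2
```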
According to decision theorists, rationality is primarily a matter of internal consistency. This means that a person's mental states, like beliefs and preferences, are consistent with each other and do not go against each other. One consequence of this position is that people with obviously false beliefs or perverse preferences may still count as rational if these mental states are consistent with their other mental states. Utility is often understood in terms of self-interest or personal preferences. However, this is not a necessary aspect of decision theory, and it can also be interpreted in terms of goodness or value in general.
Game theory
Game theory is closely related to decision theory and the problem of rational choice. Rational choice is based on the idea that rational agents perform a cost-benefit analysis of all available options and choose the option that is most beneficial from their point of view. In the case of game theory, several agents are involved. This further complicates the situation since whether a given option is the best choice for one agent may depend on choices made by other agents. Game theory can be used to analyze various situations, like playing chess, firms competing for business, or animals fighting over prey. Rationality is a core assumption of game theory: it is assumed that each player chooses rationally based on what is most beneficial from their point of view. This way, the agent may be able to anticipate how others choose and what their best choice is relative to the behavior of the others. This often results in a Nash equilibrium, which constitutes a set of strategies, one for each player, where no player can improve their outcome by unilaterally changing their strategy.
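For games with finitely many pure strategies, such an equilibrium can be found by brute force: check, for every combination of strategies, whether some player could improve their payoff by unilaterally deviating. The sketch below uses standard textbook payoffs for the prisoner's dilemma:

```python
# payoff[(row_strategy, col_strategy)] = (row player's payoff, column player's payoff);
# the numbers are standard textbook prisoner's dilemma values (higher is better).
payoff = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(row, col):
    """A profile is a Nash equilibrium iff no player gains by deviating alone."""
    row_ok = all(payoff[(r, col)][0] <= payoff[(row, col)][0] for r in strategies)
    col_ok = all(payoff[(row, c)][1] <= payoff[(row, col)][1] for c in strategies)
    return row_ok and col_ok

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print(equilibria)  # [('defect', 'defect')]: mutual defection is the unique equilibrium
```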
Bayesianism
A popular contemporary approach to rationality is based on Bayesian epistemology. Bayesian epistemology sees belief as a continuous phenomenon that comes in degrees. For example, Daniel is relatively sure that the Boston Celtics will win their next match and absolutely certain that two plus two equals four. In this case, the degree of the first belief is weaker than the degree of the second belief. These degrees are usually referred to as credences and represented by numbers between 0 and 1. 0 corresponds to full disbelief, 1 corresponds to full belief and 0.5 corresponds to suspension of belief. Bayesians understand this in terms of probability: the higher the credence, the higher the subjective probability that the believed proposition is true. As probabilities, they are subject to the laws of probability theory. These laws act as norms of rationality: beliefs are rational if they comply with them and irrational if they violate them. For example, it would be irrational to have a credence of 0.9 that it will rain tomorrow together with another credence of 0.9 that it will not rain tomorrow. This account of rationality can also be extended to the practical domain by requiring that agents maximize their subjective expected utility. This way, Bayesianism can provide a unified account of both theoretical and practical rationality.
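The irrationality of the rain example can be checked mechanically against the additivity requirement of probability theory. A minimal sketch follows; the numerical tolerance is an implementation detail, not part of the theory:

```python
def coherent(credence_p: float, credence_not_p: float, tol: float = 1e-9) -> bool:
    """Probability theory requires each credence to lie in [0, 1]
    and P(p) + P(not p) to equal 1."""
    in_range = 0.0 <= credence_p <= 1.0 and 0.0 <= credence_not_p <= 1.0
    return in_range and abs(credence_p + credence_not_p - 1.0) < tol

print(coherent(0.9, 0.1))  # True: a rational combination of credences
print(coherent(0.9, 0.9))  # False: violates additivity, hence irrational
```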
Economics
Rationality plays a key role in economics and there are several strands to this. Firstly, there is the concept of instrumentality: basically the idea that people and organisations are instrumentally rational, that is, that they adopt the best actions to achieve their goals. Secondly, there is an axiomatic concept according to which rationality is a matter of being logically consistent in one's preferences and beliefs. Thirdly, people have focused on the accuracy of beliefs and the full use of information: on this view, a person who is not rational has beliefs that do not fully use the information they have.
Debates within economic sociology also arise as to whether or not people or organizations are "really" rational, as well as whether it makes sense to model them as such in formal models. Some have argued that a kind of bounded rationality makes more sense for such models.
Others think that any kind of rationality along the lines of rational choice theory is a useless concept for understanding human behavior; the term homo economicus (economic man: the imaginary man being assumed in economic models who is logically consistent but amoral) was coined largely in honor of this view. Behavioral economics aims to account for economic actors as they actually are, allowing for psychological biases, rather than assuming idealized instrumental rationality.
Artificial intelligence
The field of artificial intelligence is concerned, among other things, with how problems of rationality can be implemented and solved by computers. Within artificial intelligence, a rational agent is typically one that maximizes its expected utility, given its current knowledge. Utility is the usefulness of the consequences of its actions. The utility function is arbitrarily defined by the designer, but should be a function of "performance", which is the directly measurable consequences, such as winning or losing money. In order to make a safe agent that plays defensively, a nonlinear function of performance is often desired, so that the reward for winning is lower than the punishment for losing. An agent might be rational within its own problem area, but finding the rational decision for arbitrarily complex problems is not practically possible. The rationality of human thought is a key problem in the psychology of reasoning.
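One way to realize such a defensive design is a piecewise utility function that weights losses more heavily than gains; an expected-utility maximizer using it will decline fair gambles. The shape and the loss weight below are illustrative assumptions, not a standard from the AI literature:

```python
def defensive_utility(performance: float, loss_weight: float = 2.0) -> float:
    """Map directly measurable performance (e.g. money won or lost) to utility,
    punishing losses more than rewarding equal-sized gains."""
    if performance >= 0:
        return performance
    return loss_weight * performance  # losses count double

# A fair coin flip for 10 units has expected performance 0,
# but negative expected utility under this function:
expected = 0.5 * defensive_utility(10.0) + 0.5 * defensive_utility(-10.0)
print(expected)  # -5.0: the defensive agent declines the gamble
```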
International relations
There is an ongoing debate over the merits of using "rationality" in the study of international relations (IR). Some scholars hold it indispensable; others are more critical. Still, the pervasive and persistent usage of "rationality" in political science and IR is beyond dispute. Abulof finds that some 40% of all scholarly references to "foreign policy" allude to "rationality", a ratio that rises to more than half of pertinent academic publications in the 2000s. He further argues that when it comes to concrete security and foreign policies, IR's employment of rationality borders on "malpractice": rationality-based descriptions are largely either false or unfalsifiable; many observers fail to explicate the meaning of "rationality" they employ; and the concept is frequently used politically to distinguish between "us and them."
Criticism
The concept of rationality has been subject to criticism by various philosophers who question its universality and capacity to provide a comprehensive understanding of reality and human existence.
Friedrich Nietzsche, in his work "Beyond Good and Evil" (1886), criticized the overemphasis on rationality and argued that it neglects the irrational and instinctual aspects of human nature. Nietzsche advocated for a reevaluation of values based on individual perspectives and the will to power, stating, "There are no facts, only interpretations."
Martin Heidegger, in "Being and Time" (1927), offered a critique of the instrumental and calculative view of reason, emphasizing the primacy of our everyday practical engagement with the world. Heidegger challenged the notion that rationality alone is the sole arbiter of truth and understanding.
Max Horkheimer and Theodor Adorno, in their seminal work "Dialectic of Enlightenment" (1947), questioned the Enlightenment's rationality. They argued that the dominance of instrumental reason in modern society leads to the domination of nature and the dehumanization of individuals. Horkheimer and Adorno highlighted how rationality narrows the scope of human experience and hinders critical thinking.
Michel Foucault, in "Discipline and Punish" (1975) and "The Birth of Biopolitics" (1978), critiqued the notion of rationality as a neutral and objective force. Foucault emphasized the intertwining of rationality with power structures and its role in social control. He famously stated, "Power is not an institution, and not a structure; neither is it a certain strength we are endowed with; it is the name that one attributes to a complex strategical situation in a particular society."
These philosophers' critiques of rationality shed light on its limitations, assumptions, and potential dangers. Their ideas challenge the universal application of rationality as the sole framework for understanding the complexities of human existence and the world.
See also
Bayesian epistemology
Cognitive bias
Coherence (linguistics)
Counterintuitive
Dysrationalia
Flipism
Homo economicus
Imputation (game theory) (individual rationality)
Instinct
Intelligence
Irrationality
Law of thought
LessWrong
List of cognitive biases
Principle of rationality
Rational emotive behavior therapy
Rationalism
Rationalization (making excuses)
Satisficing
Superrationality
Von Neumann–Morgenstern utility theorem
References
Further reading
Reason and Rationality, by Richard Samuels, Stephen Stich, Luc Faucher on the broad field of reason and rationality from descriptive, normative, and evaluative points of view
Stanford Encyclopedia of Philosophy entry on Historicist Theories of Rationality
Legal Reasoning After Post-Modern Critiques of Reason, by Peter Suber
Lucy Suchman (2007). Human-machine Reconfigurations: Plans and Situated Action. Cambridge University Press.
Cristina Bicchieri (1993). Rationality and Coordination, New York: Cambridge University Press
Cristina Bicchieri (2007). "Rationality and Indeterminacy", in D. Ross and H. Kinkaid (eds.) The Handbook of Philosophy of Economics, The Oxford Reference Library of Philosophy, Oxford University Press, vol. 6, n.2.
Anand, P (1993). Foundations of Rational Choice Under Risk, Oxford, Oxford University Press.
Habermas, J. (1984) The Theory of Communicative Action Volume 1: Reason and the Rationalization of Society, Cambridge: Polity Press.
Mosterín, Jesús (2008). Lo mejor posible: Racionalidad y acción humana. Madrid: Alianza Editorial. 318 pp.
Nozick, Robert (1993). The Nature of Rationality. Princeton: Princeton University Press.
Sciortino, Luca (2023). History of Rationalities: Ways of Thinking from Vico to Hacking and Beyond. New York: Palgrave Macmillan.
Eagleton, M. (ed) (2003) A Concise Companion to Feminist Theory, Oxford: Blackwell Publishing.
Simons, H. and Hawkins, D. (1949). "Some Conditions in Macro-Economic Stability", Econometrica.
Johnson-Laird, P.N. & Byrne, R.M.J. (1991). Deduction. Hillsdale: Erlbaum.
Oral stage | In Freudian psychoanalysis, the term oral stage or hemitaxia denotes the first psychosexual development stage wherein the mouth of the infant is their primary erogenous zone. Spanning the life period from birth to the age of 18 months, the oral stage is the first of the five Freudian psychosexual development stages: (i) the oral, (ii) the anal, (iii) the phallic, (iv) the latent, and (v) the genital.
Oral-stage fixation
Freud proposed that if the nursing child's appetite were thwarted during any libidinal development stage, the anxiety would persist into adulthood as a neurosis (functional mental disorder). Therefore, an infantile oral fixation would be manifest as an obsession with oral stimulation. If weaned either too early or too late, the infant might fail to resolve the emotional conflicts of the oral stage of psychosexual development and might develop a maladaptive oral fixation.
The infant who is neglected (insufficiently fed) or who is over-protected (over-fed) in the course of being nursed, might become an orally-fixated person. This fixation might have two effects: (i) the neglected child might become a psychologically dependent adult continually seeking the oral stimulation denied in infancy, thereby becoming a manipulative person in fulfilling their needs, rather than maturing to independence; (ii) the over-protected child might resist maturation and return to dependence upon others in fulfilling their needs. Theoretically, oral-stage fixations are manifested as garrulousness (talkativeness), smoking, continual oral stimulus (eating, chewing objects), and alcoholism.
See also
Amphimixis
Psychosexual development
References
Further reading
External links
Freud's Psychosexual Stages
Sexualization | Sexualization (sexualisation in Commonwealth English) is the emphasis of the sexual nature of a behavior or person. Sexualization is linked to sexual objectification, treating a person solely as an object of sexual desire. According to the American Psychological Association, sexualization occurs when "individuals are regarded as sex objects and evaluated in terms of their physical characteristics and sexiness." "In study after study, findings have indicated that women more often than men are portrayed in a sexual manner (e.g., dressed in revealing clothing, with bodily postures or facial expressions that imply sexual readiness) and are objectified (e.g., used as a decorative object, or as body parts rather than a whole person). In addition, a narrow (and unrealistic) standard of physical beauty is heavily emphasized. These are the models of femininity presented for young girls to study and emulate."
Culture and media
Sexualization has been a subject of debate for academics who work in media and cultural studies. Frederick Attenborough states the term has not been used simply to label what is seen as a social problem, but to indicate the much broader and varied set of ways in which sex has become more visible in media and culture. These include:
the widespread discussion of sexual values, practices and identities in the media;
the growth of sexual media of all kinds, for example erotica, slash fiction, sexual self-help books and the many genres of pornography;
the emergence of new forms of sexual experience, for example instant message or avatar sex made possible by developments in technology;
a public concern with the breakdown of consensus about regulations for defining and dealing with obscenity;
the prevalence of scandals, controversies and panics around sex in the media.
According to the Media Education Foundation's documentary Killing Us Softly 4: Advertising's Image of Women, the sexualization of girls in media and the ways women are portrayed in the dominant culture are detrimental to the development of young girls as they are developing their identities and understanding themselves as sexual beings.
The terms "pornification" and "pornographication" have also been used to describe the way that aesthetics that were previously associated with pornography have become part of popular culture, and that mainstream media texts and other cultural practices "citing pornographic styles, gestures and aesthetics" have become more prominent. This process, which Brian McNair has described as a "pornographication of the mainstream". has developed alongside an expansion of the cultural realm of pornography or "pornosphere" which itself has become more accessible to a much wider variety of audiences. According to McNair, both developments can be set in the context of a wider shift towards a "striptease culture" which has disrupted the boundaries between public and private discourse in late modern Western culture, and which is evident more generally in cultural trends which privilege lifestyle, reality, interactivity, self-revelation and public intimacy.
Criticism
The Australian writers Catharine Lumby and Kath Albury (2010) have suggested that sexualization is "a debate that has been simmering for almost a decade" and that concerns about sex and the media are far from new. Much of the recent writing on sexualization has been criticized for drawing on "one-sided, selective, overly simplifying, generalizing, and negatively toned" evidence and for being "saturated in the languages of concern and regulation". In these writings and the widespread press coverage they have attracted, critics state that the term is often used as "a non-sequitur causing everything from girls flirting with older men to child sex trafficking". They believe that the arguments often ignore feminist work on media, gender and the body, and present a very conservative and negative view of sex in which only monogamous heterosexual sexuality is regarded as normal. They say that the arguments tend to neglect any historical understanding of the way sex has been represented and regulated, and that they often ignore both theoretical and empirical work on the relationship between sex and media, culture and technology.
How society shapes personal interests is discussed in a review of Patrice Oppliger's book Girls Gone Skank, in which Amanda Mills states that, "consequently, girls are socialized to participate in their own abuse by becoming avid consumers of and altering their behavior to reflect sexually exploitative images and goods." In "Uses of the Erotic: The Erotic As Power", Audre Lorde argues that women are as powerful and fully capable as men, and that the suppression of the erotic in women has been used to sustain claims of male superiority: "the superficially erotic has been encouraged as a sign of female inferiority; on the other hand, women have been made to suffer and to feel both contemptible and suspect by virtue of its existence".
Effects on children
Children and adolescents spend more time engaging with media than any other age group, and this is a period of life in which they are especially susceptible to the information they receive. Children get much of their sex education from the media: young children are exposed to sexualized images and to more information than ever before in human history, yet they are not developmentally ready to process it, and this affects their development and behavior.
Sexualization of young girls in the media and infantilization of women creates an environment where it becomes more acceptable to view children as "seductive and sexy". It makes having healthy sexual relationships more difficult for people and creates sexist attitudes.
Some cultural critics have postulated that over recent decades children have evidenced a level of sexual knowledge or sexual behaviour inappropriate for their age group.
Australia
In 2006, an Australian report called Corporate paedophilia: sexualisation of children in Australia was published. The Australian report summarises its conclusion as follows:
Images of sexualised children are becoming increasingly common in advertising and marketing material. Children who appear aged 12 years and under are dressed, posed and made up in the same way as sexy adult models. Children who appear in magazines look older than they really are because of the sexualised clothes they are given to pose in. "Corporate paedophilia" is a metaphor used to describe advertising and marketing that sexualises children in these ways.
European Union
In 2012, a draft report for a European Parliament resolution gave the following definition of sexualization: [S]exualisation consists of an instrumental approach to a person by perceiving that person as an object for sexual use disregarding the person's dignity and personality traits, with the person's worth being measured in terms of the level of sexual attractiveness; sexualisation also involves the imposition of the sexuality of adult persons on girls, who are emotionally, psychologically and physically unprepared for this at their particular stage of development; sexualisation not being the normal, healthy, biological development of the sexuality of a person, conditioned by the individual process of development and taking place at the appropriate time for each particular individual
Scotland
However, in 2010, the Scottish Executive released a report titled External research on sexualised goods aimed at children. The report considers the drawbacks of the United States and Australian reviews, and notes that previous coverage "rests on moral assumptions … that are not adequately explained or justified."
United Kingdom
The report 'Letting Children Be Children', also known as the Bailey Review, is a report commissioned by the UK government on the subject of the commercialisation and sexualisation of childhood.
United States
As early as 1997, reports found that the sexualization of younger children was becoming more common in advertisements.
The causes of this premature sexualization include portrayals in the media of sex and related issues, especially in media aimed at children; the lack of parental oversight and discipline; access to adult culture via the internet; and the lack of comprehensive school sex education programs.
In 2007, the American Psychological Association (APA) first published Report of the APA Task Force on the Sexualization of Girls, which has had periodic updates. The report looked at the cognitive and emotional consequences of sexualization and the consequences for mental and physical health, and impact on development of a healthy sexual self-image. The report considers that a person is sexualized in the following situations:
A person's value comes only from his or her sexual appeal or sexual behavior, to the exclusion of other characteristics;
A person is held to a standard that equates physical attractiveness (narrowly defined) with being sexy;
A person is sexually objectified—that is, made into a thing for others' sexual use, rather than seen as a person with the capacity for independent action and decision making; and/or
Sexuality is inappropriately imposed upon a person.
Research has linked the sexualization of young girls to negative consequences for girls and society as a whole, finding that the viewing of sexually objectifying material can contribute to body dissatisfaction, eating disorders, low self-esteem, depression, and depressive affect. Medical and social science researchers have generally used "sexualization" to refer to a liminal zone between sexual abuse and normal family life, in which the child's relationship with their parents is characterized by an "excessive", improper sexuality, even though no recognizable forms of abuse have occurred. The American Psychological Association also argues that the sexualization of young girls contributes to sexist attitudes within society and a societal tolerance of sexual violence, and that consumerism and globalization have led to the sexualization of girls across all advanced economies, in media and advertisements and in clothing and toys marketed to young girls.
The APA cites the following as advertising techniques that contribute to the sexualization of girls:
Including girls in ads with sexualized women wearing matching clothing or posed seductively.
Dressing girls up to look like adult women.
Dressing women down to look like young girls.
Employing adolescent celebrities in highly sexualized ways to promote or endorse products.
The APA further references the teen magazine market, citing a study by Roberts et al. that found that "47% of 8- to 18-year-old [girls] reported having read at least 5 minutes of a magazine the previous day."
A majority of these magazines focused on a theme of presenting oneself as sexually desirable to men, a practice which is called "costuming for seduction" in a study by Duffy and Gotcher.
Studies have found that thinking about the body and comparing it to sexualized cultural ideals may disrupt a girl's mental concentration, and a girl's sexualization or objectification may undermine her confidence in and comfort with her own body, leading to emotional and self-image problems, such as shame and anxiety.
Research has linked sexualization with three of the most common mental health problems diagnosed in girls and women: eating disorders, low self-esteem, and depression or depressed mood.
Research suggests that the sexualization of girls has negative consequences on girls' ability to develop a healthy sexual self-image.
In 2012, an American study found that self-sexualization was common among 6–9-year-old girls. Girls overwhelmingly chose the sexualized doll over the non-sexualized doll for their ideal self and as popular. However, other factors, such as how often mothers talked to their children about what is going on in television shows and maternal religiosity, reduced those odds. Surprisingly, the mere quantity of girls' media consumption (television and movies) was largely unrelated to their self-sexualization; rather, maternal self-objectification and maternal religiosity moderated its effects.
A result of the sexualization of girls in the media is that young girls are "learning how to view themselves as sex objects". When girls fail to meet the thin ideal and dominant culture's standard of beauty they can develop anxieties. Sexualization is problematic for young children who are developing their sexual identity as they may think that turning themselves into sex objects is empowering and related to having sexual agency.
Products for children
Some commercial products seen as promoting the sexualization of children have drawn considerable media attention:
A number of doll lines have drawn controversy. The original Bratz dolls, marketed to children as old as 12, were considered by at least one preteen to be “sexy” and were noted for their more mature styles, such as shrunken sweaters, shredded jeans, and other suggestive clothing. A New York Times article noted that they “look as though they might be at home on any street corner where prostitutes ply their trade.” Bratz Baby dolls, marketed to 6-year-old girls, featured sexualized clothing such as fishnet stockings, feather boas, and miniskirts, advertising fashion similar to that of the mainline Bratz range. The My Scene Barbie line, aimed at children in the 8–12 age demographic as the answer to the Bratz line, also drew criticism because the dolls wore low-rise pants, revealed the navel, and wore heavy makeup.
Highly sexualized and gendered Halloween costumes marketed at young girls, such as the "sexy firefighter", a costume that consists of a tight-fitted mini dress and high-heeled boots; a girls' version of a police officer costume was designed similarly. Costumes made for somewhat older girls, such as those around ten years old, may be much shorter in length. Comparing similar costumes designed for pre-tweens and tweens, the differences in costumes for the somewhat older girls were so dramatic that one observer noted that “According to the costume manufacturers of America, once a girl child reaches double digits, it is officially time for the Halloween hoochification process to begin.”
Thong underwear designed by Abercrombie & Fitch made specifically for ten-year-olds. Released in 2002, the thongs were “adorned with the images of cherries and candy hearts” and also included the words “kiss me” and “wink, wink.” While a company spokesman stated that the thongs were not appropriate for children younger than ten, they may have been small enough for girls as young as seven to wear. Despite the controversy, at least some of the thongs were sold; one Abercrombie clerk stated that a mother bought thongs for both of her daughters, who looked to be ten or younger, because all the other girls in their class had at least one. While the Abercrombie & Fitch thongs were eventually pulled, girls aged 10 and 11 wearing thongs in primary school became a regular enough occurrence in at least one English school that the headmaster sent a letter asking parents not to allow their daughters to wear them. In France, in 2003, girls, some of them ten years old, revealed whale tails on their way to school by exposing their thong underwear above their pants.
Clothing such as T-shirts marketed for young children in preschool and elementary school with printed slogans like "So Many Boys So Little Time." Other examples include the retailer Big W selling T-shirts for young girls with the slogan “nice baubles” in 2014 and the UK-based company Twisted Tee selling T-shirts printed with nipple pasties. Some onesies have also drawn controversy, such as a Target onesie made for baby girls with the phrase “I only date heroes” and a TinyHaute Couture cotton onesie for babies printed with the design of a lace corset.
Clothing originally aimed at young adult women marketed to tweens. Advertised to tween girls since at least 2000, low-rise jeans, tight-fitting miniskirts, and shirts that expose the midriff, once worn predominantly by young adult women, became core fashion staples for many American tweens in the 8–12 age range in the 2000s. These styles, sold across the country, were at one point so popular that finding other styles for preteen girls became a difficult task for parents.
Padded bras on bikinis aimed at seven-year-old girls. The bikinis were pulled after complaints in 2010. Earlier, in 2006, Target in Australia began selling a lightly padded house-brand bra designed for girls as young as eight. However, there is also evidence that, with the mean age of puberty declining in Western cultures, a higher percentage of preteen girls have enough breast development to justify wearing a functional brassiere than ever before.
The Scottish Executive report surveyed 32 UK high street retailers and found that many of the larger chains, including Tesco, Debenhams, JJ Sports, and Marks & Spencer, did not offer sexualized goods aimed at children. The report noted that overall prevalence was limited, but this finding was based on a very narrow research brief. While this shows that not all high street retailers were marketing products deemed sexualized by the researchers, the research cannot be taken out of context and used to claim that there is no issue of sexualization.
Effects on women of color
The sexualization of women of color is different from the sexualization of white women. The media play a significant role in this sexualization. "The media are likely to have powerful effects if the information is presented persistently, consistently, and corroborated among forms. As a media effect, stereotypes rely on repetition to perpetuate and sustain them." According to Celine Parrenas Shimizu, "To see race is to see sex, and vice versa."
Black women
Many scholars trace the sexualization of Black women back to slavery, when certain stereotypes were invented as a way to dehumanize Black women. These stereotypes include the Jezebel, seen as a light-skinned, overly sexual Black woman with no control over her desires; the Mammy, a Black woman who was asexual in nature and whose sole purpose was to cook for a white family; and the Sapphire, first depicted on the radio and television show Amos 'n' Andy, a loud, crude, jealous woman who took joy in emasculating men. These stereotypes have carried over to the way young Black girls view themselves and how society views them. The Jezebel stereotype, in particular, has reemerged in the form of hip-hop video vixens. These images seen in music videos have two effects: they influence how Black women are viewed in society, and they shape how Black women view themselves.
"Representations of Black girlhood in the media and popular culture suggest that Black girls face a different set of rules when it comes to sex, innocence, and blame", the consequences of the sexualization of Black girls can be seen through the 2004 trial of R. Kelly. The immediate response from the public cleared R. Kelly of any wrongdoing while subsequently blaming the young girl for her abuse. One respondent to a Village Voice article claimed that she was not disturbed by the video because in her words, "It wasn't like she was new to the act. [She--the respondent] heard she [the victim] worked it like most of [her] 30 something-year-old friends have yet to learn how to do". This desensitization is directly linked to a music industry—and subsequent fans—who value the artist over their potential victims.” Instead of being correctly labeled as victims these women are instead turned into "groupies, hoochies, and chickenheads". One of the jurors on the R. Kelly case noted that he believed the defense because her body "appeared to developed". Sika A. Dagbovie-Mullins acknowledged that "this harmful and skewed reasoning reflects a national troubling tendency to view black adolescent females as sexually savvy and therefore responsible themselves for the sexualization and exploitation of their bodies".
Dagbovie-Mullins identified a further problem in regard to the sexualization of Black girls: completely dichotomous to the sexualization of Black girls is the infantilization of Black women. Both problems are caused by denying the agency of Black women; both involve looking at Black women purely through the lens of their sexuality, without regard to their agency. There is a link between images of a submissive woman portrayed by a girl and a willingness to believe that young Black girls can give consent. This narrative is supported by the sexy schoolgirl image portrayed in media, which gives off the illusion of being unavailable—both from a moral and legal standpoint—while at the same time being available. "Music, music videos, and images play a pivotal role in the messages individuals hear and see. These messages can be positive or negative, and they can influence how consumers and producers respond to and interrogate them critically, socially, physically, and emotionally".
The images portrayed "in both African American and mainstream American culture reinforce the lenses through which the everyday experiences and ideal for adolescent Black women are viewed". Shows like Flavor of Love, which rely on the stereotypes of the Black pimp and the submissive woman, and in which Flavor Flav strips women of their real names and gives them nicknames such as "Thing 1" and "Thing 2", showcase the denial of the agency of Black women. This denial of agency makes it easier to see them as little more than sex symbols. Infantilizing Black women and stripping them of the things that make them individuals creates a culture in which they are no longer seen as people, but as objects used for individual male pleasure. It becomes easier to side with men when Black women accuse them of assault, on the reasoning that Black women cannot be assaulted if all they supposedly want is sex.
Along with a deflated sense of self-worth, these stereotypes can also lead Black girls—notably poor ones—to believe that their sense of worth, and an escape from poverty, can be found through their sexualization. The more modern version of the Jezebel—a Black woman who is highly sexual and materialistic—may also have the most resonance for inner-city Black girls: "The sexual links to poverty and its relevance to survival are clear. Their lives have been called 'ghetto fabulous', where they are socially embedded in a culture of poverty, yet have the economic means to procure middle-class goods".
Women participate in this sexualization as well: Nicki Minaj, who popularized the phrase "Barbie Bitch" and raps about how she only "fuck[s] with ballers", draws on stereotypes such as the gold digger in order to promote her brand. While the "Bad Bitch Barbie" character developed out of a history of over-sexualizing the bodies of Black women, it has also been used as a way for Black women to reclaim their sexuality. No longer is it men using their bodies for the enjoyment of other men; it is the women themselves who showcase their features as a way of uplifting who they are. Hence, a duality is created within hip-hop culture: the sexualization of Black women is still seen, but with the emergence of female artists there is also a counter-culture reclaiming the sexuality of Black women as their own. At the same time, the "Bad Bitch Barbie" still creates unrealistic images for Black girls to compare themselves to. In reclaiming the sexuality that was taken from them by men, these artists have introduced a new problem of body dysmorphia, as Black girls face pressure to recreate themselves in the images being presented.
In an NPR interview with Professor Herbert Samuels of LaGuardia Community College in New York and Professor Mireille Miller-Young of UC Santa Barbara, the two discuss sexual stereotypes of Black bodies in America and how, even in sex work, already a dangerous job, Black women are treated much worse than their counterparts because of the effects of their over-sexualization and objectification in society. Black women's bodies are either invisible or hypervisible. In the 1800s, a South African woman named Sarah Baartman, known as the "Hottentot Venus", had her body paraded around London and Paris, where audiences gawked at features considered exotic, such as her large breasts and buttocks. Her features were deemed lesser and hypersexual.
Asian women
The image of Asian women in Hollywood cinema is directly linked to sexuality, which is treated as essential to any imagining of the roles they play as well as their actual appearance in popular culture. The Asian femme fatale's hypersexualized subjection is derived from sexual behavior that is considered natural to her particular race and culture. Two types of Asian stereotypes commonly found in media are the Lotus Flower and the Dragon Lady. The Lotus Flower archetype is the "self-sacrificing, servile, and suicidal Asian woman." The Dragon Lady archetype is the opposite of the Lotus Flower, a "self-abnegating Asian woman…[who] uses her 'Oriental' femininity, associated with seduction and danger to trap white men on behalf of conniving Asian males." According to film-maker and film scholar Celine Shimizu, "The figure of the Asian American femme fatale signifies a particular deathly seduction. She attracts with her soft, unthreatening, and servile femininity while concealing her hard, dangerous, and domineering nature."
Latina women
Latina characters that embody the "hot Latina" stereotype in film and television are marked by easily identifiable behavioral characteristics such as "'addictively romantic, sensual, sexual and even exotically dangerous', self-sacrificing, dependent, powerless, sexually naive, childlike, pampered, and irresponsible".
Stereotypical Latina physical characteristics include "red lips, big bottoms, large hips, voluptuous bosoms, and small waists" and "high heels, huge hoop earrings, seductive clothing." Within the "hot Latina" stereotype lie three categories of representation:
The Cantina Girl, the faithful, self-sacrificing señorita, and the vamp. The Cantina Girl's markers are "'great sexual allure', teasing, dancing, and 'behaving in an alluring fashion.'"
The faithful, self-sacrificing señorita starts out as a good girl who turns bad by the end. In an attempt to save her Anglo love interest, the señorita uses her body to protect him from violence.
The Vamp representation "uses her intellectual and devious sexual wiles to get what she wants." The media represents Latinas "as either [a] hot-blooded spitfire" or "[a] dutiful mother".
The sexual implications of the "hot-blooded" Latina have become an overgeneralized representation of Latin people. This has led many to see Latin people as "what is morally wrong" with the United States. Some believe it to be wrong simply because the interpretation of this culture seems to go against white, Western culture. Culturally, the Latina is expected to dress "as a proper señorita" in order to be respected as a woman, which conflicts with Western ideals under which a girl is seen as sexual if she dresses "too 'mature' for [her] age".
Even in the business world this stereotype persists: "tight skirts and jingling bracelets [are misinterpreted] as a come-on". This sexualization can also be linked to certain stereotypical jobs. The image of the Latina woman is often placed not in the business world but in the domestic sphere. The sexualization of Latina women sexualizes the positions they are expected to occupy. Domestic servants, maids, and waitresses are the typical "media-engendered" roles that make it difficult for Latinas to gain "upward mobility", despite the fact that many hold PhDs.
Dominican women
In the Dominican Republic, women are frequently stereotyped as sultry and sexual as the reputation of Dominican sex workers grows. Many poor women have resorted to sex work because the demand is high and the hours and pay are often dictated by the workers themselves. White European and American men "exoticize dark-skinned 'native' bodies" because "they can buy sex for cut-rate prices". This overgeneralization of the sexuality of Dominican women can also follow the women home: even "women who...worked in Europe have become suspect", even if they held a legal job. They have become "exports" instead of people because of their sexualization.
Native American women
Starting from the time of white colonization of Native American land, some Native American women have been referred to as "squaw." "The 'squaw' [stereotype] is the dirty, subservient, and abused tribal female who is also haggard, violent, and eager to torture tribal captives." Another stereotype is the beautiful Indian princess who leaves her tribe and culture behind to marry a white man.
See also
Child sexuality
Sexualism
Bratz
Kogal
Miss Bimbo
Rape culture
Sexual objectification
Pornographication
Social impact of thong underwear
Sexualization in the video games industry
Pornified
Female Chauvinist Pigs: Women and the Rise of Raunch Culture
Feminism and society
Women in society
Women's rights legislation
Determination
Determination is a positive emotional feeling that promotes persevering towards a difficult goal in spite of obstacles. Determination occurs prior to goal attainment and serves to motivate behavior that will help achieve one's goal.
Empirical research suggests that people consider determination to be an emotion; in other words, determination is not just a cognitive state, but an affective state. In the psychology literature, researchers study determination under other terms, including challenge and anticipatory enthusiasm; this may explain one reason for the relative lack of research on determination compared to other positive emotions.
In the field of psychology, emotion research has tended to focus on negative emotions and the behaviors they prompt. However, positive psychology delves into determination as a positive emotion that drives people toward action, leading to significant results like persistence and success.
Etymology
The word determination comes from the Latin noun dēterminātiō, meaning "limit" or "determination, end result". It is derived from the verb dētermināre, meaning "confine; designate", with the abstract noun suffix -tiō. The meaning shifted from "end result, decision" to its present sense.
Major theories
Self-determination theory
Self-determination theory (SDT) is a theory of motivation and dedication towards an ambition. It focuses on the interplay between personalities and experiences in social contexts that results in motivations of both autonomous and controlled types. An example of autonomous motivation would be doing something because of intrinsic motivation, or because there is an internal desire to accomplish something. An example of controlled motivation would be doing something because there is outside pressure to accomplish a goal.
Social environments seem to have a profound effect on both intrinsic and extrinsic motivation and self-regulation. Self-determination theory proposes that social and cultural factors influence a person's sense of volition and initiative in regard to goals, performance, and well-being. High levels of determination and volition are supported by conditions that foster autonomy (e.g., a person has multiple options), competence (e.g., positive feedback) and relatedness (e.g., stable connection to the group a person is working within).
Bio-psychosocial model
Emotions researchers search for physiological patterns associated with particular positive emotions. However, the blending of emotions makes drawing such distinctions difficult. In relation to challenge and determination, psychologists focus on physiological activation in relation to the individual's intended actions (what he/she is determined to do) rather than how the individual subjectively feels.
Researchers associate effort (action tendency) with challenge and determination. So a challenged/determined individual should experience physiological arousal that reflects effort. By focusing on the sympathetic nervous system, researchers can measure systolic blood pressure (SBP) as a proxy for increased effort. People who are introduced to a challenging task experience an increase in SBP when they become determined to complete that task. This is coupled with lowered total peripheral resistance (while the heart is pumping faster, the vasculature is relaxed). This demonstrates an important difference between the physiological reaction of a person motivated by challenge and one motivated by threat or fear.
There seems to be a specific physiological pattern associated with determination. The identification of this pattern is valuable as it can be used in research aimed at eliciting and studying the antecedents and consequences of this common positive emotion.
Appraisal theory
Appraisal theory posits that determination is evoked by three cognitive motivation-appraisal components—evaluations of how the environment and situational circumstances interact with aspects of the individual to create meaning and influence emotional experience:
motivational relevancewhether a situation is relevant to a person's commitments and goals
motivational incongruencewhether a situation is incongruent with a person's commitments and goals
high problem-focused coping potentialwhether a situation is one that a person can deal with by using active coping strategies such as planning and problem-solving
These appraisal components combine to evoke experiences of determination that then motivate one to persevere and strive towards mastery. Appraisal theory proposes that determination is associated with effortful optimism, referring to the belief that a situation can be improved upon with enough effort from the person.
Empirical findings
Emotional experience
Research has shown that electrical brain stimulation of the anterior midcingulate cortex elicits a response that mirrors the emotional experience of determination. In this study of two patients with epilepsy, both reported feeling determined to overcome an approaching challenge, and the emotion was reported to feel pleasant. Following electrical stimulation, participants exhibited elevated cardiovascular activity and reported a warm feeling in their upper chest and neck. This work supports the idea that determination is a positive emotion that prepares an individual to overcome obstacles.
Another study compared determination and pride to see how these two positive emotions differentially influenced perseverance in the context of a mathematical problem-solving task. Using a directed imagery task in which participants listened to and imagined a particular scenario, each emotion was differentially induced in participants. The results suggested that determination enhanced task engagement and perseverance, with participants in the determination group spending significantly more time on the most difficult problem in the task. In contrast, pride decreased task engagement and perseverance relative to a neutral condition, with participants in the pride group spending significantly less time on the most difficult problem in the task. This research further supports the notion that determination motivates perseverance, perhaps more so than other positive emotions that have been theorized to be associated with perseverance.
Emotional expression
Experiences of determination are linked to a recognizable facial expression that involves frowning of the eyebrows, an expression that is perceptually similar to anger. This eyebrow frown is associated with the perception of goal obstacles, supporting the notion that determination is associated with the action tendency of preparing to overcome difficult obstacles in goal pursuit.
Intrinsic and extrinsic motivation
Intrinsic motivation is the internal drive, curiosity, or desire to learn that exists within human beings. It drives people to learn new things or to put things into action, and it is often evident when people desire to try new things or find ways to overcome challenges. Intrinsic motivation is often what drives a person to start something, but extrinsic motivation is often what helps people accomplish their goals. Extrinsic motivation is the external drive that motivates action; it can include things like going to work daily to pay one's bills or obeying the law to stay out of trouble. This type of motivation is not driven by one's own desires but by outside sources.
Applications
Classroom, workplace, and family environment
Determination is believed to be shaped by the emotion of challenge and societal expectations. Environments like education, work, and family that promote encouragement play a role in fostering determination. When individuals have access to resources and supportive peers who believe in their capabilities, they tend to experience heightened determination, leading to improved performance and well-being.
Research shows that students enrolled in learning environments in which teachers incorporate strategies meant to meet students' motivational needs (e.g., encouragement aimed at intrinsic rewards, using student-directed forms of discipline) are more likely to become responsible learners who display a determination to succeed.
William Zinsser studied the pressures faced by college students at Yale, such as the need to develop time management and study skills appropriate for college and university work, the desire for good grades, the desire to meet parents' expectations, and the need to find employment in a competitive job market after graduation.
Health and well-being
Studies have linked challenge and determination to increases in physical health and mental well-being. Some specific positive outcomes include illness resistance, increased survival rates, and decreased levels of depression. A person experiences positive personal growth when that person can proactively cope with a difficult situation. In such a case, a person can acknowledge a demanding situation, take action, and maintain high coping potential. They can acknowledge the benefits of a difficult experience and display a willingness to put forth effort and achieve specific personal goals.
Interpersonal relationships
In interpersonal interactions, adopting challenge appraisals is crucial for effectively managing conflicts. For example, young children facing bullying often seek support and report the incidents. When a bullied child employs a challenge appraisal, they view bullying as a chance to rely on others and find positive solutions. This approach maintains their autonomy, as they act independently to involve others. Challenge and determination facilitate goal achievement, increased confidence, and decreased evaluation apprehension. Therefore, determined individuals who use challenge appraisals feel capable of handling tough situations while being open to seeking assistance when necessary.
Leadership
Personality
Positive psychology
Virtue
Self-schema
The self-schema refers to a long-lasting and stable set of memories that summarize a person's beliefs, experiences and generalizations about the self in specific behavioral domains. A person may have a self-schema based on any aspect of themselves as a person, including physical characteristics (body image), personality traits, and interests, as long as they consider that aspect of their self to be important to their own self-definition. When someone has a schema about themselves, they hyper-focus on that trait and believe what they tell themselves about it. A self-schema can be positive or negative, depending on the content and tone of that self-talk.
For example, someone will have a self-schema of extroversion if they think of themselves as extroverted and also believe that their extroversion is central to who they are. Their self-schema for extroversion may include general self-categorizations ("I am sociable."), beliefs about how they would act in certain situations ("At a party I would talk to lots of people") and also memories of specific past events ("On my first day at university I made lots of new friends").
General
The term schematic describes having a particular schema for a particular dimension. For instance, a person who plays in a rock band at night would have a "rocker" schema; if he works as a salesperson during the day, he would have a "salesperson" schema during that time. Schemas vary according to cultural background and other environmental factors.
Once people have developed a schema about themselves, there is a strong tendency for that schema to be maintained by a bias in what they attend to, in what they remember, and in what they are prepared to accept as true about themselves. In other words, the self-schema becomes self-perpetuating. The self-schema is then stored in long-term memory, which both facilitates and biases the processing of personally relevant information. For example, individuals who form a self-schema of themselves as having good exercise habits will in turn exercise more frequently.
Self-schemas vary from person to person because each individual has very different social and cultural life experiences. A few examples of self-schemas are: exciting or dull; quiet or loud; healthy or sickly; athletic or nonathletic; lazy or active; and geek or jock. If a person has a schema for "geek or jock," for example, he might think of himself as a bit of a computer geek and would possess a lot of information about that trait. Because of this, he would probably interpret many situations based on relevance to his being a computer geek.
Another person with the "healthy or sickly" schema might consider themselves a very health-conscious person. Their concern with being healthy would then affect everyday decisions such as what groceries they buy, what restaurants they frequent, or how often they exercise. Women who are schematic on appearance have been found to exhibit worse body image, lower self-esteem, and more negative mood than those who are aschematic on appearance.
The term aschematic means not having a schema for a particular dimension. This usually occurs when people are not involved with or concerned about a certain attribute. For example, if a person plans on being a musician, a self-schema in aeronautics will not apply to him; he is aschematic on aeronautics.
Childhood creation
Early in life, we are exposed to the idea of the self from our parents and other figures. We begin to take on a very basic self-schema, which is mostly limited to a "good child" or "bad child" schema—that is, we see ourselves in unambiguously positive or negative terms. It is in childhood that we begin to offer explanations for our actions, and this reasoning creates the more complicated concept of the self: a child will begin to believe that the self caused their behaviors, deciding on what motivations to offer as explanations of behavior.
Multiple
Most people have multiple self-schemas; however, this is not the same as multiple personalities in the pathological sense. Indeed, for the most part, multiple self-schemas are extremely useful to people in daily life. Subconsciously, they help people make rapid decisions and behave efficiently and appropriately in different situations and with different people. Multiple self-schemas guide what people attend to and how people interpret and use incoming information. They also activate specific cognitive, verbal, and behavioral action sequences—called scripts and action plans in cognitive psychology—that help people meet goals efficiently.
Self-schemas vary not only by circumstances and who the person is interacting with, but also by mood. Researchers found that we have mood-congruent self-schemas that vary with our emotional state.
The body
The self's relationship with and understanding of the body is an important part of self-schema. Body schema is a general term that has multiple definitions in various disciplines. Generally, it refers to a person's concept of his or her own body, where it is in space, what it looks like, how it is functioning, etc.
Our body image is part of our self-schema. The body image includes the following:
The perceptual experience of the body
The conceptual experience of the body—what we have been told and believe about our body, including scientific information, hearsay, myth, etc.
The emotional attitude towards the body
Our body schemata may transcend the realities of what our bodies actually are—in other words, we may have a different mental picture of our bodies than what they physically are. This is evidenced when individuals who lose limbs have phantom limb sensations: individuals who lose a limb may still feel as though they have that limb, and may even feel sensations in that limb referred from other limbs.
An example of a self-schema at work is a distorted belief about what one's body looks like, which can lead to body dysmorphia. People who think of themselves as, or have been told they are, "too fat" or "too skinny" will believe it, and will take this distorted version of themselves to be accurate. People who possess this self-schema may tell themselves negative things that make them feel bad about themselves.
Effect of illness
Individuals afflicted with both physical and mental illness have more negative self-schemata. This has been documented in patients suffering from such illnesses as depression and irritable bowel syndrome. Sufferers tend to identify themselves with their illness, unconsciously associating the negative traits of the illness itself with themselves.
See also
Behavioural confirmation
Identity (social science)
List of maladaptive schemas
Outline of self
Self-image
Self-perception theory
Identity (social science)
Conceptions of self
Self
Social psychology
Psychologism
Psychologism is a family of philosophical positions, according to which certain psychological facts, laws, or entities play a central role in grounding or explaining certain non-psychological facts, laws, or entities. The word was coined by Johann Eduard Erdmann as Psychologismus and was translated into English as psychologism.
Definition
The Oxford English Dictionary defines psychologism as: "The view or doctrine that a theory of psychology or ideas forms the basis of an account of metaphysics, epistemology, or meaning; (sometimes) spec. the explanation or derivation of mathematical or logical laws in terms of psychological facts." Psychologism in epistemology, the idea that its problems "can be solved satisfactorily by the psychological study of the development of mental processes", was argued in John Locke's An Essay Concerning Human Understanding (1690).
Other forms of psychologism are logical psychologism and mathematical psychologism. Logical psychologism is a position in logic (or the philosophy of logic) according to which logical laws and mathematical laws are grounded in, derived from, explained or exhausted by psychological facts or laws. Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts or laws.
Viewpoints
John Stuart Mill was accused by Edmund Husserl of being an advocate of a type of logical psychologism, although this may not have been the case. So were many nineteenth-century German philosophers such as Christoph von Sigwart, Benno Erdmann, Theodor Lipps, Gerardus Heymans, Wilhelm Jerusalem, and Theodor Elsenhans, as well as a number of psychologists, past and present (e.g., Wilhelm Wundt and Gustave Le Bon).
Psychologism was notably criticized by Gottlob Frege in his anti-psychologistic work The Foundations of Arithmetic, and many of his works and essays, including his review of Husserl's Philosophy of Arithmetic. Husserl, in the first volume of his Logical Investigations, called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. Frege's arguments were largely ignored, while Husserl's were widely discussed.
In "Psychologism and Behaviorism", Ned Block describes psychologism in the philosophy of mind as the view that "whether behavior is intelligent behavior depends on the character of the internal information processing that produces it." This is in contrast to a behavioral view which would state that intelligence can be ascribed to a being solely via observing its behavior. This latter type of behavioral view is strongly associated with the Turing test.
See also
Antipsychologism
Blockhead argument
Naturalized epistemology
Metatheory
Philosophy of mathematics
Theories of deduction
Health informatics
Health informatics is the study and implementation of computer structures and algorithms to improve communication, understanding, and management of medical information. It can be viewed as a branch of engineering and applied science.
The health domain provides an extremely wide variety of problems that can be tackled using computational techniques.
Health informatics spans a spectrum of multidisciplinary fields that includes the study of the design, development, and application of computational innovations to improve health care. The disciplines involved combine fields of medicine with computing fields, in particular computer engineering, software engineering, information engineering, bioinformatics, bio-inspired computing, theoretical computer science, information systems, data science, information technology, autonomic computing, and behavior informatics.
In academic institutions, medical informatics research focuses on applications of artificial intelligence in healthcare and on designing medical devices based on embedded systems. In some countries, the term informatics is also used in the context of applying library science to data management in hospitals; in this sense, health informatics aims at developing methods and technologies for the acquisition, processing, and study of patient data. The umbrella term biomedical informatics has been proposed.
There are many variations in the names of the fields involved in applying information and communication technologies to healthcare, public health, and personal health, ranging from those focused on the molecular level (e.g., genomics), the organ system (e.g., imaging), and the individual (e.g., patient or consumer, care provider, and the interaction between them), to the population level of application. A spectrum of activity spans efforts ranging from theory and model development, to empirical research, to implementation and management, to widespread adoption.
'Clinical informaticians' are qualified health and social care professionals and 'clinical informatics' is a subspecialty within several medical specialties.
Subject areas
Jan van Bemmel has described medical informatics as the theoretical and practical aspects of information processing and communication based on knowledge and experience derived from processes in medicine and health care.
The Faculty of Clinical Informatics has identified six high level domains of core competency for clinical informaticians:
Health and Wellbeing in Practice
Information Technologies and Systems
Working with Data and Analytical Methods
Enabling Human and Organizational Change
Decision Making
Leading Informatics Teams and Projects
Tools to support practitioners
Clinical informaticians use their knowledge of patient care combined with their understanding of informatics concepts, methods, and health informatics tools to:
Assess information and knowledge needs of health care professionals, patients and their families.
Characterize, evaluate, and refine clinical processes,
Develop, implement, and refine clinical decision support systems, and
Lead or participate in the procurement, customization, development, implementation, management, evaluation, and continuous improvement of clinical information systems.
Clinicians collaborate with other health care and information technology professionals to develop health informatics tools which promote patient care that is safe, efficient, effective, timely, patient-centered, and equitable. Many clinical informaticists are also computer scientists.
Telehealth and telemedicine
Telehealth is the distribution of health-related services and information via electronic information and telecommunication technologies. It allows long-distance patient and clinician contact, care, advice, reminders, education, intervention, monitoring, and remote admissions. Telemedicine is sometimes used as a synonym, or is used in a more limited sense to describe remote clinical services, such as diagnosis and monitoring. Remote monitoring, also known as self-monitoring or testing, enables medical professionals to monitor a patient remotely using various technological devices. This method is primarily used for managing chronic diseases or specific conditions, such as heart disease, diabetes mellitus, or asthma.
These services can provide comparable health outcomes to traditional in-person patient encounters, supply greater satisfaction to patients, and may be cost-effective. Telerehabilitation (or e-rehabilitation) is the delivery of rehabilitation services over telecommunication networks and the Internet. Most types of services fall into two categories: clinical assessment (the patient's functional abilities in his or her environment) and clinical therapy. Some fields of rehabilitation practice that have explored telerehabilitation are: neuropsychology, speech-language pathology, audiology, occupational therapy, and physical therapy. Telerehabilitation can deliver therapy to people who cannot travel to a clinic because of a disability or because of travel time. Telerehabilitation also allows experts in rehabilitation to engage in a clinical consultation at a distance.
Decision support, artificial intelligence and machine learning in healthcare
A pioneer in the use of artificial intelligence in healthcare was American biomedical informatician Edward H. Shortliffe. This field deals with utilization of machine-learning algorithms and artificial intelligence, to emulate human cognition in the analysis, interpretation, and comprehension of complicated medical and healthcare data. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data. AI programs are applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. A large part of industry focus of implementation of AI in the healthcare sector is in the clinical decision support systems.
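As a concrete illustration of the kind of algorithm that underlies such clinical decision support systems, the following minimal sketch trains a logistic-regression model to estimate a patient's risk of readmission. It is illustrative only: the feature names, the synthetic data, and the model choice are assumptions, not any vendor's actual system.

    # Minimal sketch of a risk-prediction model for clinical decision support.
    # All features and labels are synthetic illustrations, not real patient data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical features: age, systolic BP, HbA1c, prior admissions.
    X = rng.normal(loc=[65, 130, 6.5, 1], scale=[12, 15, 1.2, 1], size=(500, 4))
    # Synthetic outcome loosely tied to age and prior admissions.
    logit = 0.04 * (X[:, 0] - 65) + 0.6 * X[:, 3] - 1.0
    y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # A decision support system would surface this probability to clinicians.
    risk = model.predict_proba(X_test)[:, 1]
    print("AUC:", round(roc_auc_score(y_test, risk), 3))

In a real system the estimated probability would be presented to clinicians alongside guidelines and supporting evidence, rather than acted on automatically.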
As more data is collected, machine learning algorithms adapt and allow for more robust responses and solutions. Numerous companies are exploring the possibilities of the incorporation of big data in the healthcare industry. Many companies investigate the market opportunities through the realms of "data assessment, storage, management, and analysis technologies" which are all crucial parts of the healthcare industry. The following are examples of large companies that have contributed to AI algorithms for use in healthcare:
IBM's Watson Oncology is in development at Memorial Sloan Kettering Cancer Center and Cleveland Clinic. IBM is also working with CVS Health on AI applications in chronic disease treatment and with Johnson & Johnson on analysis of scientific papers to find new connections for drug development. In May 2017, IBM and Rensselaer Polytechnic Institute began a joint project entitled Health Empowerment by Analytics, Learning and Semantics (HEALS), to explore using AI technology to enhance healthcare.
Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients. Other projects include medical image analysis of tumor progression and the development of programmable cells.
Google's DeepMind platform is being used by the UK National Health Service to detect certain health risks through data collected via a mobile app. A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.
Tencent is working on several medical systems and services. These include AI Medical Innovation System (AIMIS), an AI-powered diagnostic medical imaging service; WeChat Intelligent Healthcare; and Tencent Doctorwork.
Intel's venture capital arm Intel Capital recently invested in startup Lumiata which uses AI to identify at-risk patients and develop care options.
Kheiron Medical developed deep learning software to detect breast cancers in mammograms.
Fractal Analytics has incubated Qure.ai which focuses on using deep learning and AI to improve radiology and speed up the analysis of diagnostic x-rays.
Neuralink has come up with a next generation neuroprosthetic which intricately interfaces with thousands of neural pathways in the brain. Their process allows a chip, roughly the size of a quarter, to be inserted in place of a chunk of skull by a precision surgical robot to avoid accidental injury.
Digital consultant apps like Babylon Health's GP at Hand, Ada Health, Alibaba Health Doctor You, KareXpert and Your.MD use AI to give medical consultation based on personal medical history and common medical knowledge. Users report their symptoms into the app, which uses speech recognition to compare against a database of illnesses. Babylon then offers a recommended action, taking into account the user's medical history. Entrepreneurs in healthcare have been effectively using seven business model archetypes to take AI solutions to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and value capturing mechanisms (e.g. providing information or connecting stakeholders). iFlytek launched a service robot "Xiao Man", which integrates artificial intelligence technology to identify registered customers and provide personalized recommendations in medical areas.
It also works in the field of medical imaging. Similar robots are also being made by companies such as UBTECH ("Cruzr") and Softbank Robotics ("Pepper"). The Indian startup Haptik recently developed a WhatsApp chatbot which answers questions associated with the deadly coronavirus in India. With the market for AI expanding constantly, large tech companies such as Apple, Google, Amazon, and Baidu all have their own AI research divisions, as well as millions of dollars allocated for acquisition of smaller AI based companies. Many automobile manufacturers are beginning to use machine learning healthcare in their cars as well. Companies such as BMW, GE, Tesla, Toyota, and Volvo all have new research campaigns to find ways of learning a driver's vital statistics to ensure they are awake, paying attention to the road, and not under the influence of substances or in emotional distress. Examples of projects in computational health informatics include the COACH project.
Clinical research informatics
Clinical research informatics (CRI) is a sub-field of health informatics that tries to improve the efficiency of clinical research by using informatics methods. Some of the problems tackled by CRI are: creation of data warehouses of health care data that can be used for research, support of data collection in clinical trials through electronic data capture systems, streamlining of ethical approvals and renewals (in the US the responsible entity is the local institutional review board), and maintenance of repositories of de-identified data from past clinical trials. CRI is a fairly new branch of informatics and has experienced growing pains, as any up-and-coming field does. Among the issues CRI faces are the difficulty statisticians and computer system architects have in working with clinical research staff to design a system, and a lack of funding to support the development of new systems.
Researchers and informatics teams have a difficult time coordinating plans and ideas in order to design a system that is easy for the research team to use yet fits the computer team's system requirements. A lack of funding can also hinder the development of CRI: many organizations performing research struggle to get financial support to conduct the research, much less to invest that money in an informatics system that will not provide them any more income or improve the outcome of the research (Embi, 2009). The ability to integrate data from multiple clinical trials is an important part of clinical research informatics. Initiatives such as PhenX and the Patient-Reported Outcomes Measurement Information System triggered a general effort to improve the secondary use of data collected in past human clinical trials. Common data element (CDE) initiatives, for example, try to allow clinical trial designers to adopt standardized research instruments (electronic case report forms).
Parallel to efforts to standardize how data are collected are initiatives that offer de-identified patient-level clinical study data for download by researchers who wish to re-use the data. Examples of such platforms are Project Data Sphere, dbGaP, ImmPort and Clinical Study Data Request. Informatics issues in data formats for sharing results (plain CSV files, or FDA-endorsed formats such as the CDISC Study Data Tabulation Model) are important challenges within the field of clinical research informatics; a minimal data-handling sketch follows the list below. There are a number of activities within clinical research that CRI supports, including:
More efficient and effective data collection and acquisition
Improved recruitment into clinical trials
Optimal protocol design and efficient management
Patient recruitment and management
Adverse event reporting
Regulatory compliance
Data storage, transfer, processing and analysis
Repositories of data from completed clinical trials (for secondary analyses)
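As a small illustration of the data-handling side of these activities, the sketch below tabulates adverse events from a de-identified trial export in plain CSV. The file name and column names (subject_id, adverse_event, severity) are illustrative assumptions and do not follow any specific standard such as CDISC SDTM.

    # Minimal sketch: summarizing adverse events from a de-identified trial CSV.
    # File and column names are assumptions for illustration only.
    import csv
    from collections import Counter

    def summarize_adverse_events(path):
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[(row["adverse_event"], row["severity"])] += 1
        return counts

    # Print event/severity frequencies, as might feed a safety report.
    for (event, severity), n in sorted(summarize_adverse_events("trial_ae.csv").items()):
        print(f"{event} ({severity}): {n}")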
One of the fundamental elements of biomedical and translational research is the use of integrated data repositories. A survey conducted in 2010 defined an "integrated data repository" (IDR) as a data warehouse incorporating various sources of clinical data to support queries for a range of research-like functions. Integrated data repositories are complex systems developed to solve a variety of problems ranging from identity management, protection of confidentiality, and semantic and syntactic comparability of data from different sources to, most importantly, convenient and flexible querying.
Development of the field of clinical informatics led to the creation of large data sets containing electronic health record data integrated with other data (such as genomic data). Types of data repositories include operational data stores (ODSs), clinical data warehouses (CDWs), clinical data marts, and clinical registries. Operational data stores are established for extracting, transforming, and loading data before creating a warehouse or data marts. Clinical registries have long been in existence, but their contents are disease-specific and sometimes considered archaic. Clinical data stores and clinical data warehouses are considered fast and reliable. Though these large integrated repositories have impacted clinical research significantly, the field still faces challenges and barriers.
One big problem is the requirement of ethical approval by the institutional review board (IRB) for each research analysis meant for publication. Some research resources do not require IRB approval; for example, CDWs containing data of deceased patients have been de-identified, and IRB approval is not required for their use. Another challenge is data quality. Methods that adjust for bias (such as propensity score matching) assume that a complete health record is captured. Tools that examine data quality (e.g., by pointing to missing data) help in discovering data quality problems.
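To make the bias-adjustment point concrete, the sketch below shows the core idea of propensity score matching: estimate each patient's probability of receiving treatment from covariates, then pair each treated patient with the untreated patient whose score is closest. The data are synthetic assumptions, and a real analysis would add caliper restrictions, balance diagnostics, and missing-data handling.

    # Minimal sketch of 1:1 nearest-neighbour propensity score matching
    # (with replacement). Covariates and treatment are synthetic assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 3))       # covariates, e.g. age, BMI, comorbidity
    treated = (rng.random(400) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

    # Propensity score: estimated P(treatment | covariates).
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    # Match each treated patient to the control with the closest score.
    pairs = [(i, c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]) for i in t_idx]
    print("matched pairs:", len(pairs))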
Translational bioinformatics
Translational bioinformatics (TBI) is a relatively new field that surfaced in 2000, when the human genome sequence was released. The commonly used definition of TBI is lengthy and can be found on the AMIA website. In simpler terms, TBI can be defined as the collection of colossal amounts of health-related data (biomedical and genomic) and the translation of those data into individually tailored clinical entities.
Today, the TBI field is categorized into four major themes, briefly described below:
Clinical big data: a collection of electronic health records that are used for innovations. The evidence-based approach currently practiced in medicine is suggested to be merged with practice-based medicine to achieve better outcomes for patients. As Darren Schulte, CEO of the California-based cognitive computing firm Apixio, explains, care can be better fitted to the patient if data can be collected from various medical records, merged, and analyzed. Further, the combination of similar profiles can serve as a basis for personalized medicine, pointing to what works and what does not for a certain condition (Marr, 2016).
Genomics in clinical care: Genomic data are used to identify the involvement of genes in unknown or rare conditions/syndromes. Currently, the most vigorous area of genomics application is oncology. Genomic sequencing of a cancer may reveal the reasons for drug sensitivity and resistance during oncological treatment.
Omics for drug discovery and repurposing: Drug repurposing is an appealing idea that allows pharmaceutical companies to sell an already-approved drug to treat a condition/disease that the drug was not initially approved for by the FDA. Observing "molecular signatures in disease and compar[ing] those to signatures observed in cells" points to the possibility that a drug may cure and/or relieve the symptoms of a disease.
Personalized genomic testing: In the US, several companies offer direct-to-consumer (DTC) genetic testing. The company that performs the majority of testing is called 23andMe. Utilizing genetic testing in health care raises many ethical, legal, and social concerns; one of the main questions is whether health care providers are ready to include patient-supplied genomic information while providing care that is unbiased (despite the intimate genomic knowledge) and of high quality. Documented examples of incorporating such information into health care delivery have shown both positive and negative impacts on overall health care outcomes.
Medical signal processing
An important application of information engineering in medicine is medical signal processing. It refers to the generation, analysis, and use of signals, which can take many forms, such as images, sound, electrical signals, or biological signals.
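As a small illustration, the sketch below band-pass filters a noisy synthetic one-dimensional signal in Python. The sampling rate and band edges are illustrative assumptions, not clinical recommendations.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 1.2 * t)         # ~72 bpm fundamental, as a stand-in
noisy = signal + 0.5 * rng.standard_normal(t.size)

# 4th-order Butterworth band-pass; filtfilt applies it forwards and
# backwards so the filtered output has no phase shift.
b, a = butter(4, [0.5, 40.0], btype="band", fs=fs)
clean = filtfilt(b, a, noisy)
print(clean[:5])
```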
Medical image computing and imaging informatics
Imaging informatics and medical image computing develop computational and mathematical methods for solving problems pertaining to medical images and their use for biomedical research and clinical care. These fields aim to extract clinically relevant information or knowledge from medical images through computational analysis. The methods can be grouped into several broad categories: image segmentation, image registration, image-based physiological modeling, and others.
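To make the segmentation category concrete, here is a minimal Python sketch of global-threshold segmentation on a synthetic image. Real medical image computing pipelines would load DICOM data and use far more robust methods; everything here is invented for illustration.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
image = rng.random((64, 64))                 # synthetic background
image[20:40, 20:40] += 1.0                   # a bright "lesion"

# Crude global threshold: keep pixels well above the mean intensity.
mask = image > image.mean() + image.std()

# Label connected components and report the largest region's size.
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n + 1))
print(f"{n} region(s); largest covers {int(sizes.max())} pixels")
```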
Medical robotics
A medical robot is a robot used in the medical sciences. They include surgical robots, most of which are telemanipulators that use the surgeon's actuators on one side to control the "effector" on the other side. There are the following types of medical robots:
Surgical robots: either allow surgical operations to be carried out with better precision than an unaided human surgeon or allow remote surgery where a human surgeon is not physically present with the patient.
Rehabilitation robots: facilitate and support the lives of infirm, elderly people, or those with dysfunction of body parts affecting movement. These robots are also used for rehabilitation and related procedures, such as training and therapy.
Biorobots: a group of robots designed to imitate the cognition of humans and animals.
Telepresence robots: allow off-site medical professionals to move, look around, communicate, and participate from remote locations.
Pharmacy automation: robotic systems that dispense oral solids in a retail pharmacy setting or prepare sterile IV admixtures in a hospital pharmacy setting.
Companion robot: has the capability to engage emotionally with users, keeping them company and raising an alert if there is a problem with their health.
Disinfection robot: has the capability to disinfect a whole room in mere minutes, generally using pulsed ultraviolet light. They are being used to fight Ebola virus disease.
Pathology informatics
Pathology informatics is a field that involves the use of information technology, computer systems, and data management to support and enhance the practice of pathology. It encompasses pathology laboratory operations, data analysis, and the interpretation of pathology-related information.
Key aspects of pathology informatics include:
Laboratory information management systems (LIMS): Implementing and managing computer systems specifically designed for pathology departments. These systems help in tracking and managing patient specimens, results, and other pathology data.
Digital pathology: Involves the use of digital technology to create, manage, and analyze pathology images. This includes slide scanning and automated image analysis.
Telepathology: Using technology to enable remote pathology consultation and collaboration.
Quality assurance and reporting: Implementing informatics solutions to ensure the quality and accuracy of pathology processes.
International history
Worldwide use of computer technology in medicine began in the early 1950s with the rise of computers. In 1949, Gustav Wagner established the first professional organization for informatics in Germany. Specialized university departments and informatics training programs began during the 1960s in France, Germany, Belgium, and the Netherlands. Medical informatics research units began to appear during the 1970s in Poland and in the U.S. Since then, the development of high-quality health informatics research, education, and infrastructure has been a goal of the U.S. and the European Union.
Early names for health informatics included medical computing, biomedical computing, medical computer science, computer medicine, medical electronic data processing, medical automatic data processing, medical information processing, medical information science, medical software engineering, and medical computer technology.
The health informatics community is still growing; it is by no means a mature profession. However, work in the UK by the voluntary registration body, the UK Council of Health Informatics Professions, has suggested eight key constituencies within the domain: information management, knowledge management, portfolio/programme/project management, ICT, education and research, clinical informatics, health records (service and business-related), and health informatics service management. These constituencies accommodate professionals in and for the NHS, in academia, and among commercial service and solution providers.
Since the 1970s the most prominent international coordinating body has been the International Medical Informatics Association (IMIA).
History, current state and policy initiatives by region and country
Americas
Argentina
The Argentinian health system is heterogeneous in its function, and because of that its informatics developments are at a correspondingly heterogeneous stage. Many private health care centers have developed their own systems, such as the Hospital Alemán of Buenos Aires or the Hospital Italiano de Buenos Aires, which also has a residency program for health informatics.
Brazil
The first applications of computers to medicine and health care in Brazil started around 1968, with the installation of the first mainframes in public university hospitals and the use of programmable calculators in scientific research applications. Minicomputers, such as the IBM 1130, were installed in several universities, and the first applications were developed for them, such as the hospital census in the School of Medicine of Ribeirão Preto and patient master files in the Hospital das Clínicas da Universidade de São Paulo, at the Ribeirão Preto and São Paulo campuses of the University of São Paulo, respectively.
In the 1970s, several Digital Equipment Corporation and Hewlett-Packard minicomputers were acquired for public and armed forces hospitals, and used more intensively for intensive-care units, cardiology diagnostics, patient monitoring, and other applications. In the early 1980s, with the arrival of cheaper microcomputers, a great upsurge of computer applications in health ensued, and in 1986 the Brazilian Society of Health Informatics was founded, the first Brazilian Congress of Health Informatics was held, and the first Brazilian Journal of Health Informatics was published. Two Brazilian universities are pioneers in teaching and research in medical informatics: the University of São Paulo and the Federal University of São Paulo both offer well-regarded undergraduate programs in the area as well as extensive graduate programs (MSc and PhD). In 2015, the Universidade Federal de Ciências da Saúde de Porto Alegre, Rio Grande do Sul, also started to offer an undergraduate program.
Canada
Health informatics projects in Canada are implemented provincially, with different provinces creating different systems. A national, federally funded, not-for-profit organisation called Canada Health Infoway was created in 2001 to foster the development and adoption of electronic health records across Canada. As of December 31, 2008, there were 276 EHR projects under way in Canadian hospitals, other health-care facilities, pharmacies, and laboratories, with an investment value of $1.5 billion from Canada Health Infoway.
Provincial and territorial programmes include the following:
eHealth Ontario was created as an Ontario provincial government agency in September 2008. It has been plagued by delays and its CEO was fired over a multimillion-dollar contracts scandal in 2009.
Alberta Netcare was created in 2003 by the Government of Alberta. Today the netCARE portal is used daily by thousands of clinicians. It provides access to demographic data, prescribed/dispensed drugs, known allergies/intolerances, immunizations, laboratory test results, diagnostic imaging reports, the diabetes registry and other medical reports. netCARE interface capabilities are being included in electronic medical record products that are being funded by the provincial government.
United States
Even though the idea of using computers in medicine emerged as technology advanced in the early 20th century, it was not until the 1950s that informatics began to have an effect in the United States.
The earliest use of electronic digital computers for medicine was for dental projects in the 1950s at the United States National Bureau of Standards by Robert Ledley. During the mid-1950s, the United States Air Force (USAF) carried out several medical projects on its computers while also encouraging civilian agencies such as the National Academy of Sciences – National Research Council (NAS-NRC) and the National Institutes of Health (NIH) to sponsor such work. In 1959, Ledley and Lee B. Lusted published "Reasoning Foundations of Medical Diagnosis," a widely read article in Science, which introduced computing (especially operations research) techniques to medical workers. Ledley and Lusted's article has remained influential for decades, especially within the field of medical decision making.
Guided by Ledley's late 1950s survey of computer use in biology and medicine (carried out for the NAS-NRC), and by his and Lusted's articles, the NIH undertook the first major effort to introduce computers to biology and medicine. This effort, carried out initially by the NIH's Advisory Committee on Computers in Research (ACCR), chaired by Lusted, spent over $40 million between 1960 and 1964 in order to establish dozens of large and small biomedical research centers in the US.
One early (1960, non-ACCR) use of computers was to help quantify normal human movement, as a precursor to scientifically measuring deviations from normal, and design of prostheses. The use of computers (IBM 650, 1620, and 7040) allowed analysis of a large sample size, and of more measurements and subgroups than had been previously practical with mechanical calculators, thus allowing an objective understanding of how human locomotion varies by age and body characteristics. A study co-author was Dean of the Marquette University College of Engineering; this work led to discrete Biomedical Engineering departments there and elsewhere.
The next steps, in the mid-1960s, were the development (sponsored largely by the NIH) of expert systems such as MYCIN and Internist-I. In 1965, the National Library of Medicine started to use MEDLINE and MEDLARS. Around this time, Neil Pappalardo, Curtis Marble, and Robert Greenes developed MUMPS (Massachusetts General Hospital Utility Multi-Programming System) in Octo Barnett's Laboratory of Computer Science at Massachusetts General Hospital in Boston, another center of biomedical computing that received significant support from the NIH. In the 1970s and 1980s it was the most commonly used programming language for clinical applications, and the MUMPS operating system was used to support the MUMPS language specifications. A descendant of this system is still used in the United States Veterans Affairs hospital system. The VA has the largest enterprise-wide health information system that includes an electronic medical record, known as the Veterans Health Information Systems and Technology Architecture (VistA). A graphical user interface known as the Computerized Patient Record System (CPRS) allows health care providers to review and update a patient's electronic medical record at any of the VA's more than 1,000 health care facilities.
During the 1960s, Morris Collen, a physician working for Kaiser Permanente's Division of Research, developed computerized systems to automate many aspects of multi-phased health checkups. These systems became the basis of the larger medical databases Kaiser Permanente developed during the 1970s and 1980s.
In the 1970s a growing number of commercial vendors began to market practice management and electronic medical records systems. Although many products exist, only a small number of health practitioners use fully featured electronic health care records systems. In 1970, Warner V. Slack, MD, and Howard Bleich, MD, co-founded the academic division of clinical informatics (DCI) at Beth Israel Deaconess Medical Center and Harvard Medical School. Warner Slack is a pioneer of the development of the electronic patient medical history, and in 1977 Dr. Bleich created the first user-friendly search engine for the world's biomedical literature.
Computerised systems involved in patient care have led to a number of changes. Such changes have led to improvements in electronic health records, which are now capable of sharing medical information among multiple health care stakeholders (Zahabi, Kaber, & Swangnetr, 2015), thereby supporting the flow of patient information through various modalities of care. One opportunity for electronic health records (EHRs) to be used even more effectively is to apply natural language processing to search and analyze notes and text that would otherwise be inaccessible for review. These methods can be further developed through ongoing collaboration between software developers and end-users of natural language processing tools within EHRs.
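As a rough illustration of the natural-language-processing opportunity described above, the sketch below runs a naive keyword-and-negation search over hypothetical note strings. Production clinical NLP would rely on dedicated toolkits rather than bare regular expressions; the notes and patterns here are assumptions for illustration.

```python
import re

# Invented free-text notes standing in for EHR narrative text.
notes = [
    "Pt denies chest pain. Hx of hypertension.",
    "Chest pain on exertion, resolved with rest.",
]

# Match the phrase, but skip mentions preceded by a simple negation
# ("denies", "no", "without") earlier in the same sentence.
pattern = re.compile(r"\bchest pain\b", re.IGNORECASE)
negation = re.compile(r"\b(denies|no|without)\b[^.]*\bchest pain\b", re.IGNORECASE)

for i, note in enumerate(notes):
    if pattern.search(note) and not negation.search(note):
        print(f"note {i}: possible positive mention")
```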
Computer use today covers a broad range of capabilities, including but not limited to physician diagnosis and documentation, patient appointment scheduling, and billing. Many researchers in the field have identified an increase in the quality of health care systems, decreased errors by health care workers, and savings in time and money (Zahabi, Kaber, & Swangnetr, 2015). The system, however, is not perfect and will continue to require improvement. Frequently cited areas of concern involve usability, safety, accessibility, and user-friendliness (Zahabi, Kaber, & Swangnetr, 2015).
Homer R. Warner, one of the fathers of medical informatics, founded the Department of Medical Informatics at the University of Utah in 1968. The American Medical Informatics Association (AMIA) has an award named after him on application of informatics to medicine.
The American Medical Informatics Association created a board certification for medical informatics through the American Board of Preventive Medicine. The American Nurses Credentialing Center offers a board certification in nursing informatics. For radiology informatics, the CIIP (Certified Imaging Informatics Professional) certification was created by ABII (the American Board of Imaging Informatics), which was founded by SIIM (the Society for Imaging Informatics in Medicine) and ARRT (the American Registry of Radiologic Technologists) in 2005. The CIIP certification requires documented experience working in imaging informatics and formal testing, and is a limited-time credential requiring renewal every five years.
The exam tests for a combination of IT technical knowledge, clinical understanding, and project management experience thought to represent the typical workload of a PACS administrator or other radiology IT clinical support role. Certifications from PARCA (PACS Administrators Registry and Certifications Association) are also recognized. The five PARCA certifications are tiered from entry-level to architect level. The American Health Information Management Association offers credentials in medical coding, analytics, and data administration, such as Registered Health Information Administrator and Certified Coding Associate. Certifications are widely requested by employers in health informatics, and overall the demand for certified informatics workers in the United States is outstripping supply. The American Health Information Management Association reports that only 68% of applicants pass certification exams on the first try.
In 2017, a consortium of health informatics trainers (composed of MEASURE Evaluation, Public Health Foundation India, University of Pretoria, Kenyatta University, and the University of Ghana) identified the following areas of knowledge as a curriculum for the digital health workforce, especially in low- and middle-income countries: clinical decision support; telehealth; privacy, security, and confidentiality; workflow process improvement; technology, people, and processes; process engineering; quality process improvement and health information technology; computer hardware; software; databases; data warehousing; information networks; information systems; information exchange; data analytics; and usability methods.
In 2004, President George W. Bush signed Executive Order 13335, creating the Office of the National Coordinator for Health Information Technology (ONCHIT) as a division of the U.S. Department of Health and Human Services (HHS). The mission of this office is the widespread adoption of interoperable electronic health records (EHRs) in the US within 10 years. See quality improvement organizations for more information on federal initiatives in this area. In 2014, the Department of Education approved an advanced health informatics undergraduate program submitted by the University of South Alabama. The program is designed to provide specific health informatics education and is the only program in the country with a health informatics lab. The program is housed in the School of Computing in Shelby Hall, a recently completed $50 million state-of-the-art teaching facility. The University of South Alabama awarded David L. Loeser the first health informatics degree on May 10, 2014.
The program currently is scheduled to have 100+ students awarded by 2016. The Certification Commission for Healthcare Information Technology (CCHIT), a private nonprofit group, was funded in 2005 by the U.S. Department of Health and Human Services to develop a set of standards for electronic health records (EHR) and supporting networks, and certify vendors who meet them. In July 2006, CCHIT released its first list of 22 certified ambulatory EHR products, in two different announcements. Harvard Medical School added a department of biomedical informatics in 2015. The University of Cincinnati in partnership with Cincinnati Children's Hospital Medical Center created a biomedical informatics (BMI) Graduate certificate program and in 2015 began a BMI PhD program. The joint program allows for researchers and students to observe the impact their work has on patient care directly as discoveries are translated from bench to bedside.
Europe
European Union
The European Commission's preference, as exemplified in the 5th Framework as well as currently pursued pilot projects, is for Free/Libre and Open Source Software (FLOSS) for health care.
The European Union's Member States are committed to sharing their best practices and experiences to create a European eHealth Area, thereby improving access to, and the quality of, health care while stimulating growth in a promising new industrial sector. The European eHealth Action Plan plays a fundamental role in the European Union's strategy. Work on this initiative involves a collaborative approach among several parts of the Commission's services. The European Institute for Health Records is involved in the promotion of high-quality electronic health record systems in the European Union.
UK
The broad history of health informatics has been captured in the book UK Health Computing: Recollections and Reflections (Hayes G, Barnett D (Eds.), BCS, May 2008) by those active in the field, predominantly members of BCS Health and its constituent groups. The book describes the path taken as "early development of health informatics was unorganized and idiosyncratic". In the early 1950s, it was prompted by those involved in NHS finance, and only in the early 1960s did solutions emerge, including those in pathology (1960), radiotherapy (1962), immunization (1963), and primary care (1968). Many of these solutions, even in the early 1970s, were developed in-house by pioneers in the field to meet their own requirements. In part, this was due to some areas of health services (for example the immunization and vaccination of children) still being provided by local authorities.
The coalition government set out its broad proposals in the 2010 strategy Equity and Excellence: Liberating the NHS (July 2010), stating: "We will put patients at the heart of the NHS, through an information revolution and greater choice and control", with shared decision-making becoming the norm ("no decision about me without me") and patients having access to the information they want in order to make choices about their care. They will have increased control over their own care records.
There are different models of health informatics delivery in each of the home countries (England, Scotland, Northern Ireland and Wales) but some bodies like UKCHIP (see below) operate for those 'in and for' all the home countries and beyond.
NHS informatics in England was contracted out to several vendors for national health informatics solutions under the National Programme for Information Technology (NPfIT) label in the early to mid-2000s, under the auspices of NHS Connecting for Health (part of the Health and Social Care Information Centre as of 1 April 2013). NPfIT originally divided the country into five regions, with strategic 'systems integration' contracts awarded to one of several Local Service Providers (LSP).
The various specific technical solutions were required to connect securely with the NHS 'Spine', a system designed to broker data between different systems and care settings. NPfIT fell significantly behind schedule and its scope and design were being revised in real time, exacerbated by media and political lambasting of the Programme's spend (past and projected) against the proposed budget. In 2010 a consultation was launched as part of the new Conservative/Liberal Democrat Coalition Government's White Paper "Liberating the NHS". This initiative provided little in the way of innovative thinking, primarily re-stating existing strategies within the proposed new context of the Coalition's vision for the NHS.
The degree of computerization in NHS secondary care was quite high before NPfIT, and the programme stalled further development of the installed base: the original NPfIT regional approach provided neither a single, nationwide solution nor local health community agility or autonomy to purchase systems, but instead tried to deal with a hinterland in the middle.
Almost all general practices in England and Wales are computerized under the GP Systems of Choice programme, and patients have relatively extensive computerized primary care clinical records. System choice is the responsibility of individual general practices, and while there is no single, standardized GP system, the programme sets relatively rigid minimum standards of performance and functionality for vendors to adhere to. Interoperation between primary and secondary care systems is rather primitive. It is hoped that a focus on interworking standards (for interfacing and integration) will stimulate synergy between primary and secondary care in sharing the information necessary to support the care of individuals. Notable successes to date are in the electronic requesting and viewing of test results; in some areas, GPs also have access to digital X-ray images from secondary care systems.
In 2019 the GP Systems of Choice framework was replaced by the GP IT Futures framework, which is to be the main vehicle used by clinical commissioning groups to buy services for GPs. This is intended to increase competition in an area that is dominated by EMIS and TPP. 69 technology companies offering more than 300 solutions have been accepted on to the new framework.
Wales has a dedicated Health Informatics function that supports NHS Wales in leading on the new integrated digital information services and promoting Health Informatics as a career.
The British Computer Society (BCS) provides 4 different professional registration levels for Health and Care Informatics Professionals: Practitioner, Senior Practitioner, Advanced Practitioner, and Leading Practitioner. The Faculty of Clinical Informatics (FCI) is the professional membership society for health and social care professionals in clinical informatics offering Fellowship, Membership and Associateship. BCS and FCI are member organizations of the Federation for Informatics Professionals in Health and Social Care (FedIP), a collaboration between the leading professional bodies in health and care informatics supporting the development of the informatics professions.
The Faculty of Clinical Informatics has produced a Core Competency Framework that describes the wide range of skills needed by practitioners.
Netherlands
In the Netherlands, health informatics is currently a priority for research and implementation. The Netherlands Federation of University medical centers (NFU) has created the Citrienfonds, which includes the programs eHealth and Registration at the Source. The Netherlands also has the national organizations Society for Healthcare Informatics (VMBI) and Nictiz, the national center for standardization and eHealth.
Asia and Oceania
In Asia and Australia-New Zealand, the regional group called the Asia Pacific Association for Medical Informatics (APAMI) was established in 1994 and now consists of more than 15 member regions in the Asia Pacific Region.
Australia
The Australasian College of Health Informatics (ACHI) is the professional association for health informatics in the Asia-Pacific region. It represents the interests of a broad range of clinical and non-clinical professionals working within the health informatics sphere through a commitment to quality, standards and ethical practice. ACHI is an academic institutional member of the International Medical Informatics Association (IMIA) and a full member of the Australian Council of Professions.
ACHI is a sponsor of the "e-Journal for Health Informatics", an indexed and peer-reviewed professional journal. ACHI has also supported the "Australian Health Informatics Education Council" (AHIEC) since its founding in 2009.
Although there are a number of health informatics organizations in Australia, the Health Informatics Society of Australia (HISA) is regarded as the major umbrella group and is a member of the International Medical Informatics Association (IMIA). Nursing informaticians were the driving force behind the formation of HISA, which is now a company limited by guarantee of the members. The membership comes from across the informatics spectrum, that is, from students to corporate affiliates. HISA has a number of branches (Queensland, New South Wales, Victoria and Western Australia) as well as special interest groups such as nursing (NIA), pathology, aged and community care, industry and medical imaging (Conrick, 2006).
China
Over a period of 20 years, China made a successful transition from its planned economy to a socialist market economy. Along with this change, China's health care system also experienced a significant reform to follow and adapt to this historical revolution. In 2003, data released by the Ministry of Health of the People's Republic of China (MoH) indicated that national health care expenditure totalled RMB 662.33 billion, about 5.56% of the nationwide gross domestic product. Before the 1980s, health care costs were covered entirely by the central government's annual budget. Since then, the composition of health care funding has gradually changed: most of the expenditure came to be contributed by health insurance schemes and private spending, corresponding to 40% and 45% of total expenditure respectively, while the government's financial contribution decreased to only 10%. Meanwhile, by 2004, up to 296,492 health care facilities were recorded in the MoH statistical summary, with an average of 2.4 clinical beds per 1,000 people.
Along with the development of information technology since the 1990s, health care providers realized that computerized cases and data could generate significant benefits for their services, for instance by providing information for directing patient care and assessing the best patient care for specific clinical conditions. Therefore, substantial resources were gathered to build China's own health informatics system.
Most of these resources were directed toward constructing hospital information systems (HIS), aimed at minimizing unnecessary waste and repetition and thereby promoting the efficiency and quality control of health care. By 2004, China had successfully spread HIS through approximately 35–40% of nationwide hospitals. However, the distribution of hospital-owned HIS varied critically: in the east of China over 80% of hospitals had constructed an HIS, while in the northwest the equivalent was no more than 20%. Moreover, all of the Centers for Disease Control and Prevention (CDC) above the rural level, approximately 80% of health care organisations above the rural level, and 27% of hospitals above the town level had the ability to transmit real-time reports on epidemic situations through the public health information system and to analyze infectious diseases using dynamic statistics.
China has four tiers in its health care system. The first tier comprises street health and workplace clinics, which are cheaper than hospitals in terms of medical billing and act as prevention centers. The second tier is district and enterprise hospitals along with specialist clinics, which provide the second level of care. The third tier is provincial and municipal general hospitals and teaching hospitals, which provide the third level of care. In a tier of their own are the national hospitals, which are governed by the Ministry of Health. China has been greatly improving its health informatics since it opened its doors to the outside world and joined the World Trade Organization (WTO). In 2001, it was reported that China had 324,380 medical institutions, the majority of which were clinics. The reason is that clinics are prevention centers, and Chinese people like using traditional Chinese medicine as opposed to Western medicine, which usually works for minor cases. China has also been improving its higher education in regards to health informatics.
At the end of 2002, there were 77 medical universities and medical colleges. There were 48 university medical colleges offering bachelor's, master's, and doctorate degrees in medicine, and 21 higher medical specialty institutions offering diploma degrees, making 147 higher medical and educational institutions in total. Since joining the WTO, China has been working hard to improve its education system and bring it up to international standards.
SARS played a large role in China's rapid improvement of its health care system. The 2003 outbreak of SARS made China hurry to spread HIS, and more than 80% of hospitals acquired one. China had been comparing itself to Korea's health care system and figuring out how to better its own system. One study surveyed six hospitals in China that had HIS. It found that doctors did not use computers very much, concluding that the systems were used less for clinical practice than for administrative purposes. The survey also asked whether the hospitals had created any websites: only four of them had, three built by a third-party company and one created by the hospital staff. In conclusion, all of them agreed or strongly agreed that providing health information on the Internet should be pursued.
Information collected at different times, by different participants or systems, could frequently lead to issues of misunderstanding, incomparability, or failure to exchange. To design a system with fewer such issues, health care providers realized that certain standards were the basis for sharing information and interoperability, and that a system lacking standards would be a large impediment to improving the corresponding information systems. Given that standardization for health informatics depends on the authorities, standardization efforts had to involve government, and the relevant funding and support were critical. In 2003, the Ministry of Health released the Development Lay-out of National Health Informatics (2003–2010), identifying the approach to standardization for health informatics as 'combining adoption of international standards and development of national standards'.
In China, the establishment of standardization was initially facilitated by the development of vocabulary, classification, and coding, which is conducive to storing and transmitting information for high-quality management at the national level. By 2006, 55 international/domestic standards of vocabulary, classification, and coding were in service in hospital information systems. In 2003, the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10) and the ICD-10 Clinical Modification (ICD-10-CM) were adopted as standards for diagnostic classification and acute care procedure classification. Simultaneously, the International Classification of Primary Care (ICPC) was translated and tested in China's local applied environment.
Another coding standard, named Logical Observation Identifiers Names and Codes (LOINC), was applied to serve as a general identifier for clinical observations in hospitals.
Personal identifier codes were widely employed in different information systems, covering name, sex, nationality, family relationship, educational level, and occupation. However, these codes are inconsistent across systems, hindering sharing between different regions. Considering this large quantity of vocabulary, classification, and coding standards between different jurisdictions, health care providers realized that using multiple systems could waste resources and that a non-conflicting, national-level standard was beneficial and necessary. Therefore, in late 2003, the health informatics group in the Ministry of Health released three projects to deal with the lack of national health information standards: the Chinese National Health Information Framework and Standardization, the Basic Data Set Standards of Hospital Information System, and the Basic Data Set Standards of Public Health Information System.
The objectives of the Chinese National Health Information Framework and Standardization project were:
Establish national health information framework and identify in what areas standards and guidelines are required
Identify the classes, relationships and attributes of national health information framework. Produce a conceptual health data model to cover the scope of the health information framework
Create logical data models for specific domains, depicting the logical data entities, their data attributes, and the relationships between the entities according to the conceptual health data model (a minimal sketch follows this list)
Establish a uniform representation standard for data elements according to the data entities and their attributes in the conceptual data model and logical data model
Circulate the completed health information framework and health data model to the partnership members for review and acceptance
Develop a process to maintain and refine the China model and to align with and influence international health data models
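As an illustration of what a logical data model's entities, attributes, and relationships can look like in practice, the following Python sketch encodes two invented entities and a relationship between them. The entity and attribute names are assumptions for illustration and are not taken from the Chinese framework.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Patient:
    """Invented entity: a patient with a few illustrative attributes."""
    patient_id: str
    name: str
    sex: str

@dataclass
class Encounter:
    """Invented entity: a care encounter linked to a Patient."""
    encounter_id: str
    patient_id: str                       # relationship back to Patient
    diagnosis_codes: List[str] = field(default_factory=list)

p = Patient("P001", "example", "F")
e = Encounter("E001", p.patient_id, ["ICD-10:I10"])
print(e)
```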
Comparing China's EHR Standard and ASTM E1384
In 2011, researchers from local universities evaluated the performance of China's Electronic Health Record (EHR) Standard compared with the American Society for Testing and Materials Standard Practice for Content and Structure of Electronic Health Records in the United States (ASTM E1384 Standard, withdrawn in 2017). The deficiencies found are listed below.
A lack of support for privacy and security. ISO/TS 18308 specifies that "The EHR must support the ethical and legal use of personal information, in accordance with established privacy principles and frameworks, which may be culturally or jurisdictionally specific" (ISO 18308: Health Informatics – Requirements for an Electronic Health Record Architecture, 2004). However, China's EHR Standard did not meet any of the fifteen requirements in the privacy and security subclass.
A shortage of support for different types of data and references. Since only ICD-9 is referenced as an external international coding system, other similar systems, such as SNOMED CT for the presentation of clinical terminology, cannot be considered familiar to Chinese specialists, which could hamper international information sharing.
A lack of more generic and extensible lower-level data structures. China's large and complex EHR Standard was constructed for all medical domains. However, the specificity and frequent changes over time of clinical data elements, value sets, and templates show that this one-size-fits-all approach is impractical.
Hong Kong
In Hong Kong, a computerized patient record system called the Clinical Management System (CMS) has been developed by the Hospital Authority since 1994. This system has been deployed at all the sites of the authority (40 hospitals and 120 clinics). It is used for up to 2 million transactions daily by 30,000 clinical staff. The comprehensive records of 7 million patients are available on-line in the electronic patient record (ePR), with data integrated from all sites. Since 2004, radiology image viewing has been added to the ePR, with radiography images from any HA site available as part of the ePR.
The Hong Kong Hospital Authority placed particular attention to the governance of clinical systems development, with input from hundreds of clinicians being incorporated through a structured process. The health informatics section in the Hospital Authority has a close relationship with the information technology department and clinicians to develop health care systems for the organization to support the service to all public hospitals and clinics in the region.
The Hong Kong Society of Medical Informatics (HKSMI) was established in 1987 to promote the use of information technology in health care. The eHealth Consortium has been formed to bring together clinicians from both the private and public sectors, medical informatics professionals and the IT industry to further promote IT in health care in Hong Kong.
India
eHCF School of Medical Informatics
eHealth-Care Foundation
Malaysia
Since 2010, the Ministry of Health (MoH) has been working on the Malaysian Health Data Warehouse (MyHDW) project. MyHDW aims to meet the diverse needs of timely health information provision and management, and acts as a platform for the standardization and integration of health data from a variety of sources (Health Informatics Centre, 2013). The Ministry of Health has embarked on introducing electronic hospital information systems (HIS) in several public hospitals, including Putrajaya Hospital, Serdang Hospital, and Selayang Hospital. Similarly, under the Ministry of Higher Education, hospitals such as the Universiti Malaya Medical Centre (UMMC) and the Universiti Kebangsaan Malaysia Medical Centre (UKMMC) also use HIS for health care delivery.
A hospital information system (HIS) is a comprehensive, integrated information system designed to manage the administrative, financial, and clinical aspects of a hospital. As an area of medical informatics, hospital information systems aim to achieve the best possible support of patient care and administration through electronic data processing. An HIS plays a vital role in planning, initiating, organizing, and controlling the operations of the subsystems of the hospital, and thus provides a synergistic organization in the process.
New Zealand
Health informatics is taught at five New Zealand universities. The most mature and established programme has been offered for over a decade at Otago. Health Informatics New Zealand (HINZ), is the national organization that advocates for health informatics. HINZ organizes a conference every year and also publishes a journal, Healthcare Informatics Review Online.
Saudi Arabia
The Saudi Association for Health Information (SAHI) was established in 2006 to work under direct supervision of King Saud bin Abdulaziz University for Health Sciences to practice public activities, develop theoretical and applicable knowledge, and provide scientific and applicable studies.
Russia
The Russian health care system is based on the principles of the Soviet health care system, which was oriented toward mass prophylaxis, prevention of infectious and epidemic diseases, and vaccination and immunization of the population on a socially protected basis. The current government health care system consists of several directions:
Preventive health care
Primary health care
Specialized medical care
Obstetrical and gynecologic medical care
Pediatric medical care
Surgery
Rehabilitation/ Health resort treatment
One of the main issues of the post-Soviet medical health care system was the absence of a unified system to optimize the work of medical institutions around one single database and a structured appointment schedule, which led to hours-long lines. The efficiency of medical workers was also doubtful because of paperwork administration and lost paper records.
Along with the development of information systems, IT and health care departments in Moscow agreed on the design of a system that would improve the public services of health care institutions. Tackling the issues of the existing system, the Moscow Government ordered that the design of the system provide simplified electronic booking at public clinics and automate the work of first-level medical workers.
The system designed for these purposes was called EMIAS (United Medical Information and Analysis System). It provides an electronic health record (EHR) together with the majority of other services: it manages the flow of patients, contains an integrated outpatient card, and offers consolidated managerial accounting and personalized lists of medical help. Besides that, the system contains information about the availability of medical institutions and various doctors.
The implementation of the system started in 2013 with the organization of one computerized database for all patients in the city, including a front-end for users. EMIAS has been implemented in Moscow and its region, and the project is planned to extend to most parts of the country.
Law
Health informatics law deals with evolving and sometimes complex legal principles as they apply to information technology in health-related fields. It addresses the privacy, ethical, and operational issues that invariably arise when electronic tools, information, and media are used in health care delivery. Health informatics law also applies to all matters that involve information technology, health care, and the exchange of information. It deals with the circumstances under which data and records are shared with other fields or areas that support and enhance patient care.
As many health care systems are making an effort to have patient records more readily available to them via the internet, it is important that providers implement security standards in order to ensure that the patients' information is safe. They have to be able to assure confidentiality, integrity, and security of the people, process, and technology. Since there is also the possibility of payments being made through this system, it is vital that this aspect of their private information will also be protected through cryptography.
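As a minimal illustration of the cryptography point, the sketch below encrypts and decrypts a record using the third-party Python cryptography package's Fernet recipe. Real deployments would add key management, transport security, and auditing, none of which is shown; the payload string is invented.

```python
from cryptography.fernet import Fernet

# In practice the key would live in a key vault, never inline in code.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt a hypothetical billing record for storage at rest.
token = f.encrypt(b"patient_id=12345;amount=250.00")
print(f.decrypt(token))            # b'patient_id=12345;amount=250.00'
```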
The use of technology in health care settings has become popular, and this trend is expected to continue. Various health care facilities have instituted different kinds of health information technology systems in the provision of patient care, such as electronic health records (EHRs) and computerized charting. The growing popularity of health information technology systems and the escalation in the amount of health information that can be exchanged and transferred electronically have increased the risk of potential infringement of patients' privacy and confidentiality. This concern has triggered the establishment of strict measures by both policymakers and individual facilities to ensure patient privacy and confidentiality.
One of the federal laws enacted to safeguard patient's health information (medical record, billing information, treatment plan, etc.) and to guarantee patient's privacy is the Health Insurance Portability and Accountability Act of 1996 or HIPAA. HIPAA gives patients the autonomy and control over their own health records. Furthermore, according to the U.S. Department of Health & Human Services (n.d.), this law enables patients to:
View their own health records
Request a copy of their own medical records
Request correction to any incorrect health information
Know who has access to their health record
Request who can and cannot view/access their health information
Health and medical informatics journals
Computers and Biomedical Research, first published in 1967, was one of the first journals dedicated to health informatics. Other early journals included Computers and Medicine, published by the American Medical Association; Journal of Clinical Computing, published by Gallagher Printing; Journal of Medical Systems, published by Plenum Press; and MD Computing, published by Springer-Verlag. In 1984, Lippincott published the first nursing-specific journal, titled Computers in Nursing, which is now known as Computers Informatics Nursing (CIN).
As of September 7, 2016, there are roughly 235 informatics journals listed in the National Library of Medicine (NLM) catalog of journals. The Journal Citation Reports for 2018 gives the top three journals in medical informatics as the Journal of Medical Internet Research (impact factor of 4.945), JMIR mHealth and uHealth (4.301) and the Journal of the American Medical Informatics Association (4.292).
Competencies, education and certification
In the United States, clinical informatics is a subspecialty within several medical specialties. For example, in pathology, the American Board of Pathology offers clinical informatics certification for pathologists who have completed 24 months of related training, and the American Board of Preventive Medicine offers clinical informatics certification within preventive medicine.
In October 2011, the American Board of Medical Specialties (ABMS), the organization overseeing the certification of specialist MDs in the United States, announced the creation of an MD-only physician certification in clinical informatics. The first examination for board certification in the subspecialty of clinical informatics was offered in October 2013 by the American Board of Preventive Medicine (ABPM), with 432 passing to become the 2014 inaugural class of Diplomates in clinical informatics. Fellowship programs exist for physicians who wish to become board-certified in clinical informatics. Physicians must have graduated from a medical school in the United States or Canada, or a school located elsewhere that is approved by the ABPM. In addition, they must complete a primary residency program such as internal medicine (or any of the 24 subspecialties recognized by the ABMS) and be eligible to become licensed to practice medicine in the state where their fellowship program is located. The fellowship program is 24 months in length, with fellows dividing their time between informatics rotations, didactics, research, and clinical work in their primary specialty.
See also
Related concepts
Clinical documentation improvement
Continuity of care record (CCR)
Diagnosis-related group (DRG)
eHealth
Health information exchange (HIE)
Health information management (HIM)
Human resources for health (HRH) information system
International Classification of Diseases (ICD)
National minimum dataset
Neuroinformatics
Nosology
Nursing documentation
Personal health record (PHR)
Clinical data standards
DICOM
Health Metrics Network
Health network surveillance
HL7
Fast Healthcare Interoperability Resources (FHIR)
Integrating the Healthcare Enterprise
Omaha System
openEHR
SNOMED
xDT
Algorithms
Datafly algorithm
Governance
References
Further reading
External links | 0.762195 | 0.995536 | 0.758793 |
Mourning | Mourning is the expression of an experience that is the consequence of an event in life involving loss, causing grief. It typically occurs as a result of someone's death, often (but not always) someone who was loved, although loss from death is not exclusively the cause of all experience of grief.
The word is used to describe a complex of behaviours in which the bereaved participate or are expected to participate, the expression of which varies by culture. Wearing black clothes is one practice followed in many countries, though other forms of dress are seen. Those most affected by the loss of a loved one often observe a period of mourning, marked by withdrawal from social events and quiet, respectful behavior in some cultures, though in others mourning is a collective experience. People may follow religious traditions for such occasions.
Mourning may apply to the death of, or anniversary of the death of, an important individual such as a local leader, monarch, religious figure, or member of family. State mourning may occur on such an occasion. In recent years, some traditions have given way to less strict practices, though many customs and traditions continue to be followed.
Death can be a release for the mourner, in the case of the death of an abusive or tyrannical person, or when death terminates the long, painful illness of a loved one. However, this release may add remorse and guilt for the mourner.
Stages of grief
Mourning is a personal and collective response which can vary depending on feelings and contexts. Elisabeth Kübler-Ross's theory of grief describes five separate periods of experience in the psychological and emotional processing of death. These stages do not necessarily follow each other, and each period is not inevitable. The theory was originally posited to describe the experiences of those confronted with their imminent deaths, but has since been adopted to understand the experiences of bereaved loved ones. The theory has faced criticism for being overly prescriptive and lacking evidence.
Denial: A phase characterised by the griever's refusal to accept news of a loved one's death or terminal illness. It is typically a shorter period that serves as a defence mechanism in a distressing situation.
Anger: This phase is characterized by a sense of outrage at the loss, accompanied in some cases by guilt. The anger response typically involves blaming others for the loss, potentially including higher powers.
Bargaining: This phase sees a person engage in internal or external bargaining and negotiation.
Depression: The depression phase can be the longest phase of the mourning process, characterized by great sadness, questioning, and distress. It is an allowance of the pain against which the first three stages may act as defence mechanisms. Mourners in this phase sometimes feel that they will never complete their mourning; they have experienced a wide range of emotions, and their sorrow is great.
Acceptance: The last stage of mourning, in which the bereaved begins to recover. The reality of the loss is much better understood and accepted. The bereaved may still feel sadness, but regains full functioning and reorganizes life to adjust to the loss.
The five stages can be understood in terms of both psychological and social responses.
Psychological: When someone close to a person dies, the person enters a period of sorrow and questioning, or even nervous breakdown. There are three stages in the grieving process, encompassing the denial, depression, and acceptance phases of Kübler-Ross's five-step model.
Social: The feelings and mental state of the mourner affect their ability to maintain or enter into relationships with others, including professional, personal, and sexual relationships. After the customs of burying or cremating the deceased, many cultures follow a number of socially prescribed traditions that may affect the clothing a person wears and what activities they can partake in. These traditions are generally determined by the degree of kinship to the deceased and the social importance of the deceased.
There are various other models for understanding grief, including Bowlby and Parkes' Four Phases of Grief, Worden's Four Basic Tasks in Adapting to Loss, Wolfelt's Companioning Approach to Grieving, Neimeyer's Narrative and Constructivist Model, the Stroebe and Schut model, and the Okun and Nowinski model.
Social customs and dress
Africa
Ethiopia
In Ethiopia, an iddir (with variant forms in the Oromo language) is a traditional community organization whose members assist each other during the mourning process. Members make monthly financial contributions forming the iddir's fund, and are entitled to receive a certain sum of money from this fund to help cover funeral and other expenses associated with deaths. Additionally, members comfort the mourners: female members take turns doing housework, such as preparing food for the mourning family, while male members usually take responsibility for arranging the funeral and erecting a temporary tent to shelter guests who come to visit the mourning family. Iddir members stay with the mourning family and comfort them for a week or more, during which time the family is never alone.
Nigeria
In Nigeria, there is a cultural belief that a recent widow is impure. During the mourning period, which lasts from three months to a year, several traditions are enforced for the purpose of purification, including confinement, complete shaving of the widow and her children, and a ban on hygiene practices, including hand-washing, wearing clean clothes, or sitting off the floor when eating. The extended family of the husband also takes all the widow's property. These practices are criticised for the health risks they pose and the emotional damage they do to the widow.
Asia
East Asia
White is the traditional color of mourning in Chinese culture, with white clothes and hats formerly having been associated with death. In imperial China, Confucian mourning obligations required even the emperor to retire from public affairs upon the death of a parent. The traditional period of mourning was nominally three years, though usually 25–27 lunar months in practice, and even shorter for officials whose service was deemed necessary; the emperor, for example, typically remained in seclusion for just 27 days.
The Japanese term for mourning dress is mofuku, referring to either primarily black Western-style formal wear or to black kimono and traditional clothing worn at funerals and Buddhist memorial services. Other colors, particularly reds and bright shades, are considered inappropriate for mourning dress. If wearing Western clothes, women may wear a single strand of white pearls. Japanese-style mourning dress for women consists of a five-crested plain black silk kimono, a black obi and black accessories worn over white undergarments, black tabi and white zōri. Men's mourning dress consists of clothing worn on extremely formal occasions: a plain black silk five-crested kimono and black and white, or gray and white, striped hakama trousers over white undergarments, a black crested haori jacket with a white closure, and white or black and white zōri. It is customary for Japanese-style mourning dress to be worn only by the immediate family and very close friends of the deceased; other attendees wear Western-style mourning dress or subdued Western or Japanese formal clothes.
Southeast Asia
In Thailand, people wear black when attending a funeral. Black is considered the mourning color, although historically it was white. Widows may wear purple when mourning the death of their spouse.
In the Philippines, mourning customs vary and are influenced by Chinese and folk Catholic beliefs. The immediate family traditionally wear black, with white as a popular alternative. Others may wear subdued colours when paying respects, with red universally considered taboo and bad luck when worn within 9–40 days of a death, as the colour is reserved for happier occasions. Those who wear uniforms are allowed to wear a black armband above the left elbow, as do male mourners in barong tagalog. The bereaved, should they wear other clothes, attach a small scrap of black ribbon or a black plastic pin on the left breast, which is disposed of after mourning. Flowers are an important symbol in Filipino funerals. Consuming chicken during the wake and funeral is believed to bring more death to the bereaved, who are also forbidden from seeing visitors off. Counting nine days from the moment of death, a novena of Masses or other prayers, known as the pasiyam (from the word for "nine"), is performed; the actual funeral and burial may take place within this period or after. The spirit of the dead is believed to roam the earth until the 40th day after death, when it is said to cross into the afterlife, echoing the 40 days between Christ's Resurrection and Ascension into Heaven. On this day the immediate family have another Mass said, followed by a small feast, and do so again on the first death anniversary. This is the babang luksa, which is the commonly accepted endpoint of official mourning.
West Asia
In the Assyrian tradition, just after a person passes away, the mourning family hosts guests in an open-house style. Only bitter coffee and tea are served, reflecting the sorrowful state of the family. On the funeral day, a memorial mass is held in the church. At the graveyard, the people gather and burn incense around the grave as clergy chant hymns in the Syriac language. The closest female relatives traditionally bewail or lament in a public display of grief as the casket descends. A few others may sing a dirge or a sentimental threnody. During all these occasions, everyone is expected to dress completely in black. Following the burial, everyone returns to the church hall for an afternoon lunch and eulogy. At the hall, the closest relatives sit at a long table facing the guests as many people walk by and offer their condolences. On the third day, mourners customarily visit the grave site with a pastor to burn incense, symbolising Jesus' triumph over death on the third day. This is also done 40 days after the funeral (representing Jesus ascending to heaven), and one year later to conclude the mourning period. Mourners wear only black until the 40-day mark and typically do not dance or celebrate any major events for one year.
Europe
Continental Europe
The custom of wearing unadorned black clothing for mourning dates back at least to the Roman Empire, when the toga pulla, made of dark-colored wool, was worn during mourning.
Through the Middle Ages and Renaissance, distinctive mourning was worn for general as well as personal loss; after the St. Bartholomew's Day Massacre of Huguenots in France, Elizabeth I of England and her court are said to have dressed in full mourning to receive the French Ambassador.
Widows and other women in mourning wore distinctive black caps and veils, generally in a conservative version of any current fashion.
In areas of Russia, the Czech Republic, Slovakia, Greece, Albania, Mexico, Portugal, and Spain, widows wear black for the rest of their lives. The immediate family members of the deceased wear black for an extended time. Since the 1870s, mourning practice in some of these cultures, even among those who have emigrated to the United States, has been to wear black for at least two years, though lifelong black for widows remains the custom in some parts of Europe.
In Belgium, the Court goes into public mourning after publication of a notice in the Moniteur Belge. In 1924, the court went into mourning for 10 days after the death of Marie-Adélaïde, Grand Duchess of Luxembourg; for five days for the Duke of Montpensier; and for a full month after the death of Princess Louise of Belgium.
White mourning
The color of deepest mourning among medieval European queens was white. In 1393, Parisians were treated to the unusual spectacle of a royal funeral carried out in white, for Leo V, King of Armenia, who died in exile. This royal tradition survived in Spain until the end of the 15th century. In 1934, Queen Wilhelmina of the Netherlands reintroduced white mourning after the death of her husband Prince Henry. It has since remained a tradition in the Dutch royal family.
In 2004, the four daughters of Queen Juliana of the Netherlands all wore white to their mother's funeral. In 1993, the Spanish-born Queen Fabiola introduced it in Belgium for the funeral of her husband, King Baudouin. The custom for the queens of France to wear deuil blanc ("white mourning") was the origin of the white wardrobe created in 1938 by Norman Hartnell for Queen Elizabeth (later known as the Queen Mother). She was required to join her husband King George VI on a state visit to France while still mourning her mother.
United Kingdom
In the present, no special dress or behaviour is obligatory for those in mourning in the general population of the United Kingdom, although ethnic groups and religious faiths have specific rituals, and black is typically worn at funerals. Traditionally, however, strict social rules were observed.
Georgian and Victorian eras
By the 19th century, mourning behaviour in England had developed into a complex set of rules, particularly among the upper classes. For women, the customs involved wearing heavy, concealing black clothing, and the use of heavy veils of black crêpe. The entire ensemble was colloquially known as "widow's weeds" (from the Old English wǣd, meaning "garment"), and would comprise either newly created clothing or overdyed clothing which the mourner already owned. Up until the later 18th century, the clothes of the deceased, unless the deceased was very poor, were still listed in the inventories of the dead, as clothing constituted a relatively high expense. Mourning attire could feature "weepers"—conventional markers of grief such as white cuffs or cuff adornments, black hat-bands, or long black crêpe veils.
Special caps and bonnets, usually in black or other dark colours, went with these ensembles; mourning jewellery, often made of jet, was also worn, and became highly popular in the Victorian era. Jewellery was also occasionally made using the hair of the deceased. The wealthy would wear cameos or lockets designed to hold a lock of the deceased's hair or some similar relic.
Social norms could prescribe that widows wear special clothes to indicate that they were in mourning for up to four years after the death, although a widow could choose to wear such attire for longer, even for the rest of her life. To change one's clothing too early was considered disrespectful to the deceased and, if the widow was still young and attractive, suggestive of potential sexual promiscuity. Those subject to the rules were slowly allowed to re-introduce conventional clothing at specific times; the stages were known by terms such as "full mourning" and "half mourning". For half mourning, muted colours such as lilac, grey and lavender could be introduced.
Friends, acquaintances, and employees wore mourning to a greater or lesser degree depending on their relationship to the deceased. Mourning was worn for six months after the death of a sibling. Parents would wear mourning for a child for "as long as they [felt] so disposed". A widow was supposed to wear mourning for two years, and was not supposed to "enter society" for 12 months. No lady or gentleman in mourning was supposed to attend social events while in deep mourning. In general, servants wore black armbands following a death in the household. However, amongst polite company, the wearing of a simple black armband was seen as appropriate only for military men, or for others compelled to wear uniform in the course of their duties—a black armband instead of proper mourning clothes was seen as a degradation of proper etiquette, and to be avoided. In general, men were expected to wear mourning suits (not to be confused with morning suits) of black frock coats with matching trousers and waistcoats. In the later interbellum period (between World War I and World War II), as the frock coat became increasingly rare, the mourning suit consisted of a black morning coat with black trousers and waistcoat, essentially a black version of the morning suit worn to weddings and other occasions, which would normally include coloured waistcoats and striped or checked trousers.
Formal mourning customs culminated during the reign of Queen Victoria, whose long and conspicuous grief over the 1861 death of her husband, Prince Albert, heavily influenced society. Although clothing fashions began to be more functional and less restrictive in the succeeding Edwardian era (1901–1910), appropriate dress for men and women—including that for the period of mourning—was still strictly prescribed and rigidly adhered to. In 2014, The Metropolitan Museum of Art mounted an exhibition of women's mourning attire from the 19th century, entitled Death Becomes Her: A Century of Mourning Attire.
The customs were not universally supported, with Charles Voysey writing in 1873 "that it adds needlessly to the gloom and dejection of really afflicted relatives must be apparent to all who have ever taken part in these miserable rites".
The rules gradually relaxed over time, and it became acceptable practice for both sexes to dress in dark colours for up to a year after a death in the family. By the late 20th century, this no longer applied, and women in cities had widely adopted black as a fashionable colour.
North America
United States
Mourning generally followed English forms into the 20th century. Black dress is still considered proper etiquette for attendance at funerals, but extended periods of wearing black dress are no longer expected. However, attendance at social functions such as weddings when a family is in deep mourning is frowned upon. Men who share their father's given name and use a suffix such as "Junior" retain the suffix at least until the father's funeral is complete.
In the antebellum South, with social mores that imitated those of England, mourning was just as strictly observed by the upper classes.
In the 19th century, mourning could be quite expensive, as it required a whole new set of clothes and accessories or, at the very least, overdyeing existing garments and taking them out of daily use. For a poorer family, this was a strain on resources.
At the end of The Wonderful Wizard of Oz, Dorothy explains to Glinda that she must return home because her aunt and uncle cannot afford to go into mourning for her, as it would be too expensive.
A late 20th and early 21st century North American mourning phenomenon is the rear window memorial decal. This is a large vinyl window-cling decal memorializing a deceased loved one, prominently displayed in the rear windows of cars and trucks belonging to close family members and sometimes friends. It often contains birth and death dates, although some contain sentimental phrases or designs as well.
The Pacific
Tonga
In Tonga, family members of deceased persons wear black for an extended time, with large plain Taʻovala. Often, black bunting is hung from homes and buildings. In the case of the death of royalty, the entire country adopts mourning dress and black and purple bunting is displayed from most buildings.
State and official mourning
States usually declare a period of "official mourning" after the death of a head of state. In the case of a monarchy, court mourning refers to mourning during a set period following the death of a public figure or member of a royal family. The protocols for mourning vary, but typically include the lowering of flags to half-mast on public buildings. In contrast, the Royal Standard of the United Kingdom is not flown at half-mast upon the death of a head of state, as there is always a monarch on the throne.
The degree and duration of public mourning is generally decreed by a protocol officer. It was not unusual for the British court to declare that all citizens should wear full mourning for a specified period after the death of the monarch, or that the members of the court should wear full- or half-mourning for an extended time. On the death of Queen Victoria (22 January 1901), the Canada Gazette published an "extra" edition announcing that court mourning would continue until 24 January 1902. It directed the public to wear deep mourning until 6 March 1901 and half-mourning until 17 April 1901. As they had done in earlier years for Queen Victoria, her son King Edward VII, his wife Queen Alexandra and Queen Elizabeth The Queen Mother, the royal family went into mourning on the death of Prince Philip in April 2021. The black-and-white costumes designed by Cecil Beaton for the Royal Ascot sequence in My Fair Lady were inspired by the "Black Ascot" of 1910, when the court was in mourning for Edward VII.
The principle of continuity of the State, however, is also respected in mourning, and is reflected in the French saying "Le roi est mort, vive le roi!" ("The king is dead, long live the king!"). Regardless of the formalities of mourning, the power of state is handed on, typically immediately if the succession is uncontested. A short interruption of work in the civil service, however, may result from one or more days of closing the offices, especially on the day of the state funeral.
In January 2006, on the death of Jaber Al-Ahmad Al-Jaber Al-Sabah, the emir of Kuwait, a mourning period of 40 days was declared. In Tonga, the official mourning lasts for a year; the heir is crowned after this period has passed.
Religions and customs
Confucianism
There are five grades of mourning obligations in the Confucian Code. A person is expected to honor most of those descended from their great-great-grandfather, and most of their wives. The death of a person's father or mother merits 27 months of mourning; the death of a person's grandfather on the male side, as well as the grandfather's wife, is grade two, necessitating 12 months of mourning. A paternal uncle is grade three, at nine months; grade four is reserved for one's father's first cousin, maternal grandparents, siblings and sister's children (five months). First cousins once removed, second cousins and the parents of a man's wife are considered grade five (three months).
Buddhism
Christianity
Eastern Christianity
Orthodox Christians usually hold the funeral either the day after death or on the third day, and always during the daytime. In traditional Orthodox communities, the body of the departed would be washed and prepared for burial by family or friends, and then placed in the coffin in the home. A house in mourning would be recognizable by the lid of the coffin, with a cross on it, and often adorned with flowers, set on the porch by the front door.
Special prayers are held on the third, seventh or ninth (number varies in different national churches), and 40th days after death; the third, sixth and ninth or twelfth month; and annually thereafter in a memorial service, for up to three generations. Kolyva is ceremoniously used to honor the dead.
Sometimes men in mourning will not shave for the 40 days. In Greece and other Orthodox countries, it is not uncommon for widows to remain in mourning dress for the rest of their lives.
When an Orthodox bishop dies, a successor is not elected until after the 40 days of mourning are completed, during which period his diocese is said to be "widowed".
The 40th day has great significance in the Orthodox religion, as it is considered the period during which the soul of the deceased wanders on earth. On the 40th day, the ascension of the deceased's soul occurs; it is the most important day of the mourning period, when special prayers are held at the grave site of the deceased.
As in the Roman Catholic rites, there can be symbolic mourning. During Holy Week, some temples in the Church of Cyprus draw black curtains across the icons. The services of Good Friday and Holy Saturday morning are patterned in part on the Orthodox Christian burial service, and funeral lamentations.
Western Christianity
European social forms are, in general, forms of Christian religious expression transferred to the greater community.
In the Roman Catholic Church, the Mass of Paul VI adopted in 1969 allows several options for the liturgical color used in Masses for the Dead. Before this, black was the ordinary color for funeral Masses except for white in the case of small children; the revised use makes other options available, with black as the intended norm. According to the General Instruction of the Roman Missal (§346d-e), black vestments are to be used at Offices and Masses for the Dead; an indult was given for some countries to use violet or white vestments, and in places those colours have largely supplanted black.
Christian churches often go into symbolic mourning during the period of Lent to commemorate the sacrificial death of Jesus. Customs vary among denominations and include temporarily covering or removing statuary, icons and paintings, and use of special liturgical colours, such as violet/purple, during Lent and Holy Week.
In more formal congregations, parishioners also dress according to specific forms during Holy Week, particularly on Maundy Thursday and Good Friday, when it is common to wear black or sombre dress or the liturgical colour of purple.
Special prayers are held on the third, seventh, and thirtieth days after death. Prayers are held on the third day because Jesus rose again after three days in the sepulchre (1 Corinthians 15:4); on the seventh day because Joseph mourned his father Jacob seven days (Genesis 50:10) and in the Book of Sirach it is written that "seven days the dead are mourned" (Ecclesiasticus 22:13); and on the thirtieth day because Aaron (Numbers 20:30) and Moses (Deuteronomy 34:8) were mourned thirty days.
Hinduism
Death is not seen as the final "end" in Hinduism, but is seen as a turning point in the seemingly endless journey of the indestructible "atman", or soul, through innumerable bodies of animals and people. Hence, Hinduism prohibits excessive mourning or lamentation upon death, as this can hinder the passage of the departed soul towards its journey ahead: "As mourners will not help the dead in this world, therefore (the relatives) should not weep, but perform the obsequies to the best of their power."
Hindu mourning is described in dharma shastras. It begins immediately after the cremation of the body and ends on the morning of the thirteenth day. Traditionally, the body is cremated within 24 hours after death; however, cremations are not held after sunset or before sunrise. Immediately after the death, an oil lamp is lit near the deceased, and this lamp is kept burning for three days.
Hinduism associates death with ritual impurity for the immediate blood family of the deceased, hence during these mourning days, the immediate family must not perform any religious ceremonies (except funerals), must not visit temples or other sacred places, must not serve the sages (holy men), must not give alms, must not read or recite from the sacred scriptures, nor can they attend social functions such as marriages and parties. The family of the deceased is not expected to serve any visiting guests food or drink. It is customary that the visiting guests do not eat or drink in the house where the death has occurred. The family in mourning are required to bathe twice a day, eat a single simple vegetarian meal, and try to cope with their loss.
On the day on which the death has occurred, the family do not cook; hence usually close family and friends will provide food for the mourning family. White clothing (the color of purity) is the color of mourning, and many will wear white during the mourning period.
The male members of the family do not cut their hair or shave, and the female members of the family do not wash their hair until the 10th day after the death. If the deceased was young and unmarried, the "Narayan Bali" is performed by the pandits and the mantras of the "Bhairon Paath" are recited. This ritual is performed by the person who gave the Mukhagni (the ritual of giving fire to the dead body).
On the morning of the 13th day, a Śrāddha ceremony is performed. The main ceremony involves a fire sacrifice, in which offerings are given to the ancestors and to gods, to ensure the deceased has a peaceful afterlife. Pind Sammelan is performed to ensure the involvement of the departed soul with that of God. Typically after the ceremony, the family cleans and washes all the idols in the family shrine; and flowers, fruits, water and purified food are offered to the gods. Then, the family is ready to break the period of mourning and return to daily life.
Islam
In Shi'a Islam, mourning practices are observed annually in the month of Muharram, the first month of the Islamic lunar calendar. This mourning is held in commemoration of Imam Al Husayn ibn Ali, who was martyred along with his 72 companions by Yazid bin Muawiyah. Shi'a Muslims wear black clothes and hold processions on the roads to mourn the tragedy of Karbala. Shi'a Muslims also mourn the death of Fatima (the only daughter of Muhammad) and the Shi'a Imams.
Mourning is observed in Islam by increased devotion, receiving visitors and condolences, and avoiding decorative clothing and jewelry. Loved ones and relatives are to observe a three-day mourning period. Widows observe an extended mourning period (Iddah), four months and ten days long, in accordance with the Qur'an 2:234. During this time, she is not to remarry, move from her home, or wear decorative clothing or jewelry.
Grief at the death of a beloved person is normal, and weeping for the dead is allowed in Islam. What is prohibited is to express grief by wailing ("bewailing" refers to mourning in a loud voice), shrieking, tearing hair or clothes, breaking things, scratching faces, or uttering phrases that make a Muslim lose faith.
Directives for widows
The Qur'an prohibits widows from engaging themselves to remarry for four lunar months and ten days after the death of their husbands.
Islamic scholars consider this directive a balance between mourning a husband's death and protecting the widow from censure for appearing interested in remarrying too soon after her husband's death. The waiting period also serves to ascertain whether or not she is pregnant.
Judaism
Judaism looks upon mourning as a process by which the stricken can re-enter into society, and so provides a series of customs that make this process gradual. The first stage, observed as all the stages are by immediate relatives (parents, spouse, siblings and children), is the shiva (literally meaning "seven"), which consists of the first seven days after the funeral. The second stage is the shloshim (thirty), referring to the thirty days following the death. The period of mourning after the death of a parent lasts one year. Each stage places lighter demands and restrictions than the previous one in order to reintegrate the bereaved into normal life.
The best-known and central stage is shiva, a Jewish mourning practice in which people adjust their behaviour as an expression of their bereavement for the week immediately after the burial. In the West, typically, mirrors are covered and a small tear is made in an item of clothing to indicate a lack of interest in personal vanity. The bereaved dress simply and sit on the floor, short stools or boxes rather than chairs when receiving the condolences of visitors. In some cases relatives or friends take care of the bereaved's household chores, such as cooking and cleaning. English speakers use the expression "to sit shiva".
During the shloshim, the mourners are no longer expected to sit on the floor or be taken care of (cooking/cleaning). However, some customs still apply: there is a prohibition on getting married or attending any sort of celebration, and men refrain from shaving or cutting their hair.
Restrictions during the year of mourning include not wearing new clothes, not listening to music and not attending celebrations. In addition, the sons of the deceased recite the Kaddish prayer for the first eleven months of the year during prayer services where there is a quorum of 10 men. The Kaddish prayer is then recited annually on the date of death, usually called the yahrzeit. The date is according to the Hebrew calendar. In addition to saying the Kaddish in the synagogue, a 24-hour memorial candle is lit in the home of the person saying the Kaddish.
See also
Black armband
Burial
Cemetery
Cremation
Death wail
Half-mast
Month's Mind
Mourning portraits
Mourning ring
Mourning sickness
Mourning stationery
Requiem
Rudaali (Indian film)
Victorian fashion
Wake (ceremony)
Widow's cap
References
Citations
Bibliography
The Canada Gazette
Clothing of Ancient Rome
Charles Spencer, Cecil Beaton: Stage and Film Designs, London: Academy Editions, 1975. (no ISBN)
Karen Rae Mehaffey, The After-Life: Mourning Rituals and the Mid-Victorians, Lasar Writers Publishing, 1993. (no ISBN)
External links
The Jewish Way in Death and Mourning By Maurice Lamm
To Those Who Mourn a Christian view by Max Heindel
Funerals
History of clothing
Naturalistic observation
Naturalistic observation, sometimes referred to as fieldwork, is a research methodology in numerous fields of science including ethology, anthropology, linguistics, the social sciences, and psychology, in which data are collected as they occur in nature, without any manipulation by the observer. Examples range from watching an animal's eating patterns in the forest to observing the behavior of students in a school setting. During naturalistic observation, researchers take great care using unobtrusive methods to avoid interfering with the behavior they are observing. Naturalistic observation contrasts with analog observation in an artificial setting that is designed to be an analog of the natural situation, constrained so as to eliminate or control for effects of any variables other than those of interest. There is similarity to observational studies in which the independent variable of interest cannot be experimentally controlled for ethical or logistical reasons.
Naturalistic observation has both advantages and disadvantages as a research methodology. Observations are more credible because the behavior occurs in a real, typical scenario as opposed to an artificial one generated within a lab. Behavior that could never occur in a controlled laboratory environment can lead to new insights. Naturalistic observation also allows for the study of events that are deemed unethical to study experimentally, such as the impact of high school shootings on students attending the high school. However, because extraneous variables cannot be controlled as in a laboratory, it is difficult to replicate findings and demonstrate their reliability. In particular, if subjects know they are being observed, they may behave differently than they otherwise would. It may also be difficult to generalize findings of naturalistic studies beyond the observed situations.
See also
Jane Goodall
Meditation
Natural history
Observer-expectancy effect
People watching
Qualitative research
Scholar-practitioner model
Unobtrusive measures
References
Behaviorism
Psychology experiments
Qualitative research
Naturalism (philosophy)
Damasio's theory of consciousness
Developed in his 1999 book "The Feeling of What Happens", Antonio Damasio's theory of consciousness proposes that consciousness arises from the interactions between the brain, the body, and the environment. According to this theory, consciousness is not a unitary experience, but rather emerges from the dynamic interplay between different brain regions and their corresponding bodily states. Damasio argues that our conscious experiences are influenced by the emotional responses that are generated by our body's interactions with the environment, and that these emotional responses play a crucial role in shaping our conscious experience. This theory emphasizes the importance of the body and its physiological processes in the emergence of consciousness.
Damasio's three-layered theory is based on a hierarchy of stages, with each stage building upon the last: the protoself, the most basic representation of the organism, followed by core consciousness and extended consciousness. Damasio's approach to explaining the development of consciousness relies on three notions: emotion, feeling, and feeling a feeling. Emotions are a collection of unconscious neural responses that give rise to feelings. Emotions are complex reactions to stimuli that cause observable external changes in the organism. A feeling arises when the organism becomes aware of the changes it is experiencing as a result of external or internal stimuli. Four themes stand out in Antonio Damasio's work on consciousness:
1. Holistic Approach: Damasio argues that consciousness isn't just a brain function but involves the entire body. He suggests that the brain works in tandem with older biological systems like the endocrine and immune systems, emphasizing a holistic view of consciousness.
2. Homeostasis as Central: Damasio's theory places homeostasis at the core of consciousness, proposing that consciousness evolved to help organisms maintain internal stability, which is crucial for survival.
3. Microbiome Influence: Damasio highlights the role of the gut microbiome in influencing brain function and emotional states, suggesting that our consciousness is affected by the microbial environment within our bodies.
4. Dual Mind Registers: He distinguishes between two mental registers: one for cognitive functions like reasoning, and another for emotions and feelings, which are tied to the body's state.
Protoself
According to Damasio's theory of consciousness, the protoself is the first stage in the hierarchical process of consciousness generation. Shared by many species, the protoself is the most basic representation of the organism, and it arises from the brain's constant interaction with the body. The protoself is an unconscious process that creates a "map" of the body's physiological state, which is then used by the brain to generate conscious experience. This "map" is constantly updated as the brain receives new stimuli from the body, and it forms the foundation for the development of more complex forms of consciousness.
Damasio asserts that the protoself is signified by a collection of neural patterns that are representative of the body's internal state. The function of this "self" is to constantly detect and record, moment by moment, the internal physical changes that affect the homeostasis of the organism. The protoself does not represent a traditional sense of self; rather, it is a pre-conscious state that provides a reference for the core self and autobiographical self to build from. As Damasio puts it, "Protoself is a coherent collection of neural patterns, which map moment-by-moment the state of the physical structure of the organism" (Damasio 1999).
Multiple brain areas are required for the protoself to function: namely, the hypothalamus, which controls the general homeostasis of the organism; the brain stem, whose nuclei map body signals; and the insular cortex, whose function is linked to emotion. These brain areas work together to keep up with the constant process of collecting neural patterns that map the current status of the body's responses to environmental changes. The protoself does not require language in order to function; moreover, it is a direct report of one's experience. In this state, emotion begins to manifest itself as second-order neural patterns located in subcortical areas of the brain. Emotion acts as a neural object, from which a physical reaction can be drawn. This reaction causes the organism to become aware of the changes that are affecting it. From this realization springs Damasio's notion of "feeling". This occurs when the patterns contributing to emotion manifest as mental images, or brain movies. When the body is modified by these neural objects, the second layer of self emerges. This is known as core consciousness.
Core consciousness
Sufficiently more evolved is the second layer of Damasio's theory, Core Consciousness. This emergent process occurs when an organism becomes consciously aware of feelings associated with changes occurring to its internal bodily state; it is able to recognize that its thoughts are its own, and that they are formulated in its own perspective. It develops a momentary sense of self, as the brain continuously builds representative images, based on communications received from the protoself. This level of consciousness is not exclusive to human beings and remains consistent and stable throughout the lifetime of the organism. The image is a result of mental patterns caused by an interaction with internal or external stimuli. A relationship is established between the organism and the object it is observing as the brain continuously creates images to represent the organism's experience of qualia.
Damasio's definition of emotion is that of an unconscious reaction to any internal or external stimulus which activates neural patterns in the brain. ‘Feeling’ emerges as a still unconscious state which simply senses the changes affecting the Protoself due to the emotional state. These patterns develop into mental images, which then float into the organism's awareness. Put simply, consciousness is the feeling of knowing a feeling. When the organism becomes aware of the feeling that its bodily state (Protoself) is being affected by its experiences, or response to emotion, Core Consciousness is born. The brain continues to present nonverbal narrative sequence of images in the mind of the organism, based on its relationship to objects. An object in this context can be anything from a person, to a melody, to a neural image. Core consciousness is concerned only with the present moment, here and now. It does not require language or memory, nor can it reflect on past experiences or project itself into the future.
Extended consciousness
When consciousness moves beyond the here and now, Damasio's third and final layer emerges as Extended Consciousness. This level could not exist without its predecessors, and, unlike them, requires a vast use of conventional memory. Therefore, an injury to a person's memory center can cause damage to their extended consciousness, without hurting the other layers. The autobiographical self draws on memory of past experiences which involves use of higher thought. This autobiographical layer of self is developed gradually over time. Working memory is necessary for an extensive display of items to be recalled and referenced. Linguistic areas of the brain are activated to enhance the organism's experience, however, according to the language of thought hypothesis, language would not be necessarily required.
Criticism
Damasio's theory of consciousness has been met with criticism for its lack of explanation regarding the generation of conscious experiences by the brain. Researchers have posited that the brain's interaction with the body alone cannot account for the complexity of conscious experience, and that additional factors must be considered. Furthermore, the theory has been criticized for its inadequate treatment of the concept of self-awareness and its lack of a clear method of measuring consciousness, which hinders empirical testing and evaluation.
Formalistic Elements
Theories of emotion currently fall into four main categories which follow one another in a historical series: evolutionary (ethological), physiological, neurological, and cognitive.
Evolutionary theories derive from Darwin's 'Emotions in man and the animals'.
Physiological theories suggest that responses within the body are responsible for emotions.
Neurological theories propose that activity within the brain leads to emotional responses.
Cognitive theories argue that thoughts and other mental activity play an essential role in forming emotions.
Note that no current theory of emotion falls strictly within a single category, rather each theory uses one approach to form its core premises from which it is then able to extend its main postulates.
Damasio's tri-level view of the human mind, which posits that we share the two lowest levels with other animals, has been suggested before; for example, see Dyer (www.conscious-computation.webnode.com).
Dyer's triune expansion compared to Damasio's:
1. sensorimotor stage (cf. Damasio's Protoself)
2. spatiotemporal stage (Core-Consciousness)
3. cognolinguistic stage (Extended Consciousness)
An important feature of Damasio's theory (one that it shares with Dyer's theory) is the key role played by mental images, consciously mediating the information exchange between endocrine and cognitive systems.
Ledoux and Brown have a different view of how emotion is connected to general cognition: they place emotionality on a similar level as that of other cognitive states. (In fairness, both Dyer's and Damasio's models concur on this point, i.e., that emotionality is not isolated to a particular layer within the tri-level framework.)
Earlier, less sophisticated models placed the emotions strictly within the limbic circuits, where their primary role was to consciously respond to, as well as cause responses within, the hypothalamus, the interface between intentional mind states and metabolic (endocrine) body states.
Emotionality is demonstrably a global mind state, just like consciousness. For example, we can be simultaneously aware of a (low-level) pain in our body and a (high-level) idea that enters our imagination (working memory). Likewise, our (low-level) emotional reaction to a painful workplace injury (fear, threat to well-being) can coexist with our (high-level) feeling of anger and indignation at the co-worker who failed to follow safety guidelines.
Substantive Process
A careful reading of Damasio's work reveals that he distinguishes his theories from those of his predecessors in how the formalistic elements interact with each other in a dynamically integrated system. E.g., the suggestion of a dynamic neural map ultimately posits that we are the instantaneous configuration of a neural state in the present moment, rather than the supporting biological construct. I.e., our conscious identity is the software, not the hardware, even though our unique hardware constrains how we operate as the software.
Need for consciousness and qualia
A common criticism stems from the fact that both knowing and feeling can be processed with equal success without conscious awareness, as machines, for instance, do; such models therefore do not explain the need for consciousness and qualia.
References
Behavioral neuroscience
Consciousness
Medical specialty
A medical specialty is a branch of medical practice that is focused on a defined group of patients, diseases, skills, or philosophy. Examples include those branches of medicine that deal exclusively with children (paediatrics), cancer (oncology), laboratory medicine (pathology), or primary care (family medicine). After completing medical school or other basic training, physicians or surgeons and other clinicians usually further their medical education in a specific specialty of medicine by completing a multiple-year residency to become a specialist.
History of medical specialization
To a certain extent, medical practitioners have long been specialized. According to Galen, specialization was common among Roman physicians. The particular system of modern medical specialties evolved gradually during the 19th century. Informal social recognition of medical specialization evolved before the formal legal system. The particular subdivision of the practice of medicine into various specialties varies from country to country, and is somewhat arbitrary.
Classification of medical specialization
Medical specialties can be classified along several axes. These are:
Surgical or internal medicine
Age range of patients
Diagnostic or therapeutic
Organ-based or technique-based
Throughout history, the most important has been the division into surgical and internal medicine specialties. The surgical specialties are those in which an important part of diagnosis and treatment is achieved through major surgical techniques. The internal medicine specialties are the specialties in which the main diagnosis and treatment is never major surgery. In some countries, anesthesiology is classified as a surgical discipline, since it is vital in the surgical process, though anesthesiologists never perform major surgery themselves.
Many specialties are organ-based. Many symptoms and diseases come from a particular organ. Others are based mainly around a set of techniques, such as radiology, which was originally based around X-rays.
The age range of patients seen by any given specialist can be quite variable. Pediatricians handle most complaints and diseases in children that do not require surgery, and there are several subspecialties (formally or informally) in pediatrics that mimic the organ-based specialties in adults. Pediatric surgery may or may not be a separate specialty that handles some kinds of surgical complaints in children.
A further subdivision is the diagnostic versus therapeutic specialties. While the diagnostic process is of great importance in all specialties, some specialists perform mainly or only diagnostic examinations, such as pathology, clinical neurophysiology, and radiology. This line is becoming somewhat blurred with interventional radiology, an evolving field that uses image expertise to perform minimally invasive procedures.
Specialties that are common worldwide
List of specialties recognized in the European Union and European Economic Area
The European Union publishes a list of specialties recognized in the European Union, and by extension, the European Economic Area. There is substantial overlap between some of the specialties and it is likely that for example "Clinical radiology" and "Radiology" refer to a large degree to the same pattern of practice across Europe.
Accident and emergency medicine
Allergist
Anaesthetics
Cardiology
Child psychiatry
Clinical biology
Clinical chemistry
Clinical microbiology
Clinical neurophysiology
Craniofacial surgery
Dermatology
Endocrinology
Family and General Medicine
Gastroenterologic surgery
Gastroenterology
General Practice
General surgery
Geriatrics
Hematology
Immunology
Infectious diseases
Internal medicine
Laboratory medicine
Nephrology
Neuropsychiatry
Neurology
Neurosurgery
Nuclear medicine
Obstetrics and gynaecology
Occupational medicine
Oncology
Ophthalmology
Oral and maxillofacial surgery
Orthopaedics
Otorhinolaryngology
Paediatric surgery
Paediatrics
Pathology
Pharmacology
Physical medicine and rehabilitation
Plastic surgery
Podiatric surgery
Preventive medicine
Psychiatry
Public health
Radiation Oncology
Radiology
Respiratory medicine
Rheumatology
Stomatology
Thoracic surgery
Tropical medicine
Urology
Vascular surgery
Venereology
List of North American medical specialties and others
As in many healthcare arenas, medical specialties are organized into the following groups:
Surgical specialties focus on manually operative and instrumental techniques to treat disease.
Medical specialties that focus on the diagnosis and non-surgical treatment of disease.
Diagnostic specialties focus more purely on diagnosis of disorders.
Salaries
According to the 2022 Medscape Physician Compensation Report, physicians on average earn $339K annually. Primary care physicians earn $260K annually, while specialists earn $368K annually.
The average salary ranges for physicians in the US vary considerably by medical specialty.
Specialties by country
Australia and New Zealand
There are 15 recognised specialty medical Colleges in Australia. The majority of these are Australasian Colleges and therefore also oversee New Zealand specialist doctors.
In addition, the Royal Australasian College of Dental Surgeons supervises training of specialist medical practitioners specializing in Oral and Maxillofacial Surgery in addition to its role in the training of dentists. There are approximately 260 faciomaxillary surgeons in Australia.
The Royal New Zealand College of General Practitioners is a distinct body from the Australian Royal Australian College of General Practitioners. There are approximately 5100 members of the RNZCGP.
Within some of the larger Colleges, there are sub-faculties, such as the Australasian Faculty of Rehabilitation Medicine within the Royal Australasian College of Physicians.
There are some collegiate bodies in Australia that are not officially recognised as specialities by the Australian Medical Council but have a college structure for members, such as the Australasian College of Physical Medicine.
There are some collegiate bodies in Australia of Allied Health non-medical practitioners with specialisation. They are not recognised as medical specialists, but can be treated as such by private health insurers; an example is the Australasian College of Podiatric Surgeons.
Canada
Specialty training in Canada is overseen by the Royal College of Physicians and Surgeons of Canada and the College of Family Physicians of Canada. For specialists working in the province of Quebec, the Collège des médecins du Québec also oversees the process.
Germany
In Germany these doctors use the term Facharzt.
India
Specialty training in India is overseen by the Medical Council of India, which is responsible for recognition of postgraduate training, and by the National Board of Examinations. Education in Ayurveda is overseen by the Central Council of Indian Medicine (CCIM), which conducts undergraduate and postgraduate courses all over India, while the Central Council of Homoeopathy does the same in the field of homeopathy.
Sweden
In Sweden, a medical license is required before commencing specialty training. Those graduating from Swedish medical schools are first required to do a rotational internship of about 1.5 to 2 years in various specialties before attaining a medical license. The specialist training lasts 5 years.
United States
There are three agencies or organizations in the United States that collectively oversee physician board certification of MD and DO physicians in the 26 approved medical specialties recognized in the country: the American Board of Medical Specialties (ABMS) and the American Medical Association (AMA); the American Osteopathic Association Bureau of Osteopathic Specialists (AOABOS) and the American Osteopathic Association; and the American Board of Physician Specialties (ABPS) and the American Association of Physician Specialists (AAPS). Each of these agencies and its associated national medical organization functions through its various specialty academies, colleges and societies.
All boards of certification now require that medical practitioners demonstrate, by examination, continuing mastery of the core knowledge and skills for a chosen specialty. Recertification varies by particular specialty between every seven and every ten years.
In the United States there are hierarchies of medical specialties across the cities of a region. Small towns and cities have primary care, middle-sized cities offer secondary care, and metropolitan cities have tertiary care. Income, size of population, population demographics, and distance to the doctor all influence the numbers and kinds of specialists and physicians located in a city.
Demography
A population's income level determines whether sufficient physicians can practice in an area and whether public subsidy is needed to maintain the health of the population. Developing countries and poor areas usually have shortages of physicians and specialties, and those in practice usually locate in larger cities. For some underlying theory regarding physician location, see central place theory.
The proportion of men and women in different medical specialties varies greatly. Such sex segregation is largely due to differential application.
Satisfaction and burnout
A survey of physicians in the United States found that dermatologists are most satisfied with their choice of specialty, followed by radiologists, oncologists, plastic surgeons, and gastroenterologists. In contrast, primary care physicians were the least satisfied, followed by nephrologists, obstetricians/gynecologists, and pulmonologists. Surveys have also revealed high levels of depression among medical students (25–30%) as well as among physicians in training (22–43%), which, for many specialties, continue into regular practice. A UK survey of cancer-related specialties conducted in 1994 and 2002 found higher job satisfaction in those specialties with more patient contact. Rates of burnout also varied by specialty.
See also
Branches of medicine
Interdisciplinary sub-specialties of medicine, including
Occupational medicine – branch of clinical medicine that provides health advice to organizations and individuals concerning work-related health and safety issues and standards. See occupational safety and health.
Disaster medicine – branch of medicine that provides healthcare services to disaster survivors; guides medically related disaster preparation, disaster planning, disaster response and disaster recovery throughout the disaster life cycle and serves as a liaison between and partner to the medical contingency planner, the emergency management professional, the incident command system, government and policy makers.
Preventive medicine – part of medicine engaged with preventing disease rather than curing it. It can be contrasted not only with curative medicine, but also with public health methods (which work at the level of population health rather than individual health).
Medical genetics – the application of genetics to medicine. Medical genetics is a broad and varied field. It encompasses many different individual fields, including clinical genetics, biochemical genetics, cytogenetics, molecular genetics, the genetics of common diseases (such as neural tube defects), and genetic counseling.
Specialty Registrar
Federation of National Specialty Societies of Canada
Society of General Internal Medicine
References
Medical specialties
Mores
Mores (Latin mōrēs, the plural of singular mōs, meaning "manner, custom, usage, or habit") are social norms that are widely observed within a particular society or culture. Mores determine what is considered morally acceptable or unacceptable within any given culture. A folkway is a norm created through social interaction; that process organizes interactions through routine, repetition, habit and consistency.
William Graham Sumner (1840–1910), an early U.S. sociologist, introduced both the terms "mores" (1898) and "folkways" (1906) into modern sociology.
Mores are strict in the sense that they determine the difference between right and wrong in a given society, and people may be punished for their immorality, as is commonplace in many societies in the world, at times with disapproval or ostracism. Examples of traditional customs and conventions that are mores include lying, cheating, causing harm, alcohol use, drug use, marriage beliefs, gossip, slander, jealousy, disgracing or disrespecting parents, refusal to attend a funeral, politically incorrect humor, sports cheating, vandalism, leaving trash, plagiarism, bribery, corruption, saving face, respecting one's elders, religious prescriptions and fiduciary responsibility.
Folkways are ways of thinking, acting and behaving in social groups which are agreed upon by the masses and are useful for the ordering of society. They are spread through imitation, oral means or observation, and are meant to encompass the material, spiritual and verbal aspects of culture. Folkways meet the problems of social life; we feel security and order from their acceptance and application. Examples of folkways include: acceptable dress, manners, social etiquette, body language, posture, level of privacy, working hours and the five-day work week, the acceptability of social drinking (abstaining or not from drinking during certain working hours), actions and behaviours in public places, schools, universities, businesses and religious institutions, ceremonial situations, rituals, customary services and keeping personal space.
Terminology
The English word morality comes from the same Latin root "mōrēs", as does the English noun moral. However, mores do not, as is commonly supposed, necessarily carry connotations of morality. Rather, morality can be seen as a subset of mores, held to be of central importance in view of their content, and often formalized into some kind of moral code or even into customary law. Etymological derivations include More danico, More judaico, More veneto, Coitus more ferarum, and O tempora, o mores!.
The Greek terms equivalent to Latin mores are ethos (ἔθος, ἦθος, 'character') or nomos (νόμος, 'law'). As with the relation of mores to morality, ethos is the basis of the term ethics, while nomos gives the suffix -onomy, as in astronomy.
Anthropology
The meaning of all these terms extends to all customs of proper behavior in a given society, both religious and profane: from the more trivial conventional aspects of custom, etiquette or politeness—"folkways" enforced by gentle social pressure, but going beyond mere "folkways" or conventions in including moral codes and notions of justice—down to strict taboos, behavior that is unthinkable within the society in question, very commonly including incest and murder, but also outrages specific to the individual society such as blasphemy. Such religious or sacral customs may vary. Examples include funerary and matrimonial services; circumcision and covering of the hair in Judaism; the Christian Ten Commandments, New Commandment and sacraments such as baptism, and the Protestant work ethic; the Shahada, prayer, alms, the fast and the pilgrimage, as well as modesty, in Islam; and religious diets.
While cultural universals are by definition part of the mores of every society (hence also called "empty universals"), the customary norms specific to a given society are a defining aspect of the cultural identity of an ethnicity or a nation. Coping with the differences between two sets of cultural conventions is a question of intercultural competence.
Differences in the mores of various nations are at the root of ethnic stereotype, or in the case of reflection upon one's own mores, autostereotypes.
The customary norms in a given society may include indigenous land rights, honour, filial piety, customary law and the customary international law that affects countries which may not have codified their customary norms. Land rights of indigenous peoples fall under customary land tenure, a system of arrangement in line with customs and norms; this is the case in many former colonies. An example of such a norm is the culture of honor that exists in some societies, where the family is viewed as the main source of honor and the conduct of family members reflects upon their family honor. For instance, some writers say that in Rome an honorable standing among equals existed for those who were most similar to one another (family and friends), perhaps owing to competition for public recognition, and therefore for personal and public honor, in rhetoric, sport, war, wealth and virtue. To stand out, be recognized and demonstrate this standing, "a Roman could win such a 'competition' by pointing to past evidences of their honor", or "a critic might be refuted by one's performance in a fresh showdown in which one's bona fides could be plainly demonstrated." An honor culture can exist only if males in the society share a code: a standard to uphold, guidelines and rules to follow, an unwillingness to break those rules, and ways to interact and engage successfully within a "closed" community of equals.
Filial piety is ethics towards one's family; Fung Yu-lan calls it "the ideological basis for traditional [Chinese] society". According to Confucius, it involves repaying a debt owed to one's parents or caregivers, but it is also traditional in another sense, fulfilling an obligation to one's own ancestors; to modern scholars it also suggests extending an attitude of respect to superiors who deserve that respect.
See also
Culture-bound syndrome
Enculturation
Euthyphro dilemma, discussing the conflict of sacral and secular mores
Habitus (sociology)
Nihonjinron "Japanese mores"
Piety
Political and Moral Sociology: see Luc Boltanski and French Pragmatism
Repugnancy costs
Value (personal and cultural)
References
Conformity
Consensus reality
Deviance (sociology)
Morality
Social agreement
Sociological terminology
Folklore
Psychological injury
A psychological injury is the psychological or psychiatric consequence of a traumatic event or physical injury. Such an injury might result from events such as abusive behavior, whistleblower retaliation, bullying, kidnapping, rape, motor vehicle collision, or other negligent action. It may cause impairments, disorders, and disabilities, perhaps as an exacerbation of a pre-existing condition (e.g., Dalby, Maclean, & Nesca, 2022; Drogin, Dattilio, Sadoff, & Gutheil, 2011; Duckworth, Iezzi, & O'Donohue, 2008; Kane & Dvoskin, 2011; Koch, Douglas, Nicholls, & O'Neil, 2006; Schultz & Gatchel, 2009; Young, 2010, 2011; Young, Kane, & Nicholson, 2006, 2007).
Psychological injury is considered a mental harm, suffering, damage, impairment, or dysfunction caused to a person as a direct result of some action or failure to act by some individual. The psychological injury must disturb the pre-existing psychological/psychiatric state to such a degree that it interferes in some significant way with the individual's ability to function. If so, the individual may be able to sue for compensation/damages.
Typically, a psychological injury may involve posttraumatic stress disorder (PTSD), traumatic brain injury (TBI), a concussion, chronic pain, or a disorder that involves mood or emotions (such as depression, anxiety, fear, or phobia, and adjustment disorder). These disorders may manifest separately or in combination (co-morbidity). If the symptoms and effects persist, the injured person may become a complainant or plaintiff who initiates legal action aimed at obtaining compensation against whoever is considered responsible for the injury.
Diagnosis and treatment
Psychologists and psychiatrists are the professionals typically qualified by their regulating or licensing bodies or boards to diagnose and treat psychological injuries. Psychologists are trained in the study of behavior and its assessment, diagnosis, and treatment. Many psychological tests are limited in their use to psychologists, as psychiatrists are unlikely to receive substantial training in test administration and interpretation. However, being medical professionals, psychiatrists have skills and a knowledge base not typically available to psychologists. The Diagnostic and Statistical Manual of Mental Disorders, now in its fourth edition (DSM-IV-TR; American Psychiatric Association, 2000), will soon be updated by a fifth edition slated for publication in 2013 (see Young and First, 2010, for a critique). The manual is prepared under the aegis of the American Psychiatric Association, but psychologists contribute to the process by participating in its working groups.
Rehabilitation and other clinical psychologists, such as trauma psychologists, may be in professional contact with injured survivors at the onset of injury, shortly thereafter, and throughout the course of recovery, so these professionals, too, need to know about the legal ramifications of the field. They may employ cognitive behavioral approaches to help their patients deal with any physical injuries, pain experience, PTSD, mood difficulties, and effects of their brain injuries (Young, 2008b). They may assist the families of the injured, including spouses and children. They typically adopt a systems approach, working as part of rehabilitative teams. Their hardest cases occur when there is a death in the family as a result of the event for which legal action is involved and therapy is needed. These clinical, rehabilitation, and trauma psychologists refer to treatment guidelines in preparing their treatment plans, and attempt to keep their practices evidence-based when feasible.
Major psychological injuries
Chronic pain
Chronic pain is another controversial psychological condition, labeled in the DSM-IV-TR as Pain Disorder Associated with Psychological Factors (with or without a Medical Condition). The "biopsychosocial approach" recognizes the influence of psychological factors (e.g., stress) on pain. It was once thought that chronic pain could be the result of a "pain-prone personality" or that it is "all in the head." Contemporary research tends to dismiss such conceptualizations, but they continue to persist and cause distress to patients whose pain is not recognized as real. Psychologists have an important role to play in helping patients in pain by providing appropriate education and treatment (for example, about catastrophizing, or fearing the worst), and by using standard cognitive and behavioral techniques (such as breathing exercises, muscle relaxation, and dealing with cognitive distortions) (see Gatchel, Peng, Fuchs, Peters, and Turk, 2007; Schatman and Gatchel, 2010).
Traumatic brain injury (TBI)
TBI refers to mild to severe pathophysiological effects in the brain and central nervous system due to strong impacts, such as severe blows to the head and penetrating wounds that might take place in accidents and other events at claim. Neuropsychological deficits associated with TBI include those relating to memory, concentration, attention, processing speed, reasoning, problem solving, planning, and inhibitory control. When these effects persist, other psychological difficulties might arise, even in mild cases (such as concussions). However, the underlying reason for the perpetuation of the symptoms beyond the expected time frame might be due to associated factors, such as poor sleep, fatigue, pain, headaches, and distress. Psychologists can help patients with TBI by guiding them in cognitive remediation and dealing with family. When the effects are serious and even devastating, the degree of care from the team may be intensive, covering multiple aspects of daily living (see Ruff and Richards, 2009).
People of all sexes, backgrounds, races, ages, and disability statuses are injured physically and psychologically in events at claim and in other situations. However, research does not always consider these differences, and the diagnostic manuals, psychological tests, and therapeutic protocols in use in the area often lack differentiation along these lines.
Disability and return to work
When psychological injuries compromise daily activities, psychologists need to address the degree of disability (see Schultz, 2009; Schultz & Rogers, 2011). Patients express symptoms that might be accurately diagnosed as PTSD, Pain Disorder, and/or TBI. However, the critical issue is the degree of impairment, limitation, and participation restriction in the daily activities in which patients would normally engage at work, at home, in childcare, and in schooling. When the patient cannot undertake the functions involved in these important roles, the psychologist or other mental health professional may conclude that a disability is present, but this cannot be ascertained from the mere presence of a diagnosis of one sort or another. Rather, the psychologist must demonstrate that the person is disabled from the essential duties, tasks, or activities of the role at issue. For example, a forefinger injury leading to chronic pain might mean relatively little to an investment banker (as long as medications control it and other areas of functioning are not greatly affected) but might be devastating to a violinist. In arriving at disability determinations, psychologists may refer to the American Medical Association's Guides to the Evaluation of Permanent Impairment (Rondinelli, Genovese, Katz, Mayer, Müller, Ranavaya, & Brigham, 2008), which addresses mental health, neuropsychological, and pain issues. However, like the DSM-IV-TR, this compendium is sometimes questioned for its scientific validity and usefulness.
Tort actions and other civil actions are often based on serious, permanent and important psychological injuries that create disabilities of a substantial nature in other areas, such as leisure activities, home care, and family life. Often, psychologists in court lock horns over the degree to which the event at claim and its psychological effects have created serious and potentially permanent psychological disabilities—in part, because there is no one test that can measure "disability," per se.
Treating psychologists try to help clients return to work (RTW) or to their other functional roles and activities of daily living (ADLs). Clients are expected to adhere to treatment regimens, or be compliant with treatment recommendations. In part, this serves to mitigate their losses as they attempt to return to their pre-event physical and psychological condition. When they reach, or are progressing toward, their maximum medical recovery (physical and psychological/psychiatric recovery), RTW might be attempted on a modified, part-time, or accommodated basis, and treatment might continue to support full re-integration into the workforce or other daily roles, and to maintain gains and avoid deterioration. Alternatively, clients might be sent for training or education based on the transferable skills that remain after the event at claim and its effects. For those who do not make a full recovery and remain disabled because of permanent barriers to recovery, the goals of rehabilitation include optimizing adjustment, quality of life (QOL), residual functionality, and wellness.
Psychological testing
Psychologists need to use the most appropriate tests available for assessing the person(s) presenting with psychological injury. In addition, psychologists need to be able to arrive at scientifically informed conclusions in their evaluations that will withstand the rigors of scrutiny by psychologists on the opposing side and of cross-examination in court.
In terms of their education and training, psychologists need to be able to address the full array of areas under discussion, especially in the forensic, rehabilitation, and trauma areas. They must become experts in assessment and testing, especially regarding (a) personality tests (e.g., the MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989; Butcher, Graham, Ben-Porath, Tellegen, Dahlstrom, & Kaemmer, 2001; its revision the MMPI-2 RF; Ben-Porath & Tellegen, 2008; as well as the PAI; Morey, 2007), and their embedded validity scales, such as the F family of scales in the MMPI tests, and (b) stand-alone symptom validity tests (e.g., the TOMM; Tombaugh, 1996; the WMT; Green, 2005; the SIRS; Rogers, Bagby, & Dickens, 1992; and its revision the SIRS-2; Rogers, Sewell, & Gillard, 2010). The key factor in the development of tests that are acceptable to psychologists and to the courts is that the tests should have sound psychometric properties, such as reliability and validity. Also, such tests must be standardized on populations that make sense for the area of psychological injuries, such as accident survivors experiencing pain and other trauma victims.
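Reliability in the psychometric sense is a computable quantity. As a minimal illustration, the following Python sketch computes one widely used internal-consistency estimate, Cronbach's alpha, on a small made-up item-response matrix; the data, and the use of NumPy, are illustrative assumptions rather than anything drawn from the tests cited above.

import numpy as np

# Rows = respondents, columns = test items (made-up 5-point ratings).
responses = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
], dtype=float)

k = responses.shape[1]                         # number of items
item_vars = responses.var(axis=0, ddof=1)      # sample variance of each item
total_var = responses.sum(axis=1).var(ddof=1)  # variance of the total score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # values nearer 1.0 indicate higher internal consistency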
See also
Causality
Forensic psychiatry
Forensic psychology
Evidence (Law)
Expert witness
Malingering
Chronic pain syndrome
Personal injury
Psychological Injury and Law
Rehabilitation
Tort
Traumatic brain injury complications
References
External links
American Bar Association – Tort Trial and Insurance Practice Section
APA Division 5: Evaluation, Measurement, & Statistics
APA Division 22: Rehabilitation Psychology
APA Division 41: American Psychology – Law Society
APA Division 56: Division of Trauma Psychology
Americans with Disabilities Act
Canadians with Disabilities Act
International Society for Traumatic Stress Studies
Forensic psychology
United States labor law
Biological process
Biological processes are those processes that are necessary for an organism to live and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms.
Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Examples of biological processes include:
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature. Homeostasis operates as a negative-feedback loop (see the sketch after this list).
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Interaction between organisms: the processes by which an organism has an observable effect on another organism of the same or different species.
Also: cellular differentiation, fermentation, fertilisation, germination, tropism, hybridisation, metamorphosis, morphogenesis, photosynthesis, transpiration.
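To make the idea of regulation concrete, here is a minimal Python sketch of homeostasis as a negative-feedback loop, in which sweating corrects deviations of body temperature from a set point; the set point, gain, and starting temperature are illustrative values, not physiological data.

# Negative feedback: the correction is proportional to the error,
# so deviations from the set point decay toward zero.
SET_POINT = 37.0   # target core temperature, degrees C (illustrative)
GAIN = 0.3         # fraction of the error corrected per time step (illustrative)

temperature = 39.0  # start above the set point, e.g., after exertion
for step in range(10):
    error = temperature - SET_POINT
    temperature -= GAIN * error  # sweating removes heat in proportion to the error
    print(f"step {step}: {temperature:.2f} C")
# The error shrinks by a factor of (1 - GAIN) each step, so the
# temperature converges to the set point.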
See also
Chemical process
Life
Organic reaction
References
Biological concepts
Model
A model is an informative representation of an object, person or system. The term originally denoted the plans of a building in late 16th-century English, and derived via French and Italian ultimately from Latin modulus, a measure.
Models can be divided into physical models (e.g. a ship model or a fashion model) and abstract models (e.g. a set of mathematical equations describing the workings of the atmosphere for the purpose of weather forecasting). Abstract or conceptual models are central to philosophy of science.
In scholarly research and applied science, a model should not be confused with a theory: while a model seeks only to represent reality with the purpose of better understanding or predicting the world, a theory is more ambitious in that it claims to be an explanation of reality.
Model in specific contexts
As a noun, model has specific meanings in certain fields, derived from its original meaning of "structural design or layout":
Model (art), a person posing for an artist, e.g. a 15th-century criminal representing the biblical Judas in Leonardo da Vinci's painting The Last Supper
Model (person), a person who serves as a template for others to copy, as in a role model, often in the context of advertising commercial products; e.g. the first fashion model, Marie Vernet Worth in 1853, wife of designer Charles Frederick Worth.
Model (product), a particular design of a product as displayed in a catalogue or show room (e.g. Ford Model T, an early car model)
Model (organism) a non-human species that is studied to understand biological phenomena in other organisms, e.g. a guinea pig starved of vitamin C to study scurvy, an experiment that would be immoral to conduct on a person
Model (mimicry), a species that is mimicked by another species
Model (logic), a structure (a set of items, such as natural numbers 1, 2, 3, ..., along with mathematical operations such as addition and multiplication, and relations, such as an ordering relation) that satisfies a given system of axioms (basic truisms), i.e. that satisfies the statements of a given theory
Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software
Model (MVC), the information-representing internal component of a software, as distinct from its user interface
Physical model
A physical model (most commonly referred to simply as a model but in this context distinguished from a conceptual model) is a smaller or larger physical representation of an object, person or system. The object being modelled may be small (e.g., an atom) or large (e.g., the Solar System) or life-size (e.g., a fashion model displaying clothes for similarly-built potential customers).
The geometry of the model and the object it represents are often similar in the sense that one is a rescaling of the other. However, in many cases the similarity is only approximate or even intentionally distorted. Sometimes the distortion is systematic, e.g., a fixed scale horizontally and a larger fixed scale vertically when modelling topography to enhance a region's mountains.
An architectural model permits visualization of internal relationships within the structure or external relationships of the structure to the environment. Another use is as a toy.
Instrumented physical models are an effective way of investigating fluid flows for engineering design. Physical models are often coupled with computational fluid dynamics models to optimize the design of equipment and processes. This includes external flow such as around buildings, vehicles, people, or hydraulic structures. Wind tunnel and water tunnel testing is often used for these design efforts. Instrumented physical models can also examine internal flows, for the design of ductwork systems, pollution control equipment, food processing machines, and mixing vessels. Transparent flow models are used in this case to observe the detailed flow phenomenon. These models are scaled in terms of both geometry and important forces, for example, using Froude number or Reynolds number scaling (see Similitude). In the pre-computer era, the UK economy was modelled with the hydraulic model MONIAC, to predict for example the effect of tax rises on employment.
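As a sketch of how such similitude scaling works, the following Python fragment computes Reynolds and Froude numbers for a hypothetical full-scale hull and a 1:50 model; the speeds, lengths, and fluid properties are illustrative assumptions. Matching the Froude number fixes the model speed, and the resulting mismatch in Reynolds number is the classic scale effect that model testing must account for.

def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = V * L / nu: ratio of inertial to viscous forces."""
    return velocity * length / kinematic_viscosity

def froude_number(velocity, length, g=9.81):
    """Fr = V / sqrt(g * L): ratio of inertial to gravitational forces."""
    return velocity / (g * length) ** 0.5

# Hypothetical full-scale prototype: a 100 m hull moving at 10 m/s in water.
L_proto, V_proto, nu_water = 100.0, 10.0, 1.0e-6

# 1:50 geometric model; matching Froude numbers fixes the model speed:
# V_model = V_proto * sqrt(L_model / L_proto)
L_model = L_proto / 50
V_model = V_proto * (L_model / L_proto) ** 0.5

print(froude_number(V_proto, L_proto), froude_number(V_model, L_model))   # equal
print(reynolds_number(V_proto, L_proto, nu_water),
      reynolds_number(V_model, L_model, nu_water))                        # unequal: scale effect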
Conceptual model
A conceptual model is a theoretical representation of a system, e.g. a set of mathematical equations attempting to describe the workings of the atmosphere for the purpose of weather forecasting. It consists of concepts used to help understand or simulate a subject the model represents.
Abstract or conceptual models are central to philosophy of science, as almost every scientific theory effectively embeds some kind of model of the physical or human sphere. In some sense, a physical model "is always the reification of some conceptual model; the conceptual model is conceived ahead as the blueprint of the physical one", which is then constructed as conceived. Thus, the term refers to models that are formed after a conceptualization or generalization process.
Examples
Conceptual model (computer science), an agreed representation of entities and their relationships, to assist in developing software
Economic model, a theoretical construct representing economic processes
Language model, a probabilistic model of a natural language, used for speech recognition, language generation, and information retrieval
Large language models are artificial neural networks used for generative artificial intelligence (AI), e.g. ChatGPT
Mathematical model, a description of a system using mathematical concepts and language
Statistical model, a mathematical model that usually specifies the relationship between one or more random variables and other non-random variables
Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software
Medical model, a proposed "set of procedures in which all doctors are trained"
Mental model, in psychology, an internal representation of external reality
Model (logic), a set along with a collection of finitary operations, and relations that are defined on it, satisfying a given collection of axioms
Model (MVC), information-representing component of a software, distinct from the user interface (the "view"), both linked by the "controller" component, in the context of the model–view–controller software design
Model act, a law drafted centrally to be disseminated and proposed for enactment in multiple independent legislatures
Standard model (disambiguation)
Properties of models, according to general model theory
According to Herbert Stachowiak, a model is characterized by at least three properties:
1. Mapping
A model always is a model of something—it is an image or representation of some natural or artificial, existing or imagined original, where this original itself could be a model.
2. Reduction
In general, a model will not include all attributes that describe the original but only those that appear relevant to the model's creator or user.
3. Pragmatism
A model does not relate unambiguously to its original. It is intended to work as a replacement for the original:
a) for certain subjects (for whom?)
b) within a certain time range (when?)
c) restricted to certain conceptual or physical actions (what for?).
For example, a street map is a model of the actual streets in a city (mapping), showing the course of the streets while leaving out, say, traffic signs and road markings (reduction), made for pedestrians and vehicle drivers for the purpose of finding one's way in the city (pragmatism).
Additional properties have been proposed, like extension and distortion as well as validity. The American philosopher Michael Weisberg differentiates between concrete and mathematical models and proposes computer simulations (computational models) as their own class of models.
See also
Conceptual framework
Metamodeling
Model aircraft
Model car
Model house
Model railway
Model rocket
Rail transport modelling
Scale model
Scientific model
References
External links
Broad-concept articles
Simulation
Knowledge representation
Physical models
Scale modeling
Copying
Applied mechanics
Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments. In short, when concepts in mechanics move beyond theory into application and execution, general mechanics becomes applied mechanics. This practical character makes applied mechanics an essential part of everyday life. It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences. Connecting research between numerous disciplines, applied mechanics plays an important role in both science and engineering.
Pure mechanics describes the response of bodies (solids and fluids), or systems of bodies, to external forces, whether the bodies begin in a state of rest or of motion. Applied mechanics bridges the gap between physical theory and its application to technology.
Applied mechanics comprises two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids. Each branch contains further subcategories. Classical mechanics is divided into statics and dynamics, which are subdivided in turn: statics into the study of rigid bodies and rigid structures, and dynamics into kinematics and kinetics. Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics.
Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools. In the application of the natural sciences, mechanics was said to be complemented by thermodynamics, the study of heat and more generally energy, and electromechanics, the study of electricity and magnetism.
Overview
Engineering problems are generally tackled with applied mechanics through the application of theories of classical mechanics and fluid mechanics. Because applied mechanics can be applied in engineering disciplines like civil engineering, mechanical engineering, aerospace engineering, materials engineering, and biomedical engineering, it is sometimes referred to as engineering mechanics.
Science and engineering are interconnected with respect to applied mechanics, as research in science is linked to research processes in civil, mechanical, aerospace, materials, and biomedical engineering disciplines. In civil engineering, applied mechanics’ concepts can be applied to structural design and a variety of engineering sub-topics like structural, coastal, geotechnical, construction, and earthquake engineering. In mechanical engineering, it can be applied in mechatronics and robotics, design and drafting, nanotechnology, machine elements, structural analysis, friction stir welding, and acoustical engineering. In aerospace engineering, applied mechanics is used in aerodynamics, aerospace structural mechanics and propulsion, aircraft design and flight mechanics. In materials engineering, applied mechanics’ concepts are used in thermoelasticity, elasticity theory, fracture and failure mechanisms, structural design optimisation, fracture and fatigue, active materials and composites, and computational mechanics. Research in applied mechanics can be directly linked to biomedical engineering areas of interest like orthopaedics; biomechanics; human body motion analysis; soft tissue modelling of muscles, tendons, ligaments, and cartilage; biofluid mechanics; and dynamic systems, performance enhancement, and optimal control.
Brief history
The first science with a theoretical foundation based in mathematics was mechanics; the underlying principles of mechanics were first delineated by Isaac Newton in his 1687 book Philosophiæ Naturalis Principia Mathematica. One of the earliest works to define applied mechanics as its own discipline was the three-volume Handbuch der Mechanik written by the German physicist and engineer Franz Josef Gerstner. The first seminal work on applied mechanics to be published in English was A Manual of Applied Mechanics in 1858 by the Scottish mechanical engineer William Rankine. August Föppl, a German mechanical engineer and professor, published Vorlesungen über technische Mechanik in 1898, in which he introduced calculus to the study of applied mechanics.
Applied mechanics was established as a discipline separate from classical mechanics in the early 1920s with the publication of the Journal of Applied Mathematics and Mechanics, the creation of the Society of Applied Mathematics and Mechanics, and the first meeting of the International Congress of Applied Mechanics. In 1921 the Austrian scientist Richard von Mises started the Journal of Applied Mathematics and Mechanics (Zeitschrift für Angewandte Mathematik und Mechanik), and in 1922, with the German scientist Ludwig Prandtl, founded the Society of Applied Mathematics and Mechanics (Gesellschaft für Angewandte Mathematik und Mechanik). During a 1922 conference on hydrodynamics and aerodynamics in Innsbruck, Austria, Theodore von Kármán, a Hungarian engineer, and Tullio Levi-Civita, an Italian mathematician, met and decided to organize a conference on applied mechanics. In 1924 the first meeting of the International Congress of Applied Mechanics was held in Delft, the Netherlands, attended by more than 200 scientists from around the world. Since this first meeting the congress has been held every four years, except during World War II; the name of the meeting was changed to International Congress of Theoretical and Applied Mechanics in 1960.
Due to the unstable political landscape in Europe after the First World War and the upheaval of World War II, many European scientists and engineers emigrated to the United States. The Ukrainian engineer Stephen Timoshenko fled the Bolshevik Red Army in 1918 and eventually emigrated to the U.S. in 1922; over the next twenty-two years he taught applied mechanics at the University of Michigan and Stanford University. Timoshenko authored thirteen textbooks in applied mechanics, many considered the gold standard in their fields; he also founded the Applied Mechanics Division of the American Society of Mechanical Engineers in 1927 and is considered “America’s Father of Engineering Mechanics.” In 1930 Theodore von Kármán left Germany and became the first director of the Aeronautical Laboratory at the California Institute of Technology; von Kármán would later co-found the Jet Propulsion Laboratory in 1944. With the leadership of Timoshenko and von Kármán, the influx of talent from Europe, and the rapid growth of the aeronautical and defense industries, applied mechanics became a mature discipline in the U.S. by 1950.
Branches
Dynamics
Dynamics, the study of the motion of objects, can be further divided into two branches: kinematics and kinetics. In classical mechanics, kinematics is the analysis of moving bodies in terms of time, displacement, velocity, and acceleration, while kinetics is the study of moving bodies through the lens of the effects of forces and masses. In the context of fluid mechanics, fluid dynamics describes the flow and motion of various fluids.
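A minimal Python sketch of the division of labor between the two branches, using made-up numbers: kinetics derives the acceleration of a body from force and mass via Newton's second law, and kinematics then describes the resulting motion without further reference to forces.

def kinetics_acceleration(force, mass):
    """Kinetics: Newton's second law gives a = F / m."""
    return force / mass

def kinematics_position(x0, v0, a, t):
    """Constant-acceleration kinematics: x(t) = x0 + v0*t + 0.5*a*t**2."""
    return x0 + v0 * t + 0.5 * a * t ** 2

# Kinetics: a 50 N force acting on a 10 kg body yields 5 m/s^2...
a = kinetics_acceleration(force=50.0, mass=10.0)
# ...and kinematics describes the motion that results over 3 s.
print(kinematics_position(x0=0.0, v0=0.0, a=a, t=3.0))  # 22.5 m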
Statics
Statics is the study and description of bodies at rest. Static analysis in classical mechanics can be broken down into two categories: non-deformable (rigid) bodies and deformable bodies. When studying rigid bodies, the forces acting on the structure are analyzed; when studying deformable bodies, the strength of the structure and its materials is examined. In the context of fluid mechanics, fluid statics considers the pressures within a fluid at rest.
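As a concrete illustration of rigid-body statics, the following Python sketch checks the two equilibrium conditions for a body at rest, zero net force and zero net moment, on a simply supported beam; the beam length, load, and reactions are illustrative assumptions.

# Simply supported 10 m beam with a 1000 N load at midspan; by symmetry
# each support reaction carries 500 N. Tuples are (vertical force in N,
# position along the beam in m), with downward forces negative.
forces = [(-1000.0, 5.0), (500.0, 0.0), (500.0, 10.0)]

net_force = sum(f for f, _ in forces)
net_moment = sum(f * x for f, x in forces)  # moments taken about x = 0

# Both sums vanish, so the beam is in static equilibrium.
assert abs(net_force) < 1e-9 and abs(net_moment) < 1e-9
print("sum F =", net_force, " sum M =", net_moment)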
Relationship to classical mechanics
Applied mechanics is the result of the practical application of various engineering and mechanical disciplines.
Examples
Newtonian foundation
Being one of the first sciences for which a systematic theoretical framework was developed, mechanics was spearheaded by Sir Isaac Newton's Principia (published in 1687). It is the "divide and rule" strategy developed by Newton that made motion tractable, splitting its study into dynamics and statics. The type of matter involved and the external forces acting on it dictate how this "divide and rule" strategy is applied within dynamic and static studies.
Archimedes' principle
Archimedes' principle is a major principle containing many defining propositions pertaining to fluid mechanics. As stated in proposition 7 of Archimedes' principle, a solid that is heavier than the fluid it is placed in will descend to the bottom of the fluid. If the solid is weighed within the fluid, it will measure lighter than its true weight by the weight of the fluid it displaces. Proposition 5 develops this further: if the solid is lighter than the fluid it is placed in, it must be forcibly immersed to be fully covered by the liquid, and the weight of the displaced fluid will then equal the weight of the solid.
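A minimal Python sketch of the principle under illustrative assumptions (the densities and volume are made up): the apparent weight of a fully submerged solid equals its true weight minus the weight of the fluid it displaces, so a solid denser than the fluid sinks, while a lighter one experiences a net upward force when forcibly immersed.

RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def apparent_weight(volume, rho_solid, rho_fluid=RHO_WATER, g=G):
    """Weight in fluid = rho_solid*V*g - rho_fluid*V*g (buoyant force)."""
    return (rho_solid - rho_fluid) * volume * g

# Proposition 7: steel (7850 kg/m^3) is denser than water, so it sinks,
# weighing about 67.2 N in water instead of 77.0 N in air.
print(apparent_weight(volume=0.001, rho_solid=7850.0))

# Proposition 5: wood (500 kg/m^3) is lighter than water; the negative
# apparent weight (about -4.9 N) is the net upward force when held under.
print(apparent_weight(volume=0.001, rho_solid=500.0))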
Major topics
This section is based on the "AMR Subject Classification Scheme" from the journal Applied Mechanics Reviews.
Foundations and basic methods
Continuum mechanics
Finite element method
Finite difference method
Other computational methods
Experimental system analysis
Dynamics and vibration
Dynamics (mechanics)
Kinematics
Vibrations of solids (basic)
Vibrations (structural elements)
Vibrations (structures)
Wave motion in solids
Impact on solids
Waves in incompressible fluids
Waves in compressible fluids
Solid fluid interactions
Astronautics (celestial and orbital mechanics)
Explosions and ballistics
Acoustics
Automatic control
System theory and design
Optimal control system
System and control applications
Robotics
Manufacturing
Mechanics of solids
Elasticity
Viscoelasticity
Plasticity and viscoplasticity
Composite material mechanics
Cables, rope, beams, etc
Plates, shells, membranes, etc
Structural stability (buckling, postbuckling)
Electromagneto solid mechanics
Soil mechanics (basic)
Soil mechanics (applied)
Rock mechanics
Material processing
Fracture and damage processes
Fracture and damage mechanics
Experimental stress analysis
Material Testing
Structures (basic)
Structures (ground)
Structures (ocean and coastal)
Structures (mobile)
Structures (containment)
Friction and wear
Machine elements
Machine design
Fastening and joining
Mechanics of fluids
Rheology
Hydraulics
Incompressible flow
Compressible flow
Rarefied flow
Multiphase flow
Wall Layers (incl boundary layers)
Internal flow (pipe, channel, and couette)
Internal flow (inlets, nozzles, diffusers, and cascades)
Free shear layers (mixing layers, jets, wakes, cavities, and plumes)
Flow stability
Turbulence
Electromagneto fluid and plasma dynamics
Hydromechanics
Aerodynamics
Machinery fluid dynamics
Lubrication
Flow measurements and visualization
Thermal sciences
Thermodynamics
Heat transfer (one phase convection)
Heat transfer (two phase convection)
Heat transfer (conduction)
Heat transfer (radiation and combined modes)
Heat transfer (devices and systems)
Thermodynamics of solids
Mass transfer (with and without heat transfer)
Combustion
Prime movers and propulsion systems
Earth sciences
Micromeritics
Porous media
Geomechanics
Earthquake mechanics
Hydrology, oceanology, and meteorology
Energy systems and environment
Fossil fuel systems
Nuclear systems
Geothermal systems
Solar energy systems
Wind energy systems
Ocean energy system
Energy distribution and storage
Environmental fluid mechanics
Hazardous waste containment and disposal
Biosciences
Biomechanics
Human factor engineering
Rehabilitation engineering
Sports mechanics
Applications
Electrical engineering
Civil engineering
Mechanical engineering
Nuclear engineering
Architectural engineering
Chemical engineering
Petroleum engineering
Publications
Journal of Applied Mathematics and Mechanics
Newsletters of the Applied Mechanics Division
Journal of Applied Mechanics
Applied Mechanics Reviews
Applied Mechanics
Quarterly Journal of Mechanics and Applied Mathematics
Journal of Applied Mathematics and Mechanics (PMM)
Gesellschaft für Angewandte Mathematik und Mechanik
Acta Mechanica Sinica
See also
Biomechanics
Geomechanics
Mechanicians
Mechanics
Physics
Principle of moments
Structural analysis
Kinetics (physics)
Kinematics
Dynamics (physics)
Statics
References
Further reading
J.P. Den Hartog, Strength of Materials, Dover, New York, 1949.
F.P. Beer, E.R. Johnston, J.T. DeWolf, Mechanics of Materials, McGraw-Hill, New York, 1981.
S.P. Timoshenko, History of Strength of Materials, Dover, New York, 1953.
J.E. Gordon, The New Science of Strong Materials, Princeton, 1984.
H. Petroski, To Engineer Is Human, St. Martins, 1985.
T.A. McMahon and J.T. Bonner, On Size and Life, Scientific American Library, W.H. Freeman, 1983.
M. F. Ashby, Materials Selection in Design, Pergamon, 1992.
A.H. Cottrell, Mechanical Properties of Matter, Wiley, New York, 1964.
S.A. Wainwright, W.D. Biggs, J.D. Currey, J.M. Gosline, Mechanical Design in Organisms, Edward Arnold, 1976.
S. Vogel, Comparative Biomechanics, Princeton, 2003.
J. Howard, Mechanics of Motor Proteins and the Cytoskeleton, Sinauer Associates, 2001.
J.L. Meriam, L.G. Kraige. Engineering Mechanics Volume 2: Dynamics, John Wiley & Sons., New York, 1986.
J.L. Meriam, L.G. Kraige. Engineering Mechanics Volume 1: Statics, John Wiley & Sons., New York, 1986.
External links
Video and web lectures
Engineering Mechanics Video Lectures and Web Notes
Applied Mechanics Video Lectures By Prof.SK. Gupta, Department of Applied Mechanics, IIT Delhi
Mechanics
Structural engineering
Collective unconscious
Collective unconscious refers to the unconscious mind and shared mental concepts. It is generally associated with idealism and was coined by Carl Jung. According to Jung, the human collective unconscious is populated by instincts, as well as by archetypes: ancient primal symbols such as The Great Mother, the Wise Old Man, the Shadow, the Tower, Water, and the Tree of Life. Jung considered the collective unconscious to underpin and surround the unconscious mind, distinguishing it from the personal unconscious of Freudian psychoanalysis. He believed that the concept of the collective unconscious helps to explain why similar themes occur in mythologies around the world. He argued that the collective unconscious had a profound influence on the lives of individuals, who lived out its symbols and clothed them in meaning through their experiences. The psychotherapeutic practice of analytical psychology revolves around examining the patient's relationship to the collective unconscious.
Psychiatrist and Jungian analyst Lionel Corbett argues that the contemporary terms "autonomous psyche" or "objective psyche" are more commonly used today in the practice of depth psychology rather than the traditional term of the "collective unconscious". Critics of the collective unconscious concept have called it unscientific and fatalistic, or otherwise very difficult to test scientifically (due to the mystical aspect of the collective unconscious). Proponents suggest that it is borne out by findings of psychology, neuroscience, and anthropology.
Basic explanation
The term "collective unconscious" first appeared in Jung's 1916 essay, "The Structure of the Unconscious". This essay distinguishes between the "personal", Freudian unconscious, filled with sexual fantasies and repressed images, and the "collective" unconscious encompassing the soul of humanity at large.
In "The Significance of Constitution and Heredity in Psychology" (November 1929), Jung wrote:
On October 19, 1936, Jung delivered a lecture "The Concept of the Collective Unconscious" to the Abernethian Society at St. Bartholomew's Hospital in London. He said:
Jung linked the collective unconscious to "what Freud called 'archaic remnants' – mental forms whose presence cannot be explained by anything in the individual's own life and which seem to be aboriginal, innate, and inherited shapes of the human mind". He credited Freud for developing his "primal horde" theory in Totem and Taboo and continued further with the idea of an archaic ancestor maintaining its influence in the minds of present-day humans. Every human being, he wrote, "however high his conscious development, is still an archaic man at the deeper levels of his psyche."
As modern humans go through their process of individuation, moving out of the collective unconscious into mature selves, they establish a persona—which can be understood simply as that small portion of the collective psyche which they embody, perform, and identify with.
The collective unconscious exerts overwhelming influence on the minds of individuals. These effects of course vary widely, however, since they involve virtually every emotion and situation. At times, the collective unconscious can terrify, but it can also heal.
Archetypes
In an early definition of the term, Jung writes: "Archetypes are typical modes of apprehension, and wherever we meet with uniform and regularly recurring modes of apprehension we are dealing with an archetype, no matter whether its mythological character is recognized or not." He traces the term back to Philo, Irenaeus, and the Corpus Hermeticum, which associate archetypes with divinity and the creation of the world, and notes the close relationship of Platonic ideas.
These archetypes dwell in a world beyond the chronology of a human lifespan, developing on an evolutionary timescale. Regarding the animus and anima, the male principle within the woman and the female principle within the man, Jung writes:
Jung also described archetypes as imprints of momentous or frequently recurring situations in the lengthy human past.
A complete list of archetypes cannot be made, nor can differences between archetypes be absolutely delineated. For example, the Eagle is a common archetype that may have a multiplicity of interpretations. It could mean the soul leaving the mortal body and connecting with the heavenly spheres, or it may mean that someone is sexually impotent, in that they have had their spiritual ego body engaged. In spite of this difficulty, Jungian analyst June Singer suggests a partial list of well-studied archetypes, listed in pairs of opposites:
Jung made reference to contents of this category of the unconscious psyche as being similar to Levy-Bruhl's use of collective representations or "représentations collectives", Mythological "motifs", Hubert and Mauss's "categories of the imagination", and Adolf Bastian's "primordial thoughts". He also called archetypes "dominants" because of their profound influence on mental life.
Instincts
Jung's exposition of the collective unconscious builds on the classic issue in psychology and biology regarding nature versus nurture. If we accept that nature, or heredity, has some influence on the individual psyche, we must examine the question of how this influence takes hold in the real world.
On exactly one night in its entire lifetime, the yucca moth discovers pollen in the opened flowers of the yucca plant, forms some into a pellet, and then transports this pellet, with one of its eggs, to the pistil of another yucca plant. This activity cannot be "learned"; it makes more sense to describe the yucca moth as experiencing intuition about how to act. Archetypes and instincts coexist in the collective unconscious as interdependent opposites, Jung would later clarify. Whereas for most animals intuitive understandings completely intertwine with instinct, in humans the archetypes have become a separate register of mental phenomena.
Humans experience five main types of instinct, wrote Jung: hunger, sexuality, activity, reflection, and creativity. These instincts, listed in order of increasing abstraction, elicit and constrain human behavior, but also leave room for freedom in their implementation and especially in their interplay. Even a simple hungry feeling can lead to many different responses, including metaphorical sublimation. These instincts could be compared to the "drives" discussed in psychoanalysis and other domains of psychology. Several readers of Jung have observed that in his treatment of the collective unconscious, Jung suggests an unusual mixture of primordial, "lower" forces, and spiritual, "higher" forces.
Exploration
Jung believed that proof of the existence of a collective unconscious, and insight into its nature, could be gleaned primarily from dreams and from active imagination, a waking exploration of fantasy.
Jung considered that the shadow and the anima and animus differ from the other archetypes in that their content is more directly related to the individual's personal situation. These archetypes, a special focus of Jung's work, become autonomous personalities within an individual psyche. Jung encouraged direct conscious dialogue of the patient with these personalities within. While the shadow usually personifies the personal unconscious, the anima or the Wise Old Man can act as representatives of the collective unconscious.
Jung suggested that parapsychology, alchemy, and occult religious ideas could contribute understanding of the collective unconscious. Based on his interpretation of synchronicity and extra-sensory perception, Jung argued that psychic activity transcended the brain. In alchemy, Jung found that plain water, or seawater, corresponded to his concept of the collective unconscious.
In humans, the psyche mediates between the primal force of the collective unconscious and the experience of consciousness or dream. Therefore, symbols may require interpretation before they can be understood as archetypes. Jung writes:
A single archetype can manifest in many different ways. Regarding the Mother archetype, Jung suggests that not only can it apply to mothers, grandmothers, stepmothers, mothers-in-law, and mothers in mythology, but to various concepts, places, objects, and animals:
Care must be taken, however, to determine the meaning of a symbol through further investigation; one cannot simply decode a dream by assuming these meanings are constant. Archetypal explanations work best when an already-known mythological narrative can clearly help to explain the confusing experience of an individual.
Evidence
In his clinical psychiatry practice, Jung identified mythological elements which seemed to recur in the minds of his patients—above and beyond the usual complexes which could be explained in terms of their personal lives. The most obvious patterns applied to the patient's parents: "Nobody knows better than the psychotherapist that the mythologizing of the parents is often pursued far into adulthood and is given up only with the greatest resistance."
Jung cited recurring themes as evidence of the existence of psychic elements shared among all humans. For example: "The snake-motif was certainly not an individual acquisition of the dreamer, for snake-dreams are very common even among city-dwellers who have probably never seen a real snake." Still better evidence, he felt, came when patients described complex images and narratives with obscure mythological parallels. Jung's leading example of this phenomenon was a paranoid-schizophrenic patient who could see the sun's dangling phallus, whose motion caused wind to blow on earth. Jung found a direct analogue of this idea in the "Mithras Liturgy", from the Greek Magical Papyri of Ancient Egypt—only just translated into German—which also discussed a phallic tube, hanging from the sun, and causing wind to blow on earth. He concluded that the patient's vision and the ancient Liturgy arose from the same source in the collective unconscious.
Going beyond the individual mind, Jung believed that "the whole of mythology could be taken as a sort of projection of the collective unconscious". Therefore, psychologists could learn about the collective unconscious by studying religions and spiritual practices of all cultures, as well as belief systems like astrology.
Criticism of Jung's evidence
Popperian critic Ray Scott Percival disputes some of Jung's examples and argues that his strongest claims are not falsifiable. Percival takes especial issue with Jung's claim that major scientific discoveries emanate from the collective unconscious and not from unpredictable or innovative work done by scientists. Percival charges Jung with excessive determinism and writes: "He could not countenance the possibility that people sometimes create ideas that cannot be predicted, even in principle." Regarding the claim that all humans exhibit certain patterns of mind, Percival argues that these common patterns could be explained by common environments (i.e. by shared nurture, not nature). Because all people have families, encounter plants and animals, and experience night and day, it should come as no surprise that they develop basic mental structures around these phenomena.
The solar-phallus example described above has been the subject of contentious debate, and Jung critic Richard Noll has argued against its authenticity.
Ethology and biology
Animals all have some innate psychological concepts which guide their mental development. The concept of imprinting in ethology is one well-studied example, dealing most famously with the Mother constructs of newborn animals. The many predetermined scripts for animal behavior are called innate releasing mechanisms.
Proponents of the collective unconscious theory in neuroscience suggest that mental commonalities in humans originate especially from the subcortical area of the brain: specifically, the thalamus and limbic system. These centrally located structures link the brain to the rest of the nervous system and are said to control vital processes including emotions and long-term memory.
Archetype research
A more common experimental approach investigates the unique effects of archetypal images. An influential study of this type, by Rosen, Smith, Huston, & Gonzalez in 1991, found that people could better remember symbols paired with words representing their archetypal meaning. Using data from the Archive for Research in Archetypal Symbolism and a jury of evaluators, Rosen et al. developed an "Archetypal Symbol Inventory" listing symbols and one-word connotations. Many of these connotations were obscure to laypeople. For example, a picture of a diamond represented "self"; a square represented "Earth". They found that even when subjects did not consciously associate the word with the symbol, they were better able to remember the pairing of the symbol with its chosen word. Brown & Hannigan replicated this result in 2013, and expanded the study slightly to include tests in English and in Spanish of people who spoke both languages.
Maloney (1999) asked people questions about their feelings toward variations on images featuring the same archetype: some positive, some negative, and some non-anthropomorphic. He found that although the images did not elicit significantly different responses to questions about whether they were "interesting" or "pleasant", they did provoke highly significant differences in response to the statement: "If I were to keep this image with me forever, I would be". Maloney suggested that this question led the respondents to process the archetypal images on a deeper level, which strongly reflected their positive or negative valence.
Ultimately, although Jung referred to the collective unconscious as an empirical concept, based on evidence, its elusive nature does create a barrier to traditional experimental research. June Singer writes:
Application to psychotherapy
Psychotherapy based on analytical psychology would seek to analyze the relationship between a person's individual consciousness and the deeper common structures which underlie it. Personal experiences both activate archetypes in the mind and give them meaning and substance for the individual. At the same time, archetypes covertly organize human experience and memory, their powerful effects becoming apparent only indirectly and in retrospect. Understanding the power of the collective unconscious can help an individual to navigate through life.
In the interpretation of analytical psychologist Mary Williams, a patient who understands the impact of the archetype can help to dissociate the underlying symbol from the real person who embodies the symbol for the patient. In this way, the patient no longer uncritically transfers their feelings about the archetype onto people in everyday life, and as a result, can develop healthier and more personal relationships.
Practitioners of analytic psychotherapy, Jung cautioned, could become so fascinated with manifestations of the collective unconscious that they facilitated their appearance at the expense of their patient's well-being. Individuals with schizophrenia, it is said, fully identify with the collective unconscious, lacking a functioning ego to help them deal with actual difficulties of life.
Application to politics and society
Elements from the collective unconscious can manifest among groups of people, who by definition all share a connection to these elements. Groups of people can become especially receptive to specific symbols due to the historical situation they find themselves in. The common importance of the collective unconscious makes people ripe for political manipulation, especially in the era of mass politics. Jung compared mass movements to mass psychoses, comparable to demonic possession in which people uncritically channel unconscious symbolism through the social dynamic of the mob and the leader.
Although civilization leads people to disavow their links with the mythological world of uncivilized societies, Jung argued that aspects of the primitive unconscious would nevertheless reassert themselves in the form of superstitions, everyday practices, and unquestioned traditions such as the Christmas tree.
Based on empirical inquiry, Jung felt that all humans, regardless of racial and geographic differences, share the same collective pool of instincts and images, though these manifest differently due to the moulding influence of culture. However, above and in addition to the primordial collective unconscious, people within a certain culture may share additional bodies of primal collective ideas.
Jung called the UFO phenomenon a "living myth", a legend in the process of consolidation. Belief in a messianic encounter with UFOs demonstrated the point, Jung argued, that even if a rationalistic modern ideology repressed the images of the collective unconscious, its fundamental aspects would inevitably resurface. The circular shape of the flying saucer confirms its symbolic connection to repressed but psychically necessary ideas of divinity.
The universal applicability of archetypes has not escaped the attention of marketing specialists, who observe that branding can resonate with consumers through appeal to archetypes of the collective unconscious.
Distinction from related concepts
Jung contrasted the collective unconscious with the personal unconscious, the unique aspects of an individual's psyche, which Jung says constitute the focus of Sigmund Freud and Alfred Adler. Psychotherapy patients, it seemed to Jung, often described fantasies and dreams which repeated elements from ancient mythology. These elements appeared even in patients who were probably not exposed to the original story. For example, mythology offers many examples of the "dual mother" narrative, according to which a child has a biological mother and a divine mother. Therefore, argues Jung, Freudian psychoanalysis would neglect important sources of unconscious ideas in the case of a patient with neurosis around a dual-mother image.
This divergence over the nature of the unconscious has been cited as a key aspect of Jung's famous split from Sigmund Freud and his school of psychoanalysis. Some commentators have rejected Jung's characterization of Freud, observing that in texts such as Totem and Taboo (1913) Freud directly addresses the interface between the unconscious and society at large. Jung himself said that Freud had discovered a collective archetype, the Oedipus complex, but that it "was the first archetype Freud discovered, the first and only one".
Jung also distinguished the collective unconscious and collective consciousness, between which lay "an almost unbridgeable gulf over which the subject finds himself suspended". According to Jung, collective consciousness (meaning something along the lines of consensus reality) offered only generalizations, simplistic ideas, and the fashionable ideologies of the age. This tension between collective unconscious and collective consciousness corresponds roughly to the "everlasting cosmic tug of war between good and evil" and has worsened in the time of the mass man.
Organized religion, exemplified by the Catholic Church, lies more with the collective consciousness; but, through its all-encompassing dogma it channels and molds the images which inevitably pass from the collective unconscious into the minds of people. (Conversely, religious critics including Martin Buber accused Jung of wrongly placing psychology above transcendental factors in explaining human experience.)
Minimal and maximal interpretations
In a minimalist interpretation of what would then appear as "Jung's much misunderstood idea of the collective unconscious", his idea was "simply that certain structures and predispositions of the unconscious are common to all of us ... [on] an inherited, species-specific, genetic basis". Thus "one could as easily speak of the 'collective arm' – meaning the basic pattern of bones and muscles which all human arms share in common."
Others point out, however, that "there does seem to be a basic ambiguity in Jung's various descriptions of the Collective Unconscious. Sometimes he seems to regard the predisposition to experience certain images as understandable in terms of some genetic model" – as with the collective arm. However, Jung was "also at pains to stress the numinous quality of these experiences, and there can be no doubt that he was attracted to the idea that the archetypes afford evidence of some communion with some divine or world mind", and perhaps "his popularity as a thinker derives precisely from this" – the maximal interpretation.
Marie-Louise von Franz accepted that "it is naturally very tempting to identify the hypothesis of the collective unconscious historically and regressively with the ancient idea of an all-extensive world-soul." New Age writer Sherry Healy goes further, claiming that Jung himself "dared to suggest that the human mind could link to ideas and motivations called the collective unconscious ... a body of unconscious energy that lives forever." This is the idea of monopsychism.
Other researchers, including Alexander Fowler, have proposed taking the minimal interpretation of Jung's work and incorporating it into the theory of biological evolution (i.e., sexual selection), or using it to unify disparate theoretical orientations within psychology, such as neuropsychology, evolutionary psychology, and analytical psychology. On this view, Jung's postulation of a mechanism for the genetic transmission of information through sexual selection offers a single explanation for questions left unanswered across these varied theoretical orientations.
See also
References
Sources
Jung, Carl G. The Collected Works of C. G. Jung. Bollingen Series XX.
Volume 7. Two Essays on Analytical Psychology. Translated by R. F. C. Hull. Ed. Herbert Read, Michael Fordham, & Gerhard Adler. New York: Pantheon Books, 1953.
Volume 8. The Structure and Dynamics of the Psyche. Translated by R. F. C. Hull. Ed. Herbert Read, Michael Fordham, & Gerhard Adler. New York: Pantheon Books, 1960.
Volume 9, Part I. The Archetypes and the Collective Unconscious. Translated by R. F. C. Hull. Ed. Herbert Read, Michael Fordham, & Gerhard Adler. New York: Pantheon Books, 1959.
Volume 10. Civilization in Transition. Translated by R. F. C. Hull. Ed. Herbert Read, Michael Fordham, & Gerhard Adler. New York: Pantheon Books, 1964.
Volume 11. Psychology and Religion: West and East. Translated by R. F. C. Hull. Ed. Herbert Read, Michael Fordham, & Gerhard Adler. New York: Pantheon Books, 1958.
Volume 14. Mysterium Coniunctionis: An Inquiry into the Separation and Synthesis of Psychic Opposites in Alchemy. Translated by R. F. C. Hull. Ed. Herbert Read, Michael Fordham, & Gerhard Adler. Princeton, NJ: Princeton University Press, 1970. (First published in English in London by Routledge, 1963.)
Note: Where appropriate, endnote citations also give names of individual articles, with years of publication/revision.
Progoff, Ira. Jung's Psychology and its Social Meaning: An Introductory Statement of C. G. Jung's Psychological Theories and a First Interpretation of their Significance for the Social Sciences. New York: Grove Press, 1953.
Shelburne, Walter A. Mythos and Logos in the Thought of Carl Jung: The Theory of the Collective Unconscious in Scientific Perspective. State University of New York Press, 1988.
Singer, June Kurlander. Culture and the Collective Unconscious. Dissertation accepted at Northwestern University. August 1968.
Young-Eisendrath, Polly, & Terrence Dawson (eds.) The Cambridge Companion to Jung. Cambridge University Press, 2008.
Further reading
Michael Vannoy Adams, The Mythological Unconscious (2001)
Gallo, Ernest. "Synchronicity and the Archetypes," Skeptical Inquirer, 18 (4). Summer 1994.
Jung, Carl. (1959). Archetypes and the Collective Unconscious.
Jung, Carl. The Development of Personality.
Jung, Carl. (1970). "Psychic conflicts in a child", Collected Works of C. G. Jung, 17. Princeton University Press. 235 p. (pp. 1–35).
Stevens, Anthony. (2002). Archetype Revisited: An Updated Natural History of the Self. London: Brunner-Routledge.
Whitmont, Edward C. (1969). The Symbolic Quest. Princeton University Press.
External links
Archive for Research in Archetypal Symbolism A pictorial and written archive of mythological, ritualistic, and symbolic images from all over the world and from all epochs of human history.
Collective Unconscious at Carl Jung
Jungian Society for Scholarly Studies – website including journal archives and conference papers
Translated texts by Jung
"On the Nature of the Psyche" – full text hosted at American Buddha Online Library
"The Concept of the Collective Unconscious" – Bahá'í Studies Web Server
Secondary literature
Brown, Jeffrey M., & Terence P. Hannigan. "An Empirical Test of Carl Jung's Collective Unconscious (Archetypal) Memory". Journal of Border Education Research 5, Fall 2006.
DelVecchio, Milan. "We Archipelago: A Productive Reaction to the Collective Unconscious, in a Conscious State". 2013 Critical Information conference, School of Visual Arts.
Greenwood, Susan F. "Emile Durkheim and C. G. Jung: Structuring a Transpersonal Sociology of Religion". Journal for the Scientific Study of Religion 29.4, 1990; International Journal of Transpersonal Studies 32.2.
Hossain, Shaikat. "The Internet as a Tool for Studying the Collective Unconscious". Jung Journal: Culture & Psyche 6.2, 2012.
Niesser, Arthur. "Neuroscience and Jung's Model of the Psyche: A Close Fit" (archived). International Association for Analytical Psychology, 2004 Conference.
Rosen, D. H.; S. M. Smith; H. L. Huston; & G. Gonzalez. "Empirical Study of Associations Between Symbols and Their Meanings: Evidence of Collective Unconscious (Archetypal) Memory". Journal of Analytical Psychology 36, 1991.
Sedivi, Amy Elizabeth. "Unveiling the Unconscious: The Influence of Jungian Psychology on Jackson Pollock and Mark Rothko". B.A. thesis accepted at College of William and Mary, May 6, 2009.
Shelburne, Walter Avory. "C. G. Jung's Theory of the Collective Unconscious: A Rational Reconstruction". PhD dissertation accepted at University of Florida, June 1976.
Sheldrake, Rupert. "Society, Spirit & Ritual: Morphic Resonance and the Collective Unconscious - Part II". Psychological Perspectives 18.2, Fall 1987.
Analytical psychology
Collective intelligence
Crowd psychology
Jungian archetypes
Occult collective consciousness
Unconscious
Interpretative phenomenological analysis
Interpretative phenomenological analysis (IPA) is a qualitative form of psychology research. IPA has an idiographic focus, which means that instead of producing generalizable findings, it aims to offer insights into how a given person, in a given context, makes sense of a given situation. Usually, these situations are of personal significance; examples might include a major life event, or the development of an important relationship. IPA has its theoretical origins in phenomenology and hermeneutics, and many of its key ideas are inspired by the work of Edmund Husserl, Martin Heidegger, and Maurice Merleau-Ponty. IPA's tendency to combine psychological, interpretative, and idiographic elements is what distinguishes it from other approaches to qualitative, phenomenological psychology.
Taking part
Sometimes IPA studies involve a close examination of the experiences and meaning-making activities of only one participant. Most frequently they draw on the accounts of a small number of people (6 has been suggested as a good number, although anywhere between 3 and 15 participants for a group study can be acceptable). In either case, participants are invited to take part precisely because they can offer the researcher some meaningful insight into the topic of the study; this is called purposive sampling [i.e. it is not randomised]. Usually, participants in an IPA study are expected to have certain experiences in common with one another: the small-scale nature of a basic IPA study shows how something is understood in a given context, and from a shared perspective, a method sometimes called homogeneous sampling. More advanced IPA study designs may draw together samples that offer multiple perspectives on a shared experience (husbands and wives, for example, or psychiatrists and patients); or they may collect accounts over a period of time, to develop a longitudinal analysis.
Data collection
In IPA, researchers gather qualitative data from research participants using techniques such as interviews, diaries, or focus groups. Typically, these are approached from a position of flexible and open-ended inquiry, and the interviewer adopts a stance that is curious and facilitative (rather than, say, challenging and interrogative). IPA usually requires personally salient accounts of some richness and depth, and it requires that these accounts be captured in a way that permits the researcher to work with a detailed verbatim transcript.
Data analysis
Data collection does not set out to test hypotheses, and this stance is maintained in data analysis. The analyst reflects upon their own preconceptions about the data, and attempts to suspend these in order to focus on grasping the experiential world of the research participant. Transcripts are coded in considerable detail, with the focus shifting back and forth from the key claims of the participant, to the researcher's interpretation of the meaning of those claims. IPA's hermeneutic stance is one of inquiry and meaning-making, and so the analyst attempts to make sense of the participant's attempts to make sense of their own experiences, thus creating a double hermeneutic. One might use IPA if one had a research question which aimed to understand what a given experience was like (phenomenology) and how someone made sense of it (interpretation).
Analysis in IPA is said to be 'bottom-up'. This means that the researcher generates codes from the data, rather than using a pre-existing theory to identify codes that might be applied to the data. IPA studies do not test theories, then, but they are often relevant to the development of existing theories. One might use the findings of a study on the meaning of sexual intimacy to gay men in close relationships, for example, to re-examine the adequacy of theories which attempt to predict and explain safe sex practices. IPA encourages an open-ended dialogue between the researcher and the participants and may, therefore, lead us to see things in a new light.
After transcribing the data, the researcher works closely and intensively with the text, annotating it closely ('coding') for insights into the participants' experience and perspective on their world. As the analysis develops, the researcher catalogues the emerging codes, and subsequently begins to look for patterns in the codes. These patterns are called 'themes'. Themes are recurring patterns of meaning (ideas, thoughts, feelings) throughout the text. Themes are likely to identify both something that matters to the participants (i.e. an object of concern, topic of some import) and also convey something of the meaning of that thing, for the participants. E.g. in a study of the experiences of young people learning to drive, we might find themes like 'Driving as a rite of passage' (where one key psychosocial understanding of the meaning of learning to drive, is that it marks a cultural threshold between adolescence and adulthood).
Some themes will eventually be grouped under much broader themes called 'superordinate themes'. For example, 'Feeling anxious and overwhelmed during the first driving lessons' might be a superordinate category that captures a variety of patterns in participants' embodied, emotional and cognitive experiences of the early phases of learning to drive, where sub-themes relating to, say, 'Feeling nervous', 'Worrying about losing control', and 'Struggling to manage the complexities of the task' might be found. The final set of themes is typically summarised and placed into a table or similar structure, where each theme is backed up with evidence in the form of verbatim quotes from the text.
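To illustrate the kind of theme table this stage produces, here is a minimal sketch in Python. It is not part of any published IPA procedure; the superordinate themes, sub-themes, participant labels, and quotes are invented to continue the learning-to-drive example above.

```python
# Minimal sketch of an IPA theme table: superordinate themes group
# sub-themes, and each sub-theme is evidenced by verbatim quotes.
# All theme names, participant labels, and quotes are invented.

theme_table = {
    "Feeling anxious and overwhelmed during the first driving lessons": {
        "Feeling nervous": [
            ("P1", "my hands were shaking before I even turned the key"),
        ],
        "Worrying about losing control": [
            ("P3", "I kept thinking the car would just roll off without me"),
        ],
        "Struggling to manage the complexities of the task": [
            ("P2", "mirrors, pedals, signals... it all happens at once"),
        ],
    },
    "Driving as a rite of passage": {
        "Being treated as an adult": [
            ("P1", "once I passed, my parents spoke to me differently"),
        ],
    },
}

# Render the table in the summary form often used in write-ups,
# pairing each theme with its supporting evidence.
for superordinate, subthemes in theme_table.items():
    print(superordinate)
    for subtheme, quotes in subthemes.items():
        for participant, quote in quotes:
            print(f'  {subtheme}: "{quote}" ({participant})')
```

The nested structure mirrors the analytic move described above: individual codes are preserved as evidence while being subsumed under progressively broader patterns of meaning.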
Analysis
In IPA, a good analysis is one that balances phenomenological description with insightful interpretation and anchors these interpretations in the participants' accounts. It is also likely to maintain an idiographic focus (so that particular variations are not lost), and to keep a close focus on meaning (rather than, say, causal relations). A degree of transparency (contextual detail about the sample, a clear account of the process, adequate commentary on the data, key points illustrated by verbatim quotes) is also crucial to estimating the plausibility and transferability of an IPA study. Engagement with credibility issues (such as cross-validation, cooperative inquiry, independent audit, or triangulation) is also likely to increase the reader's confidence.
Applications
Due to an increased interest in the constructed nature of certain aspects of illness (how people perceive bodily and mental symptoms), IPA has been particularly recommended for its uses in the field of health psychology. However, while this subject-centered approach to experiencing illness is congruent with an increase in patient-centered research, IPA may have been historically most employed in health psychology because many of its initial supporters pursued careers in this field.
With a general increase in the number of IPA studies published over the last decade has come the employment of this method in a variety of fields, including business (organisational psychology), sexuality, and key life transitions such as transitioning into motherhood and living with cancer as a chronic illness.
See also
Action research
Emic and etic
Ethnography
Existential phenomenology
Hermeneutic phenomenology
Jonathan Smith (psychologist)
Participatory action research
Phenomenology
Triangulation (social science)
Notes
References
Brocki, J.J.M., & Wearden, A.J. (2006). "A critical evaluation of the use of interpretative phenomenological analysis (IPA) in health psychology". Psychology and Health, 21(1), 87-108
Heron, J. (1996). Co-operative Inquiry: Research into the human condition. London: Sage.
McGeechan, G.J., McPherson, K.E., & Roberts, K. (2018). "An interpretative phenomenological analysis of the experience of living with colorectal cancer as a chronic illness". Journal of Clinical Nursing, 27(15-16), 3148-3156
Reid, K., Flowers, P. & Larkin, M. (2005). Exploring lived experience, The Psychologist, 18, 20-23.
Shaw, R. L. (2001). Why use interpretative phenomenological analysis in Health Psychology? Health Psychology Update, 10, 48-52.
Smith, J.A. (1996) "Beyond the divide between cognition and discourse: Using interpretative phenomenological analysis in health psychology". Psychology & Health, 11(2), 261-271
Smith, J., Jarman, M. & Osborne, M. (1999). Doing interpretative phenomenological analysis. In M. Murray & K. Chamberlain (Eds.), Qualitative Health Psychology. London: Sage.
Smith, J.A. (1999). "Identity development during the transition to motherhood: An interpretative phenomenological analysis". Journal of reproductive and infant psychology, 17(3), 281-299
Smith, J.A. & Osborn, M. (2003) Interpretative phenomenological analysis. In J.A. Smith (Ed.), Qualitative Psychology: A Practical Guide to Research Methods. London: Sage.
Smith, J.A., Flowers, P., & Larkin, M. (2009). Interpretative Phenomenological Analysis: Theory Method and Research. London: Sage.
Smith, J.A. (2011). "Evaluating the contribution of interpretative phenomenological analysis". Health Psychology Review, 5(1), 9-27
External links
IPA website at Birkbeck, University of London
Phenomenological methodology
Qualitative research
Play (BDSM)
Play, within BDSM circles, is any of the wide variety of "kinky" activities. This includes both physical and mental activities, covering a wide range of intensities and levels of social acceptability. The term originated in the BDSM club and party communities, indicating the activities taking place within a scene. It has since extended to the full range of BDSM activities.
Play can take many forms. It ranges from light "getting to know you" sessions, where participants discover each other's likes and dislikes, to extreme, extended play between committed individuals who know each other's limits and are willing to push or be pushed at their boundaries. While physical activities are better known and more infamous, play also includes 'mental play' such as erotic hypnosis and mind games.
BDSM play is usually the primary topic of negotiation, especially for casual players and limited scenes. Most BDSM clubs and local communities offer classes and materials about negotiating play scenes. Play safety is a major topic of discussion and debate within BDSM communities.
Categories of play
Play is broken down into two broad categories, physical and mental. Physical play is better known and consists of the typical activities the average person thinks of as BDSM. As the BDSM scene matures and gains greater mainstream tolerance, mental play is becoming an increasingly noteworthy part of the community.
Physical BDSM
Physical BDSM encompasses all "kinky" activities that are carried out physically. Two of the best known examples are flogging and bondage. Extensive classes and workshops teach technical skills to carry out these activities competently, as well as safety considerations and protocols. This is the type of play most often seen in BDSM clubs and in media representations of kink. While often associated with sadism and masochism, many activities are not focused on or even involve pain. Non-painful sensation play and elaborate bondage done mainly for aesthetic purposes are prominent examples.
Mental BDSM
Mental BDSM is the collection of activities intended to create a psychological impact, often without a physical component. Recreational hypnosis is the most prominent example, with a well-developed international community. Another noteworthy but controversial example is the 'mind fuck', wherein a state of confusion and/or psychological conflict is intentionally created. While mental 'players' have considerably less documented material to study, an active Internet community and classes offered through local groups and conventions provide many learning opportunities.
Types of play
Participants in BDSM typically recognise different types of play, based on their intensity and social acceptability. These distinctions can be rather arbitrary and vary between groups. What is considered edge play for a particular couple or local community may be merely heavy play, or even light play, for others.
Light play
Light play consists of activities that are considered mild and/or carry little social stigma. This especially includes BDSM elements commonly practiced by "vanilla" couples. Light bondage, slapping, and casual spanking are examples of light play.
Heavy play
Heavy play indicates elements that are intense and/or carry substantial social stigma. The bulk of activities undertaken by BDSM participants would be considered as heavy play or bordering on heavy play. Examples of heavy play include caning, suspension bondage, and erotic hypnosis.
Edge play
Edgeplay is a term used for types of play that "push the edge." They usually involve a risk of physical or emotional harm. Breath play, knife play, gun play and blood play are all types of edge play. In males, restriction of flow of urine and semen may contribute to the development of benign prostatic hyperplasia and erectile dysfunction.
Edge play can also literally refer to playing with an edge, for example knives, swords and other implements. It is sometimes used to describe activities that challenge the boundaries of the participants.
This type of play generally falls under the umbrella of RACK (Risk Aware Consensual Kink).
Safety and consent
Deadly outcomes arising from BDSM play are rare. A recent case collection reported three cases of deaths related to consensual BDSM activities. According to a study that examined 17 cases of deaths related to such activities, the most prominent cause of death was strangulation (which occurred in 88% of the cases in the sample).
Some deaths related to play, including erotic asphyxia, have resulted in criminal prosecutions, with some defendants arguing in court that their partners had died accidentally. Such defenses have been deemed as problematic by some scholars, who believe that male defendants have disguised misogynistic conduct as a strategy to manipulate trial and sentencing results.
See also
Glossary of BDSM
References
BDSM terminology
Individuation
The principle of individuation, or principium individuationis, describes the manner in which a thing is identified as distinct from other things.
The concept appears in numerous fields and is encountered in works of Leibniz, Carl Jung, Gunther Anders, Gilbert Simondon, Bernard Stiegler, Friedrich Nietzsche, Arthur Schopenhauer, David Bohm, Henri Bergson, Gilles Deleuze, and Manuel DeLanda.
Usage
The word individuation occurs with different meanings and connotations in different fields.
In philosophy
Philosophically, "individuation" expresses the general idea of how a thing is identified as an individual thing that "is not something else". This includes how an individual person is held to be different from other elements in the world and how a person is distinct from other persons. By the seventeenth century, philosophers began to associate the question of individuation or what brings about individuality at any one time with the question of identity or what constitutes sameness at different points in time.
In Jungian psychology
In analytical psychology, individuation is the process by which the individual self develops out of an undifferentiated unconscious – seen as a developmental psychic process during which innate elements of personality, the components of the immature psyche, and the experiences of the person's life become, if the process is more or less successful, integrated over time into a well-functioning whole. Other psychoanalytic theorists describe it as the stage where an individual transcends group attachment and narcissistic self-absorption.
In the news industry
The news industry has begun using the term individuation to denote new printing and on-line technologies that permit mass customization of the contents of a newspaper, a magazine, a broadcast program, or a website so that its contents match each user's unique interests. This differs from the traditional mass-media practice of producing the same contents for all readers, viewers, listeners, or on-line users.
Communications theorist Marshall McLuhan alluded to this trend when discussing the future of printed books in an electronically interconnected world in the 1970s and 1980s.
In privacy and data protection law
From around 2016, coinciding with increased government regulation of the collection and handling of personal data, most notably the GDPR in EU Law, individuation has been used to describe the ‘singling out’ of a person from a crowd – a threat to privacy, autonomy and dignity.
Most data protection and privacy laws turn on the identifiability of an individual as the threshold criterion for when data subjects will need legal protection. However, privacy advocates argue privacy harms can also arise from the ability to disambiguate or 'single out' a person. Doing so enables the person, at an individual level, to be tracked, profiled, targeted, contacted, or subject to a decision or action which impacts them – even if their civil or legal 'identity' is not known (or knowable).
In some jurisdictions the wording of the statute already includes the concept of individuation. In other jurisdictions regulatory guidance has suggested that the concept of 'identification' includes individuation – i.e., the process by which an individual can be 'singled out' or distinguished from all other members of a group.
However, where privacy and data protection statutes use only the word ‘identification’ or ‘identifiability’, different court decisions mean that there is not necessarily a consensus about whether the legal concept of identification already encompasses individuation or not.
Rapid advances in technologies, including artificial intelligence and video surveillance coupled with facial recognition systems, have now altered the digital environment to such an extent that 'not identifiable by name' is no longer an effective proxy for 'will suffer no privacy harm'. Many data protection laws may require redrafting to give adequate protection to privacy interests, by explicitly regulating individuation as well as identification of individual people.
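As a concrete illustration of individuation in this data protection sense, the following minimal Python sketch shows how a combination of ordinary attributes can single out exactly one person even though no name or identifier is stored. The records and attribute names are invented for illustration and are not drawn from any statute or real dataset.

```python
# Minimal sketch: a person can be "singled out" (individuated) by a
# combination of ordinary attributes, even when no name or legal
# identity is stored. Records and attribute names are invented.

records = [
    {"postcode": "2000", "birth_year": 1980, "sex": "F"},
    {"postcode": "2000", "birth_year": 1980, "sex": "M"},
    {"postcode": "2010", "birth_year": 1975, "sex": "F"},  # unique combination
    {"postcode": "2000", "birth_year": 1980, "sex": "M"},
]

def singles_out(rows, **attributes):
    """Return True if the given attribute values match exactly one record."""
    matches = [r for r in rows
               if all(r.get(k) == v for k, v in attributes.items())]
    return len(matches) == 1

# No identity is known, yet one combination distinguishes an individual,
# who could then be tracked, profiled, or targeted:
print(singles_out(records, postcode="2010", birth_year=1975, sex="F"))  # True
print(singles_out(records, postcode="2000", birth_year=1980, sex="M"))  # False
```

The third record is unique on these attributes, so its subject is individuated without ever being named; this is the gap that the broader reading of 'identification' is meant to close.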
In physics
Two quantum entangled particles cannot be understood independently. A superposition of two or more states, e.g., Schrödinger's cat being simultaneously dead and alive, is mathematically not the same as assuming the cat is in an individual alive state with 50% probability. Heisenberg's uncertainty principle says that complementary variables, such as position and momentum, cannot both be precisely known – in some sense, they are not individual variables. A natural criterion of individuality has been suggested.
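A standard way to state this difference precisely, sketched here in conventional quantum notation (not taken from a cited source), is to compare the density matrix of a superposition with that of a classical 50/50 mixture, both written in the basis {alive, dead}:

```latex
% Pure superposition of the two alternatives:
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr),
\qquad
\rho_{\mathrm{sup}} = |\psi\rangle\langle\psi|
= \tfrac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}

% Classical ignorance: each alternative with probability 1/2:
\rho_{\mathrm{mix}} = \tfrac{1}{2}|\text{alive}\rangle\langle\text{alive}|
+ \tfrac{1}{2}|\text{dead}\rangle\langle\text{dead}|
= \tfrac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
```

The nonzero off-diagonal entries of the superposition's density matrix have observable interference consequences that no assignment of individual states with classical probabilities can reproduce, which is the sense in which the superposed alternatives are not individual.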
Arthur Schopenhauer
For Schopenhauer, the principium individuationis is constituted of time and space, being the ground of multiplicity. In his view, the mere difference in location suffices to make two systems different, with each of the two states having its own real physical state, independent of the state of the other.
This view influenced Albert Einstein. Schrödinger put the Schopenhauerian label on a folder of papers in his files: "Collection of Thoughts on the physical Principium individuationis."
Carl Jung
According to Jungian psychology, individuation is a process of psychological integration. "In general, it is the process by which individual beings are formed and differentiated [from other human beings]; in particular, it is the development of the psychological individual as a being distinct from the general, collective psychology."
Individuation is a process of transformation whereby the personal and collective unconscious are brought into consciousness (e.g., by means of dreams, active imagination, or free association) to be assimilated into the whole personality. It is a completely natural process that is necessary for the integration of the psyche. Individuation has a holistic healing effect on the person, both mentally and physically.
In addition to Jung's theory of complexes, his theory of the individuation process forms conceptions of an unconscious filled with mythic images, a non-sexual libido, the general types of extraversion and introversion, the compensatory and prospective functions of dreams, and the synthetic and constructive approaches to fantasy formation and utilization.
"The symbols of the individuation process . . . mark its stages like milestones, prominent among them for Jungians being the shadow, the wise old man . . . and lastly the anima in man and the animus in woman." Thus, "There is often a movement from dealing with the persona at the start . . . to the ego at the second stage, to the shadow as the third stage, to the anima or animus, to the Self as the final stage. Some would interpose the Wise Old Man and the Wise Old Woman as spiritual archetypes coming before the final step of the Self."
"The most vital urge in every being, the urge to self-realize, is the motivating force behind the individuation process. With the internal compass of our very nature set toward self-realization, the thrust to become who and what we are derives its power from the instincts. On taking up the study of alchemy, Jung realized his long-held desire to find a body of work expressive of the psychological processes involved in the overarching process of individuation."
Gilbert Simondon
In L'individuation psychique et collective, Gilbert Simondon developed a theory of individual and collective individuation in which the individual subject is considered as an effect of individuation rather than a cause. Thus, the individual atom is replaced by a never-ending ontological process of individuation.
Simondon also conceived of "pre-individual fields" which make individuation possible. Individuation is an ever-incomplete process, always leaving a "pre-individual" left over, which makes possible future individuations. Furthermore, individuation always creates both an individual subject and a collective subject, which individuate themselves concurrently. Like Maurice Merleau-Ponty, Simondon believed that the individuation of being cannot be grasped except by a correlated parallel and reciprocal individuation of knowledge.
Bernard Stiegler
The philosophy of Bernard Stiegler draws upon and modifies the work of Gilbert Simondon on individuation and also upon similar ideas in Friedrich Nietzsche and Sigmund Freud. During a talk given at the Tate Modern art gallery in 2004, Stiegler summarized his understanding of individuation. The essential points are the following:
The I, as a psychic individual, can only be thought in relationship to we, which is a collective individual. The I is constituted in adopting a collective tradition, which it inherits and in which a plurality of I's acknowledge each other's existence.
This inheritance is an adoption, in that I can very well, as the French grandson of a German immigrant, recognize myself in a past which was not the past of my ancestors but which I can make my own. This process of adoption is thus structurally factual.
The I is essentially a process, not a state, and this process is an in-dividuation — it is a process of psychic individuation. It is the tendency to become one, that is, to become indivisible.
This tendency never accomplishes itself because it runs into a counter-tendency with which it forms a metastable equilibrium. (It must be pointed out how close this conception of the dynamic of individuation is to the Freudian theory of drives and to the thinking of Nietzsche and Empedocles.)
The we is also such a process (the process of collective individuation). The individuation of the I is always inscribed in that of the we, whereas the individuation of the we takes place only through the individuations, polemical in nature, of the I's which constitute it.
That which links the individuations of the I and the we is a pre-individual system possessing positive conditions of effectiveness that belong to what Stiegler calls retentional apparatuses. These retentional apparatuses arise from a technical system which is the condition of the encounter of the I and the we — the individuation of the I and the we is, in this respect, also the individuation of the technical system.
The technical system is an apparatus which has a specific role wherein all objects are inserted — a technical object exists only insofar as it is disposed within such an apparatus with other technical objects (this is what Gilbert Simondon calls the technical group).
The technical system is also that which founds the possibility of the constitution of retentional apparatuses, springing from the processes of grammatization growing out of the process of individuation of the technical system. And these retentional apparatuses are the basis for the dispositions between the individuation of the I and the individuation of the we in a single process of psychic, collective, and technical individuation composed of three branches, each branching out into process groups.
This process of triple individuation is itself inscribed within a vital individuation which must be apprehended as:
the vital individuation of natural organs
the technological individuation of artificial organs
and the psycho-social individuation of organizations linking them together
In the process of individuation, wherein knowledge as such emerges, there are individuations of mnemo-technological subsystems which overdetermine, qua specific organizations of what Stiegler calls tertiary retentions, the organization, transmission, and elaboration of knowledge stemming from the experience of the sensible.
See also
Akrasia
Deindividuation
Identical particles
Identity formation
Indiscernibles
Nekyia
Positive disintegration
Principle of individuation
Rationalization (sociology)
Self-actualization
References
Bibliography
Gilbert Simondon, Du mode d'existence des objets techniques (Méot, 1958; Paris: Aubier, 1989, second edition).
Gilbert Simondon, On the Mode of Existence of Technical Objects, Part 1, link to PDF of 1980 translation.
Gilbert Simondon, L'individu et sa genèse physico-biologique (l'individuation à la lumière des notions de forme et d'information) (Paris: PUF, 1964; J.Millon, coll. Krisis, 1995, second edition).
Gilbert Simondon, The Individual and Its Physico-Biological Genesis, Part 1, Part 2, links to HTML files of unpublished 2007 translation.
Gilbert Simondon, L'Individuation psychique et collective (1964; Paris: Aubier, 1989).
Bernard Stiegler, Acting Out.
Bernard Stiegler, Temps et individuation technique, psychique, et collective dans l’oeuvre de Simondon.
Gilles Deleuze
Child development
Analytical psychology
Media studies
Biology terminology
Personhood
Concepts in social philosophy
Metaphysical principles
Androcentrism
Androcentrism (Ancient Greek, ἀνήρ, "man, male") is the practice, conscious or otherwise, of placing a masculine point of view at the center of one's world view, culture, and history, thereby culturally marginalizing femininity. The related adjective is androcentric, while the practice of placing the feminine point of view at the center is gynocentric.
Androcentrism has been described as a pervasive form of sexism. However, it has also been described as a movement centered on, emphasizing, or dominated by males or masculine interests.
Etymology
The term androcentrism was introduced as an analytic concept by Charlotte Perkins Gilman in a scientific debate. Perkins Gilman described androcentric practices in society, and the problems they created, in her study The Man-Made World; or, Our Androcentric Culture, published in 1911. Androcentrism can thus be understood as a societal fixation on masculinity, from which all things are seen to originate. Under androcentrism, masculinity is normative and all things outside of masculinity are defined as other. According to Perkins Gilman, masculine patterns of life and masculine mindsets claimed universality, while female patterns were considered as deviance.
Science
Until the 19th century, women were effectively barred from higher education in Western countries. For over 300 years, Harvard admitted only white men from prominent families. Many universities, such as the University of Oxford, consciously practiced a numerus clausus and restricted the number of female undergraduates they accepted. Because of women's later access to university and academic life, the participation of women in fundamental research has been marginal. The basic principles of the sciences, even the human sciences, have hence been predominantly formed by men.
Medicine
There is a gender health data gap, and women are systematically discriminated against and misdiagnosed in medicine. Early medical research was carried out nearly exclusively on male corpses; women were considered "small men" and not investigated. To this day, the results of clinical studies are frequently generalized to both sexes even though only men have participated, and the female body is often not considered in animal tests, even when diseases that affect women are concerned. However, female and male bodies differ, all the way down to the cell level. The same diseases can have different symptoms in the sexes, calling for different treatment, and medicines can work completely differently, including different side effects. Since male symptoms are much better known, women are under- and misdiagnosed, and have, for example, a 50% increased risk of dying from a heart attack. Here, the well-known male symptoms are chest and shoulder pain, while the female symptoms are upper abdominal pain and nausea.
Literature
Research by Dr. David Anderson and Dr. Mykol Hamilton has documented the under-representation of female characters in a sample of 200 books that included top-selling children's books from 2001 and a seven-year sample of Caldecott award-winning books. There were nearly twice as many male main characters as female main characters, and male characters appeared in illustrations 53 percent more than female characters. Most of the plot-lines centered on the male characters and their experiences of life.
The arts
In 1985, a group of female artists from New York, the Guerrilla Girls, began to protest the under-representation of female artists. According to them, male artists and the male viewpoint continued to dominate the visual art world. In a 1989 poster (displayed on NYC buses) titled "Do women have to be naked to get into the Met. Museum?" they reported that less than 5% of the artists in the Modern Art sections of the Met Museum were women, but 85% of the nudes were female.
Over 20 years later, women were still under-represented in the art world. In 2007, Jerry Saltz (journalist from the New York Times) criticized the Museum of Modern Art for undervaluing work by female artists. Of the 400 works of art he counted in the Museum of Modern Art, only 14 were by women (3.5%). Saltz also found a significant under-representation of female artists in the six other art institutions he studied.
Generic male language
In literature, the use of masculine language to refer to men, women, intersex, and non-binary people may indicate a male or androcentric bias in society where men are seen as the 'norm', and women, intersex, and non-binary people are seen as the 'other'. Philosophy scholar Jennifer Saul argues that the use of male generic language marginalizes women, intersex, and non-binary people in society. In recent years, some writers have started to use more gender-inclusive language (for instance, using the pronouns they/them and using gender-inclusive words like humankind, person, partner, spouse, businessperson, firefighter, chairperson, and police officer).
Many studies have shown that male generic language is not interpreted as truly gender-inclusive. Psychological research has shown that, in comparison to unbiased terms such as "they" and "humankind", masculine terms lead to male-biased mental imagery in the mind of both the listener and the communicator.
Three studies by Mykol Hamilton show that there is not only a male → people bias but also a people → male bias. In other words, a masculine bias remains even when people are exposed to only gender neutral language (although the bias is lessened). In two of her studies, half of the participants (after exposure to gender neutral language) had male-biased imagery but the rest of the participants displayed no gender bias at all. In her third study, only males showed a masculine-bias (after exposure to gender neutral language) – females showed no gender bias. Hamilton asserted that this may be due to the fact that males have grown up being able to think more easily than females of "any person" as generic "he," since "he" applies to them. Further, of the two options for neutral language, neutral language that explicitly names women (e.g., "he or she") reduces androcentrism more effectively than neutral language that makes no mention of gender whatsoever (e.g., "human").
Feminist anthropologist Sally Slocum argues that there has been a longstanding male bias in anthropological thought as evidenced by terminology used when referring to society, culture, and humankind. According to Slocum, "All too often the word 'man' is used in such an ambiguous fashion that it is impossible to decide whether it refers to males or just the human species in general, including both males and females."
Men's language will be judged as the 'norm' and anything that women do linguistically will be judged negatively against this. The speech of a socially subordinate group will be interpreted as linguistically inadequate against that used by socially dominant groups. It has been found that women use more hedges and qualifiers than men. Feminine speech has been viewed as more tentative and has been deemed powerless speech. This is based on the view that masculine speech is the standard.
Generic male symbols
On the Internet, many avatars are gender-neutral (such as an image of a smiley face). However, when an avatar is human and discernibly gendered, it usually appears to be a man.
Depictions of skeletons typically have male anatomy rather than female, even when the character of the skeleton is meant to be female.
Impacts
Women are more severely impacted by androcentric thinking. However, the ideology has substantial effects on the way of thinking of everyone within it. In a 2022 study, in which 3815 people were shown a selection of 256 images containing illusory faces (objects in which humans see faces), participants on average identified 90% of the objects as male.
See also
Honorary male
Male as norm
Male supremacy
Manosphere
Patriarchy
Phallocentrism
Trophy wife
References
Literature
Ginzberg, Ruth (1989), "Uncovering gynocentric science", in
Social epistemology
Feminist philosophy
Feminist terminology
Patriarchy
Philosophy of science
Feminism and society
Culture theory
Culture theory is the branch of comparative anthropology and semiotics that seeks to define the heuristic concept of culture in operational and/or scientific terms.
Overview
In the 19th century, "culture" was used by some to refer to a wide array of human activities, and by some others as a synonym for "civilization". In the 20th century, anthropologists began theorizing about culture as an object of scientific analysis. Some used it to distinguish human adaptive strategies from the largely instinctive adaptive strategies of animals, including the adaptive strategies of other primates and non-human hominids, whereas others used it to refer to symbolic representations and expressions of human experience, with no direct adaptive value. Both groups understood culture as being definitive of human nature.
According to many theories that have gained wide acceptance among anthropologists, culture exhibits the way that humans interpret their biology and their environment. According to this point of view, culture becomes such an integral part of human existence that it is the human environment, and most cultural change can be attributed to human adaptation to historical events. Moreover, given that culture is seen as the primary adaptive mechanism of humans and takes place much faster than human biological evolution, most cultural change can be viewed as culture adapting to itself.
Although most anthropologists try to define culture in such a way that it separates human beings from other animals, many human traits are similar to those of other animals, particularly the traits of other primates. For example, chimpanzees have big brains, but human brains are bigger. Similarly, bonobos exhibit complex sexual behaviour, but human beings exhibit much more complex sexual behaviours. As such, anthropologists often debate whether human behaviour is different from animal behaviour in degree rather than in kind; they must also find ways to distinguish cultural behaviour from sociological behaviour and psychological behaviour.
Acceleration and amplification of these various aspects of culture change have been explored by the complexity economist W. Brian Arthur. In his book The Nature of Technology, Arthur attempts to articulate a theory of change that considers that existing technologies (or material culture) are combined in unique ways that lead to novel technologies. Behind that novel combination is a purposeful effort arising in human motivation. This articulation would suggest that we are just beginning to understand what might be required for a more robust theory of culture and culture change, one that brings coherence across many disciplines and reflects an integrating elegance.
See also
Cultural studies
Culturology
Cultural behavior
Culture industry
Critical theory
Dual inheritance theory
Engaged theory
Intercultural relations
Popular culture studies
Semiotics of culture
Structuralism
Tartu–Moscow Semiotic School
References
Groh, Arnold A. Theories of Culture. Routledge, London. 2020.
Ogburn, William F. Social Change. 1922. Reprint. Dell, New York. 1966.
Rogers, G.F.C. The Nature of Engineering: A Philosophy of Technology. Palgrave Macmillan, London, 1983.
Schumpeter, Joseph. The Theory of Economic Development. 1912. English translation, 1934. Reprint. Harvard University Press, Cambridge, Massachusetts, 1966.
Cultural anthropology
Cultural studies
Theories
Amplified musculoskeletal pain syndrome
Amplified musculoskeletal pain syndrome (AMPS) is an illness characterized by notable pain intensity without an identifiable physical cause.
Characteristic symptoms include skin sensitivity to light touch, also known as allodynia. Associated symptoms may include changes associated with disuse, including changes in skin texture, color, and temperature, and changes in hair and nail growth. In up to 80% of cases, symptoms are associated with psychological trauma or psychological stress. AMPS may also follow physical injury or illness. Other associations with AMPS include Ehlers–Danlos syndrome, myositis, arthritis, and other rheumatologic diseases.
Treatment for notable pain intensity without identifiable pathophysiology can include psychotherapy to alleviate psychological stress. Physical therapists, psychologically informed physical therapists in particular, can coach people on exercises they can do every day at home. Clinicians who use this diagnosis sometimes apply it to children and adolescents. To date, the diagnosis is used more often in women.
Signs and symptoms
Amplified musculoskeletal pain is a syndrome, which is a set of characteristic symptoms and signs. Essentially, the syndrome is characterized by diffuse, ongoing, daily pain associated with relatively high levels of incapability and greater care-seeking behavior. The discomfort can occur in the skin (allodynia), abdomen, throat (dysphagia), head (headaches), and joints. There can be other somatic symptoms such as movement issues, dizziness, fatigue, stiffness, shakiness, coordination difficulty, swelling, fast heart rate, skin texture, color, or temperature changes, paresthesia, and changes in nail or hair growth. These symptoms are associated with symptoms of anxiety, depression, psychological trauma, and psychological stress.
Examination
Findings on examination can include factors associated with disuse, including swelling; changes in skin texture, color, and temperature; changes in nail and hair growth; muscle atrophy; and radiographic osteoporosis.
Causes
It is not possible to identify causes when there is no objectively verifiable pathophysiology. It is more accurate to describe when patients and clinicians might find this diagnosis appealing.
Psychological trauma
Psychological trauma is strongly associated with unexplained pain conceptualized as AMPS.
Physical injury
The combination of physical injury, such as a bone fracture or surgery, and over protectiveness and disuse can be referred to as complex regional pain syndrome, a type of AMPS that is isolated to one region of the body, such as a hand or foot.
Risk factors
Rationale
AMPS is theoretical rather than experimental. The amplified pain is conceptualized as incorrect signals from the sympathetic nervous system, also known as the "fight or flight" nerves. These signals trigger an involuntary response to pain, including constriction of blood vessels, which increases heart rate, muscle tone, and respiratory rate, and reduces blood flow to the muscles and bones, resulting in a buildup of waste products such as lactic acid. This buildup of waste products, together with the depletion of oxygen, results in the amplified pain associated with AMPS.
Classification
AMPS is classified into four different types, some of which may be divided into multiple sub-types: complex regional pain syndrome, diffuse idiopathic pain, intermittent amplified pain, and localized amplified pain.
Complex regional pain syndrome
Complex regional pain syndrome is a term for any amount of spontaneous regional pain lasting longer than the expected recovery time of an observed physical trauma or other injury. This includes two separate types: type I and type II. Type I CRPS, formerly known as reflex sympathetic dystrophy (RSD) or "Sudeck's atrophy", refers to CRPS without any observed nerve damage. Type II, formerly known as causalgia, refers to CRPS with observed nerve damage. This form, like other forms of AMPS, is known to be able to spread from one limb to another. 35% of people affected by CRPS report full-body impacts from the condition. Common symptoms of CRPS include musculoskeletal pain; swelling; changes to the skin texture, color, or temperature; and limited range of motion.
Diffuse idiopathic pain
This type of AMPS includes full-body pain. It is also known as juvenile fibromyalgia.
Intermittent amplified pain
This type of AMPS refers to amplified pain that varies in intensity over time.
Localized amplified pain
This refers to localized amplified pain without other symptoms. This type cannot include symptoms such as swelling; skin texture, color, or temperature changes; or perspiration. Observation of these symptoms implies the diagnosis of complex regional pain syndrome.
Diagnosis
Because awareness of AMPS is limited, the condition is frequently not diagnosed when symptoms first present; patients often receive multiple diagnoses of physical conditions before AMPS is diagnosed.
The condition is diagnosed through observation of various patient traits. This includes a full review of the patient's medical history, as well as ruling out any potential physical causes, such as a bone fracture. If no physical causes are observed, a diagnosis of AMPS becomes likely. Other common steps may include bone scans to detect possible signs of reduced blood flow; magnetic resonance imaging (MRI) to detect possible edema or muscle atrophy; nerve testing to look for pain or sensitivity issues; and X-rays to detect osteoporosis resulting from AMPS. While all of these tests can detect possible signs of AMPS, better outcomes are usually achieved with fewer tests and immediate treatment of AMPS, rather than searching for possible differential diagnoses.
Management
As AMPS is not a single disease, there is no one specific cure for it. Management of the condition is a process of patients learning to manage the abnormal amplified pain. This can include a combination of treating the cause(s) of the condition and managing its symptoms.
Medication
As psychological stress accounts for up to 80% of cases of AMPS, medication often involves typical antidepressants. These are also often prescribed for chronic pain due to their impact on serotonin and its role in muscular pain and control. Many providers also use an injectable medication for the treatment of AMPS. Opioid use is not recommended for most AMPS cases, as it can slow recovery and, in rare cases, make the condition worse.
Physical therapy
Physical treatment of AMPS is very common and has been shown to have long-term benefit. This includes physical therapy, massage therapy, and aerobic exercise. Physical therapy involves retraining the use of the affected limb, or of the body as a whole, after muscle atrophy, and relearning how to use the affected muscles with less amplified pain.
Massage therapy is used to desensitize the affected area or body so it can build a tolerance to pain. This can help with symptoms such as allodynia and hyperalgesia in AMPS. It can also indirectly help with other common symptoms by relieving the patient of pain that may have been causing psychological stress, depression, anxiety, or a number of physiological conditions, including headaches.
Psychotherapy
See also
Allodynia
Complex regional pain syndrome
Myalgia
Pediatrics
Rheumatology
References
Nerve, nerve root and plexus disorders
Syndromes of unknown causes
Chronic pain syndromes
Osteopathies
Social studies
In many countries' curricula, social studies is the combined study of humanities, the arts, and social sciences, mainly including history, economics, and civics. The term was first coined by American educators around the turn of the twentieth century as a catch-all for these subjects, as well as others which did not fit into the models of lower education in the United States, such as philosophy and psychology. One of the purposes of social studies, particularly at the level of higher education, is to integrate several disciplines, with their unique methodologies and special focuses of concentration, into a coherent field of subject areas that communicate with each other by sharing different academic "tools" and perspectives for deeper analysis of social problems and issues. Social studies aims to train students for informed, responsible participation in a diverse democratic society. The content of social studies provides the necessary background knowledge in order to develop values and reasoned opinions, and the objective of the field is civic competence. A related term is humanities, arts, and social sciences, abbreviated HASS.
In Australia
Human Society and Its Environment (HSIE) is a similar term used in the education system of the Australian state of New South Wales.
In the United Kingdom
The social studies field first emerged in the 19th century and grew further in the 20th century. Its foundations and building blocks were put into place in the 1820s in Great Britain before being integrated into the United States.
In the United States
The purpose of the subject itself was to promote social welfare and its development in the United States and other countries.
An early concept of social studies is found in John Dewey's philosophy of elementary and secondary education. Dewey valued the subject field of geography for uniting the study of human occupations with the study of the earth. He valued inquiry as a process of learning, as opposed to the absorption and recitation of facts, and he advocated for greater inquiry in elementary and secondary education, to mirror the kind of learning that takes place in higher education. His ideas are manifested to a large degree in the practice of inquiry-based learning and student-directed investigations implemented in contemporary social studies classrooms. Dewey valued the study of history for its social processes and application to contemporary social problems, rather than a mere narrative of human events. In this view, the study of history is made relevant to the modern student and is aimed at the improvement of society.
In the United States through the 1900s, social studies revolved around the study of geography, government, and history. In 1912, the Bureau of Education (not to be confused with its successor agency, the United States Department of Education) was tasked by then Secretary of the Interior Franklin Knight Lane with completely restructuring the American education system for the twentieth century. In response, the Bureau of Education, together with the National Education Association, created the Commission on the Reorganization of Secondary Education. The commission was made up of 16 committees (a 17th was established two years later, in 1916), each one tasked with the reform of a specific aspect of the American Education system. Notable among these was the Committee on Social Studies, which was created to consolidate and standardize various subjects that did not fit within normal school curricula into a new subject, to be called "the social studies".
In 1920, the work done by the Committee on Social Studies culminated in the publication and release of Bulletin No. 28 (also called "The Committee on Social Studies Report, 1916"). The 66-page bulletin, published and distributed by the Bureau of Education, is believed to be the first written work dedicated entirely to the subject. It was designed to introduce the concept to American educators and serve as a guide for the creation of nationwide curricula based around social studies. The bulletin proposed many ideas that were considered radical at the time, and it is regarded by many educators as one of the most controversial educational resources of the early twentieth century. Early proponents of the field of social studies include Harold O. Rugg and David Saville Muzzey.
In the years after its release, the bulletin received criticism from educators on its vagueness, especially in regards to the definition of Social Studies itself. Critics often point to Section 1 of the report, which vaguely defines Social Studies as "understood to be those whose subject matter relates directly to the organization and development of human society, and to man as a member of social groups."
The changes to the field of study did not fully materialize until the 1950s, when changes occurred at the state and national levels that dictated the curriculum and the preparation standards of its teachers. This led to a shift away from delivering large amounts of factual knowledge and toward a focus on key concepts, generalizations, and intellectual skills. Eventually, around the 1980s and 1990s, the development of computer technologies helped grow the publishing industry. Textbooks were created around the curriculum of each state, and this, coupled with the increase in political factors from globalization and growing economies, led to changes in the public and private education system. Then came the influx of national curriculum standards, from the increase of testing to the accountability of teachers and school districts, shifting social studies education to what it is today.
Branches of social studies
Social studies is not a subject unto itself, instead functioning as a field of study that incorporates many different subjects. It primarily includes history, economics, and civics; elements of geography, sociology, ethics, psychology, philosophy, anthropology, art, and literature are also incorporated into the field. The field of study focuses on human beings and their respective relationships, and many of its constituent subjects carry some form of social utility that benefits the field as a whole. The whole field is rarely taught at once; typically, a few subjects are taught in combination. Recognition of the field has, arguably, lessened the significance of history, with the exception of U.S. History. Initially, only history and civics were significant parts of the high school curriculum; economics became a significant part more recently. History and civics are similar in many ways, though they differ in class activity. There was some division between scholars on the topic of merging the subjects, though it was agreed that presenting a full picture of the world to students was extremely important.
College level
Social studies as a college major or concentration remains uncommon, although such a degree is offered at Harvard University. Harvard first introduced social studies as a formal field of study in 1960, through the work of a committee led by Stanley Hoffmann, today known as the Committee on Degrees in Social Studies. Hedge fund manager Bill Ackman, guitarist Tom Morello, and theater director Diane Paulus all concentrated in social studies during their time at Harvard.
Teaching social studies
To teach social studies in the United States, one must obtain a valid teaching certification for the given state and a valid subject-specific certification in social studies. The social studies certification process focuses on the core areas of history, economics, and civics, and sometimes psychology and sociology. Each state has specific requirements for the certification process, and teachers must follow the specific guidelines of the state in which they wish to teach.
Ten themes of social studies
According to the National Council for the Social Studies, ten themes represent the standards about human experience that underpin effective social studies as a subject of study from pre-K through 12th grade.
Culture
The study of culture and diversity allows learners to experience culture through all stages from learning to adaptation, shaping their respective lives and society itself. This social studies theme includes the principles of multiculturalism, a field of study in its own right that aims to achieve greater understanding between culturally diverse groups of students as well as including the experiences of culturally diverse learners in the curriculum.
Time, continuity, and change
Learners examine the past and the history of events that led to the development of the current world. Ultimately, learners will examine the beliefs and values of the past in order to apply them to the present. Learners build their inquiry skills in the study of history.
People, places, and environment
Learners will understand who they are and the environment and places that surround them. This theme gives learners spatial views and perspectives of the world. It is largely contained in the field of geography, which includes the study of humanity's connections with resources, instruction in reading maps, and techniques and perspectives for analyzing information about human populations and the Earth's systems.
Individual development and identity
Learners will understand their own personal identity, development, and actions. Through this, they will be able to understand the influences that surround them.
Individuals, groups, and institutions
Learners will understand how groups and institutions influence people's everyday lives. They will be able to understand how groups and institutions are formed, maintained, and changed.
Power, authority, and governance
Learners will understand the forms of power, authority, and governance from historical to contemporary times. They will become familiar with the purposes of power, and with the limits placed on power in society.
Production, distribution, and consumption
Learners will understand the organization of goods and services, ultimately preparing the learner for the study of greater economic issues. The study of economic issues, and with it, financial literacy, is intended to increase students' knowledge and skills when it comes to participating in the economy as workers, producers, and consumers.
Science, technology, and society
Learners will understand the relationship between science, technology, and society, including how these have advanced through the years and the impacts they have had.
Global connections
Learners will understand the interactive environment of global interdependence and will understand the global connections that shape the everyday world.
Civic ideals and practices
Learners will understand the rights and responsibilities of citizens and learn to grow in their appreciation of active citizenship. Ultimately, this helps their growth as full participants in society. Some of the values that civics courses strive to teach are an understanding of the right to privacy, an appreciation for diversity in American society, and a disposition to work through democratic procedures. One of the curricular tools used in the field of civics education is a simulated congressional hearing. Social studies educators and scholars distinguish between different levels of civic engagement, from the minimal engagement or non-engagement of the legal citizen to the most active and responsible level of the transformative citizen. Within social studies, the field of civics aims to educate and develop learners into transformative citizens who not only participate in a democracy, but challenge the status quo in the interest of social justice.
References
External links
The Social Studies in Secondary Education
National Council for the Social Studies
Changes in Social Studies
History in Social Studies
Social Civics, published in New York by The MacMillan Company, 1922.
Education by subject
Humanities education
Social sciences
Theory-theory

The theory-theory (or theory theory) is a scientific theory relating to the human development of understanding about the outside world. This theory asserts that individuals hold a basic or 'naïve' theory of psychology ("folk psychology") to infer the mental states of others, such as their beliefs, desires or emotions. This information is used to understand the intentions behind that person's actions or predict future behavior. The term 'perspective taking' is sometimes used to describe how one makes inferences about another person's inner state using theoretical knowledge about the other's situation.
This approach has become popular with psychologists as it gives a basis from which to explore human social understanding. Beginning in the mid-1980s, several influential developmental psychologists began advocating the theory theory: the view that humans learn through a process of theory revision closely resembling the way scientists propose and revise theories. Children observe the world, and in doing so, gather data about the world's true structure. As more data accumulates, children can revise their naive theories accordingly. Children can also use these theories about the world's causal structure to make predictions, and possibly even test them out. This concept is described as the 'Child Scientist' theory, proposing that a series of personal scientific revolutions are required for the development of theories about the outside world, including the social world.
In recent years, proponents of Bayesian learning have begun describing the theory theory in a precise, mathematical way.
The concept of Bayesian learning is rooted in the assumption that children and adults learn through a process of theory revision; that is, they hold prior beliefs about the world but, when receiving conflicting data, may revise these beliefs depending upon their strength.
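This belief-revision dynamic is easy to make concrete. The sketch below applies Bayes' rule to a toy naive theory; the hypotheses, probabilities, and the balloon observation are invented for illustration and are not drawn from any particular study.

```python
# Minimal sketch of Bayesian belief revision: a "naive theory" is a prior
# over hypotheses, and conflicting data shifts belief in proportion to the
# prior's strength. All numbers are illustrative.

def bayes_update(prior, likelihoods):
    """Return the posterior over hypotheses given one observation.

    prior       -- dict mapping hypothesis -> prior probability
    likelihoods -- dict mapping hypothesis -> P(observation | hypothesis)
    """
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# A child's naive theory: "unsupported objects always fall" vs. "sometimes float".
belief = {"always_falls": 0.95, "sometimes_floats": 0.05}

# A conflicting observation (a helium balloon rises) is far more likely
# under the rival hypothesis.
observation = {"always_falls": 0.01, "sometimes_floats": 0.60}

# A single surprising datum dents a strong prior; repeated data overturn it.
for trial in range(3):
    belief = bayes_update(belief, observation)
    print(trial + 1, {h: round(p, 3) for h, p in belief.items()})
```

Running the loop shows the qualitative point made above: after one conflicting observation the strong prior survives, weakened; after repeated conflicting observations the naive theory is effectively abandoned.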
Child development
Theory-theory states that children naturally attempt to construct theories to explain their observations. As all humans do, children seek to find explanations that help them understand their surroundings. They learn through their own experiences as well as through their observations of others' actions and behaviors.
Through their growth and development, children continue to form intuitive theories, revising and altering them as they come across new results and observations. Several developmentalists have conducted research on the progression of these theories, mapping out when children start to form theories about certain subjects, such as the biological and physical world, social behaviors, and others' thoughts and minds ("theory of mind"), although controversy remains over when these shifts in theory-formation occur.
As part of their investigative process, children often ask questions, frequently posing "Why?" to adults, not seeking a technical and scientific explanation but instead seeking to investigate the relation of the concept in question to themselves, as part of their egocentric view. In a study in which Mexican-American mothers were interviewed over a two-week period about the types of questions their preschool children ask, researchers discovered that the children asked their parents more about biology and social behaviors than about nonliving objects and artifacts. The children's questions were mostly ambiguous, leaving unclear whether they desired an explanation of purpose or of cause. Although parents usually answer with a causal explanation, some children find the answers and explanations inadequate for their understanding and, as a result, begin to create their own theories; this is particularly evident in children's understanding of religion.
This theory also plays a part in Albert Bandura's social learning theory, also called modeling. Bandura claims that humans, as social beings, learn and develop by observing others' behavior and imitating them. In this process of social learning, prior to imitation, children will first pose inquiries and investigate why adults act and behave in a particular way. Afterwards, if the adult succeeds at the task, the child will likely copy the adult, but if the adult fails, the child will choose not to follow the example.
Comparison with other theories
Theory of mind (ToM)
Theory-theory is closely related to theory of mind (ToM), which concerns mental states of people, but differs from ToM in that the full scope of theory-theory also concerns mechanical devices or other objects, beyond just thinking about people and their viewpoints.
Simulation theory
In the scientific debate over mindreading, theory-theory is often contrasted with simulation theory, an alternative theory which suggests that simulation, or cognitive empathy, is integral to our understanding of others.
References
Cognitive psychology
Child development
Educational assessment

Educational assessment or educational evaluation is the systematic process of documenting and using empirical data on students' knowledge, skills, attitudes, aptitudes, and beliefs to refine programs and improve student learning. Assessment data can be obtained by examining student work directly to assess the achievement of learning outcomes, or from data from which one can make inferences about learning. Assessment is often used interchangeably with test but is not limited to tests. Assessment can focus on the individual learner, the learning community (class, workshop, or other organized group of learners), a course, an academic program, the institution, or the educational system as a whole (also known as granularity). The word "assessment" came into use in an educational context after the Second World War.
As a continuous process, assessment establishes measurable student learning outcomes, provides a sufficient amount of learning opportunities to achieve these outcomes, implements a systematic way of gathering, analyzing and interpreting evidence to determine how well student learning matches expectations, and uses the collected information to give feedback on the improvement of students' learning. Assessment is an important aspect of educational process which determines the level of accomplishments of students.
The final purpose of assessment practices in education depends on the theoretical framework of the practitioners and researchers, their assumptions and beliefs about the nature of human mind, the origin of knowledge, and the process of learning.
Types
The term assessment is generally used to refer to all activities teachers use to help students learn and to gauge student progress. Assessment can be divided, for the sake of convenience, using the following categorizations:
Placement, formative, summative and diagnostic assessment
Objective and subjective
Referencing (criterion-referenced, norm-referenced, and ipsative (forced-choice))
Informal and formal
Internal and external
Placement, formative, summative and diagnostic
Assessment is often divided into initial, formative, and summative categories for the purpose of considering different objectives for assessment practices.
(1) Placement assessment – Placement evaluation may be used to place students, according to prior achievement or level of knowledge or personal characteristics, at the most appropriate point in an instructional sequence, in a unique instructional strategy, or with a suitable teacher. It is conducted through placement testing, i.e. the tests that colleges and universities use to assess college readiness and place students into their initial classes. Placement evaluation, also referred to as pre-assessment, initial assessment, or threshold knowledge test (TKT), is conducted before instruction or intervention to establish a baseline from which individual student growth can be measured. This type of assessment is used to determine a student's skill level in the subject, and it can also help the teacher to explain the material more efficiently. These assessments are generally not graded.
(2) Formative assessment – This is generally carried out throughout a course or project. It is also referred to as "educative assessment," which is used to help learning. In an educational setting, a formative assessment might be a teacher (or peer) or the learner (e.g., through a self-assessment) providing feedback on a student's work, and would not necessarily be used for grading purposes. Formative assessments can take the form of diagnostic tests, standardized tests, quizzes, oral questions, or draft work. Formative assessments are carried out concurrently with instruction, and the results may count. The aim of formative assessment is to see whether the students understand the instruction before a summative assessment is given.
(3) Summative assessment – This is generally carried out at the end of a course or project. In an educational setting, summative assessments are typically used to assign students a course grade, and are evaluative. Summative assessments are made to summarize what the students have learned, in order to determine whether they understand the subject matter well. This type of assessment is typically graded (e.g. pass/fail, 0–100) and can take the form of tests, exams or projects. Summative assessments are often used to determine whether a student has passed or failed a class. A criticism of summative assessments is that they are reductive, and learners discover how well they have acquired knowledge too late for it to be of use.
(4) Diagnostic assessment – Finally, diagnostic assessment focuses on the whole range of difficulties that occurred during the learning process.
Jay McTighe and Ken O'Connor proposed seven practices for effective learning. One is showing the criteria of the evaluation before the test; another is the importance of pre-assessment to know what a student's skill levels are before giving instruction. Giving ample feedback and encouragement are other practices.
Educational researcher Robert Stake explains the difference between formative and summative assessment with the following analogy: when the cook tastes the soup, that's formative; when the guests taste the soup, that's summative.
Summative and formative assessment are often referred to in a learning context as assessment of learning and assessment for learning respectively. Assessment of learning is generally summative in nature and intended to measure learning outcomes and report those outcomes to students, parents and administrators. Assessment of learning mostly occurs at the conclusion of a class, course, semester or academic year while assessment for learning is generally formative in nature and is used by teachers to consider approaches to teaching and next steps for individual learners and the class.
A common form of formative assessment is diagnostic assessment. Diagnostic assessment measures a student's current knowledge and skills for the purpose of identifying a suitable program of learning. Self-assessment is a form of diagnostic assessment which involves students assessing themselves.
Forward-looking assessment asks those being assessed to consider themselves in hypothetical future situations.
Performance-based assessment is similar to summative assessment, as it focuses on achievement. It is often aligned with the standards-based education reform and outcomes-based education movement. Though ideally they are significantly different from a traditional multiple-choice test, they are most commonly associated with standards-based assessments, which use free-form responses to standard questions scored by human scorers on a standards-based scale (meeting, falling below, or exceeding a performance standard) rather than being ranked on a curve. A well-defined task is identified, and students are asked to create, produce or do something, often in settings that involve real-world application of knowledge and skills. Proficiency is demonstrated by providing an extended response. Performance formats are further classified into products and performances. The performance may result in a product, such as a painting, portfolio, paper or exhibition, or it may consist of a performance, such as a speech, athletic skill, musical recital or reading.
Objective and subjective
Assessment (either summative or formative) is often categorized as either objective or subjective. Objective assessment is a form of questioning which has a single correct answer. Subjective assessment is a form of questioning which may have more than one correct answer (or more than one way of expressing the correct answer). There are various types of objective and subjective questions. Objective question types include true/false answers, multiple choice, multiple-response and matching questions, while subjective questions include extended-response questions and essays. Objective assessment is well suited to the increasingly popular computerized or online assessment format.
Some have argued that the distinction between objective and subjective assessments is neither useful nor accurate because, in reality, there is no such thing as "objective" assessment. In fact, all assessments are created with inherent biases built into decisions about relevant subject matter and content, as well as cultural (class, ethnic, and gender) biases.
Basis of comparison
Test results can be compared against an established criterion, against the performance of other students, or against previous performance; a short code sketch contrasting these three schemes follows the list:
(5) Criterion-referenced assessment, typically using a criterion-referenced test, as the name implies, occurs when candidates are measured against defined (and objective) criteria. Criterion-referenced assessment is often, but not always, used to establish a person's competence (whether he/she can do something). The best-known example of criterion-referenced assessment is the driving test, when learner drivers are measured against a range of explicit criteria (such as "Not endangering other road users").
(6) Norm-referenced assessment (colloquially known as "grading on the curve"), typically using a norm-referenced test, is not measured against defined criteria. This type of assessment is relative to the student body undertaking the assessment; it is effectively a way of comparing students. The IQ test is the best-known example of norm-referenced assessment. Many entrance tests (to prestigious schools or universities) are norm-referenced, permitting a fixed proportion of students to pass ("passing" in this context means being accepted into the school or university rather than an explicit level of ability). This means that standards may vary from year to year, depending on the quality of the cohort; criterion-referenced assessment does not vary from year to year (unless the criteria change).
(7) Ipsative assessment is self-comparison, either within the same domain over time or across different domains within the same student.
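As flagged above, a toy sketch can make the contrast concrete; the names, scores, cutoff, and the top-half passing rule below are all hypothetical:

```python
# Toy contrast of the three referencing schemes on the same raw scores.
# Names, scores, and the cutoff are hypothetical.

scores = {"Ana": 62, "Ben": 74, "Cal": 81, "Dee": 90}

# Criterion-referenced: pass/fail against a fixed, explicit criterion,
# so the standard does not vary with the cohort.
CRITERION = 70
criterion_result = {name: score >= CRITERION for name, score in scores.items()}

# Norm-referenced: results depend on the rest of the cohort, e.g. only the
# top half "passes" regardless of absolute ability (grading on the curve).
ranked = sorted(scores, key=scores.get, reverse=True)
norm_result = {name: ranked.index(name) < len(ranked) // 2 for name in ranked}

# Ipsative: each student is compared against their own earlier performance.
previous = {"Ana": 58, "Ben": 77, "Cal": 75, "Dee": 90}
ipsative_result = {name: scores[name] - previous[name] for name in scores}

print(criterion_result)  # {'Ana': False, 'Ben': True, 'Cal': True, 'Dee': True}
print(norm_result)       # top half of this cohort: Dee and Cal
print(ipsative_result)   # positive numbers show personal improvement
```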
Informal and formal
Assessment can be either formal or informal. Formal assessment usually implies a written document, such as a test, quiz, or paper. A formal assessment is given a numerical score or grade based on student performance, whereas an informal assessment does not contribute to a student's final grade. An informal assessment usually occurs in a more casual manner and may include observation, inventories, checklists, rating scales, rubrics, performance and portfolio assessments, participation, peer and self-evaluation, and discussion.
Internal and external
Internal assessment is set and marked by the school (i.e. teachers); students get the mark and feedback regarding the assessment. External assessment is set by the governing body and is marked by non-biased personnel; some external assessments give much more limited feedback in their marking. However, in tests such as Australia's NAPLAN, the criteria addressed by students are given detailed feedback in order for their teachers to address and compare the student's learning achievements and also to plan for the future.
Standards of quality
In general, high-quality assessments are considered those with a high level of reliability and validity. Other general principles are practicality, authenticity and washback.
Reliability
Reliability relates to the consistency of an assessment. A reliable assessment is one that consistently achieves the same results with the same (or similar) cohort of students. Various factors affect reliability—including ambiguous questions, too many options within a question paper, vague marking instructions and poorly trained markers. Traditionally, the reliability of an assessment is based on the following:
Temporal stability: Performance on a test is comparable on two or more separate occasions.
Form equivalence: Performance among examinees is equivalent on different forms of a test based on the same content.
Internal consistency: Responses on a test are consistent across questions. For example: In a survey that asks respondents to rate attitudes toward technology, consistency would be expected in responses to the following questions:
"I feel very negative about computers in general."
"I enjoy using computers."
The reliability of a measurement x can also be defined quantitatively as:

$$R_x = \frac{\sigma_T^2}{\sigma_x^2}$$

where $R_x$ is the reliability in the observed (test) score, x; $\sigma_T^2$ and $\sigma_x^2$ are the variability in 'true' (i.e., candidate's innate performance) and measured test scores respectively. $R_x$ can range from 0 (completely unreliable) to 1 (completely reliable).
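As a rough numerical illustration (with an arbitrary score distribution and arbitrary error levels), one can simulate observed scores as true scores plus random measurement error and recover the ratio above:

```python
# Illustration of R_x = var(true) / var(observed) via simulated test scores.
# A candidate's observed score is their innate ("true") score plus random
# measurement error; growing error variance drives reliability toward 0.
import random
from statistics import pvariance

random.seed(1)
true_scores = [random.gauss(70, 10) for _ in range(10_000)]

for error_sd in (0, 5, 15):
    observed = [t + random.gauss(0, error_sd) for t in true_scores]
    r = pvariance(true_scores) / pvariance(observed)
    print(f"error sd {error_sd:>2}: reliability ~ {r:.2f}")
# error sd 0 gives R ~ 1 (completely reliable); with error sd 15 the
# estimate falls to roughly 100 / (100 + 225) ~ 0.31.
```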
There are four types of reliability factors: student-related (personal problems, sickness, or fatigue); rater-related (bias and subjectivity); test administration-related (the conditions of the test-taking process); and test-related (the nature of the test itself).
Validity
Valid assessment is one that measures what it is intended to measure. For example, it would not be valid to assess driving skills through a written test alone. A more valid way of assessing driving skills would be through a combination of tests that help determine what a driver knows, such as through a written test of driving knowledge, and what a driver is able to do, such as through a performance assessment of actual driving. Teachers frequently complain that some examinations do not properly assess the syllabus upon which the examination is based; they are, effectively, questioning the validity of the exam.
Validity of an assessment is generally gauged through examination of evidence in the following categories:
Content validity – Does the content of the test measure stated objectives?
Criterion validity – Do scores correlate to an outside reference? (ex: Do high scores on a 4th grade reading test accurately predict reading skill in future grades?)
Construct validity – Does the assessment correspond to other significant variables? (ex: Do ESL students consistently perform differently on a writing exam than native English speakers?)
Others are:
consequential validity
face validity
A good assessment has both validity and reliability, plus the other quality attributes noted above for a specific context and purpose. In practice, an assessment is rarely totally valid or totally reliable. A ruler which is marked wrongly will always give the same (wrong) measurements. It is very reliable, but not very valid. Asking random individuals to tell the time without looking at a clock or watch is sometimes used as an example of an assessment which is valid, but not reliable. The answers will vary between individuals, but the average answer is probably close to the actual time. In many fields, such as medical research, educational testing, and psychology, there will often be a trade-off between reliability and validity. A history test written for high validity will have many essay and fill-in-the-blank questions. It will be a good measure of mastery of the subject, but difficult to score completely accurately. A history test written for high reliability will be entirely multiple choice. It isn't as good at measuring knowledge of history, but can easily be scored with great precision. We may generalize from this. The more reliable our estimate is of what we purport to measure, the less certain we are that we are actually measuring that aspect of attainment.
It is well to distinguish between "subject-matter" validity and "predictive" validity. The former, used widely in education, predicts the score a student would get on a similar test but with different questions. The latter, used widely in the workplace, predicts performance. Thus, a subject-matter-valid test of knowledge of driving rules is appropriate while a predictively valid test would assess whether the potential driver could follow those rules.
Practicality
This principle refers to the time and cost constraints during the construction and administration of an assessment instrument, meaning that the test should be economical to provide. The format of the test should be simple to understand, and solving it should remain within a suitable time. The test should generally be simple to administer, and its scoring procedure should be specific and time-efficient.
Authenticity
An assessment instrument is authentic when it is contextualized, contains natural language and meaningful, relevant, and interesting topics, and replicates real-world experiences.
Washback
This principle refers to the consequences of an assessment on teaching and learning within classrooms. Washback can be positive or negative. Positive washback refers to the desired effects of a test, while negative washback refers to its negative consequences. In order to achieve positive washback, instructional planning can be used.
Evaluation standards
In the field of evaluation, and in particular educational evaluation in North America, the Joint Committee on Standards for Educational Evaluation has published three sets of standards for evaluations. The Personnel Evaluation Standards were published in 1988, The Program Evaluation Standards (2nd edition) were published in 1994, and The Student Evaluation Standards were published in 2003.
Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. For example, the student accuracy standards help ensure that student evaluations will provide sound, accurate, and credible information about student learning and performance.
In the UK, an award in Training, Assessment and Quality Assurance (TAQA) is available to assist staff in learning and developing good practice in relation to educational assessment in adult, further and work-based education and training contexts.
Grade inflation
Due to grade inflation, standardized tests can have higher validity than unstandardized exam scores. Recently increasing graduation rates can be partially attributed to grade inflation.
Summary table of the main theoretical frameworks
The following table summarizes the main theoretical frameworks behind almost all the theoretical and research work, and the instructional practices in education (one of them being, of course, the practice of assessment). These different frameworks have given rise to interesting debates among scholars.
Controversy
Concerns over how best to apply assessment practices across public school systems have largely focused on questions about the use of high-stakes testing and standardized tests, often used to gauge student progress, teacher quality, and school-, district-, or statewide educational success.
No Child Left Behind
For most researchers and practitioners, the question is not whether tests should be administered at all—there is a general consensus that, when administered in useful ways, tests can offer useful information about student progress and curriculum implementation, as well as offering formative uses for learners. The real issue, then, is whether testing practices as currently implemented can provide these services for educators and students.
President Bush signed the No Child Left Behind Act (NCLB) on January 8, 2002. The NCLB Act reauthorized the Elementary and Secondary Education Act (ESEA) of 1965. President Johnson signed the ESEA to help fight the War on Poverty and to help fund elementary and secondary schools. President Johnson's goal was to emphasize equal access to education and establish high standards and accountability. The NCLB Act required states to develop assessments in basic skills. To receive federal school funding, states had to give these assessments to all students at selected grade levels.
In the U.S., the No Child Left Behind Act mandates standardized testing nationwide. These tests align with state curriculum and link teacher, student, district, and state accountability to the results of these tests. Proponents of NCLB argue that it offers a tangible method of gauging educational success, holding teachers and schools accountable for failing scores, and closing the achievement gap across class and ethnicity.
Opponents of standardized testing dispute these claims, arguing that holding educators accountable for test results leads to the practice of "teaching to the test." Additionally, many argue that the focus on standardized testing encourages teachers to equip students with a narrow set of skills that enhance test performance without actually fostering a deeper understanding of subject matter or key principles within a knowledge domain.
High-stakes testing
The assessments which have caused the most controversy in the U.S. are the use of high school graduation examinations, which are used to deny diplomas to students who have attended high school for four years, but cannot demonstrate that they have learned the required material when writing exams. Opponents say that no student who has put in four years of seat time should be denied a high school diploma merely for repeatedly failing a test, or even for not knowing the required material.
High-stakes tests have been blamed for causing sickness and test anxiety in students and teachers, and for teachers choosing to narrow the curriculum towards what the teacher believes will be tested. In an exercise designed to make children comfortable about testing, a Spokane, Washington newspaper published a picture of a monster that feeds on fear. The published image is purportedly the response of a student who was asked to draw a picture of what she thought of the state assessment.
Other critics, such as Washington State University's Don Orlich, question the use of test items far beyond standard cognitive levels for students' age.
Compared to portfolio assessments, simple multiple-choice tests are much less expensive, less prone to disagreement between scorers, and can be scored quickly enough to be returned before the end of the school year. Standardized tests (all students take the same test under the same conditions) often use multiple-choice tests for these reasons. Orlich criticizes the use of expensive, holistically graded tests, rather than inexpensive multiple-choice "bubble tests", to measure the quality of both the system and individuals for very large numbers of students. Other prominent critics of high-stakes testing include Fairtest and Alfie Kohn.
The use of IQ tests has been banned in some states for educational decisions, and norm-referenced tests, which rank students from "best" to "worst", have been criticized for bias against minorities. Most education officials support criterion-referenced tests (each individual student's score depends solely on whether he answered the questions correctly, regardless of whether his neighbors did better or worse) for making high-stakes decisions.
21st century assessment
It has been widely noted that with the emergence of social media and Web 2.0 technologies and mindsets, learning is increasingly collaborative and knowledge increasingly distributed across many members of a learning community. Traditional assessment practices, however, focus in large part on the individual and fail to account for knowledge-building and learning in context. As researchers in the field of assessment consider the cultural shifts that arise from the emergence of a more participatory culture, they will need to find new methods of applying assessments to learners.
Large-scale learning assessment
Large-scale learning assessments (LSLAs) are system-level assessments that provide a snapshot of learning achievement for a group of learners in a given year, and in a limited number of domains. They are often categorized as national or cross-national assessments and draw attention to issues related to levels of learning and determinants of learning, including teacher qualification; the quality of school environments; parental support and guidance; and social and emotional health in and outside schools.
Assessment in a democratic school
Schools following the Sudbury model of democratic education do not perform assessments or offer evaluations, transcripts, or recommendations. They assert that they do not rate people and that school is not a judge; comparing students to each other, or to some standard that has been set, is for them a violation of the student's right to privacy and to self-determination. Students decide for themselves how to measure their progress as self-starting learners, as a process of self-evaluation: real lifelong learning and the proper educational assessment for the 21st century, they allege.
According to Sudbury schools, this policy does not cause harm to their students as they move on to life outside the school. However, they admit that it makes the process more difficult, but that such hardship is part of the students' learning to make their own way, set their own standards and meet their own goals.
The no-grading and no-rating policy helps to create an atmosphere free of competition among students or battles for adult approval, and encourages a positive cooperative environment amongst the student body.
The final stage of a Sudbury education, should the student choose to take it, is the graduation thesis. Each student writes on the topic of how they have prepared themselves for adulthood and entering the community at large. This thesis is submitted to the Assembly, who reviews it. The final stage of the thesis process is an oral defense given by the student in which they open the floor for questions, challenges and comments from all Assembly members. At the end, the Assembly votes by secret ballot on whether or not to award a diploma.
Assessing ELL students
A major concern with the use of educational assessments is the overall validity, accuracy, and fairness when it comes to assessing English language learners (ELL). The majority of assessments within the United States have normative standards based on the English-speaking culture, which does not adequately represent ELL populations. Consequently, it would in many cases be inaccurate and inappropriate to draw conclusions from ELL students' normative scores. Research shows that the majority of schools do not appropriately modify assessments in order to accommodate students from unique cultural backgrounds. This has resulted in the over-referral of ELL students to special education, causing them to be disproportionately represented in special education programs. Although some may see this inappropriate placement in special education as supportive and helpful, research has shown that inappropriately placed students actually regressed in progress.
It is often necessary to utilize the services of a translator in order to administer the assessment in an ELL student's native language; however, there are several issues when translating assessment items. One issue is that translations can frequently suggest a correct or expected response, changing the difficulty of the assessment item. Additionally, the translation of assessment items can sometimes distort the original meaning of the item. Finally, many translators are not qualified or properly trained to work with ELL students in an assessment situation. All of these factors compromise the validity and fairness of assessments, making the results unreliable. Nonverbal assessments have been shown to be less discriminatory for ELL students; however, some still present cultural biases within the assessment items.
When considering an ELL student for special education, the assessment team should integrate and interpret all of the information collected in order to ensure an unbiased conclusion. The decision should be based on multidimensional sources of data, including teacher and parent interviews, as well as classroom observations. Decisions should take the student's unique cultural, linguistic, and experiential background into consideration, and should not be based strictly on assessment results.
Universal screening
Assessment can be associated with disparity when students from traditionally underrepresented groups are excluded from testing needed for access to certain programs or opportunities, as is the case for gifted programs. One way to combat this disparity is universal screening, which involves testing all students (such as for giftedness) instead of testing only some students based on teachers' or parents' recommendations. Universal screening results in large increases in traditionally underserved groups (such as Black, Hispanic, poor, female, and ELLs) identified for gifted programs, without the standards for identification being modified in any way.
See also
References
Sources
Further reading
American Educational Research Association, American Psychological Association, & National Council for Measurement in Education. (2014). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.
Brown, G. T. L. (2018). Assessment of Student Achievement. New York: Routledge.
Carless, David. Excellence in University Assessment: Learning from Award-Winning Practice. London: Routledge, 2015.
Klinger, D., McDivitt, P., Howard, B., Rogers, T., Munoz, M., & Wylie, C. (2015). Classroom Assessment Standards for PreK-12 Teachers: Joint Committee on Standards for Educational Evaluation.
Kubiszyn, T., & Borich, G. D. (2012). Educational Testing and Measurement: Classroom Application and Practice (10th ed.). New York: John Wiley & Sons.
Miller, D. M., Linn, R. L., & Gronlund, N. E. (2013). Measurement and Assessment in Teaching (11th ed.). Boston, MA: Pearson.
National Research Council. (2001). Knowing What Students Know: The Science and Design of Educational Assessment. Washington, DC: National Academy Press.
Nitko, A. J. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, N.J.: Merrill.
Phelps, Richard P., Ed. Correcting Fallacies about Educational and Psychological Testing. Washington, DC: American Psychological Association, 2008.
Phelps, Richard P., Standardized Testing Primer. New York: Peter Lang, 2007.
Russell, M. K., & Airasian, P. W. (2012). Classroom Assessment: Concepts and Applications (7th ed.). New York: McGraw Hill.
Shepard, L. A. (2006). Classroom assessment. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 623–646). Westport, CT: Praeger.
Academic transfer
Evaluation methods
School terminology
Standards-based education
Thought
Development studies

Development studies is an interdisciplinary branch of social science. It is offered as a specialized master's degree at a number of reputable universities around the world. It has grown in popularity as a subject of study since the early 1990s, and has been most widely taught and researched in developing countries and countries with a colonial history, such as the UK, where the discipline originated. Students of development studies often choose careers in international organisations such as the United Nations, World Bank, non-governmental organisations (NGOs), media and journalism houses, private sector development consultancy firms, corporate social responsibility (CSR) bodies and research centers.
Professional bodies
Throughout the world, a number of professional bodies for development studies have been founded:
Europe: European Association of Development Research and Training Institutes (EADI)
Latin America: Consejo Latinoamericano de Ciencias Sociales (CLACSO)
Asia: Asian Political and International Studies Association (APISA)
Africa: Council for the Development of Social Science Research in Africa (CODESRIA) and Organization for Social Science Research in Eastern and Southern Africa (OSSREA)
Arabic world: Arab Institutes and Centers for Economic and Social Development Research (AICARDES)
The common umbrella organisation of these associations is the Inter-regional Coordinating Committee of Development Associations (ICCDA). In the UK and Ireland, the Development Studies Association is a major source of information for research and study in development studies. Its mission is to connect and promote those working on development research.
Disciplines of development studies
Development issues include:
Adult education
Area studies
Anthropology
Community development
Demography
Development aid
Development communication
Development theory
Diaspora studies
Ecology
Economic development
Economic history
Environmental studies
Geography
Gender studies
Governance
History of economic thought
Human rights
Human security
Indigenous rights
Industrial relations
Industrialization
International business
International development
International relations
Journalism
Media studies
Migration studies
Partnership
Peace and conflict studies
Pedagogy
Philosophy
Political philosophy
Population studies
Postcolonialism
Psychology
Public administration
Public health
Queer studies
Rural development
Sociology
Social policy
Social development
Social work
Sustainable development
Urban studies
Women's studies
History
The emergence of development studies as an academic discipline in the second half of the twentieth century is in large part due to increasing concern about economic prospects for the third world after decolonisation. In the immediate post-war period, development economics, a branch of economics, arose out of previous studies in colonial economics. By the 1960s, an increasing number of development economists felt that economics alone could not fully address issues such as political effectiveness and educational provision. Development studies arose as a result of this, initially aiming to integrate ideas of politics and economics. Since then, it has become an increasingly inter- and multi-disciplinary subject, encompassing a variety of social scientific fields. In recent years, the use of political economy analysis (the application of the analytical techniques of economics to assess and explain political and social factors that either enhance or limit development) has become increasingly widespread as a way of explaining the success or failure of reform processes. The era of modern development is commonly deemed to have commenced with the inauguration speech of Harry S. Truman in 1949. In Point Four of his speech, with reference to Latin America and other poor nations, he said:
More than half the people of the world are living in conditions approaching misery. Their food is inadequate. They are victims of disease. Their economic life is primitive and stagnant. Their poverty is a handicap and a threat both to them and to more prosperous areas. For the first time in history, humanity possesses the knowledge and the skill to relieve the suffering of these people.
But development studies has since also taken an interest in lessons of past development experiences of Western countries. More recently, the emergence of human security – a new, people-oriented approach to understanding and addressing global security threats – has led to a growing recognition of a relationship between security and development. Human security argues that inequalities and insecurity in one state or region have consequences for global security and that it is thus in the interest of all states to address underlying development issues. This relationship with studies of human security is but one example of the interdisciplinary nature of development studies.
Global research cooperation between researchers from countries in the Global North and the Global South, so-called North-South research partnerships, allows development studies to consider more diverse perspectives on development and other strongly value-driven issues, and can thus contribute new findings to the field of research.
See also
Global South Development Magazine
City development index
Colonization
Community development
Development (disambiguation)
Development Cooperation Issues
Development Cooperation Stories
Development Cooperation Testimonials
Economic development
Human rights
Human security
Industrialization
International development
North-South research partnerships
Postdevelopment theory
Right to development
Social development
Social work
Sustainable development
World-systems theory
References
Further reading
Breuer, Martin. "Development" (2015). University Bielefeld: Center for InterAmerican Studies.
Pradella, Lucia and Marois, Thomas, eds. (2015) Polarizing Development: Alternatives to Neoliberalism and the Crisis. Pluto Press.
Sachs, Wolfgang, ed. (1992) The Development Dictionary: A Guide to Knowledge as Power. Zed Books.
External links
Global South Development Magazine
Development Studies Internet Resources
Studying Development – International Development Studies course directory
Dual process theory

In psychology, a dual process theory provides an account of how thought can arise in two different ways, or as a result of two different processes. Often, the two processes consist of an implicit (automatic), unconscious process and an explicit (controlled), conscious process. Verbalized explicit processes or attitudes and actions may change with persuasion or education, though implicit processes or attitudes usually take a long time to change with the forming of new habits. Dual process theories can be found in social, personality, cognitive, and clinical psychology. They have also been linked with economics via prospect theory and behavioral economics, and increasingly in sociology through cultural analysis.
History
The foundations of dual process theory are probably ancient. Spinoza (1632–1677) distinguished between the passions and reason. William James (1842–1910) believed that there were two different kinds of thinking: associative and true reasoning. James theorized that empirical thought was used for things like art and design work. For James, images and thoughts would come to mind from past experiences, providing ideas of comparison or abstraction. He claimed that associative knowledge came only from past experiences, describing it as "only reproductive". James believed that true reasoning could enable overcoming "unprecedented situations", just as a map could enable navigating past obstacles.
There are various dual process theories that were produced after William James's work. Dual process models are very common in the study of social psychological variables, such as attitude change. Examples include Petty and Cacioppo's elaboration likelihood model (explained below) and Chaiken's heuristic systematic model. According to these models, persuasion may occur after either intense scrutiny or extremely superficial thinking. Whether the focus is on social psychology or cognitive psychology, there are many examples of dual process theories produced throughout the past. The following offer just a glimpse into the variety that can be found.
Peter Wason and Jonathan St B. T. Evans suggested dual process theory in 1974. In Evans' later theory, there are two distinct types of processes: heuristic processes and analytic processes. He suggested that during heuristic processes, an individual chooses which information is relevant to the current situation. Relevant information is then processed further whereas irrelevant information is not. Following the heuristic processes come analytic processes. During analytic processes, the relevant information that is chosen during the heuristic processes is then used to make judgments about the situation.
Richard E. Petty and John Cacioppo proposed a dual process theory focused in the field of social psychology in 1986. Their theory is called the elaboration likelihood model of persuasion. In their theory, there are two different routes to persuasion in making decisions. The first route is known as the central route, and this takes place when a person is thinking carefully about a situation, elaborating on the information they are given, and creating an argument. This route occurs when an individual's motivation and ability are high. The second route is known as the peripheral route, and this takes place when a person is not thinking carefully about a situation and uses shortcuts to make judgments. This route occurs when an individual's motivation or ability is low.
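A minimal sketch of this two-route logic follows; the numeric thresholds are invented for illustration, since the model itself is qualitative rather than computational.

```python
# Toy decision sketch of the elaboration likelihood model's two routes.
# The 0.5 thresholds are hypothetical; the model specifies no numbers.
def route_to_persuasion(motivation: float, ability: float) -> str:
    # The central route requires BOTH high motivation and high ability;
    # otherwise processing falls back to peripheral cues.
    if motivation > 0.5 and ability > 0.5:
        return "central: careful elaboration of the argument's merits"
    return "peripheral: shortcuts such as source attractiveness or authority"

print(route_to_persuasion(motivation=0.9, ability=0.8))  # central
print(route_to_persuasion(motivation=0.9, ability=0.2))  # peripheral
```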
Steven Sloman produced another interpretation of dual processing in 1996. He believed that associative reasoning takes stimuli and divides them into logical clusters of information based on statistical regularity. He proposed that how one associates is directly proportional to the similarity of past experiences, relying on temporal and similarity relations, rather than an underlying mechanical structure, to determine reasoning. The other reasoning process, in Sloman's opinion, was the rule-based system. That system functioned on logical structure and variables based upon rule systems to come to conclusions different from those of the associative system. He also believed that the rule-based system had control over the associative system, though it could only suppress it. This interpretation corresponds well to earlier work on computational models of dual processes of reasoning.
Daniel Kahneman provided further interpretation by differentiating the two styles of processing more, calling them intuition and reasoning in 2003. Intuition (or system 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning was based on formed habits and very difficult to change or manipulate. Reasoning (or system 2) was slower and much more volatile, being subject to conscious judgments and attitudes.
Fritz Strack and Roland Deutsch proposed another dual process theory focused in the field of social psychology in 2004. According to their model, there are two separate systems: the reflective system and the impulsive system. In the reflective system, decisions are made using knowledge and the information that is coming in from the situation is processed. On the other hand, in the impulsive system, decisions are made using schemes and there is little or no thought required.
Theories
Dual process learning model
Ron Sun proposed a dual-process model of learning (both implicit learning and explicit learning). The model (named CLARION) re-interpreted voluminous behavioral data in psychological studies of implicit learning and skill acquisition in general. The resulting theory is two-level and interactive, based on the idea of the interaction of one-shot explicit rule learning (i.e., explicit learning) and gradual implicit tuning through reinforcement (i.e. implicit learning), and it accounts for many previously unexplained cognitive data and phenomena based on the interaction of implicit and explicit learning.
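The interaction of the two levels can be caricatured in a few lines of code. The following is a toy sketch of the general idea only, not the actual CLARION implementation: a bottom level tunes action values gradually through reinforcement (implicit learning), while a top level extracts an explicit rule in one shot from a successful action and thereafter takes precedence. The task, learning rate, and rule-extraction criterion are all invented.

```python
# Toy two-level learner in the spirit of implicit/explicit interaction.
# Bottom level: slowly tuned action values. Top level: one-shot rules.
import random

random.seed(0)
ACTIONS = ["press", "pull"]
q = {(s, a): 0.0 for s in ("light_on", "light_off") for a in ACTIONS}  # implicit
rules = {}          # explicit: state -> action, learned in one shot
ALPHA = 0.1         # slow implicit learning rate

def reward(state, action):
    # Hypothetical task: "press" pays off when the light is on, "pull" otherwise.
    return 1.0 if (state == "light_on") == (action == "press") else 0.0

for step in range(200):
    state = random.choice(["light_on", "light_off"])
    if state in rules:                       # explicit rule fires first
        action = rules[state]
    else:                                    # otherwise act on implicit values
        action = max(ACTIONS, key=lambda a: q[(state, a)] + random.random() * 0.1)
    r = reward(state, action)
    q[(state, action)] += ALPHA * (r - q[(state, action)])   # gradual tuning
    if r == 1.0 and state not in rules:
        rules[state] = action                # one-shot rule extraction

print("explicit rules:", rules)
print("implicit values:", {k: round(v, 2) for k, v in q.items()})
```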
The Dual Process Learning model can be applied to a group-learning environment. This is called the Dual Objective Model of Cooperative Learning, and it requires group practice that develops both cognitive and affective skills among the team. It involves active participation by the teacher, who monitors the group in its entirety until the product has been successfully completed. The teacher focuses on the effectiveness of cognitive and affective practices within the group's cooperative learning environment. The instructor acts as an aide to the group by encouraging their positive affective behavior and ideas. In addition, the teacher remains continually watchful for improvement in the group's development of the product and in interactions amongst the students. The teacher will interject to give feedback on ways the students can better contribute affectively or cognitively to the group as a whole. The goal is to foster a sense of community amongst the group while creating a proficient product that is a culmination of each student's unique ideas.
Dual coding
Using a somewhat different approach, Allan Paivio has developed a dual-coding theory of information processing. According to this model, cognition involves the coordinated activity of two independent, but connected systems, a nonverbal system and a verbal system that is specialized to deal with language. The nonverbal system is hypothesized to have developed earlier in evolution. Both systems rely on different areas of the brain. Paivio has reported evidence that nonverbal, visual images are processed more efficiently and are approximately twice as memorable. Additionally, the verbal and nonverbal systems are additive, so one can improve memory by using both types of information during learning. This additive dual coding claim is compatible with evidence that verbalized thinking does not necessarily overcome common faulty intuitions or heuristics, such as studies showing that thinking aloud during heuristics and biases tests did not necessarily improve performance on the test.
Dual-process accounts of reasoning
Background
Dual-process accounts of reasoning postulate that there are two systems or minds in one brain. A current theory is that there are two cognitive systems underlying thinking and reasoning and that these different systems were developed through evolution. These systems are often referred to as "implicit" and "explicit" or by the more neutral "System 1" and "System 2", as coined by Keith Stanovich and Richard West.
The systems have multiple names by which they can be called, as well as many different properties.
System 1
John Bargh reconceptualized the notion of an automatic process by breaking down the term "automatic" into four components: awareness, intentionality, efficiency, and controllability. One way for a process to be labeled as automatic is for the person to be unaware of it. There are three ways in which a person may be unaware of a mental process: they can be unaware of the presence of the stimulus (subliminal), of how the stimulus is categorized or interpreted (unaware of the activation of stereotype or trait constructs), or of the effect the stimulus has on the person's judgments or actions (misattribution). Another way for a mental process to be labeled as automatic is for it to be unintentional. Intentionality refers to the conscious "start up" of a process. An automatic process may begin without the person consciously willing it to start. The third component of automaticity is efficiency. Efficiency refers to the amount of cognitive resources required for a process. An automatic process is efficient because it requires few resources. The fourth component is controllability, referring to the person's conscious ability to stop a process. An automatic process is uncontrollable, meaning that the process will run until completion and the person will not be able to stop it. Bargh conceptualized automaticity as a component view (any combination of awareness, intention, efficiency, and control) as opposed to the historical concept of automaticity as an all-or-none dichotomy.
One takeaway from the psychological research on dual process theory is that System 1 (intuition) is more accurate in areas where we have gathered a lot of data with reliable and fast feedback, like social dynamics, or cognitive domains in which we have become expert or even merely familiar.
System 2 in humans
System 2 is evolutionarily recent and speculated to be specific to humans. It is also known as the explicit system, the rule-based system, the rational system, or the analytic system. It performs slower, sequential thinking. It is domain-general and performed in the central working memory system. Because of this, it has a limited capacity and is slower than System 1, and its functioning correlates with general intelligence. It is known as the rational system because it reasons according to logical standards. Some overall properties associated with System 2 are that it is rule-based, analytic, controlled, demanding of cognitive capacity, and slow.
Social psychology
Dual process theory has had an impact on social psychology in such domains as stereotyping, categorization, and judgment. In particular, the study of automaticity and of implicit processes in dual process theories has strongly influenced research on person perception. People usually perceive information about others and categorize them by age, gender, race, or role. According to Neuberg and Fiske (1987), a perceiver who receives a good amount of information about the target person will use their formal mental categories (unconscious) as a basis for judging the person. When the perceiver is distracted, the perceiver has to pay more attention to target information (conscious). Categorization is the basic process of stereotyping, in which people are categorized into social groups that have specific stereotypes associated with them. People's judgments can be retrieved automatically, without subjective intention or effort. Attitudes can also be activated spontaneously by the object. John Bargh's studies offered an alternative view, holding that essentially all attitudes, even weak ones, are capable of automatic activation. Whether an attitude is formed automatically or operates with effort and control, it can still bias further processing of information about the object and direct the perceiver's actions with regard to the target. According to Shelly Chaiken, heuristic processing is the activation and application of judgmental rules; heuristics are presumed to be learned and stored in memory. It is used when people make quick, accessible decisions such as "experts are always right" (System 1), whereas systematic processing occurs when individuals engage in effortful scrutiny of all the relevant information, which requires cognitive thinking (System 2). Heuristic and systematic processing then influence the domain of attitude change and social influence.
Unconscious thought theory is the counterintuitive and contested view that the unconscious mind is adapted to highly complex decision making. Where most dual system models define complex reasoning as the domain of effortful conscious thought, UTT argues complex issues are best dealt with unconsciously.
Stereotyping
Dual process models of stereotyping propose that when we perceive an individual, salient stereotypes pertaining to them are activated automatically. These activated representations will then guide behavior if no other motivation or cognition takes place. However, controlled cognitive processes can inhibit the use of stereotypes when there is motivation and there are cognitive resources to do so. Devine (1989) provided evidence for the dual process theory of stereotyping in a series of three studies. Study 1 found that prejudice (measured by the Modern Racism Scale) was unrelated to knowledge of cultural stereotypes of African Americans. Study 2 showed that subjects used automatically activated stereotypes in judgments regardless of prejudice level (personal belief). Participants were primed with stereotype-relevant or non-relevant words and then asked to give hostility ratings of a target with an unspecified race who was performing ambiguously hostile behaviors. Regardless of prejudice level, participants who were primed with more stereotype-relevant words gave higher hostility ratings to the ambiguous target. Study 3 investigated whether people can control stereotype use by activating personal beliefs. Low-prejudice participants asked to list their thoughts about African Americans listed more positive examples than did those high in prejudice.
Terror management theory and the dual process model
According to psychologists Pyszczynski, Greenberg, and Solomon, the dual process model, in relation to terror management theory, identifies two systems by which the brain manages fear of death: distal and proximal. Distal defenses fall under the System 1 category because they are unconscious, whereas proximal defenses fall under the System 2 category because they operate with conscious thought. However, recent work by the ManyLabs project, which attempts to replicate seminal theoretical findings across multiple laboratories, has shown that the mortality salience effect (e.g., reflecting on one's own death encouraging a greater defense of one's own worldview) failed to replicate, even though some of the participating labs included input from the original terror management theorists.
Dual process and habituation
Habituation can be described as decreased response to a repeated stimulus. According to Groves and Thompson, the process of habituation also mimics a dual process. The dual process theory of behavioral habituation relies on two underlying (non-behavioral) processes: depression and facilitation, with the relative strength of one over the other determining whether habituation or sensitization is seen in the behavior. Habituation weakens the intensity of a repeated stimulus over time subconsciously. As a result, a person will give the stimulus less conscious attention over time. Conversely, sensitization subconsciously strengthens a stimulus over time, giving the stimulus more conscious attention. Though these two systems are not both conscious, they interact to help people understand their surroundings by strengthening some stimuli and diminishing others.
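A toy simulation can make the Groves and Thompson account concrete: the observed response is the net of a decremental habituation process and an incremental sensitization process that is strongest while the stimulus is still novel. The update rules and rates below are illustrative assumptions, not the authors' equations:

# Toy dual-process habituation model; all rates and weights are assumed.
def simulate(n_trials=15, intensity=1.0, hab_rate=0.2,
             sens_gain=0.8, sens_decay=0.4):
    habituation = 0.0
    sensitization = 0.0
    responses = []
    for trial in range(n_trials):
        habituation += hab_rate * (1.0 - habituation)      # builds with repetition
        drive = sens_gain * intensity / (trial + 1)        # novelty-driven boost wanes
        sensitization = (1.0 - sens_decay) * sensitization + drive
        net = intensity * (1.0 - habituation) + sensitization
        responses.append(max(net, 0.0))
    return responses

for i, r in enumerate(simulate(), start=1):
    print(f"trial {i:2d}: response {r:.2f}")

With a strong stimulus the sensitization term dominates early trials (net sensitization) before the habituation term wins out, which is the qualitative pattern the dual process account predicts.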
Dual process and steering cognition
According to Walker, system 1 functions as a serial cognitive steering processor for system 2, rather than a parallel system. In large-scale repeated studies with school students, Walker tested how students adjusted their imagined self-operation in the different curriculum subjects of maths, science and English. He showed that students consistently adjust the biases of their heuristic self-representation to specific states for the different curriculum subjects. The model of cognitive steering proposes that, in order to process epistemically varied environmental data, a heuristic orientation system is required to align varied, incoming environmental data with existing neural algorithmic processes. The brain's associative simulation capacity, centered around the imagination, plays an integrator role to perform this function. Evidence for early-stage concept formation and future self-operation within the hippocampus supports the model. In the cognitive steering model, a conscious state emerges from effortful associative simulation, required to align novel data accurately with remote memory via later algorithmic processes. By contrast, fast unconscious automaticity is constituted by unregulated simulatory biases, which induce errors in subsequent algorithmic processes. The phrase 'rubbish in, rubbish out' is used to explain errorful heuristic processing: errors will always occur if the accuracy of initial retrieval and location of data is poorly self-regulated.
Application in economic behavior
According to Alos-Ferrer and Strack, the dual-process theory has relevance in economic decision-making through the multiple-selves model, in which one person's self-concept is composed of multiple selves depending on the context. An example of this is someone who as a student is hard working and intelligent, but as a sibling is caring and supportive. Decision-making involves the use of both automatic and controlled processes, but also depends on the person and situation; given a person's experiences and current situation, the decision process may differ. Given that there are two decision processes with differing goals, one is more likely to be useful in particular situations. For example, a person may be presented with a decision involving a selfish but rational motive and a social motive. Depending on the individual, one of the motives will be more appealing than the other, but depending on the situation the preference for one motive or the other may change. Using the dual-process theory, it is important to consider whether one motive is more automatic than the other; in this particular case the automaticity would depend on the individual and their experiences. A selfish person may choose the selfish motive with more automaticity than a non-selfish person, and yet a controlled process may still outweigh this based on external factors such as the situation, monetary gains, or societal pressure. Although there is likely to be a stable preference for which motive one will select based on the individual, it is important to remember that external factors will influence the decision. Dual process theory also provides a different source of behavioral heterogeneity in economics. It is mostly assumed within economics that this heterogeneity comes from differences in taste and rationality, while dual process theory points to the need to consider which processes are automatic and how these different processes may interact within decision making.
Moral psychology
Moral judgments are said to be explained in part by dual process theory. In moral dilemmas we are presented with two morally unpalatable options. For example, should we sacrifice one life in order to save many lives, or just let many lives be lost? Consider a historical example: should we authorize the use of force against other nations in order to prevent "any future acts of international terrorism", or should we take a more pacifist approach to foreign lives and risk the possibility of terrorist attack? Dual process theorists have argued that sacrificing something of moral value in order to prevent a worse outcome (often called the "utilitarian" option) involves more reflective reasoning than the more pacifist (also known as "deontological") option. However, some evidence suggests that this is not always the case: reflection can sometimes increase harm-rejection responses, and reflection correlates with both the sacrificial and pacifist (but not more anti-social) responses. So some have proposed that tendencies toward sacrificing for the greater good or toward pacifism are better explained by factors besides the two processes proposed by dual process theorists.
Religiosity
Various studies have found that performance on tests designed to require System 2 thinking (a.k.a., reflection tests) can predict differences in philosophical tendencies, including religiosity (i.e., the degree to which one reports being involved in organized religion). This "analytic atheist" effect has even been found among samples of people that include academic philosophers. Nonetheless, some studies detect this correlation between atheism and reflective, System 2 thinking in only some of the countries that they study, suggesting that it is not just intuitive and reflective thinking that predict variance in religiosity, but also cultural differences.
Evidence
Belief bias effect
A belief bias is the tendency to judge the strength of arguments based on the plausibility of their conclusion rather than how strongly they support that conclusion. Some evidence suggests that this bias results from competition between logical (System 2) and belief-based (System 1) processes during evaluation of arguments.
Studies on the belief-bias effect were first designed by Jonathan Evans to create a conflict between logical reasoning and prior knowledge about the truth of conclusions. Participants are asked to evaluate syllogisms that are: valid arguments with believable conclusions, valid arguments with unbelievable conclusions, invalid arguments with believable conclusions, and invalid arguments with unbelievable conclusions. Participants are told to agree only with conclusions that logically follow from the premises given. The results suggest that when the conclusion is believable, people erroneously accept invalid arguments as valid more often than they accept invalid arguments that support unpalatable conclusions. This is taken to suggest that System 1 beliefs are interfering with the logic of System 2.
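The design crosses argument validity with conclusion believability, giving four cells. The sketch below enumerates them with acceptance rates of the kind often cited for this paradigm; treat the exact numbers as illustrative rather than as data from any single experiment:

from itertools import product

# 2x2 belief-bias design; the rates are approximate, illustrative figures.
acceptance = {
    ("valid", "believable"): 0.89,
    ("valid", "unbelievable"): 0.56,
    ("invalid", "believable"): 0.71,
    ("invalid", "unbelievable"): 0.10,
}

for validity, belief in product(("valid", "invalid"),
                                ("believable", "unbelievable")):
    rate = acceptance[(validity, belief)]
    print(f"{validity:7s} argument, {belief:12s} conclusion: accepted ~{rate:.0%}")

The belief-bias signature is the gap in the invalid row: invalid-but-believable arguments are accepted far more often than invalid-and-unbelievable ones.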
Tests with working memory
De Neys conducted a study that manipulated working memory capacity while participants answered syllogistic problems. This was done by burdening executive processes with secondary tasks. Results showed that when System 1 triggered the correct response, the distractor task had no effect on the production of a correct answer, which supports the view that System 1 is automatic and works independently of working memory; but when belief bias was present (the System 1 belief-based response differed from the logically correct System 2 response), the participants' performance was impeded by the decreased availability of working memory. This accords with the dual-process account's description of System 1 and System 2, because System 1 was shown to work independently of working memory, while System 2 was impeded by the lack of working-memory capacity, so System 1 took over, resulting in belief bias.
fMRI studies
Vinod Goel and others produced neuropsychological evidence for dual-process accounts of reasoning using fMRI studies. They provided evidence that anatomically distinct parts of the brain were responsible for the two different kinds of reasoning. They found that content-based reasoning caused left temporal hemisphere activation whereas abstract formal problem reasoning activated the parietal system. They concluded that different kinds of reasoning, depending on the semantic content, activated one of two different systems in the brain.
A similar study incorporated fMRI during a belief-bias test. They found that different mental processes were competing for control of the response to the problems given in the belief-bias test. The prefrontal cortex was critical in detecting and resolving conflicts, which is characteristic of System 2 and had already been associated with that system. The ventral medial prefrontal cortex, known to be associated with the more intuitive or heuristic responses of System 1, was the area in competition with the prefrontal cortex.
Near-infrared spectroscopy
Tsujii and Watanabe did a follow-up study to Goel and Dolan's fMRI experiment. They examined the neural correlates of inferior frontal cortex (IFC) activity in belief-bias reasoning using near-infrared spectroscopy (NIRS). Subjects performed a syllogistic reasoning task, using congruent and incongruent syllogisms, while attending to an attention-demanding secondary task. The interest of the researchers was in how the secondary tasks changed the activity of the IFC during congruent and incongruent reasoning processes. The results showed that the participants performed better in the congruent test than in the incongruent test (evidence for belief bias); the high-demand secondary task impaired the incongruent reasoning more than it impaired the congruent reasoning. NIRS results showed that the right IFC was activated more during incongruent trials. Participants with enhanced right IFC activity performed better on the incongruent reasoning than those with decreased right IFC activity. This study provided some evidence to complement the fMRI results: the right IFC, specifically, is critical in resolving conflicting reasoning, but it is also attention-demanding; its effectiveness decreases with loss of attention. The loss of effectiveness in System 2 following loss of attention makes the automatic heuristic System 1 take over, which results in belief bias.
Matching bias
Matching bias is a non-logical heuristic: a tendency to treat information whose lexical content matches the statement about which one is reasoning as relevant, and, conversely, to ignore relevant information that does not match. It mostly affects problems with abstract content. It doesn't involve prior knowledge and beliefs, but it is still seen as a System 1 heuristic that competes with the logical System 2.
The Wason selection task provides evidence for the matching bias. The test is designed as a measure of a person's logical thinking ability. Performance on the Wason selection task is sensitive to the content and context with which it is presented. If you introduce a negative component into the conditional statement of the task, e.g. 'If there is an A on one side of the card then there is not a 3 on the other side', there is a strong tendency to choose cards that match the items in the negative condition to test, regardless of their logical status. Changing the test to be about following rules rather than truth and falsity is another condition under which participants will ignore the logic, because they will simply follow the rule, e.g. framing the task as a police officer looking for underage drinkers. The original task is more difficult because it requires explicit and abstract logical thought from System 2, whereas the police officer version is cued by relevant prior knowledge from System 1.
Studies have shown that you can train people to inhibit matching bias, which provides neuropsychological evidence for the dual-process theory of reasoning. When trials before and after the training are compared, there is evidence for a forward shift in activated brain area. Pre-test results showed activation in locations along the ventral pathway, and post-test results showed activation around the ventro-medial prefrontal cortex and anterior cingulate. Matching bias has also been shown to generalise to syllogistic reasoning.
Evolution
Dual-process theorists claim that System 2, a general-purpose reasoning system, evolved late and works alongside the older autonomous sub-systems of System 1. The success of Homo sapiens lends evidence to their higher cognitive abilities over other hominids. Mithen theorizes that the increase in cognitive ability occurred 50,000 years ago, when representational art, imagery, and the design of tools and artefacts are first documented. He hypothesizes that this change was due to the adaptation of System 2.
Most evolutionary psychologists do not agree with dual-process theorists. They claim that the mind is modular, and domain-specific, thus they disagree with the theory of the general reasoning ability of System 2. They have difficulty agreeing that there are two distinct ways of reasoning and that one is evolutionarily old, and the other is new. To ease this discomfort, the theory is that once System 2 evolved, it became a 'long leash' system without much genetic control which allowed humans to pursue their individual goals.
Issues with the dual-process account of reasoning
The dual-process account of reasoning is an old theory, as noted above. But according to Evans it has adapted itself from the old, logicist paradigm to new theories that apply to other kinds of reasoning as well. And the theory seems more influential now than in the past, which some consider questionable. Evans outlined five "fallacies":
All dual-process theories are essentially the same. There is a tendency to assume all theories that propose two modes or styles of thinking are related and so they end up all lumped under the umbrella term of "dual-process theories".
There are just two systems underlying System 1 and System 2 processing. There are clearly more than just two cognitive systems underlying people's performance on dual-processing tasks. Hence the change to theorizing that processing is done in two minds that have different evolutionary histories and that each have multiple sub-systems.
System 1 processes are responsible for cognitive biases; System 2 processes are responsible for normatively correct responding. Both System 1 and System 2 processing can lead to normative answers and both can involve cognitive biases.
System 1 processing is contextualised while System 2 processing is abstract. Recent research has found that beliefs and context can influence System 2 processing as well as System 1.
Fast processing indicates the use of System 1 rather than System 2 processes. Just because processing is fast does not mean it is done by System 1. Experience and different heuristics can influence System 2 processing to go faster.
Another argument against the dual-process account of reasoning, outlined by Osman, is that the proposed dichotomy of System 1 and System 2 does not adequately accommodate the range of processes accomplished. Moshman proposed that there should be four possible types of processing as opposed to two: implicit heuristic processing, implicit rule-based processing, explicit heuristic processing, and explicit rule-based processing. Another fine-grained division is as follows: implicit action-centered processes, implicit non-action-centered processes, explicit action-centered processes, and explicit non-action-centered processes (that is, a four-way division reflecting both the implicit-explicit distinction and the procedural-declarative distinction).
In response to the question as to whether there are dichotomous processing types, many have instead proposed a single-system framework which incorporates a continuum between implicit and explicit processes.
Alternative model
The dynamic graded continuum (DGC), originally proposed by Cleeremans and Jiménez is an alternative single system framework to the dual-process account of reasoning. It has not been accepted as better than the dual-process theory; it is instead usually used as a comparison with which one can evaluate the dual-process model. The DGC proposes that differences in representation generate variation in forms of reasoning without assuming a multiple system framework. It describes how graded properties of the representations that are generated while reasoning result in the different types of reasoning. It separates terms like implicit and automatic processing where the dual-process model uses the terms interchangeably to refer to the whole of System 1. Instead the DGC uses a continuum of reasoning that moves from implicit, to explicit, to automatic.
Fuzzy-trace theory
According to Charles Brainerd and Valerie Reyna's fuzzy-trace theory of memory and reasoning, people have two memory representations: verbatim and gist. Verbatim is memory for surface information (e.g. the words in this sentence) whereas gist is memory for semantic information (e.g. the meaning of this sentence).
This dual process theory posits that we encode, store, retrieve, and forget the information in these two traces of memory separately and completely independently of each other. Furthermore, the two memory traces decay at different rates: verbatim decays quickly, while gist lasts longer.
In terms of reasoning, fuzzy-trace theory posits that as we mature, we increasingly rely more on gist information than on verbatim information. Evidence for this lies in framing experiments, where framing effects become stronger when verbatim information (percentages) is replaced with gist descriptions. Other experiments rule out predictions of prospect theory (extended and original) as well as other current theories of judgment and decision making.
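The differential decay claim is easy to visualize with two exponential forgetting curves, one fast (verbatim) and one slow (gist). The decay rates below are arbitrary assumptions used only to show the qualitative pattern:

import math

# Illustrative forgetting curves for the two traces; rates are assumed.
def trace_strength(initial, decay_rate, t):
    return initial * math.exp(-decay_rate * t)

for hours in (0, 1, 24, 168):
    verbatim = trace_strength(1.0, decay_rate=0.5, t=hours)
    gist = trace_strength(1.0, decay_rate=0.01, t=hours)
    print(f"{hours:4d} h: verbatim {verbatim:.3f}, gist {gist:.3f}")

After a week the verbatim trace is effectively gone while the gist trace is largely intact, matching the theory's claim that reasoning increasingly runs on gist.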
See also
References
External links
Laboratory for Rational Decision Making, Cornell University
Cognition
Cognitive psychology
Psychological theories
Personality pathology
Personality pathology refers to enduring patterns of cognition, emotion, and behavior that negatively affect a person's adaptation. In psychiatry and clinical psychology, it is characterized by adaptive inflexibility, vicious cycles of maladaptive behavior, and emotional instability under stress. In the United States and elsewhere, personality disorders are diagnosed categorically on Axis II of the Diagnostic and Statistical Manual of Mental Disorders published by the American Psychiatric Association.
See also
Personality disorders
Personality psychology
Psychopathology
References
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (text revision, 4th ed.). Washington, DC: American Psychiatric Association.
Millon, T. (1981). Disorders of personality. DSM-III: Axis II. New York, NY: John Wiley.
Mischel, W., & Shoda, Y. (1995). A cognitive-affective system theory of personality: Reconceptualizing situations, dispositions, dynamics, and invariance in personality structure. Psychological Review, 102(2), 246-268.
Westen, D. (1995). A clinical-empirical model of personality: Life after the Mischelian ice age and the NEO-lithic era. Journal of Personality, 63, 495-524.
Personality disorders
Seminar
A seminar is a form of academic instruction, either at an academic institution or offered by a commercial or professional organization. It has the function of bringing together small groups for recurring meetings, focusing each time on some particular subject, in which everyone present is requested to participate. This is often accomplished through an ongoing Socratic dialogue with a seminar leader or instructor, or through a more formal presentation of research. It is essentially a place where assigned readings are discussed, questions can be raised and debates can be conducted.
Etymology
The word seminar was borrowed from German (in which it is capitalized as Seminar), and is ultimately derived from the Latin word seminarium, meaning 'seed plot' (an old-fashioned term for 'seedbed'). Its root word is semen (Latin for 'seed').
Overview
The term seminar is also used to describe a research talk, often given by a visiting researcher and primarily attended by academics, research staff, and postgraduate students. Seminars often occur in regular series, but each seminar is typically given by a different speaker, on a topic of that speaker's choosing. Such seminars are not usually a part of a course of study and are therefore not usually associated with any assessment or credit.
In some European universities, a seminar may be a large lecture course, especially when conducted by a renowned thinker (regardless of the size of the audience or the scope of student participation in discussion). Some non-English speaking countries in Europe use the word seminar (e.g. German Seminar, Slovenian seminar, Polish seminarium) to refer to a university class that includes a term paper or project, as opposed to a lecture class (e.g. German Vorlesung, Slovenian predavanje, Polish wykład). This does not correspond to the English use of the term. In some academic institutions, typically in scientific fields, the term "preceptorial" is used interchangeably with "seminar".
In North Indian universities, the term "seminar" refers to a course of intense study relating to the student's major. Seminars typically have significantly fewer students per professor than normal courses, and are generally more specific in topic of study. Seminars can revolve around term papers, exams, presentations, and several other assignments. Seminars are almost always required for university graduation. At US and Canadian universities, participants are normally not beginners in the field under discussion, and seminar classes are generally reserved for upper-class students, although at UK and Australian universities seminars are often used for all years. The idea behind the seminar system is to familiarize students more extensively with the methodology of their chosen subject and also to allow them to interact with examples of the practical problems that always occur during research work.
Seminar rooms
"Seminar room" is often used as a name for a generic group study or work space at a library. Some seminar rooms are more tailored to a specific topic or field, literally a space designed for a seminar course or individualized self-study to occur.
See also
Academic conference
French mathematical seminars
Plenary session
Poster session
Senior seminar, aka Capstone course or final year course
Symposium (academic)
Webinar (a seminar attended through the Internet)
Jesus Seminar, a group of 50 biblical criticism scholars and 100 laymen
References
Academic terminology
Egotism
Egotism is the drive to maintain and enhance favorable views of oneself; it generally features an inflated opinion of one's personal features and importance, marked by a person's amplified vision of one's self and self-importance. It often includes intellectual, physical, social, and other overestimations. The egotist has an overwhelming sense of the centrality of the "me" regarding their personal qualities.
Characteristics
Egotism is closely related to an egocentric love for one's imagined self or narcissism. Egotists have a strong tendency to talk about themselves in a self-promoting fashion, and they may well be arrogant and boastful with a grandiose sense of their own importance. Their inability to recognise the accomplishments of others leaves them profoundly self-promoting; while sensitivity to criticism may lead, on the egotist's part, to narcissistic rage at a sense of insult.
Egotism differs from both altruism – or behaviour motivated by the concern for others rather than for oneself – and from egoism, the constant pursuit of one's self-interest. Various forms of "empirical egoism" have been considered consistent with egotism, but do not – which is also the case with egoism in general – necessitate having an inflated sense of self.
Development
In developmental terms, two different paths can be taken to reach egotism – one being individual, and the other being cultural.
With respect to the developing individual, a movement takes place from egocentricity to sociality during the process of growing up. It is normal for an infant to have an inflated sense of egotism. The over-evaluation of one's own ego regularly appears in childish forms of love.
Optimal development allows a gradual decrease into a more realistic view of one's own place in the world. A less optimal adjustment may later lead to what has been called defensive egotism, serving to overcompensate for a fragile concept of self. Robin Skynner however considered that in the main growing up leads to a state where "your ego is still there, but it's taking its proper limited place among all the other egos".
However, alongside such a positive trajectory of diminishing individual egotism, a rather different arc of development can be noted in cultural terms, linked to what has been seen as the increasing infantilism of post-modern society. Whereas in the nineteenth century egotism was still widely regarded as a traditional vice – for Nathaniel Hawthorne egotism was a sort of diseased self-contemplation – Romanticism had already set in motion a countervailing current, what Richard Eldridge described as a kind of "cultural egotism, substituting the individual imagination for vanishing social tradition". The romantic idea of the self-creating individual – of a self-authorizing, artistic egotism – then took on broader social dimensions in the following century. Keats might still attack Wordsworth for the regressive nature of his retreat into the egotistical sublime; but by the close of the twentieth century egotism had been naturalized much more widely by the Me generation into the Culture of Narcissism.
In the 21st century, romantic egotism has been seen as feeding into techno-capitalism in two complementary ways: on the one hand, through the self-centred consumer, focused on their own self-fashioning through brand 'identity'; on the other through the equally egotistical voices of 'authentic' protest, as they rage against the machine, only to produce new commodity forms that serve to fuel the system for further consumption.
Sexuality
There is a question mark over the relationship between sexuality and egotism. Sigmund Freud popularly made the claim that intimacy can transform the egotist, giving a new sense of humility in relation to others.
At the same time, it is very apparent that egotism can readily show itself in sexual ways and indeed arguably one's whole sexuality may function in the service of egotistical needs.
Social egotism
Leo Tolstoy used the term aduyevschina (after the protagonist Aduyev of Goncharov's first novel, A Common Story) to describe social egotism as the inability of some people to see beyond their immediate interests.
Etymology
The term egotism is derived from the Greek egō ("εγώ") and its subsequent Latinised form ego, meaning "self" or "I", and -ism, used to denote a system of belief. As such, the term shares early etymology with egoism.
Egotism vs. pride
Egotism differs from pride. Although both concern an individual's state of mind, ego is defined by a person's self-perception: how the particular individual thinks, feels and distinguishes him/herself from others. Pride may be equated to the feeling one experiences as the direct result of one's accomplishment or success.
Cultural examples
A. A. Milne has been praised for his clear-eyed vision of the ruthless, open, unashamed egotism of the young child.
Ryan Holiday described our cultural values as dependent on validation, entitled, and ruled by our emotions, a form of egotism.
See also
References
Further reading
External links
Egotism in German Philosophy by George Santayana
B. J. Bushman/R. F. Baumeister, 'Threatened Egotism...'
Egoism
Narcissism
Philosophy of life
Cultural identity
Cultural identity is a part of a person's identity, or their self-conception and self-perception, and is related to nationality, ethnicity, religion, social class, generation, locality, gender, or any kind of social group that has its own distinct culture. In this way, cultural identity is characteristic both of the individual and of the culturally identical group of members sharing the same cultural identity or upbringing. Cultural identity is an unfixed process that is continually evolving within the discourses of social, cultural, and historical experiences. Some people undergo more cultural identity changes than others; those who change less often have a clearer cultural identity. This means that they have a dynamic yet stable integration of their culture.
There are three pieces that make up a person's cultural identity: cultural knowledge, category label, and social connections. Cultural knowledge refers to a person's connection to their identity through understanding their culture's core characteristics. Category label refers to a person's connection to their identity through indirect membership of said culture. Social connections refers to a person's connection to their identity through their social relationships. Cultural identity is developed through a series of steps. First, a person comes to understand a culture through being immersed in those values, beliefs, and practices. Second, the person then identifies as a member of that culture dependent on their rank within that community. Third, they develop relationships such as immediate family, close friends, coworkers, and neighbors.
Culture is a term that is highly complex and often contested, with academics recording about 160 variations in meaning. Underpinning the notion of culture is that it is dynamic and changes over time and in different contexts, resulting in many people today identifying with one or more cultures in many different ways.
It is a defining feature of a person's identity, contributing to how they see themselves and the groups with which they identify. A person's understanding of their own and other's identities develops from birth and is shaped by the values and attitudes prevalent at home and in the surrounding community.
Description
Various modern cultural studies and social theories have investigated cultural identity and understanding. In recent decades, a new form of identification has emerged that breaks down the understanding of the individual as a coherent whole subject into a collection of various cultural identifiers. These cultural identifiers may be the result of various conditions including: location, sex, race, history, nationality, language, sexuality, religious beliefs, ethnicity, aesthetics, and food. As one author writes:
When talking about identity, we generally define this word as the series of physical features that differentiate a person. Thus at birth, our parents declare us and give us a name with which they will identify us based on whether we are a boy or a girl. Identity is not only a right that declares the name, sex, time, and place that one is born; the word identity goes beyond what we define it. Identity is a function of elements that portrays one in a dynamic way, in constant evolution, throughout the stages of life identity develops based on personal experiences, tastes, and choices of a sexual and religious nature, as well as the social environment, these being some of the main parameters that influence and transform the day to day and allow us to discover a new part of ourselves.
The divisions between cultures can be very fine in some parts of the world, especially in rapidly changing cities where the population is ethnically diverse and social unity is based primarily on locational contiguity.
As a "historical reservoir," culture is an important factor in shaping identity. Since one of the main characteristics of a culture is its "historical reservoir," many if not all groups entertain revisions, either consciously or unconsciously, in their historical record in order to either bolster the strength of their cultural identity or to forge one which gives them precedent for actual reform or change.
Some critics of cultural identity argue that the preservation of cultural identity, being based upon difference, is a divisive force in society and that cosmopolitanism gives individuals a greater sense of shared citizenship. When considering practical association in international society, states may share an inherent part of their 'make up' that gives common ground and an alternative means of identifying with each other. Nations provide the framework for cultural identities called external cultural reality, which influences the unique internal cultural realities of the individuals within the nation.
There is a relationship between cultural identity and new media.
Rather than necessarily representing an individual's interaction within a certain group, cultural identity may be defined by the social network of people imitating and following the social norms as presented by the media. Accordingly, instead of learning behavior and knowledge from cultural/religious groups, individuals may be learning these social norms from the media to build on their cultural identity.
A range of cultural complexities structures the way individuals operate with the cultural realities in their lives. Nation is a large factor of the cultural complexity, as it constructs the foundation for an individual's identity, but it may contrast with one's cultural reality. Cultural identities are influenced by several different factors such as ones religion, ancestry, skin color, language, class, education, profession, skill, family and political attitudes. These factors contribute to the development of one's identity.
History
The history of cultural identity develops out of the observations of a number of social scientists. A history of cultural identity is important because it outlines the understanding of how our identities provide a way to see ourselves in relation to the world in which we live. "Cultural identities...are the natural, and most fundamental, constitutive elements of individual and collective identity."
Franz Boas is an important figure in the creation of the idea of cultural identity. Boas is known for challenging ideas about culture. Boas promoted the importance of viewing a culture from within its own perspective and understanding, not from the outsider's view point. This was a somewhat radical perspective at the time. Additionally, Myron Lustig is credited with contributing the concept of cultural identity theory.
A number of contemporary theorists continue to contribute to the concept of cultural identity. For instance, contemporary work completed by Stuart Hall is considered essential to understand cultural identity. According to Hall, identity is defined by at least two specific actions, which are similarity and difference. Specifically, in settings of slavery and colonization, identity provides a connection to the past as well as disintegration from a shared origination.
Theorists' questions about identity include "whether identity is to be understood as something internal that persists through change or as something ascribed from without that changes according to circumstance." Whatever the case may be, Gleason advocates for "sensitivity to the intrinsic complexities of the subject matter with which it deals, and careful attention to the need for precision and consistency in its application." Cultural identity can also become a marker of difference that requires sensitivity.
Kuper presents concepts on cultural identity within the framework of a power dynamic. He writes, "The privileged lie and mislead, but the oppressed come gradually to appreciate their objective circumstances and formulate a new consciousness that will ultimately liberate them." The consciousness is a facet of their identity. Similarly, identity plays a role in mediating between a human being and the environment in which they exist.
The identity of a person is "a result of socialization and customs" that promotes the maintenance of distinct cultural identities from generation to generation. Additionally, identity can be considered that which forms cultures and results in "dictated appropriate behavior." Put another way, identity may dictate behavior that results in the reification of identity, with the individual as a "replicate in miniature of the larger social and cultural entity." Another way to consider cultural identity is that it is "the sum of material wealth and spiritual wealth created by human beings in the practice of social history."
Globalization is connected to influences in economics, politics, and society. Accordingly, globalization has an impact on cultural identity. As societies become even more connected, there are concerns that cultural identities will become homogenized through the increased level of connection and communication. However, there are alternative perspectives on this issue. For instance, Wright theorizes that "The spread of global culture and globalised ideas has led to many movements designed to embrace the uniqueness and diversity of an individual’s particular culture."
Cultural arena
It is also noted that an individual's "cultural arena," or place where one lives, impacts the culture that person abides by. The surroundings, environment, and people in these places play a role in how one feels about the culture they wish to adopt. Many immigrants find the need to change their culture in order to fit into the culture of most citizens in the country. This can conflict with an immigrant's current belief in their culture and might pose a problem, as the immigrant feels compelled to choose between the two presenting cultures.
Some might be able to adjust to the various cultures in the world by committing to two or more cultures. It is not required to stick to one culture. Many people socialize and interact with people in one culture in addition to another group of people in another culture. Thus, cultural identity is able to take many forms and can change depending on the cultural area. The impact of the cultural arena has changed with the advent of the Internet, bringing together groups of people with shared cultural interests who before would have been more likely to integrate into their real-world cultural arena. This adaptability is what allows people to feel a part of society and culture wherever they go.
Language
Language allows people in a group to communicate their values, beliefs, and customs, all of which contribute to creating a cultural identity. It was long believed that if children lose their languages, they lose part or all of their cultural identity. When students who are non-native English speakers go to classes where they are required to speak only English, they feel that their native language has no value. Some studies found that this leads to loss of their culture and language altogether, which can lead either to a massive change in cultural identity or to a struggle to understand who they are. Language also includes the way people speak with peers, family members, authority figures, and strangers, including the tone and familiarity that is included in the language. The learning process can also be affected by cultural identity via the understanding of specific words, and the preference for specific words when learning and using a second language. Since many aspects of a person's cultural identity can be changed, such as citizenship or influence from outside cultures, language is a major component of cultural identity. However, more recent research suggests that language may not be a crucial part of a person's identity or cultural identity.
Education
Cultural identity is often not discussed in the classroom or learning environment where an instructor presides over the class. This often happens when the instructor attempts to discuss cultural identity and the issues that come with it in the classroom and is met with disagreement and cannot make forward progress in the conversation. Moreover, not talking about cultural identity can lead to issues such as prohibiting growth of education, development of a sense of self, and social competency. In these environments there are often many different cultures and problems can occur due to different worldviews that prevent others from being able to think outwardly about their peers' values and differing backgrounds. If students are able to think outwardly, then they can not only better connect with their peers, but also further develop their own worldview. In addition to this, instructors should take into account the needs of different students' backgrounds in order to best relay the material in a way that engages the student.
When students learn that knowledge and truth are relevant to each person, that instructors do not know everything, and that their own personal experiences dictate what they believe they can better contextualize new information using their own experiences as well as taking into account the different cultural experiences of others. This in turn increases the ability to critically think and challenge new information which benefits all students learning in a classroom setting. There are two ways instructors can better elicit this response from their students through active communication of cultural identity. The first is by having students engage in class discussion with their peers. Doing so creates community and allows for students to share their knowledge as well as question their peers and instructors, thereby, learning about each other's cultural identity and creating acceptance of differing worldviews in the classroom. The second way is by using active learning methods such as "forming small groups and analyzing case studies". Through engaging in active learning students learn that their cultural identity is welcomed and accepted.
Cultural identity and immigrant experience
Identity development among immigrant groups has been studied across a multi-dimensional view of acculturation. Acculturation is the phenomenon that results when groups or individuals from different cultures come into continuous contact with one another and adopt certain values and practices that were not originally their own. Acculturation is distinct from assimilation. Dina Birman and Edison Trickett (2001) conducted a qualitative study through informal interviews with first-generation Soviet Jewish refugee adolescents, looking at the process of acculturation through three different dimensions: language competence, behavioral acculturation, and cultural identity. The results indicated that "acculturation appears to occur in a linear pattern over time for most dimensions of acculturation, with acculturation to the American culture increasing and acculturation to the Russian culture decreasing. However, Russian language competence for the parents did not diminish with length of residence in the country" (Birman & Trickett, 2001).
In a similar study, Phinney, Horencyzk, Liebkind, and Vedder (2001) focused on a model which concentrates on the interaction between immigrant characteristics and the responses of the majority society to understand the psychological effects of immigration. The researchers concluded that most studies find that being bicultural, the combination of a strong ethnic and a strong national identity, yields the best adaptation in the new country of residence. An article by LaFromboise, L. K. Coleman, and Gerton reviews the literature on the impact of being bicultural. It showed that it is possible to obtain competence within two cultures without losing one's sense of identity or having to identify with one culture over the other (LaFromboise et al., 1993). The importance of ethnic and national identity in the educational adaptation of immigrants indicates that a bicultural orientation is advantageous for school performance (Portes & Rumbaut, 1990). Educators can assume their positions of power in beneficially impactful ways for immigrant students, by providing them with access to their native cultural support groups, language classes, after-school activities, and clubs in order to help them feel more connected to both native and national cultures. It is clear that the new country of residence can impact immigrants' identity development across multiple dimensions. Biculturalism can allow for a healthy adaptation to life and school. With many new immigrant youth, a school district in Alberta, Canada, has gone as far as to partner with various agencies and professionals in an effort to aid the cultural adjustment of new Filipino immigrant youths. In the study cited, a combination of family workshops and teacher professional development aimed to improve the language learning and emotional development of these youths and families.
School Transitions
How great is the achievement loss associated with the transition to middle school and high school? John W. Alspaugh's research, published in the September/October 1998 Journal of Educational Research (vol. 92, no. 1, pp. 20-26), compared three groups of 16 school districts. The loss was greater where the transition was from sixth grade than from a K-8 system. It was also greater when students from multiple elementary schools merged into a single middle school. Students from both K-8 and middle schools lost achievement in the transition to high school, though this was greater for middle school students, and high school dropout rates were higher for districts with grades 6-8 middle schools than for those with K-8 elementary schools.
The Jean S. Phinney Three-Stage Model of Ethnic Identity Development is a widely accepted view of the formation of cultural identity. In this model cultural Identity is often developed through a three-stage process: unexamined cultural identity, cultural identity search, and cultural identity achievement.
Unexamined cultural identity: "a stage where one's cultural characteristics are taken for granted, and consequently there is little interest in exploring cultural issues." This for example is the stage one is in throughout their childhood when one doesn't distinguish between cultural characteristics of their household and others. Usually, a person in this stage accepts the ideas they find on culture from their parents, the media, community, and others.
An example of thought in this stage: "I don't have a culture I'm just an American." "My parents tell me about where they lived, but what do I care? I've never lived there."
Cultural identity search: "is the process of exploration and questioning about one's culture in order to learn more about it and to understand the implications of membership in that culture." During this stage a person will begin to question why they hold their beliefs and compare it to the beliefs of other cultures. For some this stage may arise from a turning point in their life or from a growing awareness of other cultures. This stage is characterized by growing awareness in social and political forums and a desire to learn more about culture. This can be expressed by asking family members questions about heritage, visiting museums, reading of relevant cultural sources, enrolling in school courses, or attendance at cultural events. This stage might have an emotional component as well.
An example of thought in this stage: "I want to know what we do and how our culture is different from others." "There are a lot of non-Japanese people around me, and it gets pretty confusing to try and decide who I am."
Cultural identity achievement: "is characterized by a clear, confident acceptance of oneself and an internalization of one's cultural identity." In this stage people often allow the acceptance of their cultural identity to play a role in their future choices, such as how to raise children, how to deal with stereotypes and any discrimination, and how to approach negative perceptions. This usually leads to an increase in self-confidence and positive psychological adjustment.
The role of the internet
There is a set of phenomena that occur in conjunction between virtual culture – understood as the modes and norms of behavior associated with the internet and the online world – and youth culture. While we can speak of a duality between the virtual (online) and real sphere (face-to-face relations), for youth, this frontier is implicit and permeable. On occasions – to the annoyance of parents and teachers – these spheres are even superposed, meaning that young people may be in the real world without ceasing to be connected.
In the present techno-cultural context, the relationship between the real world and the virtual world cannot be understood as a link between two independent and separate worlds, possibly coinciding at a point, but as a Moebius strip where there exists no inside and outside and where it is impossible to identify limits between both. For new generations, to an ever-greater extent, digital life merges with their home life as yet another element of nature. In this naturalizing of digital life, the learning processes from that environment are frequently mentioned not just since they are explicitly asked but because the subject of the internet comes up spontaneously among those polled. The ideas of active learning, of googling 'when you don't know', of recourse to tutorials for learning a program or a game, or the expression 'I learnt English better and in a more entertaining way by playing' are examples often cited as to why the internet is the place most frequented by the young people polled.
The internet is becoming an extension of the expressive dimension of the youth condition. There, youth talk about their lives and concerns, design the content that they make available to others, and assess others' reactions to it in the form of optimized and electronically mediated social approval. Many of today's youth go through processes of affirmation in this way and often grow dependent on peer approval. When connected, youth speak of their daily routines and lives. With each post, image or video they upload, they have the possibility of asking themselves who they are and of trying out profiles differing from those they assume in the 'real' world. Compared to past generations, these connections are increasingly mediated electronically rather than through personal interaction, and the influx of new technology and access has created new fields of research on its effects on teens and young adults. Youth thus negotiate their identity and create senses of belonging, putting the acceptance and censure of others to the test, an essential mark of the process of identity construction.
Youth ask themselves what they think of themselves, how they see themselves personally and, especially, how others see them. On the basis of these questions, youth make decisions which, through a long process of trial and error, shape their identity. This experimentation is also a form through which they can think about their place, membership and sociability in the 'real' world.
From other perspectives, the question arises of what impact the internet has had on youth through access to this sort of 'identity laboratory' and what role it plays in the shaping of youth identity. On the one hand, the internet enables young people to explore and perform various roles and personifications; on the other, the virtual forums – some of them highly attractive, vivid and absorbing (e.g. video games or virtual games of personification) – could present a risk to the construction of a stable and viable personal identity.
References
Sources
Barzilai, Gad (2003). Communities and Law: Politics and Cultures of Legal Identities. University of Michigan Press.
Tan, S.-h. (2005). Challenging citizenship: group membership and cultural identity in a global age. Aldershot, Hants, England: Ashgate.
Bunschoten, R., Binet, H., & Hoshino, T. (2001). Urban flotsam: stirring the city: Chora. Rotterdam: 010 Publishers.
Mandelbaum, M. (2000). The new European diasporas: national minorities and conflict in Eastern Europe. New York: Council on Foreign Relations Press.
Houtman, G. (1999). Mental culture in Burmese crisis politics: Aung San Suu Kyi and the National League for Democracy. Tokyo: Institute for the Study of Languages and Cultures of Asia and Africa, Tokyo University of Foreign Studies.
Sagasti, F. R., & Alcalde, G. (1999). Development cooperation in a fractured global order: an arduous transition. Ottawa: International Development Research Centre.
Crahan, M. E., & Vourvoulias-Bush, A. (1997). The city and the world: New York's global future. New York: Council on Foreign relations.
Hall, S., & Du Gay, P. (1996). Questions of cultural identity. London: Sage.
Cable, V. (1994). The world's new fissures: identities in crisis. London: Demos.
Berkson, I. B. (1920). Theories of Americanization: a critical study, with special reference to the Jewish group. New York City: Teachers College, Columbia University.
Mora, Necha. (2008).
Further reading
Anderson, Benedict (1983). Imagined Communities. London: Verso.
Balibar, Renée & Laporte, Dominique (1974). Le français national: Politique et pratique de la langue nationale sous la Révolution. Paris: Hachette.
de Certeau, Michel; Julia, Dominique; & Revel, Jacques (1975). Une politique de la langue: La Révolution française et les patois. Paris: Gallimard.
Evangelista, M. (2003). "Culture, Identity, and Conflict: The Influence of Gender," in Conflict and Reconstruction in Multiethnic Societies, Washington, D.C.: The National Academies Press
Fishman, Joshua A. (1973). Language and Nationalism: Two Integrative Essays. Rowley, MA: Newbury House.
Gellner, Ernest (1983). Nations and Nationalism. Oxford: Basil Blackwell.
Gordon, David C. (1978). The French Language and National Identity (1930–1975). The Hague: Mouton.
Milstein, T. & Castro-Sotomayor, J. (2020). "Routledge Handbook of Ecocultural Identity". London, UK: Routledge. https://doi.org/10.4324/9781351068840
Robyns, Clem (1995). "Defending the national identity". In Andreas Poltermann (Ed.), Literaturkanon, Medienereignis, Kultureller Text. Berlin: Erich Schmidt Verlag.
Sparrow, Lise M. (2014). Beyond multicultural man: Complexities of identity. In Molefi Kete Asante, Yoshitaka Miike, & Jing Yin (Eds.), The global intercultural communication reader (2nd ed., pp. 393–414). New York, NY: Routledge.
Stewart, Edward C., & Bennet, Milton J. (1991). American cultural patterns: A cross-cultural perspective (Rev. ed.). Yarmouth, ME: Intercultural Press.
Woolf, Stuart. "Europe and the Nation-State". EUI Working Papers in History 91/11. Florence: European University Institute.
Anthropology
Cultural geography
Identity
Cross-cultural psychology
Moral psychology
Moral psychology is the study of human thought and behavior in ethical contexts. Historically, the term "moral psychology" was used relatively narrowly to refer to the study of moral development; the field is interdisciplinary, drawing on both philosophy and psychology. Moral psychology eventually came to refer more broadly to various topics at the intersection of ethics, psychology, and philosophy of mind. Some of the main topics of the field are moral judgment, moral reasoning, moral satisficing, moral sensitivity, moral responsibility, moral motivation, moral identity, moral action, moral development, moral diversity, moral character (especially as related to virtue ethics), altruism, psychological egoism, moral luck, moral forecasting, moral emotion, affective forecasting, and moral disagreement.
Today, moral psychology is a thriving area of research spanning many disciplines, with major bodies of research on the biological, cognitive/computational and cultural basis of moral judgment and behavior, and a growing body of research on moral judgment in the context of artificial intelligence.
History
The origins of moral psychology can be traced back to early philosophical works, largely concerned with moral education, such as by Plato and Aristotle in Ancient Greece, as well as from the Buddhist and Confucian traditions. Empirical studies of moral judgment go back at least as far as the 1890s with the work of Frank Chapman Sharp, coinciding with the development of psychology as a discipline separate from philosophy. Since at least 1894, philosophers and psychologists attempted to empirically evaluate the morality of an individual, especially attempting to distinguish adults from children in terms of their judgment. Unfortunately, these efforts failed because they "attempted to quantify how much morality an individual had—a notably contentious idea—rather than understand the individual's psychological representation of morality".
In most introductory psychology courses, students learn about moral psychology by studying the psychologist Lawrence Kohlberg, who proposed a highly influential theory of moral development, developed throughout the 1950s and 1960s. This theory was built on Piaget's observation that children develop intuitions about justice that they can later articulate. Kohlberg proposed six stages broken into three categories of moral reasoning that he believed to be universal to all people in all cultures. The increasing sophistication of justice-based reasoning was taken as a sign of development. Moral cognitive development, in turn, was assumed to be a necessary (but not sufficient) condition for moral action.
But researchers using the Kohlberg model found a gap between what people said was most moral and the actions they took. In response, Augusto Blasi proposed his self-model, which links moral judgment and action through moral commitment. Those with moral goals central to their self-concept are more likely to take moral action, as they feel a greater obligation to do so; those who are so motivated develop a distinct moral identity.
Following the independent publication of a pair of landmark papers in 2001 (respectively led by Jonathan Haidt and Joshua Greene), there was a surge in interest in moral psychology across a broad range of subfields of psychology, with interest shifting away from developmental processes towards a greater emphasis on social, cognitive, affective and neural processes involved in moral judgment.
Methods
Philosophers, psychologists and researchers from other fields have created various methods for studying topics in moral psychology, with empirical studies dating back to at least the 1890s. The methods used in these studies include moral dilemmas such as the trolley problem, structured interviews and surveys as a means to study moral psychology and its development, as well as the use of economic games, neuroimaging, and studies of natural language use.
Interview techniques
In 1963, Lawrence Kohlberg presented an approach to studying differences in moral judgment by modeling evaluative diversity as reflecting a series of developmental stages (à la Jean Piaget). Kohlberg's stages of moral development are:
Obedience and punishment orientation
Self-interest orientation
Interpersonal accord and conformity
Authority and social-order maintaining orientation
Social contract orientation
Universal ethical principles
Stages 1 and 2 are combined into a single level labeled "pre-conventional", and stages 5 and 6 are combined into a single level labeled "post-conventional"; psychologists can consistently categorize subjects into the resulting four stages using the "Moral Judgement Interview", which asks subjects why they endorse the answers they give to a standard set of moral dilemmas.
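The four-level grouping can be illustrated with a minimal sketch (Python; the short level labels and the example ratings are illustrative, not part of any published scoring manual):

```python
# Minimal sketch: collapsing Kohlberg's six stages into the four
# categories used when scoring the Moral Judgement Interview.
# Stage numbers follow the list above; short level labels are illustrative.

STAGE_TO_LEVEL = {
    1: "pre-conventional",   # obedience and punishment orientation
    2: "pre-conventional",   # self-interest orientation
    3: "interpersonal",      # interpersonal accord and conformity
    4: "social-order",       # authority and social-order maintaining
    5: "post-conventional",  # social contract orientation
    6: "post-conventional",  # universal ethical principles
}

def categorize(stage_ratings):
    """Map per-dilemma stage ratings (1-6) to the four scoring levels."""
    return [STAGE_TO_LEVEL[s] for s in stage_ratings]

# Example: a subject whose answers were rated at stages 2, 4 and 5.
print(categorize([2, 4, 5]))
# ['pre-conventional', 'social-order', 'post-conventional']
```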
Survey instruments
Between 1910 and 1930, in the United States and Europe, several morality tests were developed to classify subjects as either fit or unfit to make moral judgments. Test-takers would classify or rank standardized lists of personality traits, hypothetical actions, or pictures of hypothetical scenes. As early as 1926, catalogs of personality tests included sections specifically for morality tests, though critics persuasively argued that they merely measured intelligence or awareness of social expectations.
Meanwhile, Kohlberg inspired a new series of morality tests. The Defining Issues Test (dubbed "Neo-Kohlbergian" by its proponents) scores relative preference for post-conventional justifications, and the Moral Judgment Test scores the consistency of one's preferred justifications. Both treat evaluative ability as similar to IQ (hence the single score), allowing categorization by high score vs. low score.
Among the more recently developed survey measures, the Moral Foundations Questionnaire (MFQ) is a widely used measure of the five moral intuitions proposed by Moral Foundations Theory: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation. The questions ask respondents to rate various considerations in terms of how relevant they are to the respondent's moral judgments. The purpose of the questionnaire is to measure the degree to which people rely upon each of the five moral intuitions (which may coexist). A revised version of this instrument (the Moral Foundations Questionnaire-2; MFQ-2) was developed in 2023. In this version, Fairness was split into Equality and Proportionality, so the MFQ-2 measures Care, Equality, Proportionality, Loyalty, Authority, and Purity. In addition to survey instruments measuring endorsement of moral foundations, a number of other contemporary survey measures exist relating to other broad taxonomies of moral values, as well as to more specific moral beliefs or concerns.
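As a rough illustration of how such a questionnaire is scored (a sketch only: the item ids, item-to-foundation mapping, and ratings below are hypothetical, not the published MFQ items), relevance ratings can be averaged within each foundation:

```python
# Sketch: averaging per-item relevance ratings into per-foundation scores.
# Item ids and their foundation assignments are invented for illustration.

from statistics import mean

ITEM_TO_FOUNDATION = {
    "q1": "care", "q2": "care",
    "q3": "fairness", "q4": "fairness",
    "q5": "loyalty",
    "q6": "authority",
    "q7": "sanctity",
}

def foundation_scores(ratings):
    """ratings: dict mapping item id -> relevance rating (e.g. 0-5)."""
    by_foundation = {}
    for item, rating in ratings.items():
        by_foundation.setdefault(ITEM_TO_FOUNDATION[item], []).append(rating)
    return {f: mean(rs) for f, rs in by_foundation.items()}

print(foundation_scores(
    {"q1": 4, "q2": 5, "q3": 2, "q4": 3, "q5": 1, "q6": 2, "q7": 0}))
# {'care': 4.5, 'fairness': 2.5, 'loyalty': 1, 'authority': 2, 'sanctity': 0}
```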
Evolutionary origins
According to Haidt, the belief that morality is not innate was one of the few theoretical commitments uniting many of the prominent psychologists studying morality in the twentieth century (with some exceptions). A substantial amount of research in recent decades has focused on the evolutionary origins of various aspects of morality.
In Unto Others: the Evolution and Psychology of Unselfish Behavior (1998), Elliott Sober and David Sloan Wilson demonstrated that diverse moralities could evolve through group selection. In particular, they dismantled the idea that natural selection will favor a homogeneous population in which all creatures care only about their own personal welfare and/or behave only in ways which advance their own personal reproduction.
Tim Dean has advanced the more general claim that moral diversity could evolve through frequency-dependent selection, because each moral approach is vulnerable to a different set of situations that threatened our ancestors.
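The logic of frequency-dependent selection can be sketched with a toy replicator model (an illustration of the general mechanism, not Dean's own formalism): when each strategy does best while rare, the population settles into a stable mix of strategies rather than converging on a single moral approach.

```python
# Toy replicator dynamics with negative frequency dependence: strategy A's
# fitness falls as A becomes common, so neither strategy displaces the other.

def step(p, dt=0.1):
    """p: frequency of strategy A in the population."""
    fit_a = 1.0 - p                           # A does best when rare
    fit_b = p                                 # B does best when A is common
    mean_fit = p * fit_a + (1 - p) * fit_b
    return p + dt * p * (fit_a - mean_fit)    # replicator update

p = 0.9                                       # start with strategy A very common
for _ in range(200):
    p = step(p)
print(round(p, 3))                            # ~0.5: a stable mix of strategies persists
```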
Topics and theories
Moral identity
Moral identity refers to the importance of morality to a person's identity, typically construed as either a trait-like individual difference, or set of chronically accessible schemas. Moral identity is theorized to be one of the key motivational forces connecting moral reasoning to moral behavior, as suggested by a 2016 meta-analysis reporting that moral identity is positively (albeit only modestly) associated with moral behavior.
Moral satisficing
The theory of moral satisficing applies the study of ecological rationality to moral behavior. In this view, much of moral behavior is based on social heuristics rather than traits, virtues, or utilitarian calculations. Social heuristics are a form of satisficing, a term coined by Nobel laureate Herbert Simon. Social heuristics are not good or bad, beneficial or harmful, per se, but only in relation to the environments in which they are used. For instance, an adolescent may commit a crime not because of an evil character or a utilitarian calculation but by following the social heuristic "do what your peers do." After shifting to a different peer group, the same person's behavior may shift to a more socially desirable outcome – by relying on the very same heuristic. From this perspective, moral behavior is thus not simply a consequence of inner virtue or traits, but a function of both the mind and the environment, a view based on Simon's scissors analogy. Many other moral theories, in contrast, consider the mind alone, such as Kohlberg's stage theory, identity theories, virtue theories, and willpower theories.
The ecological perspective has methodological implications for the study of morality: According to it, behavior needs to be studied in social groups and not only in individuals, in natural environments and not only in labs. Both principles are violated, for instance, by the study of how individuals respond to artificial trolley problems. The theory of moral satisficing also has implications for moral policy, implying that problematic behavior can be changed by changing the environment, not only the individual.
Darwin argued that one original function of morality was the coherence and coordination of groups. This suggests that social heuristics that generate coherence and coordination are also those that guide moral behavior. These social heuristics include imitate-your-peers, equality (divide a resource equally), and tit-for-tat (be kind first, then imitate your partner’s behavior). In general, the social heuristics of individuals or institutions shape their moral fabric.
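These heuristics are simple enough to state as code. The following sketch (illustrative only; the literature describes these rules verbally, not as programs) implements the three heuristics named above for repeated interactions:

```python
# Minimal sketches of the three social heuristics described above.
COOPERATE, DEFECT = "C", "D"

def tit_for_tat(partner_history):
    """Be kind first, then imitate the partner's previous move."""
    return COOPERATE if not partner_history else partner_history[-1]

def imitate_your_peers(peer_moves):
    """Do what most of your peers are doing."""
    return COOPERATE if peer_moves.count(COOPERATE) >= len(peer_moves) / 2 else DEFECT

def equality(resource, n_people):
    """Divide a resource equally."""
    return resource / n_people

print(tit_for_tat([]))                                     # 'C': kind on the first move
print(tit_for_tat([COOPERATE, DEFECT]))                    # 'D': mirrors the last move
print(imitate_your_peers([COOPERATE, COOPERATE, DEFECT]))  # 'C': follows the majority
print(equality(12, 4))                                     # 3.0 for each person
```

Note that the same tit-for-tat rule produces cooperation among cooperative partners and defection among defecting ones, illustrating the claim that a heuristic's moral outcome depends on the environment in which it is used.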
Moral satisficing explains two phenomena that pose a puzzle for virtue and trait theories: moral luck and systematic inconsistencies, as when teens who voluntarily made a virginity pledge were just as likely to have premarital sex as their peers who did not. From an ecological view of morality, such inconsistencies are to be expected when individuals move from one environment to another.
Nagel (1979, p. 59) defines moral luck as follows: ‘‘Where a significant aspect of what someone does depends on factors beyond his control, yet we continue to treat him in that respect as an object of moral judgment, it can be called moral luck.’’ Others voiced concerns that moral luck poses a limit to improving our moral behavior and makes it difficult to evaluate behavior as right or wrong. Yet this concern is based on an internal view of the causes of moral behavior; from an ecological view, moral luck is an inevitable consequence of the interaction between mind and environment. A teen is morally lucky to have not grown up in a criminal peer group, and an adult is morally lucky to have not been conscripted into an army.
Moral satisficing postulates that behavior is guided by social heuristics, not by moral rules such as “don’t kill”, as assumed in theories of moral heuristics or in Hauser’s “moral grammar” with hard-wired moral rules. Moral satisficing postulates that moral rules are essentially social heuristics that ultimately serve the coordination and cooperation of social groups.
Moral values
Psychologist Shalom Schwartz defines individual values as "conceptions of the desirable that guide the way social actors (e.g. organisational leaders, policymakers, individual persons) select actions, evaluate people and events, and explain their actions and evaluations." Cultural values form the basis for social norms, laws, customs and practices. While individual values vary from case to case (a result of unique life experience), the average of these values points to widely held cultural beliefs (a result of shared cultural values).
Kristiansen and Hotte reviewed many research articles regarding people's values and attitudes and whether these guide behavior. Drawing on the research they reviewed and on their own extension of Ajzen and Fishbein's theory of reasoned action, they concluded that the value-attitude-behavior relation depends on the individual and their moral reasoning. Another issue that Kristiansen and Hotte discovered through their research was that individuals tended to "create" values to justify their reactions to certain situations, which they called the "value justification hypothesis". Their theory is comparable to Jonathan Haidt's social intuitionist theory, in which individuals justify their intuitive emotions and actions through post-hoc moral reasoning.
Kristiansen and Hotte also found that the actions and behaviors of independent selves are influenced by their own thoughts and feelings, whereas the actions, behaviors and self-concepts of interdependent selves are based on the thoughts and feelings of others. Westerners have two dimensions of emotion, activation and pleasantness; the Japanese have one more, the range of their interdependent relationships. Markus and Kitayama found that these two different types of values had different motives: Westerners, in their explanations, show self-enhancing biases, whereas Easterners tend to focus on "other-oriented" biases.
Moral foundations theory
Moral foundations theory, first proposed in 2004 by Jonathan Haidt and Craig Joseph, attempts to explain the origins of and variation in human moral reasoning on the basis of innate, modular foundations. Notably, moral foundations theory has been used to describe the difference between the moral foundations of political liberals and political conservatives. Haidt and Joseph expanded on previous research done by Shweder and his three-ethics theory, which consisted of the ethics of community, autonomy, and divinity. Haidt and Graham took this theory and extended it to discuss the five psychological systems that more specifically make up the three moral ethics. The importance of these five foundations of morality varies across cultures, each of which constructs virtues based on the foundations it emphasizes.
The five psychological foundations are:
Harm/care, which starts with the sensitivity to signs of suffering in offspring and develops into a general dislike of seeing suffering in others and the potential to feel compassion in response.
Fairness/reciprocity, which is developed when someone observes or engages in reciprocal interactions. This foundation is concerned with virtues related to fairness and justice.
Ingroup/loyalty, which constitutes recognizing, trusting, and cooperating with members of one's ingroup as well as being wary of members of other groups.
Authority/respect, which concerns how someone navigates hierarchical ingroups and communities.
Purity/sanctity, which stems from the emotion of disgust that guards the body by responding to elicitors that are biologically or culturally linked to disease transmission.
The five foundations theory is both a nativist and a cultural-psychological theory. Modern moral psychology concedes that "morality is about protecting individuals" and focuses primarily on issues of justice (harm/care and fairness/reciprocity). Haidt and Graham's research found that "justice and related virtues...make up half of the moral world for liberals, while justice-related concerns make up only one fifth of the moral world for conservatives". Liberals value harm/care and fairness/reciprocity significantly more than the other moralities, while conservatives value all five equally. Ownership has also been argued to be a strong candidate for a moral foundation.
Moral virtues
In 2004, D. Lapsley and D. Narvaez outlined how social cognition explains aspects of moral functioning. Their social cognitive approach to personality has six critical resources of moral personality: cognition, self-processes, affective elements of personality, changing social context, lawful situational variability, and the integration of other literature. Lapsley and Narvaez suggest that moral values and actions stem from more than our virtues and are controlled by a set of self-created schemas (cognitive structures that organize related concepts and integrate past events). They claim that schemas are "fundamental to our very ability to notice dilemmas as we appraise the moral landscape" and that over time, people develop greater "moral expertise".
Triune ethics theory
The triune ethics meta-theory (TEM) has been proposed by Darcia Narvaez as a metatheory that highlights the relative contributions to moral development of biological inheritance (including human evolutionary adaptations), environmental influences on neurobiology, and the role of culture. TEM proposes three basic mindsets that shape ethical behavior: self-protectionism (of various types), engagement, and imagination (of various types, fueled by protectionism or engagement). A mindset influences perception, affordances, and rhetorical preferences. Actions taken within a mindset become an ethic when they trump other values. Engagement and communal imagination represent optimal human functioning, shaped by the evolved developmental niche (evolved nest) that supports optimal psychosocial neurobiological development. Based on worldwide anthropological research (e.g., Hewlett and Lamb's Hunter-Gatherer Childhoods), Narvaez uses small-band hunter-gatherers as a baseline for the evolved nest and its effects.
Moral reasoning and development
Moral development and reasoning are two overlapping topics of study in moral psychology that have historically received a great amount of attention, even preceding the influential work of Piaget and Kohlberg. Moral reasoning refers specifically to the study of how people think about right and wrong and how they acquire and apply moral rules. Moral development refers more broadly to age-related changes in thoughts and emotions that guide moral beliefs, judgments and behaviors.
Kohlberg's stage theory
Jean Piaget, in watching children play games, noted how their rationales for cooperation changed with experience and maturation. He identified two stages, heteronomous (morality centered outside the self) and autonomous (internalized morality). Lawrence Kohlberg sought to expand Piaget's work. His cognitive developmental theory of moral reasoning dominated the field for decades. He focused on moral development as one's progression in the capacity to reason about justice. Kohlberg's interview method used hypothetical moral dilemmas or conflicts of interest (most notably, the Heinz dilemma). He proposed six stages and three levels of development (claiming that "anyone who interviewed children about dilemmas and who followed them longitudinally in time would come to our six stages and no others"). At the preconventional level, the first two stages were the punishment-and-obedience orientation and the instrumental-relativist orientation. The next level, the conventional level, included the interpersonal concordance or "good boy – nice girl" orientation, along with the "law and order" orientation. Lastly, the postconventional level consisted of the social-contract, legalistic orientation and the universal-ethical-principle orientation. According to Kohlberg, an individual is considered more cognitively mature depending on their stage of moral reasoning, which grows as they advance in education and world experience.
Critics of Kohlberg's approach (such as Carol Gilligan and Jane Attanucci) argue that it over-emphasizes justice and under-emphasizes an additional perspective on moral reasoning, known as the care perspective. The justice perspective draws attention to inequality and oppression, while striving for reciprocal rights and equal respect for all. The care perspective draws attention to the ideas of detachment and abandonment, while striving for attention and response to people who need it. The care orientation is relationally based: it has a more situational focus, dependent on the needs of others, as opposed to the objectivity of the justice orientation. However, reviews by others have found that Gilligan's theory was not supported by empirical studies, since orientations depend on the individual. In fact, in neo-Kohlbergian studies with the Defining Issues Test, females tend to get slightly higher scores than males.
The attachment approach to moral judgment
Aner Govrin's attachment approach to moral judgment proposes that, through early interactions with the caregiver, the child acquires an internal representation of a system of rules that determine how right/wrong judgments are to be construed, used, and understood. By breaking moral situations down into their defining features, the attachment model of moral judgment outlines a framework for a universal moral faculty based on a universal, innate, deep structure that appears uniformly in the structure of almost all moral judgments regardless of their content.
Moral behaviour
Historically, major topics of study in the domain of moral behavior have included violence and altruism, bystander intervention, and obedience to authority (e.g., the Milgram experiment and the Stanford prison experiment). Recent research on moral behavior uses a wide range of methods, including experience sampling to estimate the actual prevalence of various kinds of moral behavior in everyday life. Research has also focused on variation in moral behavior over time, through studies of phenomena such as moral licensing. Yet other studies focusing on social preferences examine various kinds of resource-allocation decisions, or use incentivized behavioral experiments to investigate how people weigh their own interests against other people's when deciding whether to harm others, for example by examining how willing people are to administer electric shocks to themselves vs. others in exchange for money.
James Rest reviewed the literature on moral functioning and identified at least four components necessary for a moral behavior to take place:
Sensitivity – noticing and interpreting the situation
Reasoning and making a judgment regarding the best (most moral) option
Motivation (in the moment but also habitually, such as moral identity)
Implementation – having the skills and perseverance to carry out the action
Reynolds and Ceranic researched the effects of social consensus on one's moral behavior. Depending on the level of social consensus (high vs. low), moral behaviors will require greater or lesser degrees of moral identity to motivate an individual to make a choice and endorse a behavior. Also, depending on social consensus, particular behaviors may require different levels of moral reasoning.
More recent attempts to develop an integrated model of moral motivation have identified at least six different levels of moral functioning, each of which has been shown to predict some type of moral or pro-social behavior: moral intuitions, moral emotions, moral virtues/vices (behavioral capacities), moral values, moral reasoning, and moral willpower. This social intuitionist model of moral motivation suggests that moral behaviors are typically the product of multiple levels of moral functioning, and are usually energized by the "hotter" levels of intuition, emotion, and behavioral virtue/vice. The "cooler" levels of values, reasoning, and willpower, while still important, are proposed to be secondary to the more affect-intensive processes.
Moral behavior is also studied under the umbrella of personality psychology. Topics within personality psychology include the traits or individual differences underlying moral behavior, such as generativity, self-control, agreeableness, cooperativeness and honesty/humility, as well as moral change goals, among many other topics.
Regarding interventions aimed at shaping moral behavior, a 2009 meta-analysis of business ethics instruction programs found that such programs have only "a minimal impact on increasing outcomes related to ethical perceptions, behavior, or awareness." A 2005 meta-analysis suggested that positive affect can at least momentarily increase prosocial behavior (with subsequent meta-analyses also showing that prosocial behavior reciprocally increases positive affect in the actor).
Value-behavior consistency
In looking at the relations between moral values, attitudes, and behaviors, previous research asserts that there is less correspondence among these three aspects than one might assume. In fact, it seems more common for people to label their behaviors with a justifying value than to hold a value beforehand and then act on it. Some people are more likely to act on their personal values: those low in self-monitoring and high in self-consciousness, because they are more aware of themselves and less aware of how others may perceive them. (Self-consciousness here means being literally more conscious of oneself, not fearing judgement or feeling anxiety from others.) Social situations and the different categories of norms can indicate when people may act in accordance with their values, but this relationship is not firmly established. People will typically act in accordance with social, contextual and personal norms, and these norms may also follow one's moral values. Though certain assumptions and situations would suggest a major value-attitude-behavior relation, there is not enough research to confirm this phenomenon.
Moral willpower
Building on earlier work by Metcalfe and Mischel on delayed gratification, Baumeister, Miller, and Delaney explored the notion of willpower by first defining the self as being made up of three parts: reflexive consciousness, or the person's awareness of their environment and of themselves as an individual; interpersonal being, which seeks to mold the self into one that will be accepted by others; and executive function. They stated, "[T]he self can free its actions from being determined by particular influences, especially those of which it is aware". The three prevalent theories of willpower describe it as a limited supply of energy, as a cognitive process, and as a skill that is developed over time. Research has largely supported that willpower works like a "moral muscle" with a limited supply of strength that may be depleted (a process referred to as ego depletion), conserved, or replenished, and that a single act requiring much self-control can significantly deplete the "supply" of willpower. While exertion reduces the ability to engage in further acts of willpower in the short term, such exertions actually improve a person's ability to exert willpower for extended periods in the long run. More recent research, however, has cast doubt on the idea of ego depletion.
Moral intuitions
In 2001, Jonathan Haidt introduced his social intuitionist model, which claimed that with few exceptions, moral judgments are made based upon socially derived intuitions. Moral intuitions happen immediately, automatically, and unconsciously, with reasoning largely serving to generate post-hoc rationalizations to justify one's instinctual reactions. He provides four arguments to doubt the causal importance of reason. First, Haidt argues that since there is a dual-process system in the brain for making automatic evaluations or assessments, the same process must be applicable to moral judgment as well. The second argument, based on research on motivated reasoning, claims that people behave like "intuitive lawyers", searching primarily for evidence that will serve motives for social relatedness and attitudinal coherence. Third, Haidt found that people engage in post-hoc reasoning when faced with a moral situation; this a posteriori (after the fact) explanation gives the illusion of objective moral judgment but is in reality driven by one's gut feeling. Lastly, research has shown that moral emotion has a stronger link to moral action than moral reasoning does, citing Damasio's research on the somatic marker hypothesis and Batson's empathy-altruism hypothesis.
Similarly, in his theory of moral satisficing, Gerd Gigerenzer argues that moral behavior is not solely a result of deliberate reasoning but also of social heuristics that are embedded in social environments. In other words, intuitionist theories can use heuristics to explain intuition. He emphasizes that these are key to understanding moral behavior. Modifying moral behavior therefore entails changing heuristics and/or modifying environments rather than focusing on individuals. In this way, moral satisficing extends social intuitionism by adding both concrete heuristics and a focus on the environments with which the heuristics interact to produce behavior.
Following the publication of a landmark fMRI study in 2001, Joshua Greene separately proposed his dual process theory of moral judgment, according to which intuitive/emotional and deliberative processes respectively give rise to characteristically deontological and consequentialist moral judgments. A "deontologist" is someone who has rule-based morality that is mainly focused on duties and rights; in contrast, a "consequentialist" is someone who believes that only the best overall consequences ultimately matter.
Moral emotions
Moral emotions are a variety of social emotion that are involved in forming and communicating moral judgments and decisions, and in motivating behavioral responses to one's own and others' moral behavior.
While moral reasoning has been the focus of most studies of morality dating back to Plato and Aristotle, the emotive side of morality was historically looked upon with disdain in early moral psychology research. However, in the last 30–40 years, a new front of research has risen: moral emotions as the basis for moral behavior. This development began with a focus on empathy and guilt, but has since moved on to encompass new scholarship on emotions such as anger, shame, disgust, awe, and elevation. While different moral transgressions have been linked to different emotional reactions, bodily reactions to such transgressions are broadly similar, characterized by felt activation in the gut and head areas.
Moralization and moral conviction
Moralization, a term introduced to moral psychology by Paul Rozin, refers to the process through which preferences are converted into values. Relatedly, Linda Skitka and colleagues have introduced the concept of moral conviction, which refers to a "strong and absolute belief that something is right or wrong, moral or immoral." According to Skitka's integrated theory of moral conviction (ITMC), attitudes held with moral conviction, known as moral mandates, differ from strong but non-moral attitudes in a number of important ways. Namely, moral mandates derive their motivational force from their perceived universality, perceived objectivity, and strong ties to emotion. Perceived universality refers to the notion that individuals experience moral mandates as transcending persons and cultures; additionally, they are regarded as matters of fact. Regarding association with emotion, ITMC is consistent with Jonathan Haidt's social intuitionist model in stating that moral judgments are accompanied by discrete moral emotions (i.e., disgust, shame, guilt). Importantly, Skitka maintains that moral mandates are not the same thing as moral values. Whether an issue will be associated with moral conviction varies across persons.
One of the main lines of ITMC research addresses the behavioral implications of moral mandates. Individuals prefer greater social and physical distance from attitudinally dissimilar others when moral conviction is high. This effect of moral conviction could not be explained by traditional measures of attitude strength, extremity, or centrality. Skitka, Bauman, and Sargis placed participants in either attitudinally heterogeneous or homogeneous groups to discuss procedures regarding two morally mandated issues, abortion and capital punishment. Those in attitudinally heterogeneous groups demonstrated the least goodwill towards other group members, the least cooperation, and the most tension/defensiveness. Furthermore, individuals discussing a morally mandated issue were less likely to reach a consensus compared to those discussing non-moral issues.
Intersections with other fields
Sociological applications
Some research shows that people tend to self-segregate based on moral and political views, exaggerate the magnitude of moral disagreements across political divides, and avoid exposure to the opinions of those with opposing political views.
Normative implications
Researchers have begun to debate the implications (if any) that moral psychology research has for other subfields of ethics such as normative ethics and meta-ethics. For example, Peter Singer, citing Haidt's work on social intuitionism and Greene's dual process theory, presented an "evolutionary debunking argument" suggesting that the normative force of our moral intuitions is undermined by their being the "biological residue of our evolutionary history." John Michael Doris discusses the way in which social psychological experiments—such as the Stanford prison experiment, involving the idea of situationism—call into question a key component in virtue ethics: the idea that individuals have a single, environment-independent moral character. As a further example, Shaun Nichols (2004) examines how empirical data on psychopathology suggests that moral rationalism is false.
Additionally, research in moral psychology is being used to inform debates in applied ethics around moral enhancement.
Robotics and artificial intelligence
At the intersection of moral psychology and machine ethics, researchers have begun to study people's views regarding the potentially ethically significant decisions that will be made by self-driving cars.
Mohammad Atari and his colleagues recently examined the moral psychology of the famous chatbot, ChatGPT. These authors asked in their title, "which humans?" — rhetorically pointing out that people should not ask how "human-like" machine morality is, but to which humans it resembles. These authors discovered that Large Language Models (LLMs), especially ChatGPT, tend to echo moral values endorsed by Westerners, as their training datasets originate predominantly from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. This study points out that compared to the global average, people from WEIRD societies are more inclined toward individualism and impersonal prosocial behaviors while showing less traditionalism and group loyalty. The authors further highlighted that societies less aligned with these WEIRD moral values tend to experience greater misalignment with the moral values and outputs of ChatGPT.
Gerd Gigerenzer argued that the focus of AI ethics should reach far beyond the question of whether an AI system has a moral bias or is able to exhibit human-like moral responses; it also needs to investigate the actual motives and ethical behavior of the people behind the AI. Contrary to the 1990s dream of an egalitarian internet providing honest and accurate information to all, various tech billionaires and politicians have largely succeeded in leveraging AI for their own purposes of surveillance and control, tolerating systematic misinformation for profit, and increasing their individual power to the detriment of democracy.
See also
Altruism
Character education
Community psychology
Descriptive ethics
Experimental philosophy
Moral luck
Moral Minds
Neuroethics
Neuroeconomics §Social decision making
Peace psychology
Psychological egoism
Psychology of genocide
Science of morality
Social preferences
Value (ethics and social sciences)
References
Govrin, A. (2019). Ethics and attachment – How we make moral judgments. London: Routledge.
External links
Moral Psychology Research Group – with Knobe, Nichols, Doris and others.
From the Stanford Encyclopedia of Philosophy
Empathy
Moral Character
Moral Motivation
Moral Psychology: Empirical Approaches
Moral Responsibility
From the Internet Encyclopedia of Philosophy
Psychological Issues in Metaethics
Moral Character
Moral Development
Responsibility
Descriptive ethics
Philosophy of mind
Evolutionary approaches to depression
Evolutionary approaches to depression are attempts by evolutionary psychologists to use the theory of evolution to shed light on the problem of mood disorders within the perspective of evolutionary psychiatry. Depression is generally thought of as dysfunction or a mental disorder, but its prevalence does not increase with age the way dementia and other organic dysfunctions commonly do. Some researchers have surmised that the disorder may have evolutionary roots, in the same way that others suggest evolutionary contributions to schizophrenia, sickle cell anemia, psychopathy and other disorders. The proposed explanations for the evolution of depression remain controversial.
Background
Major depression (also called "major depressive disorder", "clinical depression" or often simply "depression") is a leading cause of disability worldwide, and in 2000 was the fourth leading contributor to the global burden of disease (measured in DALYs); it is also an important risk factor for suicide. It is understandable, then, that clinical depression is thought to be a pathology—a major dysfunction of the brain.
In most cases, rates of organ dysfunction increase with age, with low rates in adolescents and young adults, and the highest rates in the elderly. These patterns are consistent with evolutionary theories of aging which posit that selection against dysfunctional traits decreases with age (because there is a decreasing probability of surviving to later ages).
In contrast to these patterns, the prevalence of clinical depression is high in all age categories, including otherwise healthy adolescents and young adults. In one study of the US population, for example, the 12-month prevalence of a major depressive episode was highest in the youngest age category (15- to 24-year-olds). The high prevalence of unipolar depression (excluding depression associated with bipolar disorder) is also an outlier when compared to the prevalence of other mental disorders such as major intellectual disability, autism, schizophrenia and even the aforementioned bipolar disorder, all with prevalence rates about one tenth that of depression, or less. The only mental disorders with a higher prevalence than depression are anxiety disorders.
The common occurrence and persistence of a trait like clinical depression with such negative effects early in life is difficult to explain. (Rates of infectious disease are high in young people, of course, but clinical depression is not thought to be caused by an infection.) Evolutionary psychology and its application in evolutionary medicine suggest how behaviour and mental states, including seemingly harmful states such as depression, may have been beneficial adaptations of human ancestors which improved the fitness of individuals or their relatives. It has been argued, for example, that Abraham Lincoln's lifelong depression was a source of insight and strength. Some even suggest that "we aren't designed to have happiness as our natural default" and so a state of depression is the evolutionary norm.
The following hypotheses attempt to identify a benefit of depression that outweighs its obvious costs.
Such hypotheses are not necessarily incompatible with one another and may explain different aspects, causes, and symptoms of depression.
Psychic pain hypothesis
One reason depression is thought to be a pathology is that it causes so much psychic pain and distress. However, physical pain is also very distressful, yet it has an evolved function: to inform the organism that it is being damaged, to motivate it to withdraw from the source of damage, and to learn to avoid such damage-causing circumstances in the future. Sadness is also distressing, yet is widely believed to be an evolved adaptation. In fact, perhaps the most influential evolutionary view is that most cases of depression are simply particularly intense cases of sadness in response to adversity, such as the loss of a loved one.
According to the psychic pain hypothesis, depression is analogous to physical pain in that it informs the sufferer that current circumstances, such as the loss of a friend, are imposing a threat to biological fitness. It motivates the sufferer to cease activities that led to the costly situation, if possible, and causes him or her to learn to avoid similar circumstances in the future. Proponents of this view tend to focus on low mood, and regard clinical depression as a dysfunctional extreme of low mood—not as a unique set of characteristics physiologically distinct from regular depressed mood.
Alongside the absence of pleasure, other noticeable changes include psychomotor retardation, disrupted patterns of sleeping and feeding, a loss of sex drive and motivation—which are all also characteristics of the body's reaction to actual physical pain. In depressed people there is an increased activity in the regions of the cortex involved with the perception of pain, such as the anterior cingulate cortex and the left prefrontal cortex. This activity allows the cortex to manifest an abstract negative thought as a true physical stressor to the rest of the brain.
Behavioral shutdown model
The behavioral shutdown model states that if an organism faces more risk or expenditure than reward from its activities, the best evolutionary strategy may be to withdraw from them. This model proposes that emotional pain, like physical pain, serves a useful adaptive purpose. Negative emotions like disappointment, sadness, grief, fear, anxiety, anger, and guilt are described as "evolved strategies that allow for the identification and avoidance of specific problems, especially in the social domain." Depression is characteristically associated with anhedonia and lack of energy, and those experiencing it are risk-averse and perceive more negative and pessimistic outcomes because they are focused on preventing further loss. Although the model views depression as an adaptive response, it does not suggest that depression is beneficial by the standards of current society; but it does suggest that many approaches to depression treat symptoms rather than causes, and that underlying social problems need to be addressed.
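A toy formalization of this cost-benefit logic (an illustration only; the model itself is stated verbally in the literature) is that withdrawal is favored whenever the expected costs of activity exceed its expected rewards:

```python
# Sketch of the behavioral shutdown decision rule: stay active only while
# the expected reward of activity exceeds its expected risk/expenditure.

def should_withdraw(expected_reward, expected_cost):
    """True when shutdown (withdrawal) beats continued activity."""
    return expected_cost > expected_reward

# Lean season: foraging costs (energy, predation risk) outweigh the payoff.
print(should_withdraw(expected_reward=2.0, expected_cost=5.0))  # True
# Rich season: activity pays, so withdrawal would be maladaptive.
print(should_withdraw(expected_reward=5.0, expected_cost=2.0))  # False
```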
A phenomenon related to the behavioral shutdown model is learned helplessness. In animal subjects, a loss of control or predictability in the subject's experiences results in a condition similar to clinical depression in humans. That is to say, if uncontrollable and unstoppable stressors are repeated for long enough, a rat subject will develop learned helplessness, which shares a number of behavioral and psychological features with human depression. The subject will not attempt to cope with problems, even when placed in a stressor-free novel environment. Should its rare attempts at coping prove successful in a new environment, a long-lasting cognitive block prevents it from perceiving its action as useful, and its coping strategy does not last long. From an evolutionary perspective, learned helplessness also allows a conservation of energy for an extended period of time should individuals find themselves in a predicament outside of their control, such as an illness or a dry season. However, for today's humans whose depression resembles learned helplessness, this phenomenon usually manifests as a loss of motivation and the distortion whereby one uncontrollable aspect of a person's life is viewed as representative of all aspects of their life – suggesting a mismatch between ultimate cause and modern manifestation.
Analytical rumination hypothesis
This hypothesis suggests that depression is an adaptation that causes the affected individual to concentrate his or her attention and focus on a complex problem in order to analyze and solve it.
One way depression increases the individual's focus on a problem is by inducing rumination. Depression activates the left ventrolateral prefrontal cortex, which increases attention control and maintains problem-related information in an "active, accessible state" referred to as "working memory", or WM. As a result, depressed individuals have been shown to ruminate, reflecting on the reasons for their current problems. Feelings of regret associated with depression also cause individuals to reflect and analyze past events in order to determine why they happened and how they could have been prevented. The rumination hypothesis has come under criticism. Evolutionary fitness is increased by ruminating before rather than after bad outcomes. A situation that resulted in a child being in danger but unharmed should lead the parent to ruminate on how to avoid the dangerous situation in the future. Waiting until the child dies and then ruminating in a state of depression is too late.
Some cognitive psychologists argue that ruminative tendency itself increases the likelihood of the onset of depression.
Another way depression increases an individual's ability to concentrate on a problem is by reducing distraction from the problem. For example, anhedonia, which is often associated with depression, decreases an individual's desire to participate in activities that provide short-term rewards, and instead allows the individual to concentrate on long-term goals. In addition, "psychomotoric changes", such as solitariness, decreased appetite, and insomnia, also reduce distractions. For instance, insomnia enables conscious analysis of the problem to be maintained by preventing sleep from disrupting such processes. Likewise, solitariness, lack of physical activity, and lack of appetite all eliminate sources of distraction, such as social interactions, navigation through the environment, and "oral activity", which would otherwise disrupt the processing of the problem.
Possibilities of depression as a dysregulated adaptation
Depression, especially in the modern context, may not necessarily be adaptive. The abilities to feel pain and to experience depression are adaptive defense mechanisms, but when they are "too easily triggered, too intense, or long lasting", they can become "dysregulated". In such a case, defense mechanisms, too, can become diseases, such as "chronic pain or dehydration from diarrhea". Depression, which may be a similar kind of defense mechanism, may have become dysregulated as well.
Thus, unlike the other evolutionary theories, this one sees depression as a maladaptive extreme of something that is beneficial in smaller amounts. In particular, one theory focuses on the personality trait neuroticism. Low amounts of neuroticism may increase a person's fitness through various processes, but too much may reduce fitness by, for example, causing recurring depressions. Thus, evolution will select for an optimal amount, and most people will have neuroticism near this amount. However, genetic variation continually occurs, and some people will have high neuroticism, which increases the risk of depression.
Rank theory
Rank theory is the hypothesis that, if an individual is involved in a lengthy fight for dominance in a social group and is clearly losing, then depression causes the individual to back down and accept the submissive role. In doing so, the individual is protected from unnecessary harm. In this way, depression helps maintain a social hierarchy. This theory is a special case of a more general theory derived from the psychic pain hypothesis: that the cognitive response that produces modern-day depression evolved as a mechanism that allows people to assess whether they are in pursuit of an unreachable goal, and if they are, to motivate them to desist.
Social risk hypothesis
This hypothesis is similar to the social rank hypothesis but focuses more on the importance of avoiding exclusion from social groups, rather than direct dominance contests. The fitness benefits of forming cooperative bonds with others have long been recognised—during the Pleistocene period, for instance, social ties were vital for food foraging and finding protection from predators.
As such, depression is seen to represent an adaptive, risk-averse response to the threat of exclusion from social relationships that would have had a critical impact on the survival and reproductive success of our ancestors. Multiple lines of evidence on the mechanisms and phenomenology of depression suggest that mild to moderate (or "normative") depressed states preserve an individual's inclusion in key social contexts via three intersecting features: a cognitive sensitivity to social risks and situations (e.g., "depressive realism"); the inhibition of confident and competitive behaviours that are likely to put the individual at further risk of conflict or exclusion (as indicated by symptoms such as low self-esteem and social withdrawal); and signalling behaviours directed toward significant others to elicit more of their support (e.g., the so-called "cry for help"). According to this view, the severe cases of depression captured by clinical diagnoses reflect the maladaptive dysregulation of this mechanism, which may partly be due to the uncertainty and competitiveness of the modern, globalised world.
Honest signaling theory
Another reason depression is thought to be a pathology is that key symptoms, such as loss of interest in virtually all activities, are extremely costly to the sufferer. Biologists and economists have proposed, however, that signals with inherent costs can credibly signal information when there are conflicts of interest. In the wake of a serious negative life event, such as those that have been implicated in depression (e.g., death, divorce), "cheap" signals of need, such as crying, might not be believed when social partners have conflicts of interest. The symptoms of major depression, such as loss of interest in virtually all activities and suicidality, are inherently costly, but, as costly signaling theory requires, the costs differ for individuals in different states. For individuals who are not genuinely in need, the fitness cost of major depression is very high because it threatens the flow of fitness benefits. For individuals who are in genuine need, however, the fitness cost of major depression is low, because the individual is not generating many fitness benefits. Thus, only an individual in genuine need can afford to have major depression. Major depression therefore serves as an honest, or credible, signal of need.
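This logic can be sketched numerically (a toy model of costly signaling, not taken from the cited literature): the same symptoms impose different costs depending on the signaler's state, so only the genuinely needy come out ahead by signaling.

```python
# Toy costly-signaling sketch: signaling suspends the signaler's normal
# fitness-generating activity, so its cost scales with how much fitness
# the signaler is currently generating.

HELP_BENEFIT = 10.0     # fitness value of the help the signal elicits

def net_payoff_of_signaling(current_fitness_flow):
    """Net payoff = help gained minus the fitness flow forgone by signaling."""
    return HELP_BENEFIT - current_fitness_flow

# Genuinely needy: little ongoing fitness flow, so the signal is cheap.
print(net_payoff_of_signaling(2.0))    # 8.0  -> signaling pays
# Not in need: a large forgone fitness flow makes the same signal ruinous.
print(net_payoff_of_signaling(20.0))   # -10.0 -> signaling does not pay
```

Because the payoff is positive only for individuals in genuine need, social partners can treat the signal as credible.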
For example, individuals suffering a severe loss such as the death of a spouse are often in need of help and assistance from others. Such individuals who have few conflicts with their social partners are predicted to experience grief—a means, in part, to signal need to others. Such individuals who have many conflicts with their social partners, in contrast, are predicted to experience depression—a means, in part, to credibly signal need to others who might be skeptical that the need is genuine.
Bargaining theory
Depression is not only costly to the affected person, it also imposes a significant burden on family, friends, and society at large—yet another reason it is thought to be pathological. Yet if people with depression have real but unmet needs, they might have to provide an incentive to others to address those needs.
The bargaining theory of depression is similar to the honest signaling theory described above and to the niche change and social navigation theories of depression described below. It draws on theories of labor strikes developed by economists, adding one element to honest signaling theory: the fitness of social partners is generally correlated. When a wife has depression and reduces her investment in offspring, for example, the husband's fitness is also put at risk. Thus, not only do the symptoms of major depression serve as costly and therefore honest signals of need, they also compel reluctant social partners to respond to that need in order to prevent their own fitness from being reduced. This explanation for depression has been challenged. Depression decreases the joint product of the family or group, as the husband or helper only partially compensates for the loss of productivity by the depressed person. Instead of becoming depressed, the person could break their own leg and gain help from the social group, but this is obviously a counterproductive strategy. And the lack of a sex drive certainly does not improve marital relations or fitness.
Social navigation or niche change theory
The social navigation or niche change hypothesis proposes that depression is a social navigation adaptation of last resort, designed especially to help individuals overcome costly, complex contractual constraints on their social niche. The hypothesis combines the analytical rumination and bargaining hypotheses and suggests that depression, operationally defined as a combination of prolonged anhedonia and psychomotor retardation or agitation, provides a focused sober perspective on socially imposed constraints hindering a person's pursuit of major fitness enhancing projects. Simultaneously, publicly displayed symptoms, which reduce the depressive's ability to conduct basic life activities, serve as a social signal of need; the signal's costliness for the depressive certifies its honesty. Finally, for social partners who find it uneconomical to respond helpfully to an honest signal of need, the same depressive symptoms also have the potential to extort relevant concessions and compromises. Depression's extortionary power comes from the fact that it slows the flow of just those goods and services such partners have come to expect from the depressive under status quo socioeconomic arrangements.
Thus depression may be a social adaptation especially useful in motivating a variety of social partners, all at once, to help the depressive initiate major fitness-enhancing changes in their socioeconomic life. There are diverse circumstances under which this may become necessary in human social life, ranging from the loss of rank or of a key social ally, which makes the current social niche uneconomic, to having a set of creative new ideas about how to make a livelihood, which calls for a new niche. The social navigation hypothesis emphasizes that an individual can become tightly ensnared in an overly restrictive matrix of social exchange contracts, and that this situation sometimes necessitates a radical contractual upheaval that is beyond conventional methods of negotiation. Regarding the treatment of depression, this hypothesis calls into question any assumption by the clinician that the typical cause of depression is related to maladaptive, distorted thinking processes or other purely endogenous sources. The social navigation hypothesis calls instead for analysis of the depressive's talents and dreams, identification of relevant social constraints (especially those with a relatively diffuse non-point source within the social network of the depressive), and practical social problem-solving therapy designed to relax those constraints enough to allow the depressive to move forward with their life under an improved set of social contracts. This theory has been the subject of criticism.
Depression as an incentive device
This approach argues that being in a depressed state is not adaptive (indeed, quite the opposite), but that the threat of depression after bad outcomes and the promise of pleasure after good outcomes are adaptive because they motivate the individual to undertake efforts that increase fitness. The reason for not relying on pleasure alone as an incentive device is that happiness is costly in terms of fitness, as the individual becomes less cautious. This is most readily seen when an individual is manic and undertakes very risky behavior. The physiological manifestation of the incentives is most noticeable in bipolar individuals, with bouts of extreme elation and extreme depression, as anxiety, which concerns the (possibly immediate) future, is highly correlated with being bipolar. As noted earlier, bipolar disorder and clinical depression, as opposed to event depression, are viewed as dysregulation, just as persistently high (or low) blood pressure is viewed as dysregulation even though at times high or low blood pressure is fitness enhancing.
Prevention of infection
It has been hypothesized that depression is an evolutionary adaptation because it helps prevent infection in both the affected individual and their kin.
First, the associated symptoms of depression, such as inactivity and lethargy, encourage the affected individual to rest. The energy conserved in this way is crucial, as immune activation against infections is relatively costly; for instance, metabolic activity must increase by about 10% to produce even a 1°C change in body temperature. Depression therefore allows one to conserve energy and allocate it to the immune system more efficiently.
Depression further prevents infection by discouraging social interactions and activities that may result in exchange of infections. For example, the loss of interest discourages one from engaging in sexual activity, which, in turn, prevents the exchange of sexually transmitted diseases. Similarly, depressed mothers may interact less with their children, reducing the probability of the mother infecting her kin.
Lastly, the lack of appetite associated with depression may also reduce exposure to food-borne parasites.
However, chronic illness itself may also be involved in causing depression. In animal models, the prolonged overreaction of the immune system, in response to the strain of chronic disease, results in an increased production of cytokines (a diverse group of hormonal regulators and signaling molecules). Cytokines interact with neurotransmitter systems (mainly norepinephrine, dopamine, and serotonin) and induce depressive characteristics. The onset of depression may help an individual recover from their illness by allowing them a more reserved, safe and energetically efficient lifestyle. The overproduction of these cytokines, beyond optimal levels due to the repeated demands of dealing with a chronic disease, may result in clinical depression and its accompanying behavioral manifestations that promote extreme energy conservation.
The third ventricle hypothesis
The third ventricle hypothesis of depression proposes that the behavioural cluster associated with depression (hunched posture, avoidance of eye contact, reduced appetites for food and sex, plus social withdrawal and sleep disturbance) serves to reduce an individual's attack-provoking stimuli within the context of a chronically hostile social environment. It further proposes that this response is mediated by the acute release of an unknown inflammatory agent (probably a cytokine) into the third ventricular space. In support of this suggestion, imaging studies reveal that the third ventricle is enlarged in depressed individuals.
Reception
Clinical psychology and psychiatry have historically been relatively isolated from the field of evolutionary psychology. Some psychiatrists raise the concern that evolutionary psychologists seek to explain hidden adaptive advantages without engaging the rigorous empirical testing required to back up such claims. While there is strong research to suggest a genetic link to bipolar disorder and schizophrenia, there is significant debate within clinical psychology about the relative influence and the mediating role of cultural or environmental factors. For example, epidemiological research suggests that different cultural groups may have divergent rates of diagnosis, symptomatology, and expression of mental illnesses. There has also been increasing acknowledgment of culture-bound disorders, which may be viewed as an argument for an environmental versus genetic psychological adaptation. While certain mental disorders may have psychological traits that can be explained as 'adaptive' on an evolutionary scale, these disorders cause individuals significant emotional and psychological distress and negatively influence the stability of interpersonal relationships and day-to-day adaptive functioning.
See also
Evolutionary approaches to postpartum depression
General:
Evolutionary psychology
Evolutionary medicine
Videos
TED Talk: Can Depression be Good for You?
References
Depression (mood)
Evolutionary psychology
Study skills
Study skills or study strategies are approaches applied to learning. Study skills are an array of skills which tackle the process of organizing and taking in new information, retaining information, or dealing with assessments. They are discrete techniques that can be learned, usually in a short time, and applied to all or most fields of study. More broadly, any skill which boosts a person's ability to study, retain and recall information, and which assists in passing exams, can be termed a study skill; this could include time management and motivational techniques.
Some examples are mnemonics, which aid the retention of lists of information; effective reading; concentration techniques; and efficient note taking.
Given the generic nature of study skills, they must be distinguished from strategies that are specific to a particular field of study (e.g. music or technology), and from abilities inherent in the student, such as aspects of intelligence or learning styles. It is crucial, however, for students to gain initial insight into their habitual approaches to study, so they may better understand the dynamics and personal resistances involved in learning new techniques.
Historical context
Study skills are generally critical to success in school, considered essential for acquiring good grades, and useful for learning throughout one's life. While often left up to the student and their support network, study skills are increasingly taught at the high school and university level.
The term study skills is used for general approaches to learning as well as for skills specific to particular courses of study. There are many theoretical works on the subject, including a vast number of popular books and websites. Manuals for students have been published since the 1940s.
In the 1950s and 1960s, college instructors in the fields of psychology and the study of education used research, theory, and experience with their own students in writing manuals. Marvin Cohn based the advice for parents in his 1978 book Helping Your Teen-Age Student on his experience as a researcher and head of a university reading clinic that tutored teenagers and young adults. In 1986, when Dr. Gary Gruber's Essential Guide to Test Taking for Kids was first published, the author had written 22 books on taking standardized tests. A work in two volumes, one for upper elementary grades and the other for middle school, the Guide has methods for taking tests and completing schoolwork.
Types
Rehearsal and rote learning
Memorization is the process of committing something to memory, often by rote. The act of memorization is often a deliberate mental process undertaken in order to store information in one's memory for later recall. This information can be experiences, names, appointments, addresses, telephone numbers, lists, stories, poems, pictures, maps, diagrams, facts, music or other visual, auditory, or tactile information. Memorization may also refer to the process of storing particular data into the memory of a device. One of the most basic approaches to learning any information is simply to repeat it by rote. Typically this will include reading over notes or a textbook and re-writing notes.
The weakness of rote learning is that it implies a passive reading and listening style. Educators such as John Dewey have argued that students need to learn critical thinking – questioning and weighing up evidence as they learn. This can be done during lectures or when reading books.
Reading and listening
A method that is useful during the first interaction with the subject of study is the REAP method. This method helps students improve their understanding of a text and connect their ideas with those of the author. REAP is an acronym for Read, Encode, Annotate and Ponder.
Read: Reading a section to discern the idea.
Encode: Paraphrasing the idea from the author's perspective to the student's own words.
Annotate: Annotating the section with critical understanding and other relevant notes.
Ponder: To ponder what they have read through thinking, discussing with others, and reading related materials. This allows for elaboration and fulfillment of the zone of proximal development.
Annotating and encoding help reprocess content into concise and coherent knowledge that adds to a meaningful symbolic fund of knowledge. Precise annotation, organizing question annotation, intentional annotation, and probe annotation are some of the annotation methods used.
A method used to focus on key information when studying from books is the PQRST method. This method prioritizes information in a way that relates directly to how students will be asked to use it in an exam. PQRST is an acronym for Preview, Question, Read, Summary, Test.
Preview: The student looks at the topic to be learned by glancing over the major headings or the points in the syllabus.
Question: The student formulates questions to be answered following a thorough examination of the topic(s).
Read: The student reads through the related material, focusing on the information that best relates to the questions formulated earlier.
Summary: The student summarizes the topic, bringing his or her own understanding of the process. This may include written notes, spider diagrams, flow diagrams, labeled diagrams, mnemonics, or even voice recordings.
Test: The student answers the questions drafted earlier, avoiding adding any questions that might distract or change the subject.
A variety of studies from different colleges nationwide show that peer communication can substantially improve study habits. One study found an average score increase of 73% among those enrolled in the classes surveyed.
In order to make reading or reviewing material more engaging and active, learners can create cues that will stimulate recall later on. A cue can be a word, short phrase, or song that helps the learner access a memory that was encoded intentionally with this prompt in mind. The use of cues to aid memory has been popular for many years; however, research suggests that adopting cues made by others is not as effective as cues that learners create themselves.
Self-testing is another effective practice when preparing for exams or other standardized memory recall situations. Many students prepare for exams by simply rereading textbook passages or materials. However, this likely creates a false sense of understanding because of the increased familiarity students have with passages they have reviewed recently or frequently. In 2006, Roediger and Karpicke studied eighth-grade students' performance on history exams. Their results showed that students who tested themselves on material they had learned, rather than simply reviewing or rereading it, had both better and longer-lasting retention. The term testing effect is used to describe this increase in memory performance.
Taking notes on a computer can also deter impactful learning, even when students are using computers solely for the purpose of note-taking and are not attempting to multitask during lectures or study sessions. This is likely due to shallower processing by students using computers to take notes. Taking notes on a computer often leads students to record lectures verbatim, instead of writing the points of a lecture in their own words.
Speed reading, while trainable, results in lower accuracy, comprehension, and understanding.
Flashcards
Flashcards are visual cues on cards. These have numerous uses in teaching and learning but can be used for revision. Students often make their own flashcards, or more detailed index cards – cards designed for filing, often A5 size, on which short summaries are written. Being discrete and separate, they have the advantage of allowing students to re-order them, pick a selection to read over, or choose randomly for self-testing. Software equivalents can be used.
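As a sketch of the self-testing use described above, the following minimal program (the names and card contents are our own illustrative choices, not drawn from any particular flashcard software) shuffles a deck and re-queues cards answered incorrectly, loosely in the spirit of the Leitner system.

```python
import random

# Minimal self-testing flashcard loop: failed cards are re-queued so they
# come up again, mimicking re-ordering and random selection of a paper deck.
deck = [
    ("REAP", "read, encode, annotate and ponder"),
    ("PQRST", "preview, question, read, summary, test"),
]

def drill(cards):
    queue = list(cards)
    random.shuffle(queue)  # random order, as with a shuffled physical deck
    while queue:
        front, back = queue.pop(0)
        guess = input(f"{front}? ").strip().lower()
        if guess != back:
            print("Correct answer:", back)
            queue.append((front, back))  # failed cards return for another pass

# drill(deck)  # uncomment for an interactive self-test session
```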
Summary methods
Summary methods vary depending on the topic, but most involve condensing the large amount of information from a course or book into shorter notes. Often, these notes are then condensed further into key facts.
Organized summaries: Such as outlines showing keywords and definitions and relations, usually in a tree structure.
Spider diagrams: Using spider diagrams or mind maps can be an effective way of linking concepts together. They can be useful for planning essays and essay responses in exams. These tools can give a visual summary of a topic that preserves its logical structure, with lines used to show how different parts link together.
Visual imagery
Some memory techniques make use of visual memory. One popular memory enhancing technique is the method of loci, a system of visualizing key information in real physical locations e.g. around a room.
Diagrams are often underrated tools. They can be used to bring all the information together and provide practice reorganizing what has been learned in order to produce something practical and useful. They can also aid the recall of information learned very quickly, particularly if the student made the diagram while studying the information. Pictures can then be transferred to flashcards that are very effective last-minute revision tools rather than rereading any written material.
Acronyms and mnemonics
A mnemonic is a method of organizing and memorizing information. There are four main types of mnemonic: (1) Narrative (relying on a story of some kind, or a sequence of real or imagined events); (2) Sonic/Textual (using rhythm or repeated sound, such as rhyme, or memorable textual patterns such as acronyms); (3) Visual (diagrams, mind maps, graphs, images, etc.); (4) 'Topical' (meaning ‘place-dependent’, for instance, using features of a familiar room, building or set of landmarks as a way of coding and recalling sequenced facts). Some mnemonics use a simple phrase or fact as a trigger for a longer list of information. For example, the cardinal points of the compass can be recalled in the correct order with the phrase "Never Eat Shredded Wheat". Starting with North, the first letter of each word relates to a compass point in clockwise order round a compass.
Examination strategies
The Black-Red-Green method (developed through the Royal Literary Fund) helps the student to ensure that every aspect of the question posed has been considered, both in exams and essays. The student underlines relevant parts of the question using three separate colors (or some equivalent). BLAck denotes 'BLAtant instructions', i.e. something that clearly must be done; a directive or obvious instruction. REd is a REference Point or REquired input of some kind, usually to do with definitions, terms, cited authors, theory, etc. (either explicitly referred to or strongly implied). GREen denotes GREmlins, which are subtle signals one might easily miss, or a 'GREen light' that gives a hint on how to proceed, or where to place the emphasis in answers. Another popular method while studying is the PEE method: Point, Evidence, and Explain. This helps the student break down exam questions, allowing them to maximize their marks/grade during the exam. Many schools encourage practicing the PEE method prior to an exam.
Spacing
Spacing, also called distributed learning by some, helps individuals remember at least as much, if not more, information for a longer period of time than using only one study skill. Using spacing in addition to other study methods can improve retention and performance on tests. Spacing is especially useful for retaining and recalling new material. The theory of spacing allows students to split a single long session into several shorter sessions in a day, or even days apart, instead of cramming all study materials into one long study session that lasts for hours. Studying will not last longer than it would have originally, and one is not working harder, but this tool gives the user the ability to remember and recall things for a longer time period. The spacing effect is not only beneficial for memorization; spaced repetition can also potentially improve classroom learning. The science behind this comes from Jost's law of 1897: "If two associations are of equal strength but of different age, a new repetition has a greater value for the older one". This means that if a person were to study two things once, at different times, the one studied most recently will be easier to recall.
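One way to operationalize spacing is an expanding-interval review schedule, as sketched below. The specific intervals are illustrative assumptions only; neither Jost's law nor the sources above prescribe these values.

```python
from datetime import date, timedelta

# Expanding review intervals in days -- illustrative, not prescriptive.
INTERVALS = [1, 3, 7, 14, 30]

def review_dates(first_study: date) -> list[date]:
    """Spread reviews over days instead of one long cramming session."""
    return [first_study + timedelta(days=d) for d in INTERVALS]

for when in review_dates(date(2024, 1, 1)):
    print(when)  # 2024-01-02, 2024-01-04, 2024-01-08, 2024-01-15, 2024-01-31
```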
Interleaving and blocking
Blocking is studying one topic at a time. Interleaving is another technique used to enhance learning and memory; it involves practicing and learning multiple related skills or topics. For example, when training three skills A, B and C: blocking uses the pattern of AAA-BBB-CCC while interleaving uses the pattern of ABC-ABC-ABC. Research has found that interleaving is superior to blocking in learning skills and studying.
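The two practice patterns named above are simple to generate programmatically; this small sketch reproduces the AAA-BBB-CCC and ABC-ABC-ABC schedules for any list of skills (the function names are our own).

```python
def blocked(skills: list[str], reps: int) -> list[str]:
    """AAA-BBB-CCC: complete all repetitions of one skill before the next."""
    return [s for s in skills for _ in range(reps)]

def interleaved(skills: list[str], reps: int) -> list[str]:
    """ABC-ABC-ABC: cycle through all the skills on every pass."""
    return [s for _ in range(reps) for s in skills]

print(blocked(["A", "B", "C"], 3))      # ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C']
print(interleaved(["A", "B", "C"], 3))  # ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C']
```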
Retrieval and testing
One of the most efficient methods of learning is trying to retrieve learned information and skills. This can be achieved by leveraging the testing effect through testing, quizzing, self-testing, problem-solving, active recall, flashcards, and practicing the skills, among other techniques.
Time management, organization and lifestyle changes
Often, improvements to the effectiveness of study may be achieved through changes to things unrelated to the study material itself, such as time management, boosting motivation, avoiding procrastination, and improving sleep and diet.
Time management in study sessions aims to ensure that activities that achieve the greatest benefit are given the greatest focus. A traffic lights system is a simple way of identifying the importance of information, highlighting or underlining information in colours:
Green: topics to be studied first; important and also simple
Amber: topics to be studied next; important but time-consuming
Red: lowest priority; complex and not vital.
This reminds students to start with the things which will provide the quickest benefit, while 'red' topics are only dealt with if time allows. The concept is similar to the ABC analysis, commonly used by workers to help prioritize. Also, some websites (such as FlashNotes) can be used for additional study materials and may help improve time management and increase motivation.
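As an illustration of the traffic-light triage, the sketch below sorts a topic list so that green topics are studied first and red topics last; the topic names and colour ratings are invented for the example.

```python
# Invented example topics; the green/amber/red rating drives study order.
PRIORITY = {"green": 0, "amber": 1, "red": 2}

topics = [
    ("essay structure", "amber"),
    ("key definitions", "green"),
    ("statistical proofs", "red"),
]

for topic, colour in sorted(topics, key=lambda t: PRIORITY[t[1]]):
    print(colour, "->", topic)
# green -> key definitions, amber -> essay structure, red -> statistical proofs
```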
In addition to time management, sleep is important; getting adequate rest improves memorisation. Students are generally more productive in the morning than the afternoon.
In addition to time management and sleep, emotional state of mind can matter when a student is studying. If an individual is calm or nervous in class, replicating that emotion can assist in studying: an individual is more likely to recall information if they are in the same state of mind as when they were in class. This also works in the other direction; if one is upset but is normally calm in class, it is better to wait until one feels calmer to study, so that at the time of the test or class one will remember more.
While productivity is greater earlier in the day, current research suggests that material studied in the afternoon or evening is better consolidated and retained. This is consistent with current memory consolidation models, which suggest that tasks requiring analysis and application are better suited to the morning and midday, while learning new information and memorizing are better suited to evenings.
The Pomodoro Method is another effective way of increasing productivity during a set amount of time by limiting interruptions. Invented in the 1980s, the Pomodoro Technique segments blocks of time into 30-minute sections. Each 30-minute section (called a Pomodoro) is composed of a 25-minute study or work period and a 5-minute rest period, and it is recommended that every four Pomodoros be followed by a 15-30-minute break. Though this technique has increased in popularity, it had not been empirically studied until more recently. A software engineering corporation found that employees using the Pomodoro Method saw a decrease in their workflow interruptions and an increase in their satisfaction. By being mindful of wasted time during study, students can increase their learning productivity.
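A minimal timer for the schedule just described (25-minute work periods, 5-minute rests, and a longer break after the fourth Pomodoro) might look as follows; the durations come from the text above, while the function and variable names are our own.

```python
import time

WORK_MIN, REST_MIN, LONG_BREAK_MIN = 25, 5, 15  # long break: 15-30 minutes

def pomodoro(cycles: int = 4) -> None:
    """Run one set of Pomodoros: 25 min work + 5 min rest, then a long break."""
    for n in range(1, cycles + 1):
        print(f"Pomodoro {n}: work for {WORK_MIN} minutes")
        time.sleep(WORK_MIN * 60)
        if n < cycles:
            print(f"Rest for {REST_MIN} minutes")
            time.sleep(REST_MIN * 60)
    print(f"Long break for {LONG_BREAK_MIN} minutes or more")

# pomodoro()  # uncomment to run; the sleeps use real wall-clock time
```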
Journaling can help students increase their academic performance, principally by reducing stress and anxiety. Much of students' difficulty with, or aversion to, analytic subjects such as math or science is due to a lack of confidence or belief that learning is reasonably within their abilities. Therefore, reducing the stress of learning new and/or complex material is paramount to helping them succeed. Students without access to an outside source of support can use journaling to simulate a similar environment and effect. For example, Frattaroli et al. studied students who were preparing to take graduate study entrance exams such as the GRE, LSAT, and MCAT. They found that students' journal entries recorded immediately before taking these historically stress-inducing tests followed a similar logical flow: at the beginning of writing, participants would express fear or concern toward the test; through the course of writing their experiences down, however, they would encourage themselves and ultimately cultivate hope about the upcoming exams. As a result, those who journaled immediately before these tests reported lower anxiety and better test results.
Studying environment
Studying can also be more effective if one changes one's environment while studying. For example, the first time studying the material, one can study in a bedroom; the second time, outside; and the final time, in a coffee shop. The thinking behind this is that when an individual changes their environment, the brain associates different aspects of the learning with each setting, giving a stronger hold and additional brain pathways with which to access the information. In this context, environment can mean many things, from location to sounds to smells to other stimuli, including foods. When discussing the environment's effect on studying and retention, Carey says "a simple change in venue improved retrieval strength (memory) by 40 percent." Another change in the environment can be background music; if people study with music playing and they are able to play the same music during test time, they will recall more of the information they studied. According to Carey, "background music weaves itself subconsciously into the fabric of stored memory." This "distraction" in the background helps to create more vivid memories of the studied material.
Analogies
Analogies can be a highly effective way to increase the efficiency of encoding and long-term memory. Popular uses of analogies include forming visual images that represent subject matter, linking words or information to oneself, and either imagining or creating diagrams that display the relationship between elements of complex concepts. A 1970 study by Bower and Winzenz found that as participants created analogies that had sentimentality or relevance to themselves as unique individuals, they were better able to store information as well as recall what had been studied. This is referred to as the self-reference effect. Adding to this phenomenon, examples that are more familiar to an individual, or that are more vivid or detailed, are even more easily remembered. However, analogies that are logically flawed and/or not clearly described can create misleading or superficial models in learners.
Concept mapping
There is some support for the efficacy of concept mapping as a learning tool.
See also
Homework
Learning
Learning styles
Reading day
Speed reading
SQ3R
Study guide
Study software
Video study guide
References
External links
Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology from Association for Psychological Science
Think You Know How To Study? Think Again - audio report by NPR
Academic learning strategy videos from Dartmouth College provide skills training
Learning methods
Supervision
Supervision is an act or instance of directing, managing, or overseeing.
Etymology
The English noun "supervision" derives from the two Latin words "super" (above) and "videre" (see, observe).
Spelling
The spelling is "supervision" in all standard varieties of English, including North American English.
Definitions
Supervision is the act or function of overseeing something or somebody. It is the process that involves guiding, instructing and correcting someone.
A person who performs supervision is a "supervisor", but does not always have the formal title of supervisor. A person who is getting supervision is the "supervisee".
Theoretical scope
Generally, supervision contains elements of providing knowledge, helping to organize tasks, enhancing motivation, and monitoring activity and results; the weight of each element varies across contexts.
Nature of supervision
Academia
In academia, supervision is the aiding and guiding of a postgraduate research student, graduate student, or undergraduate student in their research project, offering both moral support and scientific insight and guidance. The supervisor is often a senior scientist or scholar, in some countries called a doctoral advisor.
Business
In business, supervision is overseeing the work of staff. The person performing supervision could lack a formal title or carry the title supervisor or manager, where the latter has wider authority.
Counseling
In clinical supervision, the psychologist or psychiatrist has talk sessions with another professional in the field to debrief and mentally process the patient work.
Society
In society, supervision could be performed by the state or corporate entities to monitor and control its citizens. Public entities often do supervision of different activities in the nation, such as bank supervision.
See also
Clinical supervision
Management
Supervisor
References
Management
Mental model
A mental model is an internal representation of external reality: that is, a way of representing reality within one's mind. Such models are hypothesized to play a major role in cognition, reasoning and decision-making. The term for this concept was coined in 1943 by Kenneth Craik, who suggested that the mind constructs "small-scale models" of reality that it uses to anticipate events. Mental models can help shape behaviour, including approaches to solving problems and performing tasks.
In psychology, the term mental models is sometimes used to refer to mental representations or mental simulation generally. The concepts of schema and conceptual models are cognitively adjacent. Elsewhere, it is used to refer to the "mental model" theory of reasoning developed by Philip Johnson-Laird and Ruth M. J. Byrne.
History
The term mental model is believed to have originated with Kenneth Craik in his 1943 book The Nature of Explanation. Georges-Henri Luquet in Le dessin enfantin (Children's drawings), published in 1927 by Alcan, Paris, argued that children construct internal models, a view that influenced, among others, child psychologist Jean Piaget.
Jay Wright Forrester defined general mental models thus:
The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system (Forrester, 1971).
Philip Johnson-Laird published Mental Models: Towards a Cognitive Science of Language, Inference and Consciousness in 1983. In the same year, Dedre Gentner and Albert Stevens edited a collection of chapters in a book also titled Mental Models. The first line of their book explains the idea further: "One function of this chapter is to belabor the obvious; people's views of the world, of themselves, of their own capabilities, and of the tasks that they are asked to perform, or topics they are asked to learn, depend heavily on the conceptualizations that they bring to the task." (see the book: Mental Models).
Since then, there has been much discussion and use of the idea in human-computer interaction and usability by researchers including Donald Norman and Steve Krug (in his book Don't Make Me Think). Walter Kintsch and Teun A. van Dijk, using the term situation model (in their book Strategies of Discourse Comprehension, 1983), showed the relevance of mental models for the production and comprehension of discourse.
Charlie Munger popularized the use of multi-disciplinary mental models for making business and investment decisions.
Mental models and reasoning
One view of human reasoning is that it depends on mental models. In this view, mental models can be constructed from perception, imagination, or the comprehension of discourse (Johnson-Laird, 1983). Such mental models are similar to architects' models or to physicists' diagrams in that their structure is analogous to the structure of the situation that they represent, unlike, say, the structure of logical forms used in formal rule theories of reasoning. In this respect, they are a little like pictures in the picture theory of language described by philosopher Ludwig Wittgenstein in 1922. Philip Johnson-Laird and Ruth M.J. Byrne developed their mental model theory of reasoning which makes the assumption that reasoning depends, not on logical form, but on mental models (Johnson-Laird and Byrne, 1991).
Principles of mental models
Mental models are based on a small set of fundamental assumptions (axioms), which distinguish them from other proposed representations in the psychology of reasoning (Byrne and Johnson-Laird, 2009). Each mental model represents a possibility. A mental model represents one possibility, capturing what is common to all the different ways in which the possibility may occur (Johnson-Laird and Byrne, 2002). Mental models are iconic, i.e., each part of a model corresponds to each part of what it represents (Johnson-Laird, 2006). Mental models are based on a principle of truth: they typically represent only those situations that are possible, and each model of a possibility represents only what is true in that possibility according to the proposition. However, mental models can represent what is false, temporarily assumed to be true, for example, in the case of counterfactual conditionals and counterfactual thinking (Byrne, 2005).
Reasoning with mental models
People infer that a conclusion is valid if it holds in all the possibilities. Procedures for reasoning with mental models rely on counter-examples to refute invalid inferences; they establish validity by ensuring that a conclusion holds over all the models of the premises. Reasoners focus on a subset of the possible models of multiple-model problems, often just a single model. The ease with which reasoners can make deductions is affected by many factors, including age and working memory (Barrouillet, et al., 2000). They reject a conclusion if they find a counterexample, i.e., a possibility in which the premises hold, but the conclusion does not (Schroyens, et al. 2003; Verschueren, et al., 2005).
Criticisms
Scientific debate continues about whether human reasoning is based on mental models, versus formal rules of inference (e.g., O'Brien, 2009), domain-specific rules of inference (e.g., Cheng & Holyoak, 2008; Cosmides, 2005), or probabilities (e.g., Oaksford and Chater, 2007). Many empirical comparisons of the different theories have been carried out (e.g., Oberauer, 2006).
Mental models of dynamics systems: mental models in system dynamics
Characteristics
A mental model is generally:
founded on unquantifiable, impugnable, obscure, or incomplete facts;
flexible – considerably variable in positive as well as in negative sense;
an information filter that causes selective perception, perception of only selected parts of information;
very limited compared with the complexities of the world; even when a scientific model is extensive and in accordance with a certain reality, the derivation of its logical consequences must take into account such restrictions as working memory (i.e., limits on the number of elements people are able to remember), gestalt effects, failures of the principles of logic, etc.;
dependent on sources of information that one cannot find anywhere else, that are available at any time, and that can be used.
Mental models are a fundamental way to understand organizational learning. Mental models, in popular science parlance, have been described as "deeply held images of thinking and acting". Mental models are so basic to understanding the world that people are hardly conscious of them.
Expression of mental models of dynamic systems
S.N. Groesser and M. Schaffernicht (2012) describe three basic methods which are typically used:
Causal loop diagrams – displaying tendency and a direction of information connections and the resulting causality and feedback loops
System structure diagrams – another way to express the structure of a qualitative dynamic system
Stock and flow diagrams - a way to quantify the structure of a dynamic system
These methods make it possible to present a mental model of a dynamic system as an explicit, written model of a certain system based on internal beliefs. Analyzing such graphical representations has been an increasing area of research across many social science fields. Additionally, software tools that attempt to capture and analyze the structural and functional properties of individual mental models, such as Mental Modeler ("a participatory modeling tool based in fuzzy-logic cognitive mapping"), have recently been developed and used to collect, compare, and combine mental model representations collected from individuals for use in social science research, collaborative decision-making, and natural resource planning.
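To make the stock-and-flow idea concrete, here is a minimal simulation of a single stock with a constant inflow and a proportional outflow (a classic "bathtub" model); the parameter values are arbitrary illustrations and are not drawn from Groesser and Schaffernicht.

```python
# Minimal stock-and-flow ("bathtub") simulation: one stock, a constant
# inflow, and an outflow proportional to the current stock level.
stock = 100.0      # initial stock level (arbitrary units)
inflow = 10.0      # units added per time step
drain_rate = 0.2   # fraction of the stock leaving per time step

for t in range(5):
    outflow = drain_rate * stock
    stock += inflow - outflow  # the stock integrates its net flow over time
    print(f"t={t + 1}: stock={stock:.1f}")
# The stock converges toward the equilibrium inflow / drain_rate = 50 units.
```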
Mental model in relation to system dynamics and systemic thinking
In simplifying reality, building a model can help one make sense of reality; systemic thinking and system dynamics pursue this aim.
These two disciplines can help to construct mental models that are better coordinated with reality and to simulate them accurately. They increase the probability that the consequences of how we decide and act are in accordance with how we plan.
System dynamics – extending mental models through the creation of explicit models, which are clear, easily communicated and can be compared with each other.
Systemic thinking – seeking the means to improve the mental models and thereby improve the quality of dynamic decisions that are based on mental models.
Experimental studies carried out in weightlessness and on Earth using neuroimaging showed that humans are endowed with a mental model of the effects of gravity on object motion.
Single and double-loop learning
After analyzing the basic characteristics, it is necessary to consider the process of changing mental models, that is, the process of learning. Learning is a feedback-loop process, and the feedback loops can be illustrated as single-loop learning or double-loop learning.
Single-loop learning
Mental models affect the way that people work with information, and also how they determine the final decision. The decision itself changes, but the mental models remain the same. It is the predominant method of learning, because it is very convenient.
Double-loop learning
Double-loop learning is used when it is necessary to change the mental model on which a decision depends. Unlike single loops, this model includes a shift in understanding, from simple and static to broader and more dynamic, such as taking into account changes in the surroundings and the need to express changes in mental models.
See also
All models are wrong
Cognitive map
Cognitive psychology
Conceptual model
Educational psychology
Folk psychology
Internal model (motor control)
Knowledge representation
Lovemap
Macrocognition
Map–territory relation
Model-dependent realism
Neuro-linguistic programming
Neuroeconomics
Neuroplasticity
OODA loop
Psyche (psychology)
Self-stereotyping
Social intuitionism
Space mapping
System dynamics
Text and conversation theory
Notes
References
Barrouillet, P. et al. (2000). Conditional reasoning by mental models: chronometric and developmental evidence. Cognit. 75, 237-266.
Byrne, R.M.J. (2005). The Rational Imagination: How People Create Counterfactual Alternatives to Reality. Cambridge MA: MIT Press.
Byrne, R.M.J. & Johnson-Laird, P.N. (2009). 'If' and the problems of conditional reasoning. Trends in Cognitive Sciences. 13, 282-287
Cheng, P.C. and Holyoak, K.J. (2008) Pragmatic reasoning schemas. In Reasoning: studies of human inference and its foundations (Adler, J.E. and Rips, L.J., eds), pp. 827–842, Cambridge University Press
Cosmides, L. et al. (2005) Detecting cheaters. Trends in Cognitive Sciences. 9,505–506
Forrester, J. W. (1971) Counterintuitive behavior of social systems. Technology Review.
Oberauer K. (2006) Reasoning with conditionals: A test of formal models of four theories. Cognit. Psychol. 53:238–283.
O’Brien, D. (2009). Human reasoning includes a mental logic. Behav. Brain Sci. 32, 96–97
Oaksford, M. and Chater, N. (2007) Bayesian Rationality. Oxford University Press
Johnson-Laird, P.N. (1983). Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge: Cambridge University Press.
Johnson-Laird, P.N. (2006) How We Reason. Oxford University Press
Johnson-Laird, P.N. and Byrne, R.M.J. (2002) Conditionals: a theory of meaning, inference, and pragmatics. Psychol. Rev. 109, 646–678
Schroyens, W. et al. (2003). In search of counterexamples: Deductive rationality in human reasoning. Quart. J. Exp. Psychol. 56(A), 1129–1145.
Verschueren, N. et al. (2005). Everyday conditional reasoning: A working memory-dependent tradeoff between counterexample and likelihood use. Mem. Cognit. 33, 107-119.
Further reading
Georges-Henri Luquet (2001). Children's Drawings. Free Association Books.
Chater, N. et al. (2006) Probabilistic Models of Cognition: Conceptual Foundations. Trends Cogn Sci 10(7):287-91.
Gentner, Dedre; Stevens, Albert L., eds. (1983). Mental Models. Hillsdale: Erlbaum 1983.
Groesser, S.N. (2012). Mental model of dynamic systems. In N.M. Seel (Ed.). The encyclopedia of the sciences of learning (Vol. 5, pp. 2195–2200). New York: Springer.
Groesser, S.N. & Schaffernicht, M. (2012). Mental Models of Dynamic Systems: Taking Stock and Looking Ahead. System Dynamics Review, 28(1): 46-68, Wiley.
Johnson-Laird, P.N. 2005. The History of Mental Models
Jones, N. A. et al. (2011). "Mental Models: an interdisciplinary synthesis of theory and methods" Ecology and Society.16 (1): 46.
Jones, N. A. et al. (2014). "Eliciting mental models: a comparison of interview procedures in the context of natural resource management" Ecology and Society.19 (1): 13.
Prediger, S. (2008). "Discontinuities for mental models - a source for difficulties with the multiplication of fractions" Proceedings of ICME-11, Topic Study Group 10, Research and Development of Number Systems and Arithmetic. (See also Prediger's references to Fischbein 1985 and Fischbein 1989, "Tacit models and mathematical reasoning".)
Robles-De-La-Torre, G. & Sekuler, R. (2004). "Numerically Estimating Internal Models of Dynamic Virtual Objects ". In: ACM Transactions on Applied Perception 1(2), pp. 102–117.
Sterman, John D. A Skeptic’s Guide to Computer Models, Massachusetts Institute of Technology
External links
Mental Models and Reasoning Laboratory
Systems Analysis, Modelling and Prediction Group, University of Oxford
System Dynamics Society
Conceptual models
Cognitive modeling
Cognitive psychology
Cognitive science
Information
Information science
Solastalgia
Solastalgia is a neologism, formed from the Latin words sōlācium (solace or comfort) and sōlus (desolation, with meanings connected to devastation, deprivation of comfort, abandonment and loneliness) and the Greek root -algia (pain, suffering, grief), that describes a form of emotional or existential distress caused by negatively perceived environmental change. A distinction can be made between solastalgia as the lived experience of negatively perceived change in the present and eco-anxiety linked to worry or concern about what may happen in the future (associated with "pre-traumatic stress", in reference to post-traumatic stress).
Origins
The concept of solastalgia was coined by philosopher Glenn Albrecht in 2003 and then published in the 2005 article 'Solastalgia: a new concept in human health and identity'. He describes it as "the homesickness you have when you are still at home" and your home environment is changing in ways you find distressing. In many cases this is in reference to global climate change, but more localized events such as volcanic eruptions, drought or destructive mining techniques can cause solastalgia as well. Differing from nostalgic distress on being absent from home, solastalgia refers to the distress specifically caused by environmental change while still in a home environment.
More recent approaches have connected solastalgia to the experience of historic heritage threatened by the climate crisis, such as the ancient cities of Venice, Amsterdam, and Hoi An.
Effects
A paper published by Albrecht et al. in 2007 focused on two contexts: the experiences of persistent drought in rural New South Wales (NSW) and the impact of large-scale open-cut coal mining on individuals in the Upper Hunter Valley of NSW. In both cases, people exposed to environmental change had negative reactions brought about by a sense of powerlessness over the unfolding environmental changes. A community's loss of certainty in a once-predictable environment is common among groups that express solastalgia.
In 2015, an article in the medical journal The Lancet included solastalgia as a contributing concept to the impact of climate change on human health and well-being. A review of 15 years of scholarly literature on solastalgia examined how the concept is understood and measured in the literature and how it relates to climate change and mental health.
A temporal component of solastalgia has also been highlighted, with scientists demonstrating a link between one's experience of unwelcome environmental change and increased anticipation of changes to come in one's environment, the latter being linked with greater reported symptoms of anxiety, PTSD, and anger.
Research has indicated that solastalgia can have an adaptive function when it leads people to seek comfort collectively. Like other climate related emotions, when processed collectively through conversation that allows for emotion to be processed and reflective function to be increased, this can lead to resilience and growth.
Contexts
Employment
Hedda Haugen Askland outlines how distress arises from a lack of social and political interaction between a community and the wider society, which in turn affects the community's experience. Societies whose livelihoods are not closely tied to their environment are not as likely to express solastalgia and, in turn, societies that are closely tied to their environments are more susceptible. Groups that depend heavily upon agroecosystems are considered particularly vulnerable. There are many examples of this across Africa, where agrarian communities have lost vital resources due to environmental changes. This has resulted in an increase in the number of environmental refugees throughout Africa in recent years.
Wealth
Solastalgia tends to affect wealthier populations less. A study conducted in the western United States showed that higher-income families experienced the effects of solastalgia significantly less than their lower-income neighbors following a destructive wildfire. This is due to the flexibility wealth can provide. In this case, wealthy families were able to move from or rebuild their homes, reducing the uncertainty caused by the wildfire. Other studies have supported the existence of solastalgia in Appalachian communities affected by mountain-top removal coal mining practices. Communities located in close proximity to coal mining sites experienced significantly higher depression rates than those located farther from the sites.
In music
American death metal band Cattle Decapitation released a song titled "Solastalgia" with an official video clip.
In 2018, Australian pop rock musician Missy Higgins released an album titled Solastalgia.
See also
Ecophobia
Ecopsychology
Environmental psychology
Paradise, California
References
Environment and society
Neologisms
Nostalgia
Environmental impact by effect
Four-field approach
The four-field approach in anthropology sees the discipline as composed of the four subfields of archaeology, linguistics, physical anthropology, and cultural anthropology (known jocularly to students as "stones", "tones", "bones", and "thrones"). The approach is conventionally understood as having been developed by Franz Boas, who established the discipline of anthropology in the United States. A 2013 re-assessment of the evidence has indicated that the idea of four-field anthropology has a more complex 19th-century history in Europe and North America. It is most likely that the approach was being used simultaneously in different parts of the world, but was not widely discussed until it was being taught at the collegiate level in the United States, Germany, England, and France by 1902. For Boas, the four-field approach was motivated by his holistic approach to the study of human behavior, which included integrated analytical attention to culture history, material culture, anatomy and population history, customs and social organization, folklore, grammar and language use. For most of the 20th century, U.S. anthropology departments housed anthropologists specializing in all four branches, but with increasing professionalization and specialization, elements such as linguistics and archaeology came to be regarded largely as separate disciplines. Today, physical anthropologists often collaborate more closely with biology and medicine than with cultural anthropology. However, it is widely accepted that a complete four-field analysis is needed in order to accurately and fully explain an anthropological topic.
The four-field approach depends on collaboration, but collaboration in any field can be costly. To counter this, the four-field approach is often taught to students as they go through college courses. By teaching all four disciplines, the anthropological field is able to produce scholars who are knowledgeable in all subfields. It remains common, and often recommended, for an anthropologist to have a specialization. The four-field approach also encourages scholars to look at an artifact, ecofact, or data set holistically, since having knowledge from all perspectives helps to eliminate bias and incorrect assumptions about past and present cultures.
References
Anthropology
Compassion-focused therapy
Compassion Focused Therapy (CFT) is a system of psychotherapy developed by Professor Paul Gilbert (OBE) that integrates techniques from cognitive behavioral therapy with concepts from evolutionary psychology, social psychology, developmental psychology, Buddhist psychology, and neuroscience. According to Gilbert, "One of its key concerns is to use compassionate mind training to help people develop and work with experiences of inner warmth, safeness and soothing, via compassion and self-compassion."
Overview
A central therapeutic technique of CFT is compassionate mind training, which teaches the skills and attributes of compassion. Compassionate mind training helps transform problematic patterns of cognition and emotion related to anxiety, anger, shame and self-criticism.
Biological evolution forms the theoretical backbone of CFT. Humans have evolved with at least three primal types of emotion regulation system: the threat (protection) system, the drive (resource-seeking) system, and the soothing system. CFT emphasizes the links between cognitive patterns and these three emotion regulation systems. Through the use of techniques such as compassionate mind training and cognitive behavioral therapy (CBT), counselling clients can learn to manage each system more effectively and respond more appropriately to situations.
Compassion Focused Therapy is especially appropriate for people who have high levels of shame and self-criticism and who have difficulty in feeling warmth toward, and being kind to, themselves or others. CFT can help such people learn to feel more safeness and warmth in their interactions with others and themselves.
Numerous methods are used in CFT to develop a person's compassion. For example, people undergoing CFT are taught to understand compassion from the third person, before transferring these thought processes to themselves.
Core principles
CFT is largely built on the idea that the evolution of caring behavior has major regulatory and developmental functions. The central focus of CFT is to help clients relate to their difficulties in compassionate ways and to provide them with effective tools for working with challenging circumstances and emotions. CFT helps clients learn tools for engaging with their struggles in accepting and encouraging ways, thereby helping them feel confident about accomplishing difficult tasks and dealing with challenging situations.
This is facilitated by:
Developing a positive therapeutic relationship that facilitates the process of engaging with one's challenges and development of skills to deal with them.
Developing non-blaming compassionate understandings into the nature of suffering.
Developing the ability to experience and cultivate compassionate attributes.
Developing the feeling of compassion for others, being open to compassion from others, and developing self-compassion.
According to evolutionary analysis, there are three types of functional emotion regulation systems: drive, safety, and threat. CFT is based on the relationship and interactions between these systems. One is born with each system, but one's surroundings influence whether one utilizes and sustains the non-survival-based systems (drive and caregiving).
Threat and self-protection focused system: evolved to alert and direct attention toward detecting and responding to threats. This system contains threat-based emotions (anger, anxiety, disgust) and threat-based behaviors (fight/flight, freezing).
Drive, seeking and acquisition focused system: directs attention toward advantageous resources and generates drive and pleasure in securing them (a positive, activating system).
Contentment, soothing and affiliative system: enables a state of peacefulness when individuals are no longer focused on threats or seeking out resources (allowing the body to rest and digest and permitting open attention).
Using CFT enriches the compassion-based soothing system, while withdrawing from the threat-focused emotional regulation system. In turn, this will augment the ability to activate (drive) and work towards valued goals.
Applications
Compassion Focused Therapy has been investigated as a novel treatment for a wide variety of psychological disorders.
A 2012 randomized controlled trial showed CFT to be a safe and clinically effective treatment option for psychosis patients. CFT was shown to be more effective than "treatment as usual", with particular efficacy in reducing depression symptoms. A further 2015 literature review of 14 different studies showed promising psychotherapeutic benefits of CFT, especially when treating mood disorders. A recent meta-analysis found good support for CFT as a treatment for a variety of psychological difficulties. However, further large-scale trials are necessary in order for CFT to become an accepted, "evidence-based" treatment for these disorders.
CFT has also been explored as a treatment for individuals with eating disorders. This slightly modified version of CFT, CFT-E, has had promising results in treating adult outpatients with restrictive eating disorders as well as with binging and purging disorders. A 2014 literature review found CFT-E to be a particularly effective treatment for eating disorders because it confronts the "high levels of shame and self‐criticism" that patients often experience. More recent primary studies have further supported CFT-E as a safe and effective intervention for eating disorders.
CFT is also being studied as a rehabilitation method for patients with acquired brain injuries (ABI). Preliminary, small-scale studies have shown CFT to be safe and beneficial in treating anxiety and depressive symptoms of ABI patients, although further large-scale studies are needed.
As well as being a psychological therapy (for individuals and groups), Compassionate Mind Training (CMT) has been shown to be an effective approach for reducing psychological distress in the general public. A variety of studies have found that engaging with guided audio recordings, online courses, an eight-week group program, or an app (The Self-Compassion App) can lead to reductions in self-criticism, shame, attachment insecurity, depression and anxiety symptoms, as well as increases in self-compassion, positive emotions and wellbeing.
CMT has also been used as an effective approach in schools, with results suggesting a variety of benefits for teachers who engaged in an eight-week compassion training course.
Limitations
Beaumont and Hollins Martin (2015) conducted a narrative review of 12 studies that used CFT to treat psychological problems in clinical populations. The researchers found that, overall, mental health issues improved with CFT intervention, especially when it was combined with approaches such as cognitive behavioral therapy (CBT).
Beaumont and Hollins Martin (2015) found that a major limitation of the empirical studies is the small number of participants involved in each case. For instance, Gilbert and Proctor (2006) showed small reductions in depression, anxiety, self-criticism and shame, but their participant group involved only six members. Small participant numbers can introduce bias and limit generalization to the broader population. For instance, only two of the twelve studies independently supported the effectiveness of CFT: a study by Lucre and Corten (2012) found CFT effective for treating patients with personality disorders, and a study by Heriot-Maitland et al. (2014) found it effective for treating clients in acute inpatient settings.
Recommendations
Beaumont and Hollins Martin (2015) recommended further extensive research into the effectiveness of CFT, fully examining both reductions in mental illness and overall improvements in quality of life. They also recommended larger participant samples in order to establish whether CFT is effective on its own, without other psychotherapy interventions such as CBT.
References
Cognitive therapy
Cognitive behavioral therapy
Metacognitive therapy
Metacognitive therapy (MCT) is a psychotherapy focused on modifying metacognitive beliefs that perpetuate states of worry, rumination and attention fixation. It was created by Adrian Wells based on an information processing model by Wells and Gerald Matthews. It is supported by scientific evidence from a large number of studies.
The goals of MCT are first to discover what patients believe about their own thoughts and about how their mind works (called metacognitive beliefs), then to show the patient how these beliefs lead to unhelpful responses to thoughts that serve to unintentionally prolong or worsen symptoms, and finally to provide alternative ways of responding to thoughts in order to allow a reduction of symptoms. In clinical practice, MCT is most commonly used for treating anxiety disorders such as social anxiety disorder, generalised anxiety disorder (GAD), health anxiety, obsessive compulsive disorder (OCD) and post-traumatic stress disorder (PTSD) as well as depression – though the model was designed to be transdiagnostic (meaning it focuses on common psychological factors thought to maintain all psychological disorders).
History
Metacognition, from the Greek meta ("after" or "beyond") and "cognition" (thought), refers to the human capacity to be aware of and control one's own thoughts and internal mental processes. Metacognition has been studied for several decades by researchers, originally as part of developmental psychology and neuropsychology. Examples of metacognition include a person knowing what thoughts are currently in their mind and knowing where the focus of their attention is, and a person's beliefs about their own thoughts (which may or may not be accurate). The first metacognitive interventions were devised for children with attentional disorders in the 1980s.
Model of mental disorders
Self-regulatory executive function model
In the metacognitive model, symptoms are caused by a set of psychological processes called the cognitive attentional syndrome (CAS). The CAS includes three main processes, each of which constitutes extended thinking in response to negative thoughts. These three processes are:
Worry/rumination
Threat monitoring
Coping behaviours that backfire
All three are driven by patients' metacognitive beliefs, such as the belief that these processes will help to solve problems, although the processes all ultimately have the unintentional consequence of prolonging distress. Of particular importance in the model are negative metacognitive beliefs, especially those concerning the uncontrollability and dangerousness of some thoughts. Executive functions are also believed to play a part in how the person can focus and refocus on certain thoughts and mental modes. These mental modes can be categorized as object mode and metacognitive mode, which refers to the different types of relationships people can have towards thoughts. All of the CAS, the metacognitive beliefs, the mental modes and the executive function together constitute the self-regulatory executive function model (S-REF). This is also known as the metacognitive model. In more recent work, Wells has described in greater detail a metacognitive control system of the S-REF aimed at advancing research and treatment using metacognitive therapy.
Therapeutic intervention
MCT is a time-limited therapy, usually lasting between 8 and 12 sessions. The therapist uses discussions with the patient to discover their metacognitive beliefs, experiences and strategies. The therapist then shares the model with the patient, pointing out how their particular symptoms are caused and maintained.
Therapy then proceeds with the introduction of techniques tailored to the patient's difficulties aimed at changing how the patient relates to thoughts and that bring extended thinking under control. Experiments are used to challenge metacognitive beliefs (e.g. "You believe that if you worry too much you will go 'mad' – let's try worrying as much as possible for the next five minutes and see if there is any effect") and strategies such as attentional training technique, situational attention refocusing and detached mindfulness (this is a distinct strategy from various other mindfulness techniques).
Research
Clinical trials (including randomized controlled trials) have found MCT to produce large clinically significant improvements across a range of mental health disorders, although as of 2014 the total number of subjects studied is small and a meta-analysis concluded that further study is needed before strong conclusions can be drawn regarding effectiveness. A 2015 special issue of the journal Cognitive Therapy and Research was devoted to MCT research findings.
A 2018 meta-analysis confirmed the effectiveness of MCT in the treatment of a variety of psychological complaints with depression and anxiety showing high effect sizes. It concluded, "Our findings indicate that MCT is an effective treatment for a range of psychological complaints. To date, strongest evidence exists for anxiety and depression. Current results suggest that MCT may be superior to other psychotherapies, including cognitive behavioral interventions. However, more trials with larger number of participants are needed in order to draw firm conclusions."
In 2020, a study found preliminary evidence that MCT is more effective than cognitive behavioral therapy (CBT) in the treatment of depression, while cautioning: "MCT appears promising and might offer a necessary advance in depression treatment, but there is insufficient evidence at present from adequately powered trials to assess the relative efficacy of MCT compared with CBT in depression."
In 2018–2020, a research topic in the journal Frontiers in Psychology highlighted the growing experimental, clinical, and neuropsychological evidence base for MCT.
A recent network meta-analysis indicated that MCT (and cognitive processing therapy) might be superior to other psychological treatments for PTSD. However, although the evidence base for MCT is promising and growing, it is important to note that most clinical trials investigating MCT are characterized by small and select samples and potential conflicts of interest, as its originator is involved in most clinical trials conducted. As such, there is a pressing need for larger, preferably pragmatic, well-conducted randomized controlled trials, conducted by independent trialists without potential conflicts of interest, before there is large-scale implementation of MCT in community mental health clinics.
See also
Meta-cognitions questionnaire
References
Further reading
External links
Cognitive therapy
Treatment of obsessive–compulsive disorder
Beck Depression Inventory
The Beck Depression Inventory (BDI, BDI-1A, BDI-II), created by Aaron T. Beck, is a 21-question multiple-choice self-report inventory, one of the most widely used psychometric tests for measuring the severity of depression. Its development marked a shift among mental health professionals, who had until then viewed depression from a psychodynamic perspective rather than as rooted in the patient's own thoughts.
In its current version, the BDI-II is designed for individuals aged 13 and over, and is composed of items relating to symptoms of depression such as hopelessness and irritability, cognitions such as guilt or feelings of being punished, as well as physical symptoms such as fatigue, weight loss, and lack of interest in sex.
There are three versions of the BDI—the original BDI, first published in 1961 and later revised in 1978 as the BDI-1A, and the BDI-II, published in 1996. The BDI is widely used as an assessment tool by health care professionals and researchers in a variety of settings.
The BDI was used as a model for the development of the Children's Depression Inventory (CDI), first published in 1979 by clinical psychologist Maria Kovacs.
Development and history
According to Beck's publisher, 'When Beck began studying depression in the 1950s, the prevailing psychoanalytic theory attributed the syndrome to inverted hostility against the self.' By contrast, the BDI was developed in a novel way for its time: by collating patients' verbatim descriptions of their symptoms and then using these to structure a scale which could reflect the intensity or severity of a given symptom.
Beck drew attention to the importance of "negative cognitions" described as sustained, inaccurate, and often intrusive negative thoughts about the self. In his view, it was the case that these cognitions caused depression, rather than being generated by depression.
Beck developed a triad of negative cognitions about the world, the future, and the self, which play a major role in depression.
An example of the triad in action taken from Brown (1995) is the case of a student obtaining poor exam results:
The student has negative thoughts about the world, so he may come to believe he does not enjoy the class.
The student has negative thoughts about his future because he thinks he may not pass the class.
The student has negative thoughts about his self, as he may feel he does not deserve to be in college.
The development of the BDI reflects that in its structure, with items such as "I have lost all of my interest in other people" to reflect the world, "I feel discouraged about the future" to reflect the future, and "I blame myself for everything bad that happens" to reflect the self. The view of depression as sustained by intrusive negative cognitions has had particular application in cognitive behavioral therapy (CBT), which aims to challenge and neutralize them through techniques such as cognitive restructuring.
BDI
The original BDI, first published in 1961, consisted of twenty-one questions about how the subject has been feeling in the last week. Each question had a set of at least four possible responses, ranging in intensity. For example:
(0) I do not feel sad.
(1) I feel sad.
(2) I am sad all the time and I can't snap out of it.
(3) I am so sad or unhappy that I can't stand it.
When the test is scored, a value of 0 to 3 is assigned for each answer and then the total score is compared to a key to determine the depression's severity. The standard cut-off scores were as follows:
0–9: indicates minimal depression
10–18: indicates mild depression
19–29: indicates moderate depression
30–63: indicates severe depression.
Higher total scores indicate more severe depressive symptoms.
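Because the scoring rule is purely arithmetic, it can be expressed compactly in code. The following Python sketch (the function name and input format are illustrative assumptions, not part of any published scoring software) sums the 21 item scores and maps the total to the original cutoff bands listed above:

def bdi_severity(item_scores):
    # Sum 21 item scores (each coded 0-3) and map the total (0-63)
    # to the original BDI severity bands.
    if len(item_scores) != 21:
        raise ValueError("the BDI has 21 items")
    total = sum(item_scores)
    if total <= 9:
        band = "minimal depression"
    elif total <= 18:
        band = "mild depression"
    elif total <= 29:
        band = "moderate depression"
    else:  # 30-63
        band = "severe depression"
    return total, band

# Example: a respondent scoring 1 on every item totals 21 (moderate).
print(bdi_severity([1] * 21))

Note that the BDI-II, described below, uses the same 0–3 item coding but different cutoff bands.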
Some items on the original BDI had more than one statement marked with the same score. For instance, there are two responses under the Mood heading that score a 2: (2a) "I am blue or sad all the time and I can't snap out of it" and (2b) "I am so sad or unhappy that it is very painful".
BDI-IA
The BDI-IA was a revision of the original instrument developed by Beck during the 1970s, and copyrighted in 1978. To improve ease of use, the "a and b statements" described above were removed, and respondents were instructed to endorse how they had been feeling during the preceding two weeks. The internal consistency for the BDI-IA was good, with a Cronbach's alpha coefficient of around 0.85, meaning that the items on the inventory are highly correlated with each other.
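Cronbach's alpha is computed from the variances of the individual items and of the total score. A minimal sketch in Python, assuming responses are arranged as a respondents-by-items matrix and using only NumPy (the data shown are invented for illustration):

import numpy as np

def cronbach_alpha(responses):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]                              # number of items
    item_variances = x.var(axis=0, ddof=1)      # variance of each item
    total_variance = x.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five hypothetical respondents answering four items coded 0-3.
data = [[0, 1, 1, 0], [2, 2, 3, 2], [1, 1, 2, 1], [3, 3, 3, 2], [0, 0, 1, 1]]
print(round(cronbach_alpha(data), 2))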
However, this version retained some flaws; the BDI-IA only addressed six out of the nine DSM-III criteria for depression. This and other criticisms were addressed in the BDI-II.
BDI-II
The BDI-II was a 1996 revision of the BDI, developed in response to the American Psychiatric Association's publication of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, which changed many of the diagnostic criteria for Major Depressive Disorder.
Items involving changes in body image, hypochondriasis, and difficulty working were replaced. Also, sleep loss and appetite loss items were revised to assess both increases and decreases in sleep and appetite. All but three of the items were reworded; only the items dealing with feelings of being punished, thoughts about suicide, and interest in sex remained the same. Finally, participants were asked to rate how they had been feeling for the past two weeks, as opposed to the past week as in the original BDI.
Like the BDI, the BDI-II contains 21 questions, each answer being scored on a scale of 0 to 3. Higher total scores indicate more severe depressive symptoms. The standardized cutoffs differ from the original:
0–13: minimal depression
14–19: mild depression
20–28: moderate depression
29–63: severe depression.
One measure of an instrument's usefulness is to see how closely it agrees with another similar instrument that has been validated against information from a clinical interview by a trained clinician. In this respect, the BDI-II is positively correlated with the Hamilton Depression Rating Scale with a Pearson r of 0.71, showing good convergent validity. The test was also shown to have a high one-week test–retest reliability (Pearson's r = 0.93), suggesting that it was not overly sensitive to day-to-day variations in mood. The test also has high internal consistency (α = .91).
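Both of the correlations reported above are Pearson product-moment coefficients, which can be reproduced in a few lines of code. The paired scores below are invented purely to illustrate the computation:

import numpy as np

# Hypothetical paired totals for the same respondents on two instruments.
bdi_ii_totals = [12, 25, 7, 33, 18, 22]
hamilton_totals = [10, 21, 6, 28, 14, 20]

# Pearson r between the two sets of totals measures convergent validity.
r = np.corrcoef(bdi_ii_totals, hamilton_totals)[0, 1]
print(f"Pearson r = {r:.2f}")  # a value near 0.71 would match the reported figure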
Impact
The development of the BDI was an important event in psychiatry and psychology; it represented a shift in health care professionals' view of depression from a Freudian, psychodynamic perspective to one guided by the patient's own thoughts or "cognitions". It also established the principle that, instead of attempting to develop a psychometric tool based on a possibly invalid theory, self-report questionnaires, when analysed using techniques such as factor analysis, can themselves suggest theoretical constructs.
The BDI was originally developed to provide a quantitative assessment of the intensity of depression. Because it is designed to reflect the depth of depression, it can monitor changes over time and provide an objective measure for judging improvement and the effectiveness or otherwise of treatment methods. The instrument remains widely used in research; by 1998, it had been used in over 2,000 empirical studies. It has been translated into multiple European languages as well as Arabic, Chinese, Japanese, Persian, and Xhosa.
Limitations
The BDI has the same limitations as other self-report inventories, in that scores can be easily exaggerated or minimized by the person completing them. Like all questionnaires, the way the instrument is administered can have an effect on the final score. If a patient is asked to fill out the form in front of other people in a clinical environment, for instance, social expectations have been shown to elicit a different response compared to administration via a postal survey.
In participants with concomitant physical illness, the BDI's reliance on physical symptoms such as fatigue may artificially inflate scores due to symptoms of the illness rather than of depression. In an effort to deal with this concern, Beck and his colleagues developed the "Beck Depression Inventory for Primary Care" (BDI-PC), a short screening scale consisting of seven items from the BDI-II considered to be independent of physical function. Unlike the standard BDI, the BDI-PC produces only a binary outcome of "not depressed" or "depressed" for patients above a cutoff score of 4.
Although designed as a screening device rather than a diagnostic tool, the BDI is sometimes used by health care providers to reach a quick diagnosis.
The BDI is copyrighted; a fee must be paid for each copy used. There is no evidence that the BDI-II is more valid or reliable than other depression scales, and public domain scales such as the Patient Health Questionnaire – Nine Item (PHQ-9) have been studied as a useful tool.
See also
Beck Anxiety Inventory
Beck Hopelessness Scale
Diagnostic classification and rating scales used in psychiatry
Major Depression Inventory
Quality of Life in Depression Scale
Notes
Further reading
Beck A.T. (1988). "Beck Hopelessness Scale." The Psychological Corporation.
External links
A list of psychiatric rating scales for depression from Neurotransmitter.net
Beck Depression Inventory (BDI-II). Test Online
Depression screening and assessment tools
Faculty psychology
Faculty psychology is the idea that the mind is separated into faculties or sections, and that each of these faculties is assigned to certain mental tasks. Some examples of the mental tasks assigned to these faculties include judgment, compassion, memory, attention, perception, and consciousness. For example, we can speak because we have the faculty of speech, or we can think because we have the faculty of thought. Thomas Reid mentions over 43 faculties of the mind that work together as a whole. Additionally, faculty psychology claims that we are born with separate, innate human functions.
The views of faculty psychology are explicit in the psychological writings of the medieval scholastic theologians, such as Thomas Aquinas, as well as in Franz Joseph Gall's formulation of phrenology, albeit more implicitly. More recently faculty psychology has been revived by Jerry Fodor's concept of modularity of mind, the hypothesis that different modules autonomously manage sensory input as well as other mental functions.
Faculty psychology resembles localization of function, the claim that specific cognitive functions are performed in specific areas of the brain. For example, Broca's area is associated with language production and syntax, while Wernicke's area is associated with language comprehension and semantics. It is currently understood that while the brain's functions are localized, these localized regions also work together.
Additionally, faculty psychology depicts the mind as something similar to a muscle of the human body, since both are held to function the same way. A muscle is trained through repeated, strenuous exercise that adapts it to the type of workout it is put through; likewise, by putting the mind through plenty of brain-exercising problems, the mind will also increase in capability. This approach is also called "mental discipline". Mental discipline is held to be the best way to train one's mind intellectually, because a focused learner is a motivated learner. For example, an athlete who works on their sprinting every day, running the same distance each time, will find that their body adapts to the energy and effort they put into training. Similarly, if a student were to read the same book weekly for an entire year, they would eventually have read it 52 times; by reading this often, their mind will process the information more quickly on seeing the same words, and they will develop a deeper understanding of the book.
Some psychologists brand faculty psychology a fallacy because it is outdated, while others regard it as a necessary philosophical standpoint that supplements the conclusions of experiments, which can be subject to bias. Faculty psychology has come to be treated as a philosophy because of advances in science, and the term "faculty" has been abandoned by psychologists who consider it old-fashioned, though many still abide by the underlying view. Many psychologists have moved on to newer psychological theories of the brain and how it works, developed with the help of modern technology.
Historical change
It is debatable to what extent the continuous mention of faculties throughout the history of psychology should be taken to indicate a continuity of the term's meaning. In medieval writings, psychological faculties were often intimately related to metaphysically loaded conceptions of forces, particularly to Aristotle's notion of an efficient cause. This is the view of faculties that is explicit in the works of Thomas Aquinas.
By the 19th century, the founders of experimental psychology had a very different view of faculties. In this period, introspection was well regarded by many as one tool among others for the investigation of mental life. In his Principles of Physiological Psychology, Wilhelm Wundt insisted that faculties were nothing but descriptive class concepts, meant to denote classes of mental events that could be discerned in introspection but which never actually appeared in isolation. He took caution in insisting that older, metaphysical conceptions of faculties must be guarded against and that the scientist's tasks of classification and explanation must be kept distinct.
It was in this and the ensuing period that faculty psychology came to be sharply distinguished from the act psychology promoted by Franz Brentano—whereas the two are barely distinguished in Aquinas, for example.
Faculty Psychology in different domains
Faculty psychology from different perspectives
For thousands of years, a debate has been ongoing over whether we are born with knowledge or gain it through experience. Philosophers have held differing opinions, and the debate continues to this day. It has gone by many names over the years: pocketknife vs. meatloaf, nativism vs. empiricism, and more recently faculty psychology vs. associationism. In Seven and a Half Lessons About the Brain, Lisa Feldman Barrett describes faculty psychology using the metaphor of the "pocketknife brain": faculty psychology is the theory that the mind is separated into sections that each serve their own purpose, just like the tools of a pocketknife. She illustrates the concept with exponents: rather than each faculty simply adding another mere tool to the brain (2¹⁴), each adds an entire new function (3¹⁴), resulting in a far more complex brain. The conclusion is a much more flexible brain with complex traits. Barrett links her idea of the pocketknife brain to phrenology's account of how the brain functions.
Connections to Faculty Psychology
Complex brain
Thanks to evolution, humans have highly complex brains. However, not everyone knows what a complex brain really is. A complex brain is able to adapt to its environment, and it is because of this that humans can live in society: we can change our environment or meet new people, and our complex brains let us adapt to all those changes. A complex brain also allows us to resist injury, since if certain neurons are occupied doing other things or simply stop working, other neurons will take their place and do what the lost neurons were originally intended to do. The complex brain can therefore be contrasted with the pocketknife brain: in the complex brain, one group of neurons can do another group's job, while in the pocketknife brain occupied or lost neurons are seen as losses of purpose.
Meatloaf brain
Lisa Feldman Barrett not only presents the idea of the pocketknife brain; she also introduces the idea of the meatloaf brain. Like the pocketknife brain and the human complex brain, it contains the same number of neurons, but unlike those two, every single neuron is connected to every other. She describes the meatloaf brain as a single element, since all neurons are connected to one another: if a single neuron modifies its firing rate, it affects the outcome and firing rate of every other neuron. This contrasts with faculty psychology, in which the brain's neurons are divided into their own separate tasks and do not share as many connections with one another.
References
Citations
Bibliography
Barrett, Lisa Feldman (2020). Seven and a Half Lessons About the Brain. Boston: Houghton Mifflin Harcourt. p. 42, 159.
Edmund J. Sass, Ed.D. “Faculty Theory and Mental Discipline”.
Field, G. C. (1921). "Faculty psychology and instinct psychology." Mind, 30(119), 257-270.
Lehman, H. C., & Witty, P. A. (1934). Faculty psychology and personality traits. The American Journal of Psychology, 46(3), 486-500.
Commins, W. D. (1933). What is “Faculty Psychology”?. Thought: Fordham University Quarterly, 8(1), 48-57.
External links
Divisions of Gall's brain
History of psychology
Psychological schools
Hakomi
The Hakomi Method is a form of mindfulness-centered somatic psychotherapy developed by Ron Kurtz in the 1970s.
Approach and method
According to the Hakomi Institute website, the method is an experiential psychotherapy modality, wherein present, felt experience is used as an access route to core material; this unconscious material is elicited and surfaces experientially, and changes are integrated into the client's immediate experience. Hakomi combines Western psychology, systems theory, and body-centered techniques with the principles of mindfulness and nonviolence drawn from Eastern philosophy.
Hakomi is grounded in five principles:
mindfulness
nonviolence
organicity
unity
body-mind holism
These five principles are set forth in Kurtz's book, Body Centered Psychotherapy. Some Hakomi leaders add two more principles, truth and mutability.
The Hakomi Method regards people as self-organizing systems, organized psychologically around core memories, beliefs, and images; this core material expresses itself through habits and attitudes around which people unconsciously organize their behavior. The goal is to transform their way of being in the world through working with core material and changing core beliefs.
Hakomi relies on mindfulness of body sensations, emotions, and memories. Although many therapists now recommend mindfulness meditation to support psychotherapy, Hakomi is unique in that it conducts the majority of the therapy session in mindfulness.
The Hakomi Method follows this general outline:
Create healing relationship: Client and therapist work to build a relationship that maximizes safety and the cooperation of the unconscious. This includes practicing "loving presence", a state of acceptance and empathic resonance.
Establish mindfulness: The therapist helps clients study and focus on the ways they organize experience. Hakomi's viewpoint is that most behaviors are habits automatically organized by core material; therefore, studying the organization of experience is studying the influence of this core material.
Evoke experience: Client and therapist make direct contact with core feelings, beliefs, and memories using "experiments in mindfulness"—gentle somatic and verbal techniques to safely "access" the present experience behind the client's verbal presentation, or to explore "indicators": chronic physical patterns, habitual gestures, bodily tension, etc.
Processing: This process usually evokes deeper emotions and/or memories, and if the client feels ready, the therapist helps them deepen into these, often using state-specific processing such as "working with the child" and/or strong emotions. The client is helped to recognize the core beliefs as they emerge, and the therapist often provides what Kurtz called "the missing experience", a form of "memory re-consolidation" where the child, as they revisit the negative experience(s) that generated their core beliefs, now receives the nourishment and support that was needed at the time. This supports the process of transformation of core beliefs. The same process may be used working with the adult rather than the "child state".
Transformation: The client has an experience in therapy different from the one they had as a child (or are having as an adult) and experientially realizes that new healing experiences are possible and begins to be open to these experiences.
Integration: Client and therapist work to make connections between the new healing experiences and the rest of the client's life and relationships.
Other components of the Hakomi Method include the sensitivity cycle, techniques such as "contact and tracking", "prompts" and "taking over", "embracing resistance", and developing a greater sensitivity to clients and how to work with their individual issues based on character typology originated by Alexander Lowen.
Related therapies
The Hakomi Institute (founded in 1981) describes itself as an international nonprofit organization that teaches Hakomi therapy worldwide. Its website includes an international directory of Hakomi practitioners. The institute's programs focus on training psychotherapists and professionals in related fields. Its faculty are mainly professional psychotherapists who base their teaching of the Hakomi Method on current discoveries in neuroscience and on their own clinical insights. The Hakomi Institute is a professional member of the Association for Humanistic Psychology, the U.S. Association for Body Psychotherapists, and an accredited Continuing Education provider for the National Board for Certified Counselors and the National Association of Social Workers.
Ron Kurtz left the Hakomi Institute in the 1990s to create a new organization, Ron Kurtz Trainings. With a new group, he developed the Hakomi Method in new directions, offering training for both professionals and laypeople. He called the refined version of his work Hakomi Assisted Self-Discovery.
Both versions of the Hakomi Method are based in loving presence, mindfulness, somatics, and the other principles described above, and fall within the definition of body psychotherapy.
Another technique based on the Hakomi Method is Sensorimotor psychotherapy, developed by Pat Ogden.
Validation
Body psychotherapy has been scientifically validated by the European Association for Psychotherapy (EAP), which recognizes a number of modalities within this branch of psychotherapy. Hakomi Therapy is one of the approaches or modalities within body psychotherapy recognized by the EAP.
Notes
Sources
Further reading
The Herald (22 September 2004) Hakomi is the topic. Page 15.
Johanson, Gregory. (22 June 2006) Annals of the American Psychotherapy Association. A survey of the use of mindfulness in psychotherapy. Volume 9; Issue 2; Page 15.
Marshall, Lisa. (15 October 2001) Daily Camera The power of touch. Body psychotherapy sees massage, movement as adjunct to counseling. Section: Fit; Page C1
Sutter, Cindy. (21 June 2004) Daily Camera Healing the body and the mind Hakomi helps clients heal with mindfulness. Section: Fit; Page D1.
Books
Weiss, Johanson, Monda, editors. Hakomi Mindfulness-Centered Somatic Psychotherapy: A Comprehensive Guide to Theory and Practice, 2015, Norton, NY. Foreword by Richard C. Schwartz.
Benz, Dyrian and Halko Weiss. To The Core of Your Experience, Luminas Press, 1989, preface by Ron Kurtz.
Fisher, Rob. Experiential Psychotherapy With Couples: A Guide for the Creative Pragmatist. Phoenix, AZ: Zeig, Tucker & Theisen, 2002, foreword by Ron Kurtz.
Johanson, Greg and Kurtz, Ron. Grace Unfolding, Psychotherapy in the Spirit of the Tao Te Ching, New York: Bell Tower, 1991.
Kurtz, Ron and Prestera, Hector. The Body Reveals: An Illustrated Guide to the Psychology of the Body, New York: Harper&Row/Quicksilver Books, 1976.
Kurtz, Ron: Hakomi Therapy, Boulder, CO: 1983.
Kurtz, Ron: Body-Centered Psychotherapy: The Hakomi Method. Mendocino, CA: LifeRhythm, 1990.
Chapters
Caldwell, Christine, ed. Getting in Touch: The Guide to New Body-Centered Therapies. Wheaton: Quest Books, 1997. See ch. 3 by Ron Kurtz and Kukuni Minton on "Essentials of Hakomi Body-Centered Psychotherapy", pp. 45–60, and ch. 9 by Pat Ogden on "Hakomi Integrated Somatics: Hands-On Psychotherapy", pp. 153–178.
Capuzzi, David and Douglas Gross, eds. Counseling and Psychotherapy: Theories and Interventions. 4th ed. Upper Saddle River, NJ: Merrill Prentice Hall, 2003: See Donna M. Roy "Body-Centered Counseling and Psychotherapy", pp. 360–389.
Cole, J. David and Carol Ladas-Gaskin. Mindfulness Centered Therapies: An Integrative Approach. Seattle, WA: Silver Birch Press, 2007.
Menkin, Dan. Transformation through Bodywork: Using Touch Therapies for Inner Peace. Santa Fe, New Mexico: Bear & Company, 1996. See especially ch. 15 on "The Tao Te Ching and the Principle of Receptivity", pp. 119–128.
Morgan, Marilyn. The Alchemy of Love: Personal Growth Journeys in Psychotherapy Training. VDM Verlag, Saarbrücken, Germany, 2008.
Schaefer, Charles E., ed. Innovative Interventions in Child and Adolescent Therapy. New York: John Wiley & Sons, 1988. See Greg Johanson and Carol Taylor, "Hakomi Therapy with Seriously Emotionally Disturbed Adolescents," pp. 232–265.
Staunton, Tree. Body Psychotherapy. New York: Taylor & Francis, 2002. See Philippa Vick, "Psycho-Spiritual Body Psychotherapy", pp. 133–147.
External links
Hakomi Education Network website
Hakomi Institute website
Ron Kurtz website
Psychotherapy by type
Body psychotherapy
Mindfulness
Group work
Group work is a form of voluntary association of members benefiting from cooperative learning that enhances the total output of the activity beyond what could be achieved individually. It aims to cater for individual differences and to develop skills such as communication, collaboration, and critical thinking. It is also meant to develop generic knowledge and socially acceptable attitudes. Through group work, a "group mind", conforming to standards of behavior and judgement, can be fostered.
Specifically in psychotherapy and social work, "group work" refers to group therapy, offered by a practitioner trained in psychotherapy, psychoanalysis, counseling or other relevant disciplines.
Social group work
Social group work is a method of social work that enhances people's social functioning through purposeful group experiences and helps them cope more effectively with personal, group or community problems (Marjorie Murphy, 1959).
Social group work is a primary modality of social work in bringing about positive change. It is defined as an educational process emphasizing the development and social adjustment of an individual through voluntary association, and the use of this association as a means of furthering socially desirable ends. It is a psychosocial process concerned with developing leadership and cooperation, building on the interests of the group for a social purpose. Social group work is a method through which individuals in groups, in a social agency setting, are helped by a worker who guides their interaction through group activities so that they may relate to others and experience growth opportunities in line with their needs and capacities, serving individual, group and community development. It aims at the development of persons through the interplay of personalities in group settings, and at the creation of group settings that provide for integrated, cooperative group action for common ends. It is also a process and a method through which group life is affected by a worker who consciously directs the interacting process towards the accomplishment of goals conceived in a democratic frame of reference. Its distinct characteristic lies in the fact that group work uses group experience as a means of individual growth and development, and that the group worker is concerned with developing social responsibility and active citizenship for the improvement of democratic society. Group work is a way of serving individuals within and through small face-to-face groups in order to bring about desired change among client participants.
Models
There are four models in social group work:
Remedial model (Vinter, R. D., 1967) – The remedial model focuses on the individual's dysfunction and utilizes the group as a context and means for altering deviant behaviour.
Reciprocal or Mediating model (W. Schwartz, 1961) - A model based on open systems theory, humanistic psychology and an existential perspective. A relationship rooted in reciprocal transactions and intensive commitment is considered critical in this model.
Developmental model (Berustein, S. & Lowy, 1965) - A model based on Erikson's ego psychology, group dynamics and conflict theory. In this model groups are seen as having "a degree of independence and autonomy, but the dynamics of to and fro flow between them and their members, between them and their social settings, are considered crucial to their existence, viability and achievements". The connectedness (intimacy and closeness) is considered critical in this model.
Social goals model (Gisela Konopka & Weince, 1964) - A model based on 'programming' social consciousness, social responsibility, and social change. It suggests that democratic participation with others in a group situation can promote enhancement of personal functioning in individuals, which in turn can effect social change. It results in heightened self-esteem and a rise in social power for the members of the group, collectively and as individuals.
See also
Social case work
Further reading
Douglas, Tom (1976), Group Work Practice, International Universities Press, New York.
Konopka, G. (1963), Social Group Work : A Helping Process, Prentice Hall, Englewood Cliffs.
Treeker, H.B. (1955), Social Group Work, Principles and Practices, Whiteside, New York.
Phillips, Helen, U. (1957), Essential of Social Group Work Skill, Association Press, New York.
References
Harleigh B. Trecker, Social Group Work: Principles and Practices, Association Press, 1972
Joan Benjamin, Judith Bessant and Rob Watts. Making Groups Work: Rethinking Practice, Allen & Unwin, 1997
Ellen Sarkisian, "Working in Groups." Working in Groups - A Quick Guide for Students, Derek Bok Center, Harvard University
Group psychotherapy
Group processes
Social work
Social anxiety
Social anxiety is the anxiety and fear specifically linked to being in social settings (i.e., interacting with others). Some categories of disorders associated with social anxiety include anxiety disorders, mood disorders, autism spectrum disorders, eating disorders, and substance use disorders. Individuals with higher levels of social anxiety often avert their gazes, show fewer facial expressions, and show difficulty with initiating and maintaining a conversation. Social anxiety commonly manifests itself in the teenage years and can be persistent throughout life; however, people who experience problems in their daily functioning for an extended period of time can develop social anxiety disorder. Trait social anxiety, the stable tendency to experience this anxiety, can be distinguished from state anxiety, the momentary response to a particular social stimulus. Half of the individuals with any social fears meet the criteria for social anxiety disorder. Age, culture, and gender impact the severity of this disorder. The function of social anxiety is to increase arousal and attention to social interactions, inhibit unwanted social behavior, and motivate preparation for future social situations.
Disorder
Social anxiety disorder (SAD), also known as social phobia, is an anxiety disorder characterized by a significant amount of fear in one or more social situations, causing considerable distress and impaired ability to function in at least some parts of daily life. These fears can be triggered by perceived or actual scrutiny from others. Social anxiety disorder affects 8% of women and 6.1% of men. In the United States, anxiety disorders are the most common mental illness; they affect 40 million adults, ages 18 and older. Anxiety can come in different forms, and panic attacks can lead to panic disorder, which is the recurrence of unexpected panic attacks. Other related anxiety disorders include social anxiety disorder, generalized anxiety disorder, obsessive compulsive disorder (OCD), various types of phobias, and post-traumatic stress disorder (PTSD). Social anxiety disorder is highly treatable, although not everyone who experiences it needs treatment.
Physical symptoms often include excessive blushing, excess sweating, trembling, palpitations, and nausea. Stammering may be present, along with rapid speech. Panic attacks can also occur under intense fear and discomfort. Some sufferers may use alcohol or other drugs to reduce fears and inhibitions at social events. It is common for sufferers of social phobia to self-medicate in this fashion, especially if they are undiagnosed, untreated, or both; this can lead to alcoholism, eating disorders or other kinds of substance abuse. SAD is sometimes referred to as an "illness of lost opportunities" where "individuals make major life choices to accommodate their illness". According to ICD-10 guidelines, the main diagnostic criteria of social anxiety disorder are fear of being the focus of attention, or fear of behaving in a way that will be embarrassing or humiliating, often coupled with avoidance and anxiety symptoms. Standardized rating scales can be used to screen for social anxiety disorder and measure the severity of anxiety.
Stages
Child development
Some feelings of anxiety in social situations are normal and necessary for effective social functioning and developmental growth. The difficulty with identifying social anxiety disorder in children lies in determining the difference between social anxiety and basic shyness. Typically, children may be diagnosed when their social fears are extreme or cannot be outgrown. Cognitive advances and increased pressures in late childhood and early adolescence result in repeated social anxiety. More and more children are being diagnosed with social anxiety, and this can lead to problems with education if not closely monitored. Part of social anxiety is fear of being criticized by others, and in children, social anxiety causes extreme distress over everyday activities such as playing with other kids, reading in class, or speaking to adults. Some children with social anxiety may act out because of their fear, or they may exhibit nervousness or crying in an event where they feel anxious. Adolescents have identified their most common anxieties as focused on relationships with peers to whom they are attracted, peer rejection, public speaking, blushing, self-consciousness, panic, and past behavior. Most adolescents progress through their fears and meet the developmental demands placed on them.
Adults
It can be easier to identify social anxiety within adults because they tend to shy away from any social situation and keep to themselves. Common adult forms of social anxiety include performance anxiety, public speaking anxiety, stage fright, and timidness. All of these may also assume clinical forms, i.e., become anxiety disorders (see below).
Criteria that distinguish between clinical and nonclinical forms of social anxiety include the intensity and level of behavioral and psychosomatic disruption (discomfort) in addition to the anticipatory nature of the fear. Social anxieties may also be classified according to the broadness of triggering social situations. For example, fear of eating in public has a very narrow situational scope (eating in public), while shyness may have a wide scope (a person may be shy of doing many things in various circumstances). The clinical (disorder) forms are also divided into general social phobia (i.e., social anxiety disorder) and specific social phobia.
Signs and symptoms
Blushing is a physiological response unique to humans and is a hallmark physiological response associated with social anxiety. Blushing is the involuntary reddening of the face, neck, and chest in reaction to evaluation or social attention. Blushing occurs not only in response to feelings of embarrassment but also other socially-oriented emotions such as shame, guilt, shyness, and pride. Individuals high in social anxiety perceive themselves as blushing more than those who are low in social anxiety. Three types of blushing can be measured: self-perceived blushing (how much the individual believes they are blushing), physiological blushing (blushing as measured by physiological indices), and observed blushing (blushing observed by others). Social anxiety is strongly associated with self-perceived blushing, weakly associated with blushing as measured by physiological indices such as temperature and blood flow to the cheeks and forehead, and moderately associated with observed blushing. The relationship between physiological blushing and self-perceived blushing is small among those high in social anxiety, indicating that individuals with high social anxiety may overestimate their blushing. That social anxiety is associated most strongly with self-perceived blushing is also important for cognitive models of blushing and social anxiety, indicating that socially anxious individuals use both internal cues and other types of information to draw conclusions about how they are coming across. Individuals with social anxiety might also refrain from making eye contact or constantly fiddle with things during conversations or public speaking. Other indicators are physical symptoms, which may include rapid heartbeat, muscle tension, dizziness and lightheadedness, stomach trouble and diarrhea, inability to catch one's breath, and an "out of body" sensation.
Attention bias
Individuals who tend to experience more social anxiety turn their attention away from threatening social information and toward themselves, prohibiting themselves from challenging negative expectations about others and maintaining high levels of social anxiety. For example, a socially anxious individual may perceive rejection from a conversational partner, turn their attention away, and never learn that the individual is actually welcoming. Individuals who are high in social anxiety tend to show increased initial attention toward negative social cues, such as threatening faces, followed by attention away from these social cues, indicating a pattern of hypervigilance followed by avoidance. Attention in social anxiety has been measured using the dot-probe paradigm, which presents two faces next to one another. One face has an emotional expression and the other has a neutral expression, and when the faces disappear, a probe appears in the location of one of the faces. This creates a congruent condition, in which the probe appears in the same location as the emotional face, and an incongruent condition, in which it appears in the location of the neutral face. Participants respond to the probe by pressing a button, and differences in reaction times reveal attentional biases. This task has produced mixed results, with some studies finding no differences between socially-anxious individuals and controls, some studies finding avoidance of all faces by socially-anxious individuals, and other studies finding vigilance by socially-anxious individuals only toward threat faces. The face-in-the-crowd task shows that individuals with social anxiety are faster at detecting an angry face in a predominantly neutral or positive crowd, or slower at detecting happy faces, than a non-anxious person.
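The bias score derived from the dot-probe task is typically the difference between mean reaction times in the two conditions, so that positive values indicate vigilance toward the emotional faces. A minimal Python sketch, with invented reaction times in milliseconds (the convention used here is a common one, not a detail given in the studies cited above):

from statistics import mean

congruent_rt = [412, 398, 430, 405, 421]    # probe replaced the emotional face
incongruent_rt = [455, 440, 462, 449, 458]  # probe replaced the neutral face

# Faster responses when the probe replaces the emotional face imply
# attention was already there, so incongruent minus congruent > 0.
bias_index = mean(incongruent_rt) - mean(congruent_rt)
print(f"attention bias index: {bias_index:.1f} ms")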
Focus on the self has been associated with increased social anxiety and negative affect. However, there are two types of self-focus: public and private. In public self-focus, one shows concern for the impact of one's own actions on others and their impressions. This type of self-focus predicts greater social anxiety. Other more private forms of self-consciousness (e.g., egocentric goals) are associated with other types of negative affect.
Basic science research suggests that cognitive biases can be modified. Attention bias modification training has been shown to temporarily impact social anxiety.
Triggers and behaviors
Triggers are sets of events or actions that can remind someone of a previous trauma or feared consequence. Exposure to a trigger could lead a person to have an emotional or physical reaction. Individuals could also have behavioral changes, such as avoiding public places or situations that might direct excessive focus and attention toward them, such as public speaking or talking to new people. They also may not participate in certain activities for fear of embarrassment, which can lead to isolation. For someone who has social anxiety, this could lead them to have a panic attack. There are many negative side effects that can come from social anxiety if untreated, such as low self-esteem, trouble being assertive, hypersensitivity to criticism, poor social skills, becoming isolated, having difficulties with social relationships, low academic and employment achievements, substance abuse, and suicidal thoughts or attempts. Safety behaviors often involve avoidance of the trigger itself or of perceived threats when exposed to the trigger. For example, once in a feared social situation, a socially-anxious individual may avoid eye contact, speaking to strangers, or eating in front of others. Safety behaviors meant to make an individual feel safer have been found to most often enforce or validate anxious feelings, thus leading to a cycle in which the safety behavior is thought to be needed and the trigger's perceived threat is never challenged.
Measures and treatment
Trait social anxiety is most commonly measured by self-report. This method possesses limitations, but subjective responses are the most reliable indicator of a subjective state. Other measures of social anxiety include diagnostic interviews, clinician-administered instruments, and behavioral assessments. No single trait social anxiety self-report measure shows all psychometric properties, including different kinds of validity (content validity, criterion validity, construct validity), reliability, and internal consistency. The SIAS along with the SIAS-6A and -6B are rated as the best. These measures include:
Fear of Negative Evaluation (FNE) and Brief form (BFNE)
Fear Questionnaire Social Phobic Subscale (FQSP)
Interaction Anxiousness Scale (IAS)
Liebowitz Social Anxiety Scale--Self Report (LSAS-SR)
Older Adult Social-Evaluative Situations (OASES)
Social Avoidance and Distress (SAD)
Self-Consciousness Scale (SCS)
Social Interaction Anxiety Scale (SIAS) and brief form (SIAS-6A and -6B)
Social Interaction Phobia Scale (SIPS)
Social Phobia and Anxiety Inventory (SPAI) and brief form (SPAI-23)
Situational Social Avoidance (SSA)
Many types of treatments are available for Social Anxiety Disorder (SAD). The disorder can more effectively be treated if identified early, such as in the early teenage years when SAD onset usually occurs. Treatment is made more effective by considering individual patients’ backgrounds and needs and often by combining behavioral and pharmacological interventions. The first-line treatment for social anxiety disorder is cognitive behavioral therapy (CBT), with medications recommended only in those who are not interested in therapy. CBT is effective in treating social phobia, whether delivered individually or in a group setting. The cognitive and behavioral components seek to change thought patterns and physical reactions to anxiety-inducing situations. The cognitive part of CBT helps individuals with social anxiety challenge unhelpful thoughts and allow new patterns of positive or realistic thinking. The behavioral component involves taking action to challenge the identified negative thoughts, such as participating in an anxiety-inducing activity that isn't dangerous in reality. Challenging behaviors in this way is part of exposure therapy. The attention given to social anxiety disorder has significantly increased since 1999 with the approval and marketing of drugs for its treatment. Prescribed medications include several classes of antidepressants: selective serotonin reuptake inhibitors (SSRIs), serotonin-norepinephrine reuptake inhibitors (SNRIs), and monoamine oxidase inhibitors (MAOIs). Other commonly used medications include beta blockers and benzodiazepines. SAD is the most common anxiety disorder, with up to 10% of people being affected at some point in their life. Other treatments that individuals with social anxiety may find helpful include massages, meditation, mindfulness, hypnotherapy, and acupuncture.
Development and evolutionary theories
Social development in childhood
Fearful temperament and either underdeveloped social skills or excessive socialization of a child can cause the child to become hyper-aware of inappropriate social situations. Additional factors in upbringing which can increase the likelihood of a child to develop social anxiety include overprotection by parents, lack of an emotionally expressive home environment, and observation of other people's social fears or mistakes.
Sensory processing sensitivity
Sensory processing sensitivity (SPS) is a temperamental or personality trait involving "an increased sensitivity of the central nervous system and a deeper cognitive processing of physical, social and emotional stimuli". The trait is characterized by "a tendency to 'pause to check' in novel situations, greater sensitivity to subtle stimuli, and the engagement of deeper cognitive processing strategies for employing coping actions, all of which is driven by heightened emotional reactivity, both positive and negative". Genetic inheritance of a high level of sensory processing sensitivity may increase an individual's awareness of social situations and their potential consequences.
Biological adaptation to living in small groups
There is a suggestion that people have adapted to live with others in small groups. Living in a group is attractive to humans as there are more people to provide labor and protection, and there is a concentration of potential mates. Any perceived threat to group resources should leave an individual on guard, as should any potential position of status that might bring conflict with others. In effect, anxiety is adaptive because it helps people understand what is socially acceptable and what is not. The threat of exclusion from resources could lead to death.
Much of evolutionary theory is concerned with reproduction, so exposure to potential mates within a group is an evolutionary benefit. Finally, at a basic level, being confined to a particular group of people limits exposure to certain diseases. Studies have suggested that social affiliation has an impact on health, and the more integrated and accepted we are, the healthier we are. All of these factors are evolutionary primers for humans to be sensitive to social situations and their potential consequences.
Exclusion theory
At its simplest, social anxiety might stem from the basic human need to 'fit into' a given social group. Someone might be excluded due to their inability to contribute to a group, deviance from group standards, or even unattractiveness. Because of the benefits of living in a group, an individual would want to avoid social isolation at any cost. Knowing what is and is not seen as attractive to others allows individuals to anticipate and prevent rejection, criticism, or exclusion by others. Humans are physiologically sensitive to social cues and therefore detect changes in interactions which may indicate dissatisfaction or unpleasant reactions. Overall, social anxiety may serve as a way for people to avoid actions that might bring anticipated social exclusion.
See also
Alexithymia
Agoraphobia
Asociality
Bullying
Autism spectrum (Asperger syndrome, Autism)
Avoidant personality disorder
Competition
Emotional labor
Emotion work
Evaluation
Harassment
Highly sensitive person
Identity performance
Keeping up with the Joneses
Major depressive disorder
Obsessive-compulsive disorder
Peer pressure
Productivism
Rat race
Schizoid personality disorder
Selective mutism
Shame
Social determinants of health
Social determinants of health in poverty
Social determinants of mental health
Social inhibition
Social isolation
Social rejection
Social stress
Toxic workplace
Workplace harassment
References
Anxiety disorders
Biophilic design

Biophilic design is a concept used within the building industry to increase occupant connectivity to the natural environment through the use of direct nature, indirect nature, and space and place conditions. Used at both the building and city scale, it is argued that this idea has health, environmental, and economic benefits for building occupants and urban environments, with few drawbacks. Although the name was coined relatively recently, indicators of biophilic design have been seen in architecture from as far back as the Hanging Gardens of Babylon. While the design features that characterize biophilic design were all traceable in preceding sustainable design guidelines, the new term sparked wider interest and lent the approach academic credibility.
Biophilia hypothesis
The word "Biophilia" was first introduced by a psychoanalyst named Erich Fromm who stated that biophilia is the "passionate love of life and of all that is alive...whether in a person, a plant, an idea, or a social group" in his book The Anatomy of Human Destructiveness in 1973. Fromm's approach was that of a psychoanalyst (a person who studies the unconscious mind) and presented a broad spectrum as he called biophilia a biologically normal instinct.
The term has since been used by many scientists and philosophers and has been adapted to several different areas of study. Notable among these is Edward O. Wilson's book Biophilia (1984), in which he took a biologist's approach, first coined the "biophilia hypothesis", and popularized the notion. Wilson defined biophilia as "the innate tendency to focus on life and lifelike processes", claiming a link with nature is not only physiological (as Fromm suggested) but has a genetic basis. The biophilia hypothesis is the idea that humans have an inherited need to connect to nature and other biotic forms due to our evolutionary dependence on it for survival and personal fulfillment. This idea is relevant in daily life – humans travel and spend money to sightsee in national parks and nature preserves, relax on beaches, hike mountains, and explore jungles. Further, many sports revolve around nature, such as skiing, mountain biking, and surfing. From a home perspective, people are more likely to spend more on houses that have views of nature; buyers are willing to spend 7% more on homes with excellent landscaping, 58% more on properties that look at water, and 127% more on those that are waterfront. Humans also value companionship with animals: in America, 60.2 million households own dogs and 47.1 million own cats.
Biophobia
While biophilia refers to the inherent need to experience and love nature, biophobia is humans' inherited fear of nature and animals. In modern life, humans are driven to separate themselves from nature and move towards technology – a cultural drive in which people tend to associate with human artifacts, interests, and managed activities. Some anxieties about the natural environment are inherited from threats encountered during human evolution; these include fears of snakes, spiders, and blood. In relation to buildings, biophobia can be induced by bright colors, heights, enclosed spaces, darkness, and large open spaces, all of which are major contributors to occupant discomfort.
Dimensions
Considered as one of the pioneers of biophilic design, Stephen Kellert has created a framework where nature in the built environment is used in a way that satisfies human needs – his principles are meant to celebrate and show respect for nature, and provide an enriching urban environment that is multisensory. The dimensions and attributes that define Kellert's biophilic framework are below.
Direct experience of nature
Direct experience refers to tangible contact with natural features:
Light: Allows orientation of time of day and season, and is attributed to wayfinding and comfort; light can also cause natural patterns and form, movements and shadows. In design, this can be applied through clerestories, reflective materials, skylights, glass, and atriums. This provides well-being and interest from occupants.
Air: Ventilation, temperature, and humidity are felt through air. Such conditions can be applied through the use of windows and other passive strategies, but most importantly the variation in these elements can promote occupant comfort and productivity.
Water: Water is multisensory and can be used in buildings to provide movement, sounds, touch, and sight. In design it can be incorporated through water bodies, fountains, wetlands, and aquariums; people have a strong connection to water and when used, it can decrease stress and increase health, performance, and overall satisfaction.
Plants: Bringing vegetation to the exterior and interior spaces of the building provides a direct relationship to nature. This should be abundant (i.e., make use of green walls or many potted plants) and some vegetation should flower; plants have been proven to increase physical health, performance, and productivity and reduce stress.
Animals: While hard to achieve, contact with animals can be provided through aquariums, gardens, animal feeders, and green roofs. This interaction with animals promotes interest, mental stimulation, and pleasure.
Weather: Weather can be observed directly through windows and transitional spaces, but it can also be simulated through the manipulation of air within the space; awareness of weather was tied to human fitness and survival in ancient times and now promotes awareness and mental stimulation.
Natural landscapes: This is done by introducing self-sustaining ecosystems into the built environment. Given human evolution and history, people tend to enjoy savannah-like landscapes as they depict spaciousness and an abundance of natural life. Contact with these types of environments can come through vistas or through direct interactions such as gardens. Such landscapes are known to increase occupant satisfaction.
Fire: This natural element is hard to incorporate, however when implemented correctly into the building, it provides color, warmth, and movement, all of which are appealing and pleasing to occupants.
Indirect experience of nature
Indirect experience refers to contact with images and or representations of nature:
Images of Nature: This has been proven to be emotionally and intellectually satisfying to occupants; images of nature can be implemented through paintings, photos, sculptures, murals, videos, etcetera.
Natural Materials: People prefer natural materials as they can be mentally stimulating. Natural materials are susceptible to the patina of time, and this change invokes responses from people. These materials can be incorporated into buildings through the use of wood and stone, and interior design can use natural fabrics and furnishings. Leather has often been recommended as a biophilic material; however, with growing awareness of animal agriculture (leather being a co-product of the meat industry) as a major contributor to climate change, faux or plant-based leathers created from mushroom, pineapple skin, or cactus are now seen as viable alternatives. Destroying animals in the pursuit of feeling, and being, closer to nature and animals is also seen as counter-productive and in conflict with the philosophy of biophilia.
Natural Colors: Natural colors, or "earth tones", are those commonly found in nature and are often subdued tones of brown, green, and blue. When using colors in buildings, they should represent these natural tones. Brighter colors should be used only sparingly – one study found red flowers on plants to be fatiguing and distracting to occupants.
Simulations of Natural Light and Air: In areas where natural forms of ventilation and light cannot be achieved, creative use of interior lighting and mechanical ventilation can be used to mimic these natural features. Designers can do this through variations in lighting through different lighting types, reflective mediums, and natural geometries that the fixture can shine through; natural airflow can be imitated through mild changes in temperature, humidity, and air velocity.
Naturalistic Shapes: Natural shapes and forms can be achieved in architectural design through columns and nature-based patterns on facades – incorporating these different elements can change a static space into an intriguing and appealingly complex area.
Evoking Nature: This uses characteristics found in nature to influence the structural design of the project. These may be elements that do not literally occur in nature but that represent natural landscapes, such as mimicking the varied plant heights found in ecosystems, or particular animal, water, or plant features.
Information Richness: This can be achieved by providing complex, yet not noisy environments that invoke occupant curiosity and thought. Many ecosystems are complex and filled with different abiotic and biotic elements – in such the goal of this attribute is to include these elements into the environment of the building.
Change and the Patina of Time: People are intrigued by nature and how it changes, adapts, and ages over time, much like ourselves. In buildings, this can be accomplished by using organic materials that are susceptible to weathering and color change – this allows for us to observe slight changes in our built environment over time.
Natural Geometries: The design of facades or structural components can include the use of repetitive, varied patterns that are seen in nature (fractals). These geometries can also have hierarchically organized scales and winding flow rather than be straight with harsh angles. For instance, commonly used natural geometries are the honeycomb pattern and ripples found in water.
Biomimicry: This is a design strategy that imitates solutions found in nature to solve human and technical problems. Using these natural functions in construction can entice human creativity and consideration of nature.
Experience of space and place
The experience of space and place uses spatial relationships to enhance well-being:
Prospect and Refuge: Refuge refers to the building's ability to provide comfortable and nurturing interiors (alcoves, dimmer lighting), while prospect emphasizes views of horizons, movement, and potential sources of danger. Examples of design elements include balconies, alcoves, lighting changes, and areas of spaciousness (savannah-like environments).
Organized Complexity: This principle is meant to satisfy the need for controlled variability; this is done in design through repetition, change, and detail in the building's architecture.
Integration of Parts: When different parts comprise a coherent whole, it provides satisfaction for occupants; design elements include interior spaces with clear boundaries and/or the integration of a central focal point.
Transitional Spaces: This element aims to connect interior spaces with the outside or create comfort by providing access from one space to another environment through the use of porches, decks, atriums, doors, bridges, fenestrations, and foyers.
Mobility: The ability for people to comfortably move between spaces, even when complex; it provides the feeling of security for occupants and can be done through making clear points of entry and egress.
Cultural and Ecological Attachment to Place: Creating a cultural sense of place in the built environment creates human connection and identity. This is done by incorporating the area's geography and history into the design. Ecological identity is done through the creation of ecosystems that promote the use of native flora and fauna.
Each of these experiences is meant to be considered individually when using biophilia in projects, as there is no one right answer for a given building type. Each building's architect(s) and project owner(s) must collaborate to include the biophilic principles they believe fit within their scope and most effectively reach their occupants.
City-scale
Timothy Beatley believes the key objective of biophilic cities is to create an environment where the residents want to actively participate in, preserve, and connect with the natural landscape that surrounds them. He established ways to achieve this through a framework of infrastructure, governance, knowledge, and behavior; these dimensions can also be indicators of existing biophilic attributes that already exist in current cities.
Biophilic Conditions and Infrastructure: The idea that a certain number of people at any given time should be near a green space or park. This can be done through the creation of integrated ecological networks and walking trails throughout the city, the designation of certain portions of land area for vegetation and forests, green and biophilic building design features, and the use of flora and fauna throughout the city.
Biophilic Activities: This refers to the increased amount of time spent outside and visiting parks, longer outdoor periods at schools, improved foot traffic across the city, improved participation in community gardens and conservatory clubs, larger participation in local volunteer efforts.
Biophilic Attitudes and Knowledge: In areas with urban biophilic design elements, there will be an improved number of residents who care about nature and can identify local native species; resident curiosity of their local ecosystems also increases.
Biophilic Institutions and Governance: Local government bodies allocate part of the budget to nature and biophilic activities. Indicators of this include increased regulation that requires more green and biophilic design principles, grant programs that promote the use of nature and biophilia, the inclusion of natural history museums and educational programs, and increased number of nature non-governmental organizations and community groups.
Based on Kellert's dimensions, biophilic product design dimensions have also been presented.
Benefits
Biophilic design is argued to have a wealth of benefits for building occupants and urban environments through improving connections to nature. For cities, many believe the concept's biggest strength is its ability to make the city more resilient to any environmental stressor it may face.
Health benefits
Catherine Ryan et al. found that elements such as nature sounds improved mental health 37% faster than traditional urban noise after stressor exposure; the same study found that when surgery patients were exposed to aromatherapy, 45% used less morphine and 56% used fewer painkillers overall. Another study, by Kaitlyn Gillis and Birgitta Gatersleben, found that the inclusion of plants in interior environments reduces stress and increases pain tolerance; the use of water elements and views of nature are also mentally restorative for occupants. When researching the effects of biophilia on hospital patients, Peter Newman and Jana Soderlund found that increasing vista quality in hospital rooms reduces depression and pain in patients, which in turn shortened hospital stays from 3.67 days to 2.6 days.
In biophilic cities, Andrew Dannenberg et al. indicated that there are higher levels of social connectivity and better capability to handle life crises; this has resulted in lower crime rates and lower levels of violence and aggression. The same study found that implementing outdoor facilities such as impromptu gymnasiums like the "Green Gym" in the United Kingdom allows people to help clear overgrown vegetation, build walking paths, plant foliage, and more readily exercise (walking, running, climbing, etc.); this has been proven to build social capital, increase physical activity, and improve mental health and quality of life. Further, Dannenberg et al. also found that children growing up in green neighborhoods have lower levels of asthma; decreased mortality rates and smaller health disparities between the wealthy and poor were also observed in greener neighborhoods.
Mental health benefits
Fractal patterns, which are highly prevalent in nature, possess self-similar components that repeat at varying size scales, and their inclusion can shape the perceptual experience of human-made environments. Previous work has demonstrated consistent trends in preference for, and complexity estimates of, fractal patterns, though less is known about other visual judgments. One line of research has examined the aesthetic and perceptual experience of fractal 'global-forest' designs installed in human-made spaces. These are composite fractal patterns consisting of individual fractal 'tree-seeds' which combine to create a 'global fractal forest', where the local 'tree-seed' patterns, the global configuration of tree-seed locations, and the overall resulting 'global-forest' pattern all have fractal qualities. The designs span multiple mediums and are intended to lower occupant stress without detracting from the function and overall design of the space. The studies found divergent relationships between visual attributes: ratings of pattern complexity, preference, and engagement increased with fractal complexity, whereas ratings of refreshment and relaxation stayed the same or decreased with complexity. They also determined that the local constituent 'tree-seed' patterns contribute to the perception of the overall fractal design, and addressed how to balance aesthetic and psychological effects (such as individual experiences of perceived engagement and relaxation) in fractal design installations. Overall, the findings suggest that fractal preference is driven by a balance between increased arousal (a desire for engagement and complexity) and decreased tension (a desire for relaxation or refreshment). Installations of composite mid-to-high complexity 'global-forest' patterns consisting of 'tree-seed' components balance these contrasting needs, and can serve as a practical implementation of biophilic patterns in human-made environments to promote occupant wellbeing.
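The self-similarity these studies build on can be illustrated with a short sketch. The following Python snippet is a generic recursive branching routine, not the generator used in the published 'tree-seed' designs; the branching angle (25 degrees), scaling ratio (0.7), and recursion depth are illustrative assumptions.

```python
import math

def branch(x, y, angle, length, depth, segments):
    """Recursively collect line segments for a self-similar 'tree' pattern.

    Every branch spawns two child branches rotated by a fixed angle and
    scaled by a fixed ratio, so the same structure repeats at smaller and
    smaller scales: the defining property of a fractal pattern.
    """
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    for turn in (math.radians(25), math.radians(-25)):  # illustrative values
        branch(x2, y2, angle + turn, length * 0.7, depth - 1, segments)

segments = []
branch(0.0, 0.0, math.pi / 2, 100.0, 8, segments)
print(len(segments))  # 255 segments: a binary tree of depth 8 (2^8 - 1)
```

Increasing the recursion depth or the scaling ratio raises the pattern's visual complexity, which, per the findings above, tends to increase engagement ratings while reducing perceived relaxation; mid-to-high complexity settings aim to balance the two.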
Environmental benefits
Some argue that by adding physical natural elements, such as plants, trees, rain gardens, and green roofs, to the built environment, buildings and cities can manage stormwater runoff better as there are fewer impervious surfaces and better infiltration. To maintain these natural systems in a cost-effective way, excess greywater can be reused to water the plants and greenery; vegetative walls and roofs also decrease polluted water as the plants act as biofilters. Adding greenery also reduces carbon emissions, the heat island effect, and increases biodiversity. Carbon is reduced through carbon sequestration in the plant's roots during photosynthesis. Green and high albedo rooftops and facades, and shading of streets and structures using vegetation can reduce the amount of heat absorption normally found in asphalt or dark surfaces – this can reduce heating and cooling needs by 25% and reduce temperature fluctuations by 50%. Further, adding green facades can increase the biodiversity of an area if native species are planted - the Khoo Teck Puat Hospital in Singapore has seen a resurgence of 103 species of butterflies onsite, thanks to their use of vegetation throughout the exterior of the building.
Economic benefits
Biophilia may have slightly higher costs due to the addition of natural elements that require maintenance, higher-priced organic items, and so on; however, the perceived health and environmental benefits are believed to offset these costs. Peter Newman found that by adding biophilic design and landscapes, cities like New York City can see savings nearing $470 million due to increased worker productivity and $1.7 billion from reduced crime expenses. The same research also found that storefronts on heavily vegetated streets increased foot traffic and attracted consumers likely to spend 25% more; the same study showed that increasing daylighting through skylights in a store increased sales by 40% +/- 7%. Properties with biophilic design also benefit from higher selling prices, with many selling at 16% more than conventional buildings.
Sustainability and resilience
On the urban scale, Timothy Beatley believes that biophilic design will allow cities to better adapt to stresses that occur from changes in climate and thus, local environments. To better show this, he created a biophilic cities framework, where pathways can be taken to increase the resilience and sustainability of cities. This includes three sections: Biophilic Urbanism - the physical biophilic and green measures that can be taken to increase the resilience of the city, Adaptive Capacity - how the community's behaviors will adapt as a result of these physical changes, and Resilient Outcomes - what can happen if both of these steps are achieved.
Under the Biophilic Urbanism section, one of the ways a city can increase resilience is by pursuing the biophysical pathway – by safeguarding and promoting the inclusion of natural systems, the natural protective barrier of the city is strengthened. For example, New Orleans is a city that has built over its natural wet plains and thereby exposed itself to flooding. It is estimated that keeping the bayous intact could save the city $23 billion yearly in storm protection.
In the Adaptive Capacity section, Beatley states that the commitment to place and home pathway creates stimulating and interesting nature environments for residents – this will create stronger bonds to home, which will increase the likelihood that citizens will take care of where they live. He goes further in saying that in times of shock or stress, these people are more likely to rebuild and or support the community instead of fleeing. This may also increase governmental action to protect the city from future disasters.
By achieving Biophilic Urbanism and Adaptive Capacity, Beatley believes that one of the biggest resilient outcomes of this framework will be increased adaptability of the residents. Because the steps leading to resilience encourage people to be outside walking and participating in activities, the citizens become healthier and more physically fit; it has been found that those who take walks in nature experience decreased depression, anger, and increased vigor, versus those who walk in interior environments.
Use in building standards
Given the increased information supporting the benefits of biophilic design, organizations are beginning to incorporate the concept into their standards and rating systems to encourage building professionals to use biophilia in their projects. As of now, the most prominent supporters of biophilic design are the WELL Building Standard and the Living Building Challenge.
WELL Building Standard
The International WELL Building Institute uses biophilic design in its WELL Standard as both a qualitative and a quantitative metric. The qualitative metric must incorporate nature (environmental elements, natural lighting, and spatial qualities), natural patterns, and nature interaction within and outside the building; these efforts must be documented through professional narrative to be considered for certification. For the quantitative portion, projects must have outdoor biophilia (25% of the project area must be accessible landscaped grounds and/or rooftop gardens, of which 70% must have plantings), indoor biophilia (plant beds and pots must cover 1% of the floor area and plant walls must cover 2% of the floor area), and water features (projects over 100,000 sqft must have a water feature that is either 1.8 m in height or 4 m2 in floor area). Verification is enforced through assurance letters by the architects and owners, and by on-site spot checks. Generally, both metric types can be applied to every building type the WELL Standard addresses, with two exceptions: core and shell construction does not need to include quantitative interior biophilia, and existing interiors do not need to include qualitative nature interaction.
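To make the quantitative thresholds above concrete, the sketch below encodes them as a simple check. The function and parameter names are hypothetical, the geometry is simplified to pre-measured areas, and actual WELL verification relies on assurance letters and on-site spot checks rather than any such calculation.

```python
def check_well_biophilia_quantitative(
    site_area_m2: float,
    landscaped_or_roof_garden_m2: float,  # accessible landscaped/roof garden
    planted_m2: float,                    # planted portion of the above
    floor_area_m2: float,
    plant_bed_m2: float,
    plant_wall_m2: float,
    water_feature_height_m: float = 0.0,
    water_feature_area_m2: float = 0.0,
) -> dict:
    """Hypothetical check of the quantitative biophilia thresholds above."""
    SQFT_PER_M2 = 10.7639

    # Outdoor biophilia: 25% of the project must be accessible landscaped
    # grounds and/or rooftop gardens, with 70% of that area planted.
    outdoor_ok = (
        landscaped_or_roof_garden_m2 >= 0.25 * site_area_m2
        and planted_m2 >= 0.70 * landscaped_or_roof_garden_m2
    )

    # Indoor biophilia: plant beds/pots on 1% and plant walls on 2% of
    # the floor area.
    indoor_ok = (
        plant_bed_m2 >= 0.01 * floor_area_m2
        and plant_wall_m2 >= 0.02 * floor_area_m2
    )

    # Water feature: required only for projects over 100,000 sq ft; it must
    # be 1.8 m in height or 4 m^2 in floor area.
    needs_water = floor_area_m2 * SQFT_PER_M2 > 100_000
    water_ok = (not needs_water) or (
        water_feature_height_m >= 1.8 or water_feature_area_m2 >= 4.0
    )

    return {"outdoor": outdoor_ok, "indoor": indoor_ok, "water": water_ok}

print(check_well_biophilia_quantitative(
    site_area_m2=5000, landscaped_or_roof_garden_m2=1400, planted_m2=1000,
    floor_area_m2=12000, plant_bed_m2=130, plant_wall_m2=250,
    water_feature_area_m2=4.5,
))  # {'outdoor': True, 'indoor': True, 'water': True}
```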
Living Building Challenge
The International Living Future Institute is the creator of the Living Building Challenge – a rigorous building standard that aims to maximize building performance. This standard classifies the use of a biophilic environment as an imperative element in its health and happiness section. The Living Building Challenge requires that a framework be created showing how the project will incorporate nature through environmental features, light, and space; natural shapes and forms; natural patterns; and place-based relationships. The challenge also requires that occupants be able to connect to nature directly through interaction within the interior and exterior of the building. These requirements are then verified through a preliminary audit procedure.
Criticisms
Biophilic design, and sustainable design more generally, is slowly being accepted by major developers and green building certification companies. Historically, however, the benefits of nature were not given scientific credence, and until recently there was little funding for research exploring the long-term challenges, negatives, and even benefits of biophilic elements in buildings and cities. Some have noted that biophilic design has focused primarily on benefits to humans, and that few projects claiming to be biophilic contribute positively to nature or biodiversity, an omission that could be easily corrected. Other concerns are the initial and maintenance costs of projects that implement expensive biophilic design principles.
Building-scale examples of application
Church of Mary Magdalene
The Church of Mary Magdalene is in Jerusalem and was consecrated in 1888. This church's architecture is biophilic in that it contains natural geometries, organized complexity, information richness, and organic forms (onion-shaped domes) and materials. On the exterior, complexity and order are shown through the repetitive use of domes and their scale and placement. Inside, the church displays symmetry and a savannah-like quality through its vaulting and domes – the columns also have leaf-like fronds, which represent images of nature. Prospect is explored through raised ceilings that have balconies and increased lighting; refuge is experienced in lower areas with reduced lighting and alcoves, and throughout the building, where small windows are encased by thick walls.
Fallingwater
Fallingwater, one of Frank Lloyd Wright's most famous buildings, exemplifies many biophilic features. The home has human-nature connectivity through the integrative use of the waterfall and stream in its architecture – the sound from these water features can be heard throughout the inside of the home. This allows visitors to feel like they are "participating" in nature rather than "spectating" it, as they would be if the waterfall were downstream. In addition, the structure is built around existing foliage and encompasses the local geology by incorporating a large rock in the center of the living room. There are also many glass walls to connect the occupants to the surrounding woods and nature outdoors. To better the flow of the space, Wright included many transitional spaces in the home (porches and decks); he also enhanced the direct and indirect experiences of nature by using multiple fireplaces and a wealth of organic shapes, colors, and materials. The use of Kellert's biophilic design principles is prominent throughout the structure, even though the home was constructed before these ideas were developed.
Khoo Teck Puat Hospital
Referred to as a "garden hospital", KTP has an abundance of native plants and water features surrounding its exterior. This inclusion of vegetation has increased the biodiversity of the local ecosystem, bringing in butterfly and bird species; the rooftop of the hospital is also used by local residents to grow produce. Unlike at many other hospitals, 15% of visitors come to Khoo Teck Puat for recreational reasons such as gardening or relaxing. The design intent behind the hospital was to increase the productivity of its doctors, the wellbeing of its visitors, and the healing speed and pain resilience of its patients. To do this, the designers incorporated greenery from the hospital's courtyard to its upper floors, where patients have balconies covered in scented foliage. The hospital is centered on the Yishun pond, and like Frank Lloyd Wright's Fallingwater, the architects made this natural feature part of the hospital by having water stream through its courtyard, creating the illusion that the water was "drawn" from the pond. The hospital also utilizes natural ventilation as much as possible in common areas and corridors by orienting them in the direction of the north and southeast prevailing winds; this has reduced energy consumption by 60% and increased airflow by 20-30%, creating thermally adequate environments for patients and medical staff alike. Judged against Kellert's strategies above, most of the strategies used at Khoo Teck Puat are direct nature experiences. The hospital also uses transitional spaces to make occupants more connected to the outdoors and has organized complexity throughout its overall architectural design. KTP has created a sense of place for occupants and neighbors, as it acts as a communal place both for those who work there and for those who live nearby.
Sandy Hook Elementary School
After the disaster that struck Sandy Hook Elementary in 2012, a new school was built to help heal the community and provide a new sense of security for those occupying the space. Major biophilic design parameters that Svigals + Partners included in this project are animal feeders, wetlands, courtyards, natural shapes and patterns, natural materials, transitional spaces, images of nature, natural colors, and use of natural light. The school has incorporated a victory garden that is meant to act as a way of healing for children after the tragedy. The architects wanted the children to feel as if they are learning in the trees, so they set the school back at the edge of the woods and surrounded the space with large windows; there are also metaphoric metal trees in the lobby that have reflective metal leaves that refract light onto colored glass. Using Kellert's biophilic framework, it is apparent that the school utilizes many different nature experiences. The use of wood planks and stone on the outside of the building helps reinforce indirect experiences of nature because these are natural materials. Further, the interior environment of the school experiences information richness through the architects' use of light reflection and color. Naturalistic shapes are brought into the interior environment through the metal trees and leaves. For experiences of space and place, Svigals + Partners bring nature into the classroom and school through the placement of windows that act as transitional spaces. The school also has a variety of breezeways, bridges, and pathways for students as they move from one space to another. Direct experiences of nature are enjoyed through water features, large rain gardens, and courtyards found on the property. The animal feeders also act as a way to bring fauna into the area.
City-scale examples of application
Singapore, Singapore
Nicknamed a "city in a garden", Singapore has dedicated many resources to make a system of nature preserves, parks and connectors (ex. Southern Ridges), and tree-lined streets that promote the return of wildlife and reduce the heat island effect that is often seen in dense city centers; the government agrees with Kellert and Beatley that daily doses of nature enhance the wellbeing of citizens. To manage stormwater, the Singaporean government has implemented the Bishan-Ang Mo Kio Park Project, where the old concrete water drains were excavated for the reconstruction of the Kallang River; this allowed residents in the area to enjoy the physiological and physical health benefits of having a green space with water. The reimagining of the park has increased the biodiversity of the local ecosystem, with dragonflies, butterflies, hornbills, and smooth-coated otters returning to the Singaporean region - the river also acts as a natural stormwater management system by increasing infiltration and movement of excess water.
To increase the immediate presence of nature in the city, Singapore provides subsidies (up to half the installation cost) for those who include vegetative walls, green roofs, sky parks, etc. in their building designs. The city-state also has an impressive number of biophilic buildings and structures. For example, their Gardens by the Bay Project has an installation called the "Supertree Grove". This urban nature installation has over 160,000 plants that stem from 200 different species installed in the 16 supertrees; many of these urban trees have sky walkways, observatories, and or solar panels. Lastly, Singapore has implemented efforts to increase community engagement through the creation of over 1,000 community gardens for resident use.
Oslo, Norway
Oslo is sandwiched between the Oslo Fjord and wooded areas, and woods serve as an important feature of the municipality. More than two-thirds of the city is protected forest; in recent surveys, over 81% of Oslo residents said they had visited these forests at least once in the past year. These forests are protected, as Oslo adheres to ISO 14001 for its forest management – the trees are controlled under "living forest" standards, which means that limited harvesting is acceptable. In addition to its extensive forest system, the city compounds its exposure to nature by bringing the natural environment into the urban setting. Being an already compact city (after all, two-thirds is forest), Oslo allocates around 20% of its urban land to green spaces; the local government is in the process of creating a network of paths to connect these green areas so that citizens can walk and ride their bikes undisturbed. In addition to expanding park accessibility, the city has also restored its river, the Akerselva, which runs through Oslo's center. Because the water feature is near sets of dense housing, the city made the river more appealing and accessible to residents by adding waterfalls and nature trails; altogether the city has 365 kilometers' worth of nature trails.
To connect the city with its fjords, Oslo's government has started the process of putting its roadways underground in tunnels. This, combined with the construction of aesthetically creative architecture (Barcode Project) on the waterfront and promenade foot trails, is transforming this area into a place where residents can experience enjoyment from the unobstructed views of the fjord. Lastly, Oslo has a Noise Action Plan to help alleviate urban noise levels – some of these areas (mostly recreational) have noise levels as low as 50 dB.
Sydney, Australia
One Central Park in Sydney is a residential development known for its innovative biophilic design. Completed in 2014, the project was designed by Ateliers Jean Nouvel and features two towers with a distinctive vertical garden set on a common retail podium. Hydroponic walls with different native and exotic plants serve as natural sun shade that varies with the seasons, protecting the apartments from direct sun in the summer and letting in maximum sunlight in the winter.
See also
Biomimetic architecture
Building-integrated agriculture
Ecological design
Folkewall
Green architecture
Green building and wood
Green building
Green roof
Greening
Log house
Natural building
Roof garden
Sustainable city
Thorncrown Chapel
References
Biophilia hypothesis
Sustainable architecture
Urban design
Anti-social behaviour

Antisocial behaviours, sometimes called dissocial behaviours, are actions considered to violate the rights of or otherwise harm others, whether through crime or nuisance, such as stealing and physical attack, or through noncriminal behaviours such as lying and manipulation. Such behaviour is considered disruptive to others in society and can be carried out in various ways, including, but not limited to, intentional aggression as well as covert and overt hostility. Anti-social behaviour also develops through social interaction within the family and community. It continuously affects a child's temperament, cognitive ability and involvement with negative peers, dramatically affecting children's cooperative problem-solving skills. Many people also label behaviour which is deemed contrary to prevailing norms for social conduct as anti-social behaviour. However, researchers have stated that it is a difficult term to define, particularly in the United Kingdom, where many acts fall into its category. The term is especially used in Irish English and British English.
Although fairly new to the common lexicon, the term anti-social behaviour has been used for many years in the psychosocial world, where it was defined as "unwanted behaviour as the result of personality disorder." For example, David Farrington, a British criminologist and forensic psychologist, stated that teenagers can exhibit anti-social behaviour by engaging in wrongdoings such as stealing, vandalism, sexual promiscuity, excessive smoking, heavy drinking, confrontations with parents, and gambling. In children, conduct disorders can result from ineffective parenting. Anti-social behaviour is typically associated with other behavioural and developmental issues such as hyperactivity, depression, learning disabilities, and impulsivity. Alongside these issues, one can be predisposed or more inclined to develop such behaviour due to genetic, neurobiological and environmental stressors, from the prenatal stage of life through the early childhood years.
The American Psychiatric Association, in its Diagnostic and Statistical Manual of Mental Disorders, diagnoses persistent anti-social behaviour starting from a young age as antisocial personality disorder. Genetic factors include abnormalities in the prefrontal cortex of the brain, while neurobiological risks include maternal drug use during pregnancy, birth complications, low birth weight, prenatal brain damage, traumatic head injury, and chronic illness. The World Health Organization includes it in the International Classification of Diseases as dissocial personality disorder. A pattern of persistent anti-social behaviours can also be present in children and adolescents diagnosed with conduct problems, including conduct disorder or oppositional defiant disorder under the DSM-5. It has been suggested that individuals with intellectual disabilities have higher tendencies to display anti-social behaviours, but this may be related to social deprivation and mental health problems. More research is required on this topic.
Development
Intent and discrimination may determine both pro-social and anti-social behaviour. Infants may act in seemingly anti-social ways and yet be generally accepted as too young to know the difference before the age of four or five. Berger states that parents should teach their children that "emotions need to be regulated, not depressed". The assumption that behaviour which is "simply ignorant" in infants has antisocial causes in persons older than four or five is problematic: it presumes that what appears to be the same behaviour has fewer possible causes in a more complex brain (with its more advanced consciousness) than in a less complex one, whereas critics note that a more complex brain increases, rather than decreases, the number of possible causes of what looks like the same behaviour.
Studies have shown that children aged 13–14 who bully or show aggressive behaviour towards others go on to exhibit anti-social behaviours in early adulthood. There are strong statistical relationships demonstrating this significant association between childhood aggressiveness and anti-social behaviours. Analyses found that 20% of children who exhibited anti-social behaviours had court appearances and police contact at later ages as a result of their behaviour.
Many of the studies regarding the media's influence on anti-social behaviour have been deemed inconclusive. Some reviews have found strong correlations between aggression and the viewing of violent media, while others find little evidence to support their case. The only unanimously accepted truth regarding anti-social behaviour is that parental guidance carries an undoubtedly strong influence; providing children with brief negative evaluations of violent characters helps to reduce violent effects in the individual.
Cause and effects
Family
Families greatly impact the causation of anti-social behaviour. Some other familial causes are parent history of anti-social behaviours, parental alcohol and drug abuse, unstable home life, absence of good parenting, physical abuse, parental instability (mental health issues/PTSD) and economic distress within the family.
Neurobiology
Studies have found a link between antisocial behaviour and increased amygdala activity, specifically in response to angry facial expressions. This research suggests that the over-reactivity to perceived threats that accompanies antisocial behaviour may stem from this increase in amygdala activity. This heightened response to perceived threat does not extend to emotions centered around distress.
Consumption patterns
There is a small link between antisocial personality characteristics in adulthood and more TV watching as a child. The risk of a criminal conviction in early adulthood increased by nearly 30 percent with each hour children spent watching TV on an average weekend. Peers can also affect one's predisposition to anti-social behaviours; in particular, children are more likely to adopt anti-social behaviours if such behaviours are present within their peer group. Especially among youth, patterns of lying, cheating and disruptive behaviour found in young children are early signs of anti-social behaviour. Adults should intervene if they notice their children exhibiting these behaviours. Early detection is best in the preschool and middle school years, in the hope of interrupting the trajectory of these negative patterns. These patterns in children can lead to conduct disorder, a disorder in which children rebel against age-appropriate norms. Moreover, these problems can lead to oppositional defiant disorder, in which children become defiant towards adults and develop vindictive behaviours and patterns. Furthermore, children who exhibit anti-social behaviour are also more prone to alcoholism in adulthood.
Intervention and treatment
Because anti-social behaviour is a highly prevalent mental health problem in children, many interventions and treatments have been developed to prevent it and to help reinforce pro-social behaviours.
Several factors are considered direct or indirect causes of developing anti-social behaviour in children, and addressing these factors is necessary to develop a reliable and effective intervention or treatment. Children's perinatal risk, temperament, intelligence, nutrition level, and interaction with parents or caregivers can influence their behaviours. As for parents or caregivers, their personality traits, behaviours, socioeconomic status, social network, and living environment can also affect children's development of anti-social behaviour.
An individual's age at intervention is a strong predictor of the effectiveness of a given treatment. The specific kinds of anti-social behaviours exhibited, as well as the magnitude of those behaviours, also affect how effective a treatment is for an individual. Behavioural parent training (BPT) is more effective for preschool or elementary school-aged children, while cognitive behavioural therapy (CBT) is more effective for adolescents. Moreover, early intervention against anti-social behaviour is relatively more promising. For preschool children, the family is the main context for intervention and treatment; relevant factors include the interaction between children and parents or caregivers, parenting skills, social support, and socioeconomic status. For school-aged children, the school context also needs to be considered. Collaboration amongst parents, teachers, and school psychologists is usually recommended to help children develop the ability to resolve conflicts, manage their anger, interact positively with other students, and learn pro-social behaviours in both home and school settings.
Moreover, training for parents or caregivers is also important. Children are more likely to learn positive social behaviours and reduce inappropriate behaviours if their parents become good role models and have effective parenting skills.
Cognitive behavioural therapy
Cognitive behavioural therapy (CBT) is a highly effective, evidence-based therapy for anti-social behaviour. This type of treatment focuses on enabling patients to create an accurate image of the self, allowing individuals to find the triggers of their harmful actions and changing how they think and act in social situations. Because of their impulsivity, their inability to form trusting relationships, and their tendency to blame others when a situation arises, individuals with particularly aggressive anti-social behaviours tend to have maladaptive social cognitions, including hostile attribution bias, which lead to negative behavioural outcomes. CBT has been found to be more effective for older children and less effective for younger children. Problem-solving skills training (PSST) is a type of CBT that aims to recognize and correct how an individual thinks and consequently behaves in social environments. This training provides steps to help people evaluate potential solutions to problems occurring outside of therapy and learn how to create positive solutions to avoid physical aggression and resolve conflict.
Therapists, when providing CBT interventions to individuals with anti-social behaviour, should first assess the level of risk of the behaviour in order to establish a plan for the duration and intensity of the intervention. Moreover, therapists should support and motivate individuals to practice the new skills and behaviours in environments and contexts where the conflicts would naturally occur, in order to observe the effects of CBT.
Behavioural parent training
Behavioural parent training (BPT), or parent management training (PMT), focuses on changing how parents interact with their children and equips them with ways to recognize and change their child's maladaptive behaviour in a variety of situations. BPT assumes that individuals are exposed to reinforcements and punishments daily and that anti-social behaviour, which can be learned, is a result of these reinforcements and punishments. Since certain types of interactions between parents and children may reinforce a child's anti-social behaviour, the aim of BPT is to teach the parent effective skills to better manage and communicate with their child. This can be done by reinforcing pro-social behaviours while punishing or ignoring anti-social behaviours. It is important to note that the effects of this therapy can be seen only if the newly acquired communication methods are maintained. BPT has been found to be most effective for younger children under the age of 12. Researchers credit the effectiveness of this treatment at younger ages to the fact that younger children are more reliant on their parents. BPT is used to treat children with conduct problems, but also children with ADHD.
According to a meta-analysis, the effectiveness of BPT is supported by short-term changes on the children's anti-social behaviour. However, whether these changes are maintained over a longer period of time is still unclear.
School-based Intervention
First Step to Success is an early intervention for kindergarten to 3rd grade children who are demonstrating antisocial behaviours. First Step is a collaborative intervention between home and school. There are three important components: (1) screening; (2) school intervention (CLASS), which teaches the child appropriate behaviour through positive reinforcement; and (3) home intervention (HomeBase), which teaches the parent key skills for supporting their child and the use of positive reinforcement. The classroom intervention phase (CLASS) takes about 30 days to complete and has 3 phases: (1) coach-led; (2) teacher-led; (3) maintaining. The Red Card/Green Card game (red = inappropriate behaviour; green = appropriate behaviour) is played at school each day. The coach/teacher shows a red/green card as a visual cue to the target student based on their current behaviour. Points are earned if the card is on green at the end of a timed interval. If enough points are earned at the end of the game, the target child gets to choose a reward that the entire class can enjoy together (e.g., extra time at recess or playing a special game). Coaches/teachers communicate daily with parents throughout the intervention. The home intervention (HomeBase) begins a few days after the classroom intervention. HomeBase builds parents' confidence in 6 specific skill areas and in parent-child activities. Coaches meet with parents once weekly for 6 weeks. Parents engage with the target child for 10–15 minutes daily in one-on-one time during the intervention. Overall, First Step takes about 3 months to implement, requires minimal time from parents and teachers, and has shown empirically positive results in increasing prosocial behaviour in at-risk children.
Psychotherapy
Psychotherapy, or talk therapy, although not always effective, can also be used to treat individuals with anti-social behaviour. Individuals can learn skills such as anger and violence management. This type of therapy can help individuals with anti-social behaviour bridge the gap between their feelings and behaviours, a connection they previously lacked. It is most effective when specific issues are discussed with the individual, rather than broad general concepts. This type of therapy works well with individuals at a mild to moderate stage of anti-social behaviour, since they still have some sense of responsibility regarding their own problems. Mentalization-based treatment is another form of group psychotherapy, shifting its focus to the relational and mental factors related to anti-social personality disorder rather than anger management and violent acts. This particular group therapy targets the mentalizing vulnerabilities and attachment patterns of patients by using a semi-structured group process focused on personal formulation and by establishing group values to promote learning from other members and generating "we-ness."
When working with individuals with anti-social behaviour, therapists must be mindful of building a trusting therapeutic relationship, since these individuals might never have experienced rewarding relationships. Therapists also need to remember that change may take place slowly, so the ability to notice small changes and constant encouragement for the individual to continue the intervention are required.
Family therapy
Family therapy, which is a type of psychotherapy, helps promote communication between family members, thus resolving conflicts related to anti-social behaviour. Since family exerts enormous influence over children's development, it is important to identify the behaviours that could potentially lead to anti-social behaviours in children. It is a relatively short-term therapy which involves the family members who are willing to participate. Family therapy can be used to address specific topics such as aggression. The therapy may end when the family can resolve conflicts without needing the therapists to intervene.
Diagnosis
There is no official diagnosis for anti-social behaviour. However, one can look at the official diagnosis for antisocial personality disorder (ASPD) and use it as a guideline, while keeping in mind that anti-social behaviour and ASPD are not to be confused.
Distinguishing from antisocial personality disorder
When looking at non-ASPD patients who show anti-social behaviour and at ASPD patients, the same types of behaviours are involved. However, ASPD is a personality disorder defined by the consistency and stability of the observed behaviour, in this case anti-social behaviour. Antisocial personality disorder can only be diagnosed when a pattern of anti-social behaviour began being noticeable during childhood and/or the early teens and has remained stable and consistent across time and context. The official DSM-IV-TR criteria for ASPD specify that the anti-social behaviour must occur outside of time frames surrounding traumatic life events or manic episodes (if the individual is diagnosed with another mental disorder). The diagnosis of ASPD cannot be made before the age of 18. For example, someone who exhibits anti-social behaviour with their family but pro-social behaviour with friends and coworkers would not qualify for ASPD, because the behaviour is not consistent across contexts. Someone who was consistently behaving in a pro-social way and then began exhibiting anti-social behaviour in response to a specific life event would not qualify for ASPD either, because the behaviour is not stable across time.
Law-breaking behaviour in which individuals put themselves or others at risk is considered anti-social even if it is not consistent or stable (examples: speeding, use of drugs, getting into physical conflict). In relation to the previous statement, juvenile delinquency is a core element of the diagnosis of ASPD. Individuals who begin getting into trouble with the law (in more than one area) at an abnormally early age (around 15) and keep recurrently doing so in adulthood may be suspected of having ASPD.
Evidence: frustration and aggression
With some limitations, research has established a correlation between frustration and aggression when it comes to anti-social behaviour. The presence of anti-social behaviour may be detected when an individual experiences an abnormally high number of frustrations in their daily routine and when those frustrations consistently result in aggression; the term impulsivity is commonly used to describe this behavioural pattern. Anti-social behaviour can also be detected when the aggressiveness and impulsiveness of the individual's responses to frustration are such that they obstruct social interactions and the achievement of personal goals. In both cases, the types of treatment and therapy mentioned earlier in this article may be considered.
Examples in childhood: being unable to make friends, being unable to follow rules, getting expelled from school, being unable to complete minimal levels of education (elementary school, middle school).
Examples in early adulthood: being unable to keep a job or an apartment, difficulty maintaining relationships.
Prognosis
The prognosis for anti-social behaviour is not very favourable due to its high stability throughout child development. Studies have shown that children who are aggressive and have conduct problems are more likely to display anti-social behaviour in adolescence. Early intervention is relatively more effective, since the anti-social pattern has lasted for a shorter period of time. Moreover, since younger children have smaller social networks and fewer social activities, fewer contexts need to be considered for intervention and treatment. For adolescents, studies have shown that treatments become less effective.
The prognosis does not seem to be influenced by the duration of the intervention; however, long-term follow-up is necessary to confirm that an intervention or treatment is effective.
Individuals who exhibit anti-social behaviour are more likely to use drugs and abuse alcohol. This can worsen the prognosis, since they are less likely to be involved in social activities and tend to become more isolated.
By location
United Kingdom
An anti-social behaviour order (ASBO) is a civil order made against a person who has been shown, on the balance of evidence, to have engaged in anti-social behaviour. The orders, introduced in the United Kingdom by Prime Minister Tony Blair in 1998, were designed to criminalize minor incidents that would not have warranted prosecution before.
The Crime and Disorder Act 1998 defines anti-social behaviour as acting in a manner that has "caused or was likely to cause harassment, alarm or distress to one or more persons not of the same household" as the perpetrator. There has been debate concerning the vagueness of this definition.
However, among legal professionals in the UK there are behaviours commonly considered to fall under the definitions of anti-social behaviour. These include, but are not limited to, threatening or intimidating actions, racial or religious harassment, verbal abuse, and physical abuse.
In a survey conducted by University College London during May 2006, the UK was thought by respondents to be Europe's worst country for anti-social behaviour, with 76% believing Britain had a "big or moderate problem".
Current legislation governing anti-social behaviour in the UK is the Anti-Social Behaviour, Crime and Policing Act 2014, which received Royal Assent in March 2014 and came into force in October 2014. It replaces tools such as the ASBO with six streamlined tools designed to make it easier to act on anti-social behaviour.
Australia
Anti-social behaviour can negatively affect Australian communities and their perception of safety. The Western Australia Police force defines anti-social behaviour as any behaviour that annoys, irritates, disturbs or interferes with a person's ability to go about their lawful business. In Australia, many different acts are classed as anti-social behaviour, such as: misuse of public space; disregard for community safety; disregard for personal well-being; acts directed at people; graffiti; protests; liquor offences; and drunk driving. It has been found to be very common for Australian adolescents to engage in different levels of anti-social behaviour. In a 1996 survey of 441,234 secondary school students in years 7 to 12 in New South Wales, Australia, 38.6% reported intentionally damaging or destroying someone else's property, 22.8% admitted to having received or sold stolen goods, and close to 40% confessed to attacking someone with the intent of hurting them. The Australian community is encouraged to report any behaviour of concern and plays a vital role in assisting police to reduce anti-social behaviour. One study conducted in 2016 examined how perpetrators of anti-social behaviour may not actually intend to cause offence. The study examined anti-social behaviours (or microaggressions) within the LGBTIQ community on a university campus. Many members felt that other people would often commit anti-social behaviours; however, there was no explicit suggestion of any maliciousness behind these acts. Rather, the offenders were simply naive to the impact of their behaviour.
The Western Australia Police force uses a three-step strategy to deal with anti-social behaviour.
Prevention – This action uses community engagement, intelligence, training and development, and the targeting of hotspots, attempting to prevent unacceptable behaviour from occurring.
Response – A timely and effective response to anti-social behaviour is vital. Police provide ownership, leadership and coordination to apprehend offenders.
Resolution – Identifying the underlying issues that cause anti-social behaviour and resolving them with the help of the community. Offenders are successfully prosecuted.
Japan
The 1970s brought attention to a social and historical phenomenon in Japan called hikikomori, often called the lost generation, marked by pervasive and severe social withdrawal and anti-social tendencies. Individuals with hikikomori are commonly in their 20s or 30s and avoid as much social interaction as possible. The Japanese psychiatrist and leading expert on the topic, Tamaki Saito, was one of the first to estimate that approximately 1% of the country's population was considered hikikomori at the time. The phenomenon still exists in Japan today, taking on new forms of seclusion in which digital tools, such as video games and internet chatting, replace social interaction. The term hikikomori has since been used throughout the world, in Asia, Europe, North and South America, Africa and Australia.
External links
Anti-Social Behaviour.org.uk
MIT Technology Review - How a Troll-Spotting Algorithm Learned Its Anti-antisocial Trade
Avoidance coping
In psychology, avoidance coping is a coping mechanism and form of experiential avoidance. It is characterized by a person's efforts, conscious or unconscious, to avoid dealing with a stressor in order to protect oneself from the difficulties the stressor presents. Avoidance coping can lead to substance abuse, social withdrawal, and other forms of escapism. High levels of avoidance behaviors may lead to a diagnosis of avoidant personality disorder, though not everyone who displays such behaviors meets the definition of having this disorder. Avoidance coping is also a symptom of post-traumatic stress disorder and related to symptoms of depression and anxiety. Additionally, avoidance coping is part of the approach-avoidance conflict theory introduced by psychologist Kurt Lewin.
Literature on coping often classifies coping strategies into two broad categories: approach/active coping and avoidance/passive coping. Approach coping includes behaviors that attempt to reduce stress by alleviating the problem directly, and avoidance coping includes behaviors that reduce stress by distancing oneself from the problem. Traditionally, approach coping has been seen as the healthiest and most beneficial way to reduce stress, while avoidance coping has been associated with negative personality traits, potentially harmful activities, and generally poorer outcomes. However, avoidance coping can reduce stress when nothing can be done to address the stressor.
Measurement
Avoidance coping is measured via a self-reported questionnaire. Initially, the Multidimensional Experiential Avoidance Questionnaire (MEAQ) was used, which is a 62-item questionnaire that assesses experiential avoidance, and thus avoidance coping, by measuring how many avoidant behaviors a person exhibits and how strongly they agree with each statement on a scale of 1–6. Today, the Brief Experiential Avoidance Questionnaire (BEAQ) is used instead, containing 15 of the original 62 items from the MEAQ. In research, avoidance coping can be objectively quantified using immersive virtual reality.
Treatment
Cognitive behavioral and psychoanalytic therapy are used to help those coping by avoidance to acknowledge, comprehend, and express their emotions. Acceptance and commitment therapy, a behavioral therapy that focuses on breaking down avoidance coping and showing it to be an unhealthy method for dealing with traumatic experiences, is also sometimes used.
Both active-cognitive and active-behavioral coping are used as replacement techniques for avoidance coping. Active-cognitive coping includes changing one's attitude towards a stressful event and looking for any positive impacts. Active-behavioral coping refers to taking positive actions after finding out more about the situation.
See also
Coping (psychology)
Mindfulness meditation
Posttraumatic stress disorder
Avoidant personality disorder
Procrastination
Stress management
Video game addiction
Occupational stress
Occupational stress is psychological stress related to one's job. Occupational stress refers to a chronic condition. Occupational stress can be managed by understanding what the stressful conditions at work are and taking steps to remediate those conditions. Occupational stress can occur when workers do not feel supported by supervisors or coworkers, feel as if they have little control over the work they perform, or find that their efforts on the job are incommensurate with the job's rewards. Occupational stress is a concern for both employees and employers because stressful job conditions are related to employees' emotional well-being, physical health, and job performance. A study by the World Health Organization and the International Labour Organization found that exposure to long working hours, which operates through increased psychosocial occupational stress, is the occupational risk factor with the largest attributable burden of disease; according to these official estimates, it caused an estimated 745,000 workers to die from ischemic heart disease and stroke events in 2016.
A number of disciplines are concerned with occupational stress, including occupational health psychology, human factors and ergonomics, epidemiology, occupational medicine, sociology, industrial and organizational psychology, and industrial engineering.
Psychological theories of worker stress
A number of psychological theories at least partly explain the occurrence of occupational stress. The theories include the demand-control-support model, the effort-reward imbalance model, the person-environment fit model, job characteristics model, the diathesis stress model, and the job-demands resources model.
Demand-control-support model
The demand-control-support (DCS) model, originally the demand-control (DC) model, has been the most influential psychological theory in occupational stress research. The DC model advances the idea that the combination of low levels of work-related decision latitude (i.e., autonomy and control over the job) and high psychological workloads is harmful to the health of workers. High workloads and low levels of decision latitude either in combination or singly can lead to job strain, the term often used in the field of occupational health psychology to reflect poorer mental or physical health. The DC model has been extended to include work-related social isolation or lack of support from coworkers and supervisors to become the DCS model. Evidence indicates that high workload, low levels of decision latitude, and low levels of support either in combination or singly lead to poorer health. The combination of high workload, low levels of decision latitude, and low levels of support has also been termed iso-strain.
Effort-reward imbalance model
The effort-reward imbalance (ERI) model focuses on the relationship between the worker's efforts and the work-related rewards the employee receives. The ERI model suggests that work marked by high levels of effort and low rewards leads to strain (e.g., psychological symptoms, physical health problems). The rewards of the job can be tangible like pay or intangible like appreciation and fair treatment. Another facet of the model is that overcommitment to the job can fuel imbalance.
Person–environment fit model
The person–environment fit model underlines the match between a person and his/her work environment. The closeness of the match influences the individual's health. For healthy working conditions, it is necessary that employees' attitudes, skills, abilities, and resources match the demands of their job. The greater the gap or misfit (either subjective or objective) between the person and his/her work environment, the greater the strain. Strains can include mental and physical health problems. Misfit can also lead to lower productivity and other work problems. The P–E fit model was popular in the 1970s and the early 1980s; however, since the late 1980s interest in the model has waned because of difficulties in representing P–E discrepancies mathematically and because statistical models linking P–E fit to strain have been problematic.
Job characteristics model
The job characteristics model focuses on factors such as skill variety, task identity, task significance, autonomy, and feedback. These job factors are thought to give rise to psychological states such as a sense of meaningfulness and knowledge acquisition. The theory holds that positive or negative job characteristics give rise to a number of cognitive and behavioral outcomes, such as the extent of worker motivation, satisfaction, and absenteeism. Hackman and Oldham (1980) developed the Job Diagnostic Survey to assess these job characteristics and help organizational leaders make decisions regarding job redesign.
Diathesis-stress model
The diathesis–stress model looks at the individual's susceptibility to stressful life experiences, i.e., the diathesis. Individuals differ in that diathesis or vulnerability. The model suggests that the individual's diathesis is part of the context in which he or she encounters job stressors at various levels of intensity. If the individual has a very high tolerance (is relatively invulnerable), an intense stressor may not lead to a mental or physical problem. However, if the stressor (e.g., high workload, a difficult coworker relationship) outstrips the individual's diathesis, then health problems may ensue.
Job demands-resources model
The job demands-resources model derives from both conservation of resources theory and the DCS model. Demands refer to the size of the workload, as in the DCS model. Resources refer to the physical (e.g., equipment), psychological (e.g., the incumbent's job-related skills and knowledge), social (e.g., supportiveness of supervisors), and organizational resources (e.g., how much task-related discretion is given to the worker) that are available to satisfactorily perform the job. High workloads and low levels of resources are related to job strain.
Factors related to the above-mentioned psychological theories of occupational stress
Role conflict involves the worker facing incompatible demands. Workers are pulled in conflicting directions in trying to respond to those demands.
Role ambiguity refers to a lack of informational clarity with regard to the duties a worker's role in an organization requires. Like role conflict, role ambiguity is a source of strain.
Coping refers to the individual's efforts to either prevent the occurrence of a stressor or mitigate the distress the impact of the stressor is likely to cause. Research on the ability of employees to cope with specific workplace stressors is equivocal; coping in the workplace may even be counterproductive. Pearlin and Schooler advanced the view that because work roles, unlike such personally organized roles as parent and spouse, tend to be impersonally organized, work roles are not a context conducive to successful coping. Pearlin and Schooler suggested that the impersonality of workplaces may even result in occupational coping efforts making conditions worse for the employee.
Organizational climate refers to employees' collective or consensus appraisal of the organizational work environment. Organizational climate takes into account many dimensions of the work environment (e.g., safety climate; mistreatment climate; work-family climate). The communication, management style, and extent of worker participation in decision-making are factors that contribute to one or another type of organizational climate.
Negative health and other effects
Physiological reactions to stress can have consequences for health over time. Researchers have been studying how stress affects the cardiovascular system, as well as how work stress can lead to hypertension and coronary artery disease. These diseases, along with other stress-induced illnesses, tend to be quite common in American workplaces. There are a number of physiological reactions to stress, including the following:
Blood is shunted to the brain and large muscle groups, and away from extremities and skin.
Activity in an area near the brain stem known as the reticular activating system increases, causing a state of keen alertness as well as sharpening of hearing and vision.
Epinephrine is released into the blood.
The HPA axis is activated.
There is increased activity in the sympathetic nervous system.
Cortisol levels are elevated.
Energy-providing compounds of glucose and fatty acids are released into the bloodstream.
The action of the immune and digestive systems is temporarily reduced.
Studies have shown an association between occupational stress and "health risk behaviors". Occupational stress has been shown to be linked with increased alcohol consumption among men and increased body weight.
Occupational stress accounts for more than 10% of work-related health claims. Many studies suggest that psychologically demanding jobs that allow employees little control over the work process increase the risk of cardiovascular disease. Research indicates that job stress increases the risk for development of back and upper-extremity musculoskeletal disorders. Stress at work can also increase the risk of acquiring an infection and the risk of accidents at work. A 2021 WHO study concluded that working 55+ hours a week raises the risk of stroke by 35% and the risk of dying from heart conditions by 17%, when compared to a 35-40 hour week.
Occupational stress can lead to three types of strains: behavioral (e.g., absenteeism), physical (e.g., headaches), and psychological (e.g., depressed mood). Job stress has been linked to a broad array of conditions, including psychological disorders (e.g., depression, anxiety, post-traumatic stress disorder), job dissatisfaction, maladaptive behaviors (e.g., substance abuse), cardiovascular disease, and musculoskeletal disorders.
Stressful job conditions can also lead to poor work performance, counterproductive work behavior, higher absenteeism, and injury. Chronically high levels of job stress diminish a worker's quality of life and increase the cost of the health benefits the employer provides. A study of short-haul truckers found that high levels of job stress were related to increased risk of occupational injury. Research conducted in Japan showed a more than two-fold increase in the risk of stroke among men with job strain (a combination of high job demands and low job control). The Japanese use the term karoshi to refer to death from overwork.
High levels of stress are associated with substantial increases in health service utilization. For example, workers who report experiencing stress at work also show excessive health care utilization. In a 1998 study of 46,000 workers, health care costs were nearly 50% greater for workers reporting high levels of stress in comparison to "low risk" workers. The increment rose to nearly 150%, an increase of more than $1,700 per person annually, for workers reporting high levels of both stress and depression. Health care costs increase by 200% in those with depression and high occupational stress. Additionally, periods of disability due to job stress tend to be much longer than disability periods for other occupational injuries and illnesses.
Occupational stress has negative effects for organizations and employers. Occupational stress contributes to turnover and absenteeism.
Gender
In today's workplaces every individual experiences work-related stress, and the level of stress varies from person to person. Different aspects of a person's life affect their stress levels through work. In comparing women and men, women are at higher risk of experiencing stress, anxiety and other forms of psychological distress in response to their work life than men are, due to societal expectations of women: women have more domestic responsibilities, receive less pay for doing similar work as men, and are societally expected to say "yes" to any requests given to them. These societal expectations, added to a work environment, can create a very psychologically stressful environment for women, even without any added stressors from work. Desmarais and Alksnis suggest two explanations for the greater psychological distress of women. First, the genders may differ in their awareness of negative feelings, leading women to be more likely to express and report strains, whereas men are more likely to deny and inhibit such feelings. Second, the demands of balancing work and family result in more overall stress for women, which leads to increased strain.
Stereotype threat is a phenomenon that can affect anyone, depending highly on the situation the individual is in. Some of the proposed mechanisms involved in stereotype threat include, but are not limited to: anxiety, negative cognition (being focused on stereotype-related thinking), lowered motivation, lowered performance expectations (doing worse on something because of the expectation of not being able to do well anyway), decreases in working memory capacity, etc.
Women are also more vulnerable to sexual harassment and assault than men. Desmarais and Alksnis's second explanation reflects the very real "double burden" hypothesis, under which women carry responsibility for both paid work and unpaid domestic work. In addition, women, on average, earn less than their male counterparts.
According to a recent report by the European Union (EU), in the EU and affiliated countries the skills gap between men and women has narrowed in the ten years preceding 2015. In the EU, when compared to men, women typically spend fewer hours in paid work but instead spend more hours in unpaid work.
Causes of occupational stress
Both the broad categories and the specific causes of occupational stress mentioned in the following paragraphs fall under different psychological theories of worker stress, which include the demand-control-support model, the effort-reward imbalance model, the person-environment fit model, the job characteristics model, the diathesis-stress model, and the job demands-resources model (all of these models are discussed earlier in this article).
The causes of occupational stress can be placed into broad categories of main occupational stressors and more specific causes. Broad categories of occupational stressors include bad management practices, the job content and its demands, and a lack of support or autonomy, among others. More specific causes of occupational stress include working long hours, having insufficient skills for the job, and discrimination and harassment, among others.
General working conditions
Although the importance of individual differences cannot be ignored, scientific evidence suggests that certain working conditions are stressful to most people. Such evidence argues for a greater emphasis on working conditions as the key source of job stress, and for job redesign as a primary prevention strategy. In the ten years leading up to 2015, workers in the EU and affiliated countries saw improvement in noise exposure but worsening exposure to chemicals. Approximately one-third of EU workers experience tight deadlines and must work quickly; those in the health sector are exposed to the highest levels of work intensity. In order to meet job demands, a little more than 20% of EU workers must work during their free time. Approximately one-third of EU office workers have some decision latitude. By contrast, about 80% of managers have significant levels of latitude.
General working conditions that induce occupational stress may also be aspects of the physical environment of one's job. For example, the noise level, lighting, and temperature are all components of the working environment. If these factors are not adequate for a successful working environment, one can experience changes in mood and arousal, which in turn make it more difficult to do the job well.
Workload
In an occupational setting, dealing with workload can be stressful and serve as a stressor for employees. There are three aspects of workload that can be stressful.
Quantitative workload or overload: Having more work to do than can be accomplished comfortably, such as stress related to deadlines or unrealistic targets.
Qualitative workload: Having work that is too difficult.
Underload: Having work that fails to use a worker's skills and abilities.
Workload as a work demand is a major component of the demand-control model of stress. This model suggests that jobs with high demands can be stressful, especially when the individual has low control over the job. In other words, control serves as a buffer or protective factor when demands or workload is high. This model was expanded into the demand-control-support model that suggests that the combination of high control and high social support at work buffers the effects of high demands.
As a work demand, workload is also relevant to the job demands-resources model of stress, which suggests that jobs are stressful when demands (e.g., workload) exceed the individual's resources to deal with them. With the growth and modernization of industry, cognitive workload has increased even more, and as Industry 4.0 becomes established, more serious problems may arise in this field.
Long hours
According to the U.S. Bureau of Labor Statistics, in 2022, 12,000,000 Americans (8.7% of the labor force) worked 41–48 hours per week, 13,705,000 Americans (9.8% of the labor force) worked 49–59 hours per week, and approximately 9,181,000 Americans (6.7% of the labor force) worked 60 or more hours per week. A meta-analysis involving more than 600,000 individuals and 25 studies indicated that, controlling for confounding factors, working long hours is related to a small but significantly higher risk of cardiovascular disease and a slightly higher risk of stroke.
Status
A person's status in the workplace is related to occupational stress because jobs associated with lower socioeconomic status (SES) typically provide workers less control and greater insecurity than higher-SES jobs. Lower levels of job control and greater job insecurity are related to reduced mental and physical health.
Salary
The types of jobs that pay workers higher salaries tend to provide them with greater job-related autonomy. As indicated above, job-related autonomy is associated with better health. A problem in research on occupational stress is how to "unconfound" the relationship between stressful working conditions, such as low levels of autonomy, and salary. Because higher levels of income buy resources (e.g., better insurance, higher quality food) that help to improve or maintain health, researchers need to better specify the extent to which differences in working conditions and differences in pay affect health.
Workplace bullying
Workplace bullying involves the chronic mistreatment of a worker by one or more other workers or managers. Bullying involves a power imbalance in which the target has less power in the unit or the organization than the bully or bullies. Bullying is neither a one-off episode nor a conflict between two workers who are equals in terms of power; there has to be a power imbalance for there to be bullying. Bullying tactics include verbal abuse, psychological abuse, and even physical abuse. The adverse effects of workplace bullying include depression for the worker and lost productivity for the organization.
Narcissism and psychopathy
Thomas suggests that there tends to be a higher level of stress with people who work or interact with a narcissist, which in turn increases absenteeism and staff turnover. Boddy finds the same dynamic where there is a corporate psychopath in the organisation.
Workplace conflict
Interpersonal conflict among people at work has been shown to be one of the most frequently noted stressors for employees. Conflict can be precipitated by workplace harassment. Workplace conflict is also associated with other stressors, such as role conflict, role ambiguity, and heavy workload. Conflict has also been linked to strains such as anxiety, depression, physical symptoms, and low levels of job satisfaction.
Sexual harassment
A review of the literature indicates that sexual harassment, which principally affects women, negatively affects workers' psychological well-being. Other findings suggest that women who experience higher levels of harassment are more likely to perform poorly at work.
Sexual harassment can happen to anyone of any gender, and the harasser can be someone of any gender; the harasser does not need to be of the opposite sex. The harasser may hold a higher position than the victim, but that is not always the case: a person can be harassed by a fellow co-worker, by someone from another department, or even by someone who is not an employee.
Sexual harassment includes but is not limited to:
sexual assault
nonconsensual contact
rape
attempted rape
forcing the victim to perform sexual acts on the attacker
sending sexually explicit photos or messages to someone else
exposing oneself to another
performing sexual acts on oneself
verbal harassment
includes jokes referencing sexual acts
discussing sexual relations, sexual stories or sexual fantasies
Work–life balance
Work–life balance refers to the extent to which there is equilibrium between work demands and one's personal life outside of work. Workers face increasing challenges in meeting workplace demands and fulfilling their family roles, as well as other roles outside of work.
Occupational group
Lower-status occupational groups are at higher risk of work-related ill health than higher occupational groups. This is in part due to adverse work and employment conditions. Furthermore, such conditions have greater effects on the ill health of those in lower socio-economic positions.
Prevention/Intervention
A combination of organizational change and stress management can be a useful approach for alleviating or preventing stress at work. Both organizations and employees can employ strategies at organizational and individual levels. Generally, organizational level strategies include job procedure modification and employee assistance programs (EAP). A meta-analysis of experimental studies found that cognitive-behavioral interventions, in comparison to relaxation and organizational interventions, provided the largest effect with regard to improving workers' symptoms of psychological distress. A systematic review of stress-reduction techniques among healthcare workers found that cognitive behavioral training lowered emotional exhaustion and feelings of lack of personal accomplishment.
An occupational stressor that needs to be addressed is the problem of an imbalance between work and life outside of work. The Work, Family, and Health Study was a large-scale intervention study, the purpose of which was to help ensure that employees achieve a measure of work–life balance. The intervention strategies included training supervisors to engage in more family-supportive behaviors. Another study component provided employees with increased control over when and where they work. The intervention led to improved home life, better sleep quality, and better safety compliance, mainly for the lowest-paid employees.
Many organizations manage occupational stressors associated with health and safety in a fragmented way; for example, one department may house an employee assistance program while another manages exposures to toxic chemicals. The Total Worker Health (TWH) idea, initiated by the National Institute for Occupational Safety and Health (NIOSH), provides a strategy in which different levels of worker health promotion activity are programmatically integrated. TWH-type interventions integrate health protection and health promotion components. Health protection components are ordinarily unit- or organization-wide, for example, reducing exposures to aerosols. Health promotion components are more individually oriented, in other words, oriented toward the wellness and/or well-being of individual workers, for example, smoking cessation programs. A review of 17 TWH-type interventions, i.e., interventions that integrate organizational-level occupational safety/health components and individual employee health promotion components, indicated that integrated programs can improve worker health and safety.
Experts from NIOSH recommended a number of practical ways to reduce occupational stress. These include the following:
Ensure that the workload is in line with workers' capabilities and resources.
Design jobs to provide meaning, stimulation, and opportunities for workers to use their skills.
Clearly define workers' roles and responsibilities.
Monitor the workload given to employees to reduce workplace stress, and raise stress awareness among employees during training.
Give workers opportunities to participate in decisions and actions affecting their jobs.
Improve communications: reduce uncertainty about career development and future employment prospects.
Provide opportunities for social interaction among workers.
Establish work schedules that are compatible with demands and responsibilities outside the job.
Combat workplace discrimination (based on race, gender, national origin, religion or language).
Bring in an objective outsider such as a consultant to suggest a fresh approach to persistent problems.
Introduce a participative leadership style to involve as many people as possible in resolving stress-producing problems.
Encourage work–life balance through family-friendly benefits and policies.
An insurance company conducted several studies on the effects of stress prevention programs in hospital settings. Program activities included (1) employee and management education on job stress, (2) changes in hospital policies and procedures to reduce organizational sources of stress, and (3) the establishment of employee assistance programs. In one study, the frequency of medication errors declined by 50% after prevention activities were implemented in a 700-bed hospital. In a second study, there was a 70% reduction in malpractice claims in 22 hospitals that implemented stress prevention activities. In contrast, there was no reduction in claims in a matched group of 22 hospitals that did not implement stress prevention activities.
There is evidence that remote work could reduce job stress. One reason is that it provides employees more control over how they complete their work. Remote workers reported more job satisfaction and less desire to find a new job, less stress, improved work/life balance and higher performance rating by their managers.
One study modeled scenario-based training as a means to reduce occupational stress by providing simulated experience prior to performing a task.
Signs and symptoms of excessive job and workplace stress
Signs and symptoms of excessive job and workplace stress include:
Occupations concerned with reducing job stress
According to the Centers for Disease Control and Prevention, occupational health psychology (OHP) has made occupational stress a major research focus. Occupational health psychologists seek to reduce occupational stress by working with individuals and by changing the workplace to make it less stressful. Industrial and organizational psychologists also have skills that bear on occupational stress (e.g., job design), and they too can contribute to alleviating job stress.
The CDC states that "many psychologists have argued that the psychology field needs to take a more active role in research and practice to prevent occupational stress, illness, and injury," which is what the relatively new field of occupational health psychology is "all about". According to Spector, other subdisciplines within psychology had been relatively absent from research on occupational stress.
Occupational stress in the United Kingdom
An estimated 440,000 people in the UK report experiencing work-related stress, resulting in nearly 9.9 million lost working days from 2014 to 2015. This makes it one of the most important causes of lost working days in the UK. To reduce the prevalence of occupational stress, the Health and Safety Executive (HSE) has published the Management Standards, which are used by workplaces to assess the risk of work-related stress. Other methods used by the HSE to reduce occupational stress in the UK include "maintaining and enhancing the enforcement profile on work-related ill health to highlight the consequences of failure, and to hold those responsible to account".
Occupational stress in the United States
The Occupational Safety and Health Administration (OSHA) estimates that 83% of US workers suffer from work-related stress, with 65% of US workers reporting that work was a "very significant or somewhat significant source of stress in each year from 2019-2021." An estimated 120,000 deaths per year are caused by occupational stress in the United States. A number of programs to research and implement interventions to reduce occupational stress have been established by US government agencies, such as the National Institute for Occupational Safety and Health (NIOSH) and OSHA, including the Healthy Work Design and Well-being Cross-Sector program.
Occupational stress in Japan
Across 12 industries, 10.2–27.6% of Japanese employees have demonstrated severe levels of occupational stress. The high prevalence of severe occupational stress among workers in Japan leads to hundreds of thousands in human capital loss per employee over their careers. The Japanese term karoshi refers to "overwork death", a case in which a sudden death is caused by a factor related to one's occupation, such as occupational stress. Concerns regarding occupational stress in Japan have grown over the years due to societal factors such as long working hours. These concerns are being addressed through a number of national programs, such as the government-mandated Stress Check Program, which requires all companies with more than 50 employees to assess the stress of their employees at least once a year.
Occupational stress in South Africa
In South Africa, over 40% of all work-related illness is caused by occupational stress, resulting in billions of rands in lost production annually. While occupational stress is rising globally, Sub-Saharan African countries have been among the worst affected regions in the world. The Occupational Health and Safety Act of 1993 established legal policy to encourage worker health in South Africa, but included few measures to manage stress among South African workers. Long working hours and inability to control work situations contribute to high rates of occupational stress among the many South Africans working in construction and labor professions.
Further reading
Barling, J., Kelloway, E. K., & Frone, M. R. (Eds.) (2005). Handbook of work stress. Thousand Oaks, CA: Sage.
Cooper, C. L. (1998). Theories of organizational stress. Oxford, UK: Oxford University Press.
Cooper, C. L., Dewe, P. J. & O'Driscoll, M. P. (2001) Organizational stress: A review and critique of theory, research, and applications. Thousand Oaks, CA: Sage.
Pilkington, A., et al. (2000). Baseline measurements for the evaluation of the work-related stress campaign. Sudbury: HSE Books. (Contract Research Report No. 322/2000.)
Saxby, C. (June 2008). Barriers to Communication. Evansville Business Journal. 1–2.
Schonfeld, I.S., & Chang, C.-H. (2017). Occupational health psychology: Work, stress, and health. New York: Springer Publishing Company.
History of depression
What was previously known as melancholia and is now known as clinical depression, major depression, or simply depression and commonly referred to as major depressive disorder by many health care professionals, has a long history, with similar conditions being described at least as far back as classical times.
Ancient to medieval period
In ancient Greece, disease was thought to be due to an imbalance in the four basic bodily fluids, or humors. Personality types were similarly thought to be determined by the dominant humor in a particular person. Derived from the Ancient Greek melas, "black", and kholé, "bile", melancholia was described as a distinct disease with particular mental and physical symptoms by Hippocrates in his Aphorisms, where he characterized all "fears and despondencies, if they last a long time" as being symptomatic of the ailment.
Aretaeus of Cappadocia later noted that sufferers were "dull or stern; dejected or unreasonably torpid, without any manifest cause". The humoral theory fell out of favor but was revived in Rome by Galen. Melancholia was a far broader concept than today's depression; prominence was given to a clustering of the symptoms of sadness, dejection, and despondency, and often fear, anger, delusions and obsessions were included.
Physicians in the Persian and then the Muslim world developed ideas about melancholia during the Islamic Golden Age. Ishaq ibn Imran (d. 908) combined the concepts of melancholia and phrenitis. The 11th century Persian physician Avicenna described melancholia as a depressive type of mood disorder in which the person may become suspicious and develop certain types of phobias.
His work, The Canon of Medicine, became the standard of medical thinking in Europe alongside those of Hippocrates and Galen. Moral and spiritual observations also abounded, and in the Christian environment of medieval Europe, a malaise called acedia (sloth or absence of caring) was identified, involving a tendency of the will to low spirits and lethargy typically linked to isolation.
The seminal scholarly work of the 17th century was English scholar Robert Burton's book, The Anatomy of Melancholy, drawing on numerous theories and the author's own experiences. Burton suggested that melancholy could be combatted with a healthy diet, sufficient sleep, music, and "meaningful work", along with talking about the problem with a friend.
During the 18th century, the humoral theory of melancholia was increasingly being challenged by mechanical and electrical explanations; references to dark and gloomy states gave way to ideas of slowed circulation and depleted energy.
German physician Johann Christian Heinroth, however, argued melancholia was a disturbance of the soul due to moral conflict within the patient.
Eventually, various authors proposed up to 30 different sub-types of melancholia, and alternative terms were suggested and discarded. Hypochondria came to be seen as a separate disorder. Melancholia and melancholy had been used interchangeably until the 19th century, but the former came to refer to a pathological condition and the latter to a temperament.
The term depression was derived from the Latin verb deprimere, "to press down". From the 14th century, "to depress" meant to subjugate or to bring down in spirits. It was used in 1665 in English author Richard Baker's Chronicle to refer to someone having "a great depression of spirit", and by English author Samuel Johnson in a similar sense in 1753. The term also came into use in physiology and economics.
An early usage referring to a psychiatric symptom was by French psychiatrist Louis Delasiauve in 1856, and by the 1860s it was appearing in medical dictionaries to refer to a physiological and metaphorical lowering of emotional function. Since Aristotle, melancholia had been associated with men of learning and intellectual brilliance, a hazard of contemplation and creativity. The newer concept abandoned these associations and, through the 19th century, became more associated with women.
Although melancholia remained the dominant diagnostic term, depression gained increasing currency in medical treatises and was a synonym by the end of the century; German psychiatrist Emil Kraepelin may have been the first to use it as the overarching term, referring to different kinds of melancholia as depressive states. English psychiatrist Henry Maudsley proposed an overarching category of affective disorder.
20th and 21st centuries
In the 20th century, the German psychiatrist Emil Kraepelin was the first to distinguish manic depression. The influential system put forward by Kraepelin unified nearly all types of mood disorder into manic–depressive insanity. Kraepelin worked from an assumption of underlying brain pathology, but also promoted a distinction between endogenous (internally caused) and exogenous (externally caused) types.
The unitarian view became more popular in the United Kingdom, while the binary view held sway in the US, influenced by the work of Swiss psychiatrist Adolf Meyer and before him Sigmund Freud, the father of psychoanalysis.
Freud had likened the state of melancholia to mourning in his 1917 paper Mourning and Melancholia. He theorized that objective loss, such as the loss of a valued relationship through death or a romantic breakup, results in subjective loss as well; the depressed individual has identified with the object of affection through an unconscious, narcissistic process called the libidinal cathexis of the ego.
Such loss results in severe melancholic symptoms more profound than mourning; not only is the outside world viewed negatively, but the ego itself is compromised. The patient's decline of self-perception is revealed in his belief of his own blame, inferiority, and unworthiness. He also emphasized early life experiences as a predisposing factor.
Meyer put forward a mixed social and biological framework emphasizing reactions in the context of an individual's life, and argued that the term depression should be used instead of melancholia.
The DSM-I (1952) contained depressive reaction and the DSM-II (1968) depressive neurosis, defined as an excessive reaction to internal conflict or an identifiable event, and also included a depressive type of manic-depressive psychosis within Major affective disorders.
In the mid-20th century, other psycho-dynamic theories were proposed. Existential and humanistic theories represented a forceful affirmation of individualism. Austrian existential psychiatrist Viktor Frankl connected depression to feelings of futility and meaninglessness. Frankl's logotherapy addressed the filling of an "existential vacuum" associated with such feelings, and may be particularly useful for depressed adolescents.
American existential psychologist Rollo May hypothesized that "depression is the inability to construct a future". In general, May wrote that depression "occur[s] more in the dimension of time than in space," and the depressed individual fails to look ahead in time properly. Thus the "focusing upon some point in time outside the depression ... gives the patient a perspective, a view on high so to speak; and this may well break the chains of the ... depression."
Humanistic psychologists argued that depression resulted from an incongruity between society and the individual's innate drive to self-actualize, or to realize one's full potential. American humanistic psychologist Abraham Maslow theorized that depression is especially likely to arise when the world precludes a sense of "richness" or "totality" for the self-actualizer.
Cognitive psychologists offered theories on depression in the mid-twentieth century. Starting in the 1950s, Albert Ellis argued that depression stemmed from irrational "should" and "musts" leading to inappropriate self-blame, self-pity, or other-pity in times of adversity. Starting in the 1960s, Aaron Beck developed the theory that depression results from a "cognitive triad" of negative thinking patterns, or "schemas," about oneself, one's future, and the world.
In the mid-20th century, researchers theorized that depression was caused by a chemical imbalance in neurotransmitters in the brain, a theory based on observations made in the 1950s of the effects of reserpine and isoniazid in altering monoamine neurotransmitter levels and affecting depressive symptoms. During the 1960s and 70s, manic-depression came to refer to just one type of mood disorder (now most commonly known as bipolar disorder) which was distinguished from (unipolar) depression. The terms unipolar and bipolar had been coined by German psychiatrist Karl Kleist.
The term major depressive disorder was introduced by a group of US clinicians in the mid-1970s as part of proposals for diagnostic criteria based on patterns of symptoms (called the Research Diagnostic Criteria, building on earlier Feighner Criteria), and was incorporated into the DSM-III in 1980. To maintain consistency the ICD-10 used the same criteria, with only minor alterations, but using the DSM diagnostic threshold to mark a mild depressive episode, adding higher threshold categories for moderate and severe episodes.
DSM-IV-TR excluded cases where the symptoms are a result of bereavement, although it was possible for normal bereavement to evolve into a depressive episode if the mood persisted and the characteristic features of a major depressive episode developed. The criteria were criticized because they do not take into account any other aspects of the personal and social context in which depression can occur. In addition, some studies found little empirical support for the DSM-IV cut-off criteria, indicating they are a diagnostic convention imposed on a continuum of depressive symptoms of varying severity and duration.
The ancient idea of melancholia still survives in the notion of a melancholic sub-type. The new definitions of depression were widely accepted, albeit with some conflicting findings and views, and the nomenclature continues in DSM-IV-TR, published in 2000.
There has been some criticism of the expansion of coverage of the diagnosis, related to the development and promotion of antidepressants and the biological model since the late 1950s.
See also
History of mental disorders
Classification of mental disorders
Effects of human sexual promiscuity
Human sexual promiscuity is the practice of having many different sexual partners. In the case of men, this behavior of sexual nondiscrimination and hypersexuality is referred to as satyriasis, while in the case of women, this behavior is conventionally known as nymphomania. Both conditions are regarded as possibly compulsive and pathological qualities, closely related to hypersexuality. The results of, or costs associated with, these behaviors are the effects of human sexual promiscuity.
A high number of sexual partners over a person's life usually means they are at a higher risk of sexually transmitted infections and life-threatening cancers. These costs largely pertain to dramatic consequences for physical and mental health. The physical health risks mainly consist of sexually transmitted infection risks, such as HIV and AIDS, which increase as individuals accumulate sexual partners over their lifetime. The mental health risks typically associated with promiscuous individuals are mood and personality disorders, often resulting in substance use disorders and/or permanent illness. These effects typically translate into several other long-term issues in people's lives and relationships, especially in the case of adolescents or those with previous pathological illnesses, disorders, or factors such as family dysfunction and social stress.
Research has also shown that there might be some benefit to the health and fitness of the offspring of promiscuous females in some animals.
Promiscuity in adolescents
The prevalence of promiscuity, in the case of adolescents, is known to be a root cause of many physical, mental, and socio-economic risks. Research has found that adolescents, in particular, are at a higher risk of negative consequences as a result of promiscuity.
In sub-Saharan Africa, adolescents engaged in promiscuous activities face many health and economic risks related to teenage pregnancy, maternal mortality, labor complications, and loss of educational opportunities.
It is suggested that the increasing association of sexually transmitted infections among adolescents could be a result of barriers to prevention and management services, such as infrastructural barriers (improper medical treatment facilities), cost barriers, educational barriers, and social factors such as concerns of confidentiality and embarrassment.
Physical health effects
Incidence and prevalence estimates suggest that adolescents, in comparison to adults, are at particularly high risk of developing sexually transmitted infections such as chlamydia, gonorrhea, syphilis and herpes. Adolescent females are considered especially at risk of developing sexually transmitted infections; this is claimed to be due to increased cervical ectopy, which is more susceptible to infection. In addition to these risks, adolescent mothers, whose offspring are generally first births, are at a higher risk of certain pregnancy and labor complications, which can affect the mother and the offspring, as well as the entire community and future generations.
Pregnancy and maternal labor complications
It has been found that pregnancy-related complications cause up to half of all deaths in women of reproductive age in developing countries. In some areas, for every woman who dies a maternal death, there are 10–15 who suffer severe damage to their health from labor, which often causes substantial mental health risks and distress. These figures, however, are estimations, since official data are not recorded in registration systems. In the context of pregnancy, maternal complications, and maternal death, studies indicate that age itself may cause fewer health risks for the mother or the offspring than first-birth status, given the prevalence of first births among the younger ages. First births are more common among teenagers and are usually more complicated than higher-order births. Included in these observations are other complications related to delivery, such as cephalopelvic disproportion, a condition in which the mother's pelvis is too small relative to the child's head to allow the child to pass; cephalopelvic disproportion is most common in younger women. Many of these risks are higher among younger females, and a more mature physique is considered ideal for a successful pregnancy and childbearing. A mother older than 35, however, may be at higher risk of various other labor complications.
In a study of over 22,000 births in Zaria, Nigeria, maternal mortality was found to be 2–3 times higher for women aged 15 and under than for women aged 16–29. It was also found that in Africa, those under the age of 15 are 5–7 times more likely to suffer maternal death than women just 5–9 years older.
Sexually transmitted infections
While rates of sexually transmitted infections increased among both male and female 15–24-year-olds in the United States in 2016–2017, rates of chlamydia are found to be consistently highest among 15–24-year-old young women. Reported cases of primary and secondary syphilis have consistently been higher among adolescent men and women than among adult men and women. In the United States in 2017, there were 1,069,111 reported cases of chlamydia among persons aged 15–24, which represented the majority, almost 63%, of all chlamydia cases in the United States. These figures increased by 7.5% from 2016 in the 15–24 age group; in the 20–24 age group, the rate increased by 5.0% during the same time frame. Among men in the 15–24 age group, there was an increase of 8.9% from 2016 to 2017 and of 29.1% since 2013.
Reported cases of gonorrhea infection also increased for the 15–19 age group from 2016 to 2017. Among women aged 15–24, there was an increase of 14.3% from 2016 to 2017, and a 24.1% increase since 2013. Among men, the rate of reported gonorrhea infections rose 913.4% from 2016 to 2017 and 951.6% since 2013. Women aged 20–24 had the highest increase in reported cases of gonorrhea among women, and the 15–19 age group had the second-highest rate of increase.
While cases of primary and secondary syphilis are much rarer than gonorrhea, chlamydia, and herpes, reported cases have increased for both males and females. Among 15–24-year-old women, cases of syphilis increased 107.8% from 2016 to 2017 and 583.3% since 2013. Among 15–24-year-old men, the rate increased 8.3% from 2016 to 2017, to 26.1 cases per 100,000 males, and 50.9% since 2013. Primary and secondary syphilis reports increased 9.8% for the 15–19 age group and 7.8% for the 20–24 age group from 2016 to 2017.
In the United States, human papillomavirus (HPV) is the most common STI. Routine use of HPV vaccines has greatly reduced the prevalence of HPV in specimens from females aged 14–19 and 20–24, the age groups most at risk of contracting HPV, between 2003–2006 and 2011–2014.
Mental health effects
Emotional and mental disruptions are also observed as an effect of promiscuity in adolescence. Studies have shown a direct correlation between adolescent sexual risk-taking and mental health risks. Sexual risks include multiple sexual partners, lack of protection use, and sexual intercourse at a young age. The associated mental health risks include anxiety, depression, and substance use disorders. It has also been found that sexual promiscuity in teens can be a result of substance misuse and pre-existing mental health conditions such as clinical depression.
Contraction of sexually transmitted infections is also shown to be correlated with poorer mental health. Neurosyphilis is known to cause severe depression, mania, psychosis, and even hallucinations in the late stages of the disease. Chlamydia infection is known to increase rates of depression even in asymptomatic individuals.
STIs can put women at high risk of infertility, which often leads to feelings of depression. Similar distress affects women who are still able to conceive, because there is a high risk of transmitting the infection to their child during pregnancy or childbirth.
Women are more susceptible to the psychosocial mental health effects of STIs. They report immense feelings of shame, guilt, and self-blame after diagnosis. This can lead to avoidant behaviors and fear of disclosure, not only to sexual partners but to family and friends. All of these behaviors are associated with a decline in mental health, whether depression, anxiety, or another disorder.
Other factors, including a history of trauma and the stigma attached to the disease, also influence how STIs affect mental health.
Social and economic effects
Sexual risk-taking and promiscuous activities among youth can also lead to many social and economic risks. In sub-Saharan Africa, for example, research has found that teenage pregnancy poses significant social and economic risks, as it forces young women, particularly those from extremely low-income families, to leave school for childbearing. These disruptions in basic education pose life-long and generational risks to those involved. Social condemnation also prevents these young mothers from seeking help, leaving them at higher risk of further physical and mental health problems, including substance use.
Promiscuity in adults
Sexual promiscuity in adults, as in adolescents, presents substantial risks to physical, mental, and socioeconomic health. Having multiple sexual partners is linked with risks such as maternal deaths and complications, cancers, sexually transmitted infections, alcohol and substance use, and, in some societies, social condemnation; the higher the number of sexual partners, the greater these risks. Adults, however, are generally found to be less at risk of certain pregnancy and labor complications, such as cephalopelvic disproportion, than adolescents, while being at higher risk of other labor complications.
Physical health effects
Promiscuity in adults has detrimental effects on physical health. The more sexual partners a person has in their lifetime, the higher their risk of contracting sexually transmitted infections. The length of a sexual relationship with a partner, the number of past and present partners, and pre-existing conditions are all variables that affect the development of risks in a person's life. Promiscuous individuals may also be at higher risk of developing prostate cancer, cervical cancer, and oral cancer as a result of having multiple sexual partners; combined with other risky acts such as smoking and substance use, promiscuity can also lead to heart disease.
Although the frequency of HIV/AIDS cases is decreasing as medical treatment and education improve, HIV/AIDS has still claimed over 20 million lives in 20 years, greatly affecting the livelihoods of whole communities in developing nations. According to the World Health Organization, over 40 million people are currently infected with HIV/AIDS, and 95% of these cases are in the developing world.
Over 340 million treatable sexually transmitted infections occur around the world each year; these infections present a great risk to individuals, as they become more susceptible to HIV and more likely to spread the virus.
Studies have also shown that individuals who engage in long-term relationships, as opposed to hypersexual and promiscuous behavior, are less likely to fall victim to domestic violence.
Mental health effects
According to research conducted by Sandhya Ramrakha of the Dunedin School of Medicine, the probability of developing a substance use disorder increased linearly with the number of sexual partners. The effect was particularly strong for women; however, no correlation was found with other mental health risks. This contrasts with other studies that do find a correlation between multiple sexual partners and mental health risks.
Social and economic effects
Having multiple sexual partners frequently adversely affects educational opportunities for young women, which can affect their careers and opportunities as adults; having multiple sexual partners has negative long-term economic effects for women as a result of lost schooling. There is little evidence, however, that the number of sexual partners adversely affects the educational and economic opportunities of males.
Reducing the effects
Human sexual promiscuity presents substantial physical, mental, and socio-economic risks to adolescents as well as adults in all parts of the world. Researchers and organizations have identified ways of reducing these risks over time. These include the prevention and treatment of sexually transmitted infections and other effects of human sexual promiscuity.
Prevention
According to the World Health Organization, the reduction of the harmful risks of human sexual promiscuity can be achieved first by prevention. Such efforts are sustained through HIV and STI prevention programs, defined in the Declaration of Commitment adopted at the United Nations General Assembly Special Session on HIV/AIDS in June 2001. Safe sex, condom and contraceptive usage, and effective STI management are essential to preventing the spread of sexually transmitted infections, and can also improve the social and economic status of entire communities, as young women can pursue education instead of childbearing. Implemented at a large enough scale and targeted at locations with high rates of STIs, these programs can greatly reduce the effects of promiscuity.
Treatment
Many lower-income areas lack the proper equipment or facilities to treat these risks. Expansion of antiretroviral treatment and broader access to all medical services and support can be paramount in the treatment of sexually transmitted infections once they occur. For the mental health risks that human sexual promiscuity presents, effective counseling services and facilities must be offered, enabling the reduction of these risks over time.
Societal effects
Societal effects of promiscuity may include crimes of passion, as jealous partners may seek to drive off competition. Child support may also be reduced, as males may be loath to contribute to the support of children who may not be their own.

References
https://www.ojp.gov/ncjrs/virtual-library/abstracts/crime-passion-and-changing-cultural-construction-jealousy
https://www.texasattorneygeneral.gov/child-support/paternity/mistaken-paternity#:~:text=If%20the%20genetic%20testing%20results,child%20relationship%20and%20support%20obligation.

Human sexuality
Promiscuity
Boundaries of the Mind | Boundaries of the Mind (2004) is a thorough treatment of the role and conceptualization of the individual in psychology, by author Robert A. Wilson, a professor in the Department of Philosophy at the University of Alberta.
Structure
It is the first book in a planned three-volume set, entitled The Individual in the Fragile Sciences. The second volume examines the individual in biological sciences and the third, the individual's role in social sciences.
The book is divided into four parts:
Part I motivates the study of the individual in psychology, provides a framework for contrasting nativist and empiricist views, and offers a history of psychology that traces its gradual emergence from physiology and philosophy into a subject in its own right.
Part II spans topics for which Wilson is already well-known: the individualism–externalism debate, narrow and wide content, and the metaphysics of realization.
Part III explores the consequences of this radical form of externalism from the perspective of various research programs in psychology: memory, development, and theory of mind. Wilson applies his externalist framework to stake out his own conception of consciousness, summarized by the acronym TESEE: Temporally Extended, Scaffolded, Embodied, Embedded.
Part IV closes the book with a discussion of the cognitive metaphor in the biological and social sciences.
Approach
TESEE is an approach to the processes of awareness/introspection, meta-representation and attention. It is continuous with the embedded and embodied approach to memory, cognitive development, and theory of mind.
These processes of awareness extend beyond the immediate subject in space and time. They exploit information-rich external resources such as language and navigation equipment (the scaffolds) and rely on dynamic relations between the subject's body and the environment in which it is located.
This approach can be extended to phenomenal consciousness, arguing that a phenomenal property is not an intrinsic property of experience but rather a feature of the representation of its objects. As such, phenomenal properties inherit their importance from the intentional contents to which they apply. According to representationalists such as Fred Dretske, William Lycan, and Michael Tye, phenomenal consciousness is externalistic. Wilson, however, thinks that this global externalism goes both too far and not far enough.
The TESEE conception of vision and visual consciousness relies on the sensorimotor theory of visual consciousness of philosophers Alva Noë and Susan Hurley and psychologist J. Kevin O'Regan, arguing that vision, like touch, involves active and dynamic exploration of the contingent features of the environment.
Second book
The second book is Genes and the Agents of Life: The Individual in the Fragile Sciences (Biology) published in 2005.
References
Review by L. A. Shapiro, University of Wisconsin, Madison
Review by Ray Rennard
External links
sample in pdf version
Books about cognition
2004 non-fiction books
Cambridge University Press books | 0.776799 | 0.975647 | 0.757882 |
Psychosexual disorder | Psychosexual disorder is a sexual problem that is psychological, rather than physiological in origin. "Psychosexual disorder" was a term used in Freudian psychology. The term "psychosexual disorder" (Turkish: Psikoseksüel bozukluk) has been used by the TAF for homosexuality as a reason to ban the LGBT people from military service.
Paraphilias
Paraphilias are generally defined as psychosexual disorders in which significant distress or an impairment in a domain of functioning results from recurrent intense sexual urges, fantasies, or behaviors generally involving an unusual object, activity, or situation. An alternative definition is given by the DSM-5, which labels them as sexual attractions to objects, situations, or people that deviate from the desires and sexual behaviors considered socially acceptable. Examples of paraphilias include fetishism, sexual masochism, and sexual sadism.
Fetishism and transvestic fetishism
Fetishism is a disorder characterized by sexual fixation, fantasies, or behaviors directed toward an inanimate object; these objects are frequently articles of clothing. It is only through this object that the individual can achieve sexual gratification. It is not rare for an individual to rub or smell the object. The disorder is more common in males, and it is not understood why.
Transvestic fetishism, also commonly known as transvestism, is a diagnosis found in the DSM. The diagnosis revolves around four factors: cross-dressing, association of the cross-dressing with sexual arousal, occurrence in a biological male, and the person being a heterosexual male.
Sexual sadism and sexual masochism
The disorders known as sexual sadism and sexual masochism are often confused or hard to separate when their definitions are compared, but their diagnostic criteria differ slightly, allowing for easier classification. Sexual sadism disorder and sexual masochism are defined as deriving sexual arousal from the humiliation, pain, and/or suffering of an individual, and are thought to overlap with multiple other conditions because of this description and the associated diagnostic criteria.
Voyeurism, exhibitionism and frotteurism
Voyeurism is self-reported sexual arousal from spying on others, or from observing others engaged in sexual interaction.
Exhibitionism involves public acts of exposing parts of one's body that are not socially acceptable to expose. Exhibitionistic acts are among the most common of the potentially law-breaking sexual behaviors; examples include "streaking" during a professional sporting event or protesting a political event in the nude.
Frotteurism is considered a rare paraphilia in which an individual's sexual satisfaction derives from rubbing against another, non-consenting individual. The term comes from the French verb frotter, meaning "to rub".
Diagnosis
In the DSM-5, all paraphilic disorders can be diagnosed by two main criteria, referred to as criterion A and criterion B respectively. These criteria include a duration for which the behavior must be present (typically six months) and specific details of actions or thoughts correlated specifically with the disorder being diagnosed.
Treatment
Psychosexual disorders can vary greatly in severity and treatability. Medical professionals and licensed therapists are necessary in diagnosis and treatment plans. Treatment can vary from therapy to prescription medication. Sex therapy, behavioral therapy, and group therapy may be helpful to those distressed by sexual dysfunction. More serious sexual disorders may be treated with androgen blockers or selective serotonin reuptake inhibitors (SSRIs) to help restore hormonal and neurochemical balances.
History
Sigmund Freud
Sigmund Freud contributed to the idea of psychosexual disorders and furthered research on the topic through his ideas of psychosexual development and his psychoanalytic sex-drive theory. According to Freud's theory of psychosexual development, a child progresses through five stages of development: the oral stage (birth–1½ years), the anal stage (1½–3 years), the phallic stage (3–5 years), the latency stage (5–12 years), and the genital stage (from puberty on). A psychosexual disorder could arise in an individual who does not progress through these stages properly. Proper progression requires the correct amounts of stimulation and gratification at each stage. Too little stimulation at a certain stage causes fixation, and becoming overly fixated could lead to a psychosexual disorder. In contrast, too much stimulation at a certain stage of development could lead to regression when the individual is in distress, also possibly leading to a psychosexual disorder.
Richard Freiherr von Krafft-Ebing
Richard Krafft-Ebing was a German psychiatrist who sought to revolutionize the study of sexuality in the late nineteenth century. Working in a time of sexual modesty, Krafft-Ebing presented sexuality as an innate part of human nature rather than as deviancy. His most notable work, Psychopathia Sexualis, was a collection of case studies highlighting the sexual practices of the general public. The textbook was the first of its kind to recognize the variation within human sexuality, covering phenomena such as nymphomania, fetishism, and homosexuality. Psychiatrists were now able to diagnose psychosexual disorders in place of perversions. Psychopathia Sexualis was used as a reference in psychological, medical, and judicial settings. Krafft-Ebing is considered the founder of medical sexology; he is the predecessor of both Sigmund Freud and Havelock Ellis.
Havelock Ellis
Havelock Ellis (1859–1939) was an English physician and writer who studied human sexuality and is regarded as one of the earliest sexologists. Ellis's work centered on human sexual behavior. His major work was a seven-volume publication called Studies in the Psychology of Sex, which related sex to society. Published in 1921, Studies in the Psychology of Sex covered the evolution of modesty, sexual periodicity, auto-erotism, sexual inversion, the sexual impulse, sexual selection, and erotic symbolism. Ellis also coined the term eonism, referring to a man dressing as a woman, and elaborated on the term in his publication Eonism and Other Supplementary Studies. He also wrote Sexual Inversion, in hopes of addressing ignorance on the topic.
See also
LGBT rights in Turkey
Psychoanalysis
Sigmund Freud
Psychosexual development
References
External links
Freudian psychology
Paraphilias | 0.768542 | 0.986125 | 0.757879 |
Systematic review | A systematic review is a scholarly synthesis of the evidence on a clearly presented topic using critical methods to identify, define and assess research on the topic. A systematic review extracts and interprets data from published studies on the topic (in the scientific literature), then analyzes, describes, critically appraises and summarizes interpretations into a refined evidence-based conclusion. For example, a systematic review of randomized controlled trials is a way of summarizing and implementing evidence-based medicine.
While a systematic review may be applied in the biomedical or health care context, it may also be used where an assessment of a precisely defined subject can advance understanding in a field of research. A systematic review may examine clinical tests, public health interventions, environmental interventions, social interventions, adverse effects, qualitative evidence syntheses, methodological reviews, policy reviews, and economic evaluations.
Systematic reviews are closely related to meta-analyses, and often the same instance will combine both (being published with a subtitle of "a systematic review and meta-analysis"). The distinction between the two is that a meta-analysis uses statistical methods to derive a single number from the pooled data set (such as an effect size), whereas the strict definition of a systematic review excludes that step. However, in practice, when one is mentioned the other may often be involved, as it takes a systematic review to assemble the information that a meta-analysis analyzes, and people sometimes refer to an instance as a systematic review even if it includes the meta-analytical component.
An understanding of systematic reviews and how to implement them in practice is common for professionals in health care, public health, and public policy.
Systematic reviews contrast with a type of review often called a narrative review. Systematic reviews and narrative reviews both review the literature (the scientific literature), but the term literature review without further specification refers to a narrative review.
Characteristics
A systematic review can be designed to provide a thorough summary of current literature relevant to a research question. A systematic review uses a rigorous and transparent approach for research synthesis, with the aim of assessing and, where possible, minimizing bias in the findings. While many systematic reviews are based on an explicit quantitative meta-analysis of available data, there are also qualitative reviews and other types of mixed-methods reviews which adhere to standards for gathering, analyzing and reporting evidence.
Systematic reviews of quantitative data or mixed-method reviews sometimes use statistical techniques (meta-analysis) to combine results of eligible studies. Scoring levels are sometimes used to rate the quality of the evidence depending on the methodology used, although this is discouraged by the Cochrane Library. As evidence rating can be subjective, multiple people may be consulted to resolve any scoring differences between how evidence is rated.
The EPPI-Centre, Cochrane, and the Joanna Briggs Institute have been influential in developing methods for combining both qualitative and quantitative research in systematic reviews. Several reporting guidelines exist to standardise reporting about how systematic reviews are conducted. Such reporting guidelines are not quality assessment or appraisal tools. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement suggests a standardized way to ensure a transparent and complete reporting of systematic reviews, and is now required for this kind of research by more than 170 medical journals worldwide. Several specialized PRISMA guideline extensions have been developed to support particular types of studies or aspects of the review process, including PRISMA-P for review protocols and PRISMA-ScR for scoping reviews. A list of PRISMA guideline extensions is hosted by the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network. However, the PRISMA guidelines have been found to be limited to intervention research and the guidelines have to be changed in order to fit non-intervention research. As a result, Non-Interventional, Reproducible, and Open (NIRO) Systematic Reviews was created to counter this limitation.
For qualitative reviews, reporting guidelines include ENTREQ (Enhancing transparency in reporting the synthesis of qualitative research) for qualitative evidence syntheses; RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) for meta-narrative and realist reviews; and eMERGe (Improving reporting of Meta-Ethnography) for meta-ethnography.
Developments in systematic reviews during the 21st century included realist reviews and the meta-narrative approach, both of which addressed problems of variation in methods and heterogeneity existing on some subjects.
Types
There are over 30 types of systematic review, some of which are summarised below. There is not always consensus on the boundaries and distinctions between the approaches described.
Scoping reviews
Scoping reviews are distinct from systematic reviews in several ways. A scoping review is an attempt to search for concepts by mapping the language and data which surrounds those concepts and adjusting the search method iteratively to synthesize evidence and assess the scope of an area of inquiry. This can mean that the concept search and method (including data extraction, organisation and analysis) are refined throughout the process, sometimes requiring deviations from any protocol or original research plan. A scoping review may often be a preliminary stage before a systematic review, which 'scopes' out an area of inquiry and maps the language and key concepts to determine if a systematic review is possible or appropriate, or to lay the groundwork for a full systematic review. The goal can be to assess how much data or evidence is available regarding a certain area of interest. This process is further complicated if it is mapping concepts across multiple languages or cultures.
As scoping reviews should be systematically conducted and reported (with a transparent and repeatable method), some academic publishers categorize them as a kind of 'systematic review', which may cause confusion. Scoping reviews are helpful when it is not possible to carry out a systematic synthesis of research findings, for example, when there are no published clinical trials in the area of inquiry. Scoping reviews are also helpful when determining if it is possible or appropriate to carry out a systematic review, and are a useful method when an area of inquiry is very broad, for example, exploring how the public are involved in all stages of systematic reviews.
There is still a lack of clarity when defining the exact method of a scoping review as it is both an iterative process and is still relatively new. There have been several attempts to improve the standardisation of the method, for example via a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline extension for scoping reviews (PRISMA-ScR). PROSPERO (the International Prospective Register of Systematic Reviews) does not permit the submission of protocols of scoping reviews, although some journals will publish protocols for scoping reviews.
Stages
While there are multiple kinds of systematic review methods, the main stages of a review can be summarised as follows:
Defining the research question
It has been reported that best practice involves 'defining an answerable question' and publishing the protocol of the review before initiating it, to reduce the risk of unplanned research duplication and to enable transparency and consistency between methodology and protocol. Clinical reviews of quantitative data are often structured using the mnemonic PICO, which stands for 'Population or Problem', 'Intervention or Exposure', 'Comparison', and 'Outcome', with other variations existing for other kinds of research. For qualitative reviews, PICo stands for 'Population or Problem', 'Interest', and 'Context'.
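As a minimal illustration of the PICO structure, the sketch below frames a hypothetical review question as structured data and renders it as a single answerable sentence; the population, intervention, comparison, and outcome values are all invented for the example.

# A hypothetical review question framed with the PICO mnemonic.
# All values are invented for illustration.
pico = {
    "population": "adults with type 2 diabetes",
    "intervention": "structured exercise programs",
    "comparison": "standard care without exercise",
    "outcome": "change in HbA1c",
}

# Render the components as a single answerable question.
print(
    f"In {pico['population']}, does {pico['intervention']} "
    f"compared with {pico['comparison']} affect {pico['outcome']}?"
)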
Searching for sources
Relevant criteria can include selecting research that is of good quality and answers the defined question. The search strategy should be designed to retrieve literature that matches the protocol's specified inclusion and exclusion criteria. The methodology section of a systematic review should list all of the databases and citation indices that were searched. The titles and abstracts of identified articles can be checked against predetermined criteria for eligibility and relevance. Each included study may be assigned an objective assessment of methodological quality, preferably by using methods conforming to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, or the standards of Cochrane.
Common information sources used in searches include scholarly databases of peer-reviewed articles such as MEDLINE, Web of Science, Embase, and PubMed, as well as sources of unpublished literature such as clinical trial registries and grey literature collections. Key references can also be yielded through additional methods such as citation searching, reference list checking (related to a search method called 'pearl growing'), manually searching information sources not indexed in the major electronic databases (sometimes called 'hand-searching'), and directly contacting experts in the field.
To be systematic, searchers must use a combination of search skills and tools such as database subject headings, keyword searching, Boolean operators, and proximity searching, while attempting to balance sensitivity (systematicity) and precision (accuracy). Inviting and involving an experienced information professional or librarian can improve the quality of systematic review search strategies and reporting.
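A minimal sketch of this balancing act is shown below: synonyms for each concept are OR'd together to raise sensitivity, and the concept groups are AND'd together to raise precision. The search terms are invented for illustration; a real strategy would also use database-specific subject headings and field tags.

# Synonyms within a concept are OR'd (raising sensitivity); the concept
# groups are then AND'd (raising precision). All terms are hypothetical.
concepts = [
    ["adolescen*", "teenager*", "youth"],       # population synonyms
    ["social media", "social network*"],        # exposure synonyms
    ["depression", "depressive symptom*"],      # outcome synonyms
]

def build_query(groups):
    """Build a Boolean search string from groups of synonyms."""
    clauses = []
    for group in groups:
        # Quote multi-word phrases so they are searched as phrases.
        terms = [f'"{t}"' if " " in t else t for t in group]
        clauses.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(clauses)

print(build_query(concepts))
# (adolescen* OR teenager* OR youth) AND ("social media" OR "social network*")
# AND (depression OR "depressive symptom*")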
'Extraction' of relevant data
Relevant data are 'extracted' from the data sources according to the review method. The data extraction method is specific to the kind of data, and data extracted on 'outcomes' is only relevant to certain types of reviews. For example, a systematic review of clinical trials might extract data about how the research was done (often called the method or 'intervention'), who participated in the research (including how many people), how it was paid for (for example, funding sources) and what happened (the outcomes). Relevant data are being extracted and 'combined' in an intervention effect review, where a meta-analysis is possible.
Assess the eligibility of the data
This stage involves assessing the eligibility of data for inclusion in the review, by judging it against criteria identified at the first stage. This can include assessing whether a data source meets the eligibility criteria and recording why decisions about inclusion or exclusion in the review were made. Software can be used to support the selection process, including text mining tools and machine learning, which can automate aspects of the process. The 'Systematic Review Toolbox' is a community-driven, web-based catalogue of tools to help reviewers choose appropriate tools for reviews.
Analyse and combine the data
Analysing and combining data can provide an overall result from all the data. Because this combined result may use qualitative or quantitative data from all eligible sources, it is considered more reliable, as it provides better evidence: the more data included in reviews, the more confident we can be of the conclusions. When appropriate, some systematic reviews include a meta-analysis, which uses statistical methods to combine data from multiple sources. A review might use quantitative data, or might employ a qualitative meta-synthesis, which synthesises data from qualitative studies. A review may also bring together the findings from quantitative and qualitative studies in a mixed-methods or overarching synthesis. The combination of data from a meta-analysis can sometimes be visualised. One method uses a forest plot (also called a blobbogram). In an intervention effect review, the diamond in the forest plot represents the combined results of all the data included. An example of a forest plot is the Cochrane Collaboration logo, which shows a forest plot of one of the first reviews demonstrating that corticosteroids given to women about to give birth prematurely can save the life of the newborn child.
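As a worked illustration of the statistical combination step, the sketch below pools three hypothetical study estimates using the common fixed-effect inverse-variance method, in which each study is weighted by the inverse of its variance; the effect sizes and standard errors are invented for the example.

import math

# Hypothetical (effect size, standard error) pairs from three studies.
studies = [(0.30, 0.10), (0.10, 0.05), (0.25, 0.15)]

# Fixed-effect inverse-variance weights: more precise studies count more.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * eff for w, (eff, _) in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect: {pooled:.3f}")
print(f"approx. 95% CI: ({pooled - 1.96*pooled_se:.3f}, {pooled + 1.96*pooled_se:.3f})")

In a forest plot, each of these studies would appear as a point with a confidence interval, and the pooled estimate as the diamond described above.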
Recent visualisation innovations include the albatross plot, which plots p-values against sample sizes, with approximate effect-size contours superimposed to facilitate analysis. The contours can be used to infer effect sizes from studies that have been analysed and reported in diverse ways. Such visualisations may have advantages over other types when reviewing complex interventions.
Communication and dissemination
Once these stages are complete, the review may be published, disseminated, and translated into practice after being adopted as evidence. The UK National Institute for Health Research (NIHR) defines dissemination as "getting the findings of research to the people who can make use of them to maximise the benefit of the research without delay".
Some users do not have time to invest in reading large and complex documents and/or may lack awareness or be unable to access newly published research. Researchers are therefore developing skills to use creative communication methods such as illustrations, blogs, infographics and board games to share the findings of systematic reviews.
Automation
Living systematic reviews are a newer kind of semi-automated, up-to-date online summary of research, updated as new research becomes available. The difference between a living systematic review and a conventional systematic review is the publication format. Living systematic reviews are "dynamic, persistent, online-only evidence summaries, which are updated rapidly and frequently".
The automation or semi-automation of the systematic process itself is increasingly being explored. While little evidence exists to demonstrate that automation is as accurate or involves less manual effort, efforts to promote training and the use of artificial intelligence for the process are increasing.
Research fields
Health and medicine
Current use of systematic reviews in medicine
Many organisations around the world use systematic reviews, with the methodology depending on the guidelines being followed. Organisations which use systematic reviews in medicine and human health include the National Institute for Health and Care Excellence (NICE, UK), the Agency for Healthcare Research and Quality (AHRQ, US), and the World Health Organization. Most notable among international organisations is Cochrane, a group of over 37,000 specialists in healthcare who systematically review randomised trials of the effects of prevention, treatments, and rehabilitation as well as health systems interventions. They sometimes also include the results of other types of research. Cochrane Reviews are published in The Cochrane Database of Systematic Reviews section of the Cochrane Library. The 2015 impact factor for The Cochrane Database of Systematic Reviews was 6.103, and it was ranked 12th in the Medicine, General & Internal category.
There are several types of systematic reviews, including:
Intervention reviews assess the benefits and harms of interventions used in healthcare and health policy.
Diagnostic test accuracy reviews assess how well a diagnostic test performs in diagnosing and detecting a particular disease. For conducting diagnostic test accuracy reviews, free software such as MetaDTA and CAST-HSROC in the graphical user interface is available.
Methodology reviews address issues relevant to how systematic reviews and clinical trials are conducted and reported.
Qualitative reviews synthesize qualitative evidence to address questions on aspects other than effectiveness.
Prognosis reviews address the probable course or future outcome(s) of people with a health problem.
Overviews of Systematic Reviews (OoRs) compile multiple pieces of evidence from systematic reviews into a single accessible document, sometimes referred to as umbrella reviews.
Living systematic reviews are continually updated, incorporating relevant new evidence as it becomes available.
Rapid reviews are a form of knowledge synthesis that "accelerates the process of conducting a traditional systematic review through streamlining or omitting specific methods to produce evidence for stakeholders in a resource-efficient manner".
Reviews of complex health interventions in complex systems aim to improve evidence synthesis and guideline development.
Patient and public involvement in systematic reviews
There are various ways patients and the public can be involved in producing systematic reviews and other outputs. Tasks for public members can be organised as 'entry level' or higher. Tasks include:
Joining a collaborative volunteer effort to help categorise and summarise healthcare evidence
Data extraction and risk of bias assessment
Translation of reviews into other languages
A systematic review of how people were involved in systematic reviews aimed to document the evidence-base relating to stakeholder involvement in systematic reviews and to use this evidence to describe how stakeholders have been involved in systematic reviews. Thirty percent of the included studies involved patients and/or carers. The ACTIVE framework provides a way to describe how people are involved in systematic reviews and may be used to support systematic review authors in planning people's involvement. Standardised Data on Initiatives (STARDIT) is another proposed way of reporting who has been involved in which tasks during research, including systematic reviews.
There has been some criticism of how Cochrane prioritises systematic reviews. Cochrane has a project that involved people in helping identify research priorities to inform Cochrane Reviews. In 2014, the Cochrane–Wikipedia partnership was formalised.
Environmental health and toxicology
Systematic reviews are a relatively recent innovation in the field of environmental health and toxicology. Although mooted in the mid-2000s, the first full frameworks for conduct of systematic reviews of environmental health evidence were published in 2014 by the US National Toxicology Program's Office of Health Assessment and Translation and the Navigation Guide at the University of California San Francisco's Program on Reproductive Health and the Environment. Uptake has since been rapid, with the estimated number of systematic reviews in the field doubling since 2016 and the first consensus recommendations on best practice, as a precursor to a more general standard, being published in 2020.
Social, behavioural, and educational
In 1959, social scientist and social work educator Barbara Wootton published one of the first contemporary systematic reviews of literature on anti-social behavior as part of her work, Social Science and Social Pathology.
Several organisations use systematic reviews in social, behavioural, and educational areas of evidence-based policy, including the National Institute for Health and Care Excellence (NICE, UK), Social Care Institute for Excellence (SCIE, UK), the Agency for Healthcare Research and Quality (AHRQ, US), the World Health Organization, the International Initiative for Impact Evaluation (3ie), the Joanna Briggs Institute, and the Campbell Collaboration. The quasi-standard for systematic review in the social sciences is based on the procedures proposed by the Campbell Collaboration, which is one of several groups promoting evidence-based policy in the social sciences.
Others
Some attempts to transfer the procedures from medicine to business research have been made, including a step-by-step approach, and developing a standard procedure for conducting systematic literature reviews in business and economics.
Systematic reviews are increasingly prevalent in other fields, such as international development research. Subsequently, several donors (including the UK Department for International Development (DFID) and AusAid) are focusing more on testing the appropriateness of systematic reviews in assessing the impacts of development and humanitarian interventions.
The Collaboration for Environmental Evidence (CEE) has a journal titled Environmental Evidence, which publishes systematic reviews, review protocols, and systematic maps on the impacts of human activity and the effectiveness of management interventions.
Review tools
A 2022 publication identified 24 systematic review tools and ranked them by inclusion of 30 features deemed most important when performing a systematic review in accordance with best practices. The top six software tools (with at least 21/30 key features) are all proprietary paid platforms, typically web-based, and include:
Giotto Compliance
DistillerSR
Nested Knowledge
EPPI-Reviewer Web
LitStream
JBI SUMARI
The Cochrane Collaboration provides a handbook for systematic reviewers of interventions which "provides guidance to authors for the preparation of Cochrane Intervention reviews." The Cochrane Handbook also outlines steps for preparing a systematic review and forms the basis of two sets of standards for the conduct and reporting of Cochrane Intervention Reviews (MECIR; Methodological Expectations of Cochrane Intervention Reviews). It also contains guidance on integrating patient-reported outcomes into reviews.
Limitations
Out-dated or risk of bias
While systematic reviews are regarded as the strongest form of evidence, a 2003 review of 300 studies found that not all systematic reviews were equally reliable, and that their reporting can be improved by a universally agreed upon set of standards and guidelines. A further study by the same group found that of 100 systematic reviews monitored, 7% needed updating at the time of publication, another 4% within a year, and another 11% within 2 years; this figure was higher in rapidly changing fields of medicine, especially cardiovascular medicine. A 2003 study suggested that extending searches beyond major databases, perhaps into grey literature, would increase the effectiveness of reviews.
Some authors have highlighted problems with systematic reviews, particularly those conducted by Cochrane, noting that published reviews are often biased, out of date, and excessively long. Cochrane reviews have been criticized as not being sufficiently critical in the selection of trials and as including too many of low quality. Critics have proposed several solutions, including limiting studies in meta-analyses and reviews to registered clinical trials, requiring that original data be made available for statistical checking, paying greater attention to sample size estimates, and eliminating dependence on only published data. Some of these difficulties were noted as early as 1994.
Methodological limitations of meta-analysis have also been noted. Another concern is that the methods used to conduct a systematic review are sometimes changed once researchers see the available trials they are going to include. Some websites have described retractions of systematic reviews and published reports of studies included in published systematic reviews. Arbitrary eligibility criteria may also affect the perceived quality of the review.
Limited reporting of data from human studies
The AllTrials campaign reports that around half of clinical trials have never reported results, and works to improve reporting. 'Positive' trials were twice as likely to be published as those with 'negative' results.
As of 2016, it is legal for for-profit companies to conduct clinical trials and not publish the results. For example, in the past 10 years, 8.7 million patients have taken part in trials that have not published results. These factors mean that a significant publication bias is likely, with only 'positive' or perceived favourable results being published. A recent systematic review of industry sponsorship and research outcomes concluded that "sponsorship of drug and device studies by the manufacturing company leads to more favorable efficacy results and conclusions than sponsorship by other sources" and that there exists an industry bias that cannot be explained by standard 'risk of bias' assessments.
Poor compliance with review reporting guidelines
The rapid growth of systematic reviews in recent years has been accompanied by the attendant issue of poor compliance with guidelines, particularly in areas such as declaration of registered study protocols, funding source declaration, risk of bias data, issues resulting from data abstraction, and description of clear study objectives. A host of studies have identified weaknesses in the rigour and reproducibility of search strategies in systematic reviews. To remedy this issue, a new PRISMA guideline extension called PRISMA-S is being developed. Furthermore, tools and checklists for peer-reviewing search strategies have been created, such as the Peer Review of Electronic Search Strategies (PRESS) guidelines.
A key challenge for using systematic reviews in clinical practice and healthcare policy is assessing the quality of a given review. Consequently, a range of appraisal tools to evaluate systematic reviews have been designed. The two most popular measurement instruments and scoring tools for systematic review quality assessment are AMSTAR 2 (a measurement tool to assess the methodological quality of systematic reviews) and ROBIS (Risk Of Bias In Systematic reviews); however, these are not appropriate for all systematic review types.
History
The first publication that is now recognized as equivalent to a modern systematic review was a 1753 paper by James Lind, which reviewed all of the previous publications about scurvy. Systematic reviews appeared only sporadically until the 1980s, and became common after 2000. More than 10,000 systematic reviews are published each year.
History in medicine
A 1904 British Medical Journal paper by Karl Pearson collated data from several studies in the UK, India and South Africa of typhoid inoculation. He used a meta-analytic approach to aggregate the outcomes of multiple clinical studies. In 1972, Archie Cochrane wrote: "It is surely a great criticism of our profession that we have not organised a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomised controlled trials". Critical appraisal and synthesis of research findings in a systematic way emerged in 1975 under the term 'meta analysis'. Early syntheses were conducted in broad areas of public policy and social interventions, with systematic research synthesis applied to medicine and health. Inspired by his own personal experiences as a senior medical officer in prisoner of war camps, Archie Cochrane worked to improve the scientific method in medical evidence. His call for the increased use of randomised controlled trials and systematic reviews led to the creation of The Cochrane Collaboration, which was founded in 1993 and named after him, building on the work by Iain Chalmers and colleagues in the area of pregnancy and childbirth.
See also
Critical appraisal
Further research is needed
Systematic searching
Horizon scanning
Literature review
Living review
Meta-analysis
Metascience
Peer review
Review journal
Generalized model aggregation (GMA)
Umbrella review
References
STARDIT report Q101116128.
External links
Systematic Review Tools — Search and list of systematic review software tools
Cochrane Collaboration
MeSH: Review Literature—articles about the review process
MeSH: Review [Publication Type] - limit search results to reviews
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement , "an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses"
PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and explanation
Animated Storyboard: What Are Systematic Reviews? - Cochrane Consumers and Communication Group
Sysrev - a free platform with open access systematic reviews.
STARDIT - an open access data-sharing system to standardise the way that information about initiatives is reported.
Evidence-based practices
Information science
Meta-analysis
Nursing research | 0.760678 | 0.996309 | 0.75787 |
Ekistics | Ekistics is the science of human settlements including regional, city, community planning and dwelling design. Its major incentive was the emergence of increasingly large and complex conurbations, tending even to a worldwide city. The study involves every kind of human settlement, with particular attention to geography, ecology, human psychology, anthropology, culture, politics, and occasionally aesthetics.
As a scientific mode of study, ekistics currently relies on statistics and description, organized in five ekistic elements or principles: nature, anthropos, society, shells, and networks. It is generally a more scientific field than urban planning, and has considerable overlap with some of the less restrained fields of architectural theory.
In application, conclusions are drawn aimed at achieving harmony between the inhabitants of a settlement and their physical and socio-cultural environments.
Etymology
The term ekistics was coined by Constantinos Apostolos Doxiadis in 1942. The word is derived from the Greek adjective οἰκιστικός, more particularly from the neuter plural οἰκιστικά. The ancient Greek adjective οἰκιστικός meant 'concerning the foundation of a house, a habitation, a city or colony'. It was derived from οἰκιστής, an ancient Greek noun meaning 'the person who installs settlers in a place'. This may be regarded as deriving indirectly from another ancient Greek noun, οἴκισις, meaning 'building', 'housing', 'habitation', and especially 'establishment of a colony, a settlement or a town' (used by Plato), or 'filling it with new settlers'. All these words grew from the verb οἰκίζω, 'to settle', and were ultimately derived from the noun οἶκος, 'house', 'home' or 'habitat'.
The Shorter Oxford English Dictionary contains a reference to an ecist, oekist or oikist, defining him as: "the founder of an ancient Greek ... colony". The English equivalent of oikistikē is ekistics (a noun). In addition, the adjectives ekistic and ekistical, the adverb ekistically, and the noun ekistician are now also in current use.
Scope
In terms of outdoor recreation, the term ekistic relationship is used to describe one's relationship with the natural world and how they view the resources within it.
The notion of ekistics implies that understanding the interaction between and within human groups (infrastructure, agriculture, shelter, function or job) in conjunction with their environment directly affects their well-being, individual and collective. The subject begins to elucidate the ways in which collective settlements form and how they inter-relate. By doing so, humans can begin to understand how they 'fit' into their species, Homo sapiens, and how the species 'should' live in order to manifest its potential. Ekistics in some cases argues that, in order for human settlements to expand efficiently and economically, we must reorganize the way in which villages, towns, cities, and metropolises are formed.
As Doxiadis put it, "... This field (ekistics) is a science, even if in our times it is usually considered a technology and an art, without the foundations of a science - a mistake for which we pay very heavily." Having recorded very successfully the destruction of ekistic wealth in Greece during World War II, Doxiadis became convinced that human settlements can be subjected to systematic investigation. Aware of the unifying power of systems thinking, and particularly of the biological and evolutionary reference models used by many famous biologist-philosophers of his generation, especially Sir Julian Huxley (1887–1975), Theodosius Dobzhansky (1900–75), Dennis Gabor (1900–79), René Dubos (1901–82), George G. Simpson (1902–84), and Conrad Waddington (1905–75), Doxiadis used the biological model to describe the "ekistic behavior" of anthropos (the five principles) and the evolutionary model to explain the morphogenesis of human settlements (the eleven forces, the hierarchical structure of human settlements, dynapolis, ecumenopolis). Finally, he formulated a general theory which considers human settlements as living organisms capable of evolution, an evolution that might be guided by Man using "ekistic knowledge".
Units
Doxiadis believed that the conclusion from biological and social experience was clear: to avoid chaos we must organize our system of life from anthropos (individual) to ecumenopolis (global city) in hierarchical levels, represented by human settlements. So he articulated a general hierarchical scale with fifteen levels of ekistic units:
anthropos – 1
room – 2
house – 5
housegroup (hamlet) – 40
small neighborhood (village) – 250
neighborhood – 1,500
small polis (town) – 10,000
polis (city) – 75,000
small metropolis – 500,000
metropolis – 4 million
small megalopolis – 25 million
megalopolis – 150 million
small eperopolis – 750 million
eperopolis – 7.5 billion
ecumenopolis – 50 billion
The population figures above are for Doxiadis' ideal future ekistic units for the year 2100, at which time he estimated (in 1968) that Earth would achieve zero population growth at a population of 50,000,000,000 with human civilization being powered by fusion energy.
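The scale is roughly logarithmic, which the following illustrative sketch makes explicit by computing the population ratio between successive units listed above; after the smallest units, most steps grow by a factor of about five to ten.

# Ekistic units and Doxiadis' projected populations, as listed above.
units = [
    ("anthropos", 1), ("room", 2), ("house", 5), ("housegroup", 40),
    ("small neighborhood", 250), ("neighborhood", 1_500),
    ("small polis", 10_000), ("polis", 75_000),
    ("small metropolis", 500_000), ("metropolis", 4_000_000),
    ("small megalopolis", 25_000_000), ("megalopolis", 150_000_000),
    ("small eperopolis", 750_000_000), ("eperopolis", 7_500_000_000),
    ("ecumenopolis", 50_000_000_000),
]

# Print the ratio of each level to the one below it.
for (prev, p), (nxt, n) in zip(units, units[1:]):
    print(f"{prev} -> {nxt}: x{n / p:g}")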
Publications
Ekistics and the New Habitat is a journal printed from 1957 to 2006, which began calling for new papers to be published online in 2019.
Ekistics: An Introduction to the Science of Human Settlements is a 1968 book by Konstantinos Doxiadis.
See also
Arcology
Conurbation
Consolidated city-county
Global city
Human ecosystem
Megacity
Megalopolis (term)
Metropolitan area
Permaculture
Principles of intelligent urbanism
Further reading
Doxiadis, Konstantinos, Ekistics: An Introduction to the Science of Human Settlements (1968)
References
External links
The Institute of Ekistics
World Society for Ekistics
Ekistic Units
City of the Future
Urban studies and planning terminology
Architectural terminology | 0.768362 | 0.986321 | 0.757852 |
Bodymind | Bodymind is an approach to understand the relationship between the human body and mind where they are seen as a single integrated unit. It attempts to address the mind–body problem and resists the Western traditions of mind–body dualism.
Dualism vs holism
In the field of philosophy, dualism is the view that the mental and the physical aspects of a person, such as the mind and the body, are distinct or separate.
Modern understanding
"The mind is composed of mental fragments- sensations, feelings, thoughts, imaginations, all flowing now in an ordered sequence, now in a chaotic fashion…. On the other hand, the body is constructed under the underlying laws of physics, and its components obey the well-enumerated laws of physiology. It is these characteristic differences between these two – between mind and body – that lead to the Mind-Body problem.". While Western populations tend to believe more in the idea of dualism, there is also good research on the neurophysiology of emotions and their foundation in human meaning making, the function of the mind, such as the research of Candace Pert.
Relevance to alternative medicine
In the field of alternative medicine, bodymind implies that
The body, mind, emotions, and spirit are dynamically interrelated.
Experiences, including physical stress, emotional injury, and pleasure, are stored in the body's cells, which in turn affects one's reactions to stimuli.
The term is relevant to a number of disciplines, including:
Psychoneuroimmunology, the study of the interaction between psychological processes and the nervous and immune systems of the human body.
Body psychotherapy, a branch of psychotherapy which applies basic principles of somatic psychology. It originated in the work of Pierre Janet and particularly Wilhelm Reich.
Neurobiology, the study of the nervous system
Psychosomatic medicine, an interdisciplinary medical field exploring the relationships among social, psychological, and behavioral factors on bodily processes and quality of life in humans and animals. Clinical situations where mental processes act as a major factor affecting medical outcomes are areas where psychosomatic medicine excels.
Postural Integration, a process-oriented body psychotherapy originally developed in the late 1960s by Jack Painter (1933–2010) in California, US, after exploration in the fields of humanistic psychology and the human potential movement. The method aims to support personal change and self development, through a particular form of manipulative holistic bodywork.
See also
Ableism
Binding problem
Bodymind (disability studies)
Developmental disability
Disability
Disability and religion
Disability culture
Disability in the United States
Disability rights
Disability studies
Emotional or behavioral disability
Inclusion (disability rights)
Invisible disability
List of disability studies journals
Medical model of disability
Services for the disabled
Sexuality and disability
Social model of disability
Society for Disability Studies
References
Further reading
Benson, Herbert (2000) [1975], The Relaxation Response, Harper.
Bracken, Patrick & Philip Thomas (2002), "Time to move beyond the mind-body split", editorial, British Medical Journal 325:1433–1434 (21 December).
Dychtwald, Ken (1986), Bodymind, Penguin Putnam Inc., NY.
Gallagher, Shaun (2005), How the Body Shapes the Mind, Oxford: Oxford University Press.
Hill, Daniel (2015), Affect Regulation Theory: A Clinical Model, W. W. Norton & Co.
Keinänen, Matti (2005), Psychosemiosis as a Key to Body-Mind Continuum: The Reinforcement of Symbolization-Reflectiveness in Psychotherapy, Nova Science Publishers.
Mayer, Emeran A. (2003), The Neurobiology Basis of Mind Body Medicine: Convergent Traditional and Scientific Approaches to Health, Disease, and Healing, https://web.archive.org/web/20070403123225/http://www.aboutibs.org/Publications/MindBody.html (accessed Sunday, January 14, 2007).
Money, John (1988), Gay, Straight, and In-Between: The Sexology of Erotic Orientation, New York: Oxford University Press.
Rothschild, Babette (2000), The Body Remembers: The Psychophysiology of Trauma and Trauma Treatment, W. W. Norton & Co.
Scheper-Hughes, Nancy, and Margaret M. Lock (1987), "The Mindful Body: A Prolegomenon to Future Work in Medical Anthropology", Medical Anthropology Quarterly (1): 6–41.
Seem, Mark & Kaplan, Joan (1987), Bodymind Energetics: Towards a Dynamic Model of Health, Healing Arts Press, Rochester, VT.
Clare, Eli. "Brilliant Imperfection: Grappling with Cure"
Schalk, Sami. "Bodyminds Reimagined: (Dis)ability, Race, and Gender in Black Women's Speculative Fiction"
Patsavas, Alyson. "Recovering a Cripistemology of Pain: Leaky Bodies, Connective Tissue, and Feeling Discourse"
Price, Margaret. "The Bodymind Problem and the Possibilities of Pain"
Kafer, Alison. "Feminist, Queer, Crip"
Hall, Kim. "Gender" chapter from "Keywords for Disability Studies".
McRuer, Robert, and Johnson, Merri Lisa. "Proliferating Cripistemologies: A Virtual Roundtable".
Garland-Thomson, Rosemarie. "Extraordinary Bodies: Figuring Physical Disability in American Culture and Literature".
Garland-Thomson, Rosemarie. "Becoming Disabled".
Body psychotherapy
Popular psychology
Problematic social media use
Experts from many different fields have conducted research and held debates about how using social media affects mental health. Research suggests that mental health issues arising from social media use affect women more than men and vary according to the particular social media platform used, although it affects every age and gender demographic in different ways. Psychological or behavioural dependence on social media platforms can significantly impair individuals' daily lives. Studies show there are several negative effects that social media can have on individuals' mental health and overall well-being. While researchers have attempted to examine why and how social media is problematic, they still struggle to develop evidence-based recommendations for potential solutions. Because social media is constantly evolving, researchers also struggle with whether problematic social media use should be considered a separate clinical entity or a manifestation of underlying psychiatric disorders. Such disorders may be identified when an individual engages with online content and conversations at the expense of other interests.
Symptoms
While there exists no official diagnostic term or measurement, problematic social media use can be conceptualized as a non-substance-related disorder, resulting in preoccupation and compulsion to engage excessively in social media platforms despite negative consequences.
Problematic social media use is associated with various psychological and physiological effects, such as anxiety and depression in children and young people.
A 2022 meta-analysis showed moderate and significant associations between problematic social media use in youth and increased symptoms of depression, anxiety, and stress. Another meta-analysis in 2019, investigating Facebook use and symptoms of depression, also showed an association, with a small effect size. In a 2018 systematic review and meta-analysis, problematic Facebook use was shown to have negative effects on well-being in adolescents and young adults, and psychological distress was also found with problematic use. Frequent social media use was shown in a cohort study of 15- and 16-year-olds to have an association with self-reported symptoms of attention deficit hyperactivity disorder followed up over two years.
Decrease in mood
In a 2016 technical report by the American Academy of Pediatrics, benefits and concerns were identified in relation to adolescent mental health and social media use. It showed that the amount of time spent on social media is not the key factor, but rather how that time is spent. Declines in well-being and life satisfaction were found in older adolescents who passively consumed social media; however, these were not shown in those who were more actively engaged. The report also found a U-shaped, curvilinear relationship between the amount of time spent on digital media and the risk of developing depression, with elevated risk at both the low and high ends of Internet use.
Eating disorders
According to research by Flinders University, social media use correlates with eating disorders. The study found eating disorders in 52% of girls and 45% of boys, from a group of 1,000 participants who used social media.
Through the extensive use of social media, adolescents are exposed to images of bodies that are unattainable, especially with the growing presence of photo-editing apps that allow users to alter the way their bodies appear in a photo. This can, in turn, influence both the diet and exercise practices of adolescents as they try to fit the standard that their social media consumption has set for them.
Instagram users who seek social media status and compare themselves to others tend to experience an increase in various negative psychological effects, including body image issues and eating disorders. According to a study of 2,475 students by Madeline Dougherty (née Wick), PhD, a professor at Florida State University, and her doctoral advisor, Pamela Keel, PhD, a psychology professor at the same university, 1 in 3 women reported that they edit their pictures to change their weight and shape before posting to Instagram. A similar study in Australia and New Zealand found that 52% of girls aged 13 to 14 with a social media account were very likely to exhibit disordered eating behaviours such as skipping meals or over-exercising. These studies found that when teenage girls viewed their retouched photos and compared them with the untouched originals, their body image was directly harmed. Although this occurs across various age groups and genders, it was found to have a greater effect on younger women.
This relationship between social media and body image can be improved by consuming less "fitspiration" and understanding that the images being viewed are altered. Many social media influencers also experience body dissatisfaction yet pose as lifestyle content creators, influencing young women into believing they can look like them, when the influencers themselves do not meet the standard they are presenting.
Excessive use
One can evaluate one's social media habits and behaviour to help determine whether an addiction is present. Addictions are a type of impulse control disorder, which may lead one to lose track of time while using social media. For instance, one's psychological clock may run slower than usual, and the user's self-consciousness is compromised. Therefore, individuals may passively consume media for longer amounts of time. Psychologists estimate that as many as 5 to 10% of Americans meet the criteria for social media addiction today. Addictive social media use looks much like any other substance use disorder, including mood modification, salience, tolerance, withdrawal symptoms, conflict, and relapse. In the digital age, it is common for adolescents to use their smartphones for entertainment, education, news, and managing their daily lives. Adolescents are therefore at heightened risk of developing addictive behaviours and habits. Many medical experts who have reviewed survey data have concluded that teenagers' excessive smartphone use affects their behaviour and even their mental health. If excessive use of social media platforms has been shown to cause mental health issues, eating disorders, and lowered self-esteem, and such use has been shown to be addictive in some form, then medically there should be an avenue to treat that use. For example, a study involving 157 online learners showed that, on average, learners on massive open online courses spend half of their online time on YouTube and social media, and that less than 2% of visited websites accounted for nearly 80% of their online time. Excessive use causes underlying health conditions that are themselves treatable, but if these issues stem from social media platforms, the addictive nature of those platforms should be addressed in a way that reduces or eradicates the resulting health-related or mental effects. More studies need to be done, more funding has to be provided, and addiction to such platforms should be recognised and treated as a true addiction, not simply dismissed as a millennial issue. It is not only the amount of time spent on social media that drives media addiction, but also the type of platform on which the media is consumed. The algorithm plays a significant role in what appears on a user's main screen for any platform; however, growing boys and girls do not fully understand the circumstances they live in. They live in a world that can be broadcast at any point and follow them into school. They are at a disadvantage, almost needing social media as a currency to fit in at school and to make their lives presentable enough to be assumed attractive, interesting, or likable based on their feed alone. Jon Haidt, a social psychologist and author, has done extensive research on the hold social media has on people, especially teens and young adults. Over 80% of high school students use social media "constantly" or on a daily basis. Girls are heavier users of visually oriented platforms, which invite comparison with other girls. This competitiveness can cause anxiety and depression and a vicious cycle of trying to attain an unachievable standard.
Social media addiction from an anthropological lens
Studies done to explore the negative effects of social media have not produced definitive findings. Addiction to social media remains a controversial topic despite these mixed results and is not recognized by the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) as a disorder. From an anthropological lens, addiction to social media is a socially constructed concept that has been medicalized because the behavior does not align with behavior accepted by certain hegemonic social groups. It is important to consider that disapproval of using social media for too long could also be related to capitalism, the reduced productivity of employees who constantly scroll, and the difficulty of governing populations because of the content they consume. Harm caused to individuals through social media addiction is a social fact: whether researchers can find evidence to support these claims is irrelevant if most people believe it is harmful, as this shared belief affects people in a material way. As much as excessive use of social media can take attention away from the person in front of you at dinner, it also connects people to loved ones and maintains connections that would otherwise be severed due to distance.
Social anxiety
Social media allows users to openly share their feelings, values, relationships, and thoughts. With the platform social media provides, users can freely express their emotions. However, social media may also be a platform for discrimination and cyberbullying. There is also a strong positive correlation between social anxiety and social media usage, and in particular between cyberostracism and social media disorder. Social anxiety is defined as intense anxiety or fear of being judged, negatively evaluated, or rejected in a social or performance situation. Many people with social anxiety turn to the internet as an escape from reality, so they often withdraw from in-person communication and feel most comfortable with online communication. People usually act differently on social media than they do in person, and many activities and social groups differ accordingly online. Thus, social media can worsen anxiety through constant social comparison and fear of missing out (FOMO). However, social media can also be a safe space in which users can connect with others and build connections and support from online communities. Even though social media can satisfy personal communication needs, those who use it at higher rates are shown to have higher levels of psychological distress.
Lowered self-esteem
Low self-esteem has generally been connected to serious mental illnesses such as depression. Some studies have been done to determine whether social media platforms have any correlation with low self-esteem. One such study, in which participants were given the Rosenberg Self-Esteem Scale to rate their self-esteem in relation to their social media usage, found that participants who used Facebook tended to rate their general self-esteem more poorly. Social media sites such as Instagram may exacerbate feelings of insufficiency and isolation as users engage in social comparison. This issue is especially pertinent for individuals with narcissistic tendencies who seek affirmation and praise.
Another reason depression is associated with social media might be what psychologists call displacement: the mental health-boosting activities, such as exercise, sleep and developing talents, that teenagers are not doing during the time displaced by social media.
Solitary experiences
Although many studies have found a relationship between loneliness and problematic social media use among young adults, a recent systematic review has observed that this relationship collapses when loneliness is observed longitudinally, or when considered contextually (i.e., across the individual's life contexts) or when other psychological factors are included in the mediation model. Furthermore, Pezzi and colleagues pointed out that beyond loneliness (which has been summatively considered by the literature to date), no other solitary experience has been investigated in relation to problematic social media use in young adults. In fact, further research is needed to clarify the relationship between these variables.
Mechanisms
A 2017 review article noted the "cultural norm" among adolescents of being always on or connected to social media, remarking that this reflects young people's "need to belong" and stay up-to-date and that this perpetuates a "fear of missing out". Other motivations include information seeking and identity formation, as well as voyeurism and cyber-stalking. For some individuals, social media can become "the single most important activity that they engage in". This can be related to Maslow's hierarchy of needs, with basic human needs often met by social media. Positive-outcome expectations and limited self-control of social media use can develop into "addictive" social media use. Further problematic use may occur when social media is used to cope with psychological stress, or a perceived inability to cope with life demands.
Cultural anthropologist Natasha Dow Schüll noted parallels to the gambling industry inherent to the design of various social media sites, with "'ludic loops' or repeated cycles of uncertainty, anticipation and feedback" potentially contributing to problematic social media use. Another factor directly facilitating the development of addiction to social media is the implicit attitude toward the IT artifact.
Mark D. Griffiths, a chartered psychologist focusing on the field of behavioural addictions, also postulated in 2014 that social networking online may fulfill basic evolutionary drives in the wake of mass urbanization worldwide. The basic psychological needs of "secure, predictable community life that evolved over millions of years" remain unchanged, leading some to find online communities to cope with the new individualized way of life in some modern societies.
According to Andreassen, empirical research indicates that addiction to social media is triggered by dispositional factors (such as personality, desires, and self-esteem), but specific socio-cultural and behavioural reinforcement factors remain to be investigated empirically.
A secondary analysis of a large English cross-sectional survey of 12,866 13- to 16-year-olds, published in The Lancet, found that the mental health outcomes of problematic social media use may be partly due to exposure to cyberbullying, as well as displacement of sleep and physical exercise, especially in girls. Through cyberbullying and discrimination, researchers have found that depression rates among teens have drastically increased. In a study of 1,464 random Twitter users, 64% were depressed, and the majority of depressed users were between 11 and 20 years old. The study associated this with a lack of confidence due to the stigma attached to depression. Of the 64% who were depressed, over 90% shared very few profile images and little other media. Moreover, the study found a strong correlation between the female gender and expression of depression, concluding that the female-to-male ratio for major depressive disorder is 2:1.
In 2018, Harvard University neurobiology research technician Trevor Haynes postulated that social media may stimulate the reward pathway in the brain. An ex-Facebook executive, Sean Parker, has also espoused this theory. Social media addiction may also have other neurobiological risk factors; understanding this addiction is still being actively studied and researched, but there is some evidence that suggests a possible link between problematic social media use and neurobiological aspects.
Six key mechanisms
There are six key mechanisms attributed to the addictive nature of social media and messaging platforms.
Endless scrolling and streaming
To attract maximum user attention, app developers distort time by affecting the 'flow' of content when scrolling. This distortion makes it difficult for users to recognise the length of time they spend on social media. Principles similar to Skinner's variable-ratio conditioning can be found in the intermittent release of rewarding reinforcement within an unpredictable stream of 'bad' content, which makes extinguishing the behavioural conditioning difficult. Behavioural conditioning is also achieved via the 'auto-play' default of streaming platforms. The more absorbed the viewer becomes, the more time distortion occurs, making it more difficult to stop watching. This is further coupled with minimal time to cancel the next stream, thereby creating a false sense of urgency followed by an absorbing relief.
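The variable-ratio pattern described above can be made concrete with a toy simulation. The following Python sketch is purely illustrative: the function, its parameters, and the mean gap of eight scrolls are assumptions for demonstration, not figures taken from any study cited here. A "rewarding" post surfaces after an unpredictable number of scrolls, which is the reinforcement schedule that is hardest to extinguish.

import random

def scroll_feed(n_scrolls: int, mean_gap: int = 8) -> list[bool]:
    # Returns one boolean per scroll: True when a "rewarding" post surfaces.
    rewards = []
    gap = random.randint(1, 2 * mean_gap - 1)  # unpredictable distance to next reward
    for _ in range(n_scrolls):
        gap -= 1
        hit = (gap == 0)
        rewards.append(hit)
        if hit:
            gap = random.randint(1, 2 * mean_gap - 1)  # reset to a new random gap
    return rewards

hits = scroll_feed(100)
print(sum(hits), "rewarding posts in 100 scrolls, at unpredictable intervals")

Because the user cannot predict which scroll will pay off, every scroll carries the possibility of a reward, which is what makes this schedule so resistant to extinction.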
Endowment effect/Exposure effect
Investing time in social media platforms generates an emotional attachment to the virtual setting the user creates. The user values this above its actual value, which is referred to as the endowment effect. The more time a person spends curating their social media presence, the more difficult it is for them to give up social media as they have placed an emotional value on this virtual existence higher than its actual value. The user is more prone to loss aversion from this endowment. As a result, they are less willing to stop their use of social media.
This is further compounded by the user's mere exposure to the respective platforms. The exposure effect suggests that repeated exposure to a distinct stimulus will condition the user into an enhanced or improved attitude toward it. With social media, repetitive exposure to the platforms improves the user's attitude towards them. The advertising industry has recognised this potential but rarely used it, owing to its belief in an inherent conflict between overexposure and the law of familiarity. The more mere exposure a user has to a social media platform, the more they like to use it. This makes the act of removing social media problematic, thereby highlighting the effect's contribution to social media's addictive nature.
Social pressures
Social media has developed expectations of immediacy, which in turn create social pressures. One study into the social pressures created by the instant messaging platform WhatsApp showed that the "Last Seen" feature contributed to the expectation of a fast response. This feature serves as an "automatic approximation of availability", denoting a time frame within which the sender expects a reply and, equally, a time frame within which the receiver must reply to avoid causing tension in the relationship.
This was further seen in the "Read Receipt" feature (in the form of ticks) on WhatsApp. The nudge of a double tick signals the reception of the message, so the sender is consciously aware that the receiver has likely seen it. The receiver equally feels pressure to respond quickly for fear of violating the sender's expectations. Since both sides know the working mechanics of the Last Seen and Read Receipt features, social pressure around the speed of response is created.
The effect of this has been linked to the addictive nature of the features as it offers a possible explanation for frequent checking for notifications. Furthermore, it has also been suggested to undermine well-being.
Personalized newsfeed
Google was the first tech firm to adopt the personalization of user content. The company does this by tracking: "search history, click history, location on Google and on other websites, language search query, choice of web browser and operating system, social connections, and time taken to make search decisions." Facebook similarly adopted this method by recording user endorsements through the "Like" and react options. Facebook's personalization mechanics are so precise that they are capable of tracking the mood of their users. The overall effect is the creation of "highly interesting, personalized websites" tailored to each user, which in turn leads to more time online and further increases the chance of the user developing addictive or problematic social media behaviour.
Social rewards and comparisons
The "Like" mechanism is another example of social media's problematic features. It is a social cue that visually represents the social validation the user either gives or receives. One study explored the quantifiable and qualitative effects the "Like" button had on social endorsement. The study asked 39 adolescents to submit their own Instagram photos alongside neutral and risky photos which were then reproduced into a testing app that controlled the number of likes the photo would initially receive prior to testing. The result found adolescents were more likely to endorse both risky and neutral photos if they had more likes. Furthermore, the study suggested that adolescents were more inclined to perceive a qualitative effect of the photos depending on the strength of peer endorsement. Whilst "quantifiable social endorsement is a relatively new phenomenon," this study is suggestive of the effects the "Like" option as a social cue has on adolescents.
Another study, looking at three modalities of Facebook use (social interaction, simulation, and search for relations) and both genders, assessed whether self-esteem contributed to Facebook use in the context of a social comparison variable. Males were found to have less of a social comparison orientation; however, their self-esteem and length of time on Facebook were negatively linked. For females, social comparison was the primary factor in the relationship between self-esteem and Facebook use: "females with low self-esteem seem to spend more time on Facebook in order to compare themselves to others and possibly increase their self-esteem since social comparison serves the function of self-enhancement and self-improvement." As a result, females tend to be at higher risk than males of developing problematic social media use. In line with the individual traits tested, the study highlights the tendency to socially compare and its relationship with self-esteem and the length of Facebook use.
Zeigarnik effect/Ovsiankina effect
The Zeigarnik effect suggests the human brain will continue to pursue an unfinished task until it reaches a satisfying closure. The endless nature of social media platforms exploits this, as they prevent the user from ever "finishing" the scrolling, thereby developing a subconscious desire to continue and "finish" the task.
The Ovsiankina effect is similar, as it suggests a tendency to pick up an unfinished or interrupted action. The "brief, fast-paced give and take" of social media subverts satisfying closure, which in turn creates a need to continue in pursuit of one.
Platforms consist of unfinished and interruptible mechanisms that engage both of these effects. While this is a mechanism of social media platforms generally, it is seen most clearly in freemium games like Candy Crush Saga.
Platform-specific risks
Studies have shown differences in motivations and behavioural patterns among social media platforms, especially regarding their problematic use. In the United Kingdom, a study of 1,479 people between 14 and 24 years old compared the psychological benefits and deficits of the five largest social media platforms: Facebook, Instagram, Snapchat, Twitter, and YouTube. It concluded that YouTube was the only platform with a net positive rating based on 14 questions related to health and well-being, followed by Twitter, Facebook, Snapchat, and finally Instagram. Instagram had the lowest rating: it was identified as having some positive effects such as self-expression, self-identity, and community, but these were ultimately outweighed by its negative effects on sleep, body image, and "fear of missing out". Negative effects of smartphone use include "phubbing", snubbing someone by checking one's smartphone in the middle of a real-life conversation. One study examined the direct and indirect associations of neuroticism, trait anxiety, and trait fear of missing out with phubbing, via state fear of missing out and problematic Instagram use. A total of 423 adolescents and emerging adults between 14 and 21 years old (53% female) participated; the findings indicated that females had significantly higher scores for phubbing, fear of missing out, problematic Instagram use, trait anxiety, and neuroticism. A further study of problematic social media use (PSMU) investigated the influences of demographics and Big Five personality dimensions on social media use motives; of demographics and use motives on social media site preferences; and of demographics, personality, preferred social media sites, and use motives on PSMU. It surveyed 1,008 undergraduate students between 17 and 32 years old; participants who preferred Instagram, Snapchat, and Facebook reported higher levels of problematic social media use.
Limiting social media use
A three-week study on limiting social media usage was conducted on 108 female and 35 male undergraduate students at the University of Pennsylvania. Prior to the study, participants were required to have Facebook, Instagram, and Snapchat accounts on an iPhone device. The study tracked the students' well-being with a questionnaire at the start of the experiment and at the end of every week, asking about "social support", "fear of missing out", "loneliness", "anxiety", "depression", "self-esteem", and "autonomy and self-acceptance". The study concluded that limiting social media usage on a mobile phone to 10 minutes per platform per day had a significant impact on well-being: loneliness and depressive symptoms declined in the group with limited social media usage, and students who began with higher levels of depression benefited most from the restriction.
Social media's effect on older generations
In many ways, older generations are affected by social media in different areas than teens and young adults. Social media plays an integral role in the daily lives of middle-aged adults, especially in regard to their careers and communication. Studies have suggested that many individuals feel smartphones are vital for their career planning and success, but the pressure to connect with family and friends via social media becomes an issue. This is reinforced by further studies suggesting that middle-aged people feel more isolated and lonely due to their use of social media, to the extent of being diagnosed with anxiety and depression when use is excessive. As with teens and young adults, comparison with others is often the reason for negative mental impacts among middle-aged individuals. Surveys suggest that a pressure to perform and feelings of inferiority from observing others' lives through social media have caused depression and anxiety among middle-aged individuals specifically. However, older generations do reap benefits from the rise of social media. Feelings of loneliness and isolation have decreased in elderly individuals who use social media to connect with others, ultimately leading to a more fulfilling and physically healthy lifestyle, thanks to the ability to communicate and stay in touch with people they would not otherwise have been able to see.
Social media's influence on education
Social media has been shown to impact many elements of society and our day-to-day lives; a more recent development relates to education. For schools with limited resources, social media has been able to enrich learning environments by creating an easily accessible pool of information for learning and education, an example being the social media presence now maintained by National Geographic and the BBC. A current theory holds that this access creates a more equitable learning environment.
However, the ease with which we are able to access this information also poses problems relating to its validity, as well as the ways in which it can be interpreted. Not all published media has transferable cultural relevance or meaning. Nevertheless, having access to publications, data and visuals allows an element of contextualising to occur, which can be relevant to history and cross-culture based teaching. An example of media in this case is YouTube.
Digital media has created an instant reward system in which people immediately get dopamine just by using their phones. In school, many teachers report that their students are unfocused and unmotivated. Students spend their free time on social media and, because of this, are experiencing weaker critical thinking skills, impatience, and a lack of perseverance. They have a harder time controlling their attention because of how quickly their phones can jump from topic to topic. They are exposed to addictive stimuli which train the brain to skim through information rather than understand it.
Having social media as a tool to assist education also creates platforms for wider communication and information sharing, for example teachers, which encourages interaction and socialisation. Social media's influence and impact on education is something that will continue to become more evident as not only technology and social media advances, but how accessible that technology becomes.
Treatment and research
Currently, no diagnosis exists for problematic social media use in either the ICD-11 or DSM-5.
There are many ways that an addiction to social media can be expressed in individuals. According to clinical psychologist Cecilie Schou Andreassen and her colleagues, there are five potential factors that indicate a person's dependence on social media:
Mood swings: a person uses social media to regulate their mood, or as a means of escaping real-world conflicts;
Relevance: social media starts to dominate a person's thoughts at the expense of other activities;
Tolerance: a person increases their time spent on social media to experience the feelings previously associated with its use;
Withdrawal: when a person cannot access social media, their sleeping or eating habits change, or signs of depression or anxiety can become present;
Conflicts in real life: when social media is used excessively, it can affect real-life relationships with family and friends.
In addition to Andreassen's factors, Griffiths further explains that someone is addicted to social media if their behaviour fulfills any of these six criteria:
Salience: social media becomes the most important part of someone's life;
Mood modification: a person uses social media as a means of escape because it makes them feel "high", "buzzed", or "numb";
Tolerance: a person gradually increases their time spent on social media to maintain that escapist feeling;
Withdrawal: unpleasant feelings or physical sensations when the person is unable to use social media or does not have access to it;
Conflict: social media use causes conflict in interpersonal dynamics, reduces the desire to participate in other activities, and becomes pervasive;
Relapse: the tendency for previously affected individuals to revert to previous patterns of excessive social media use.
He continues to add that excessive use of an activity, like social media, does not directly equate with addiction because there are other factors that could lead to someone's social media addiction including personality traits and pre-existing tendencies.
Turel and Serenko summarize three types of general models people might have that can lead to addictive social media use:
Cognitive-behavioral model – People increase their use of social media when they are in unfamiliar environments or awkward situations;
Social skill model – People pull out their phones and use social media when they prefer virtual communication as opposed to face-to-face interactions because they lack self-presentation skills;
Socio-cognitive model – People use social media because they love the feeling of others liking and commenting on their photos and tagging them in pictures; they are attracted to the positive outcomes they receive on social media.
Based on those models, Xu and Tan suggest that the transition from normal to problematic social media use occurs when a person relies on it to relieve stress, loneliness, depression, or provide continuous rewards.
Management
No established treatments exist, but drawing on research into the related entity of Internet addiction disorder, treatments have been considered, with further research needed. Screen time recommendations for children and families have been developed by the American Academy of Pediatrics.
Possible therapeutic interventions published by Andreassen include:
Self-help interventions, including application-specific timers;
Cognitive behavioural therapy; and
Organisational and schooling support.
Possible treatments for social anxiety disorder also include cognitive behavioral therapy (CBT). CBT helps people with social anxiety to improve their ways of thinking, behaving, and reacting to stressful situations. Additionally, most CBT is held in a group format to help improve social skills.
Medications have not been shown to be effective in randomized, controlled trials for the related conditions of Internet addiction disorder or gaming disorder.
Technology management
As awareness of these issues has increased, many technology and medical communities have continued to work together to develop novel solutions. Apple Inc. purchased a third-party application and incorporated it as "Screen Time", promoting it as an integral part of iOS 12. A German technology startup developed an Android phone specifically designed for efficiency and minimizing screen time. News Corp reported multiple strategies for minimizing screen time. Facebook and Instagram have announced "new tools" that they think may assist with addiction to their products. In an interview in January 2019, Nick Clegg, then head of global affairs at Facebook, claimed that Facebook committed to doing "whatever it takes to make this safer online especially for [young people]". Facebook committed to change, admitting "heavy responsibilities" to the global community, and invited regulation by governments. Recent research evidence suggests that providing adaptive assistance can be effective in compensating for weak self-regulatory skills in some users. For instance, a study involving 157 online learners demonstrated that modifying learners' web browsing environment to support self-regulation was associated with a change in behaviour, including a reduction in time spent online, particularly on websites related to entertainment activities. These findings suggest that interventions aimed at modifying web browsing environments may be effective in reducing excessive time spent on social media and other leisure-oriented websites. However, the effectiveness of the intervention was moderated by learners' individual differences (self-reported personality traits).
Government response
A survey conducted by Pew Research Center from January 8 through February 7, 2019, found that 80% of Americans go online every day. Among young adults, 48% of 18- to 29-year-olds reported going online 'almost constantly' and 46% of them reported going online 'multiple times per day.' Young adults going online 'almost constantly' increased by 9% just since 2018. On July 30, 2019, U.S. Senator Josh Hawley introduced the Social Media Addiction Reduction Technology (SMART) Act which is intended to crack down on "practices that exploit human psychology or brain physiology to substantially impede freedom of choice". It specifically prohibits features including infinite scrolling and Auto-Play.
A study conducted by Junling Gao and associates in Wuhan, China, on mental health during the COVID-19 outbreak revealed that there was a high prevalence of mental health problems including generalized anxiety and depression. This had a positive correlation to 'frequent social media exposure.' Based on these findings, the Chinese government increased mental health resources during the COVID-19 pandemic, including online courses, online consultation and hotline resources.
Parental responses
Parents play an instrumental role in protecting their children from problematic social media use. Parents' methods for monitoring, regulating, and understanding their children's social media use are referred to as parental mediation. Parental mediation strategies include active, restrictive, and co-using methods. Active mediation involves direct parent-child conversations that are intended to educate children on social media norms and safety, as well as the variety and purposes of online content. Restrictive mediation entails the implementation of rules, expectations, and limitations regarding children's social media use and interactions. Co-use is when parents jointly use social media alongside their children, and is most effective when parents are actively participating (like asking questions, making inquisitive/supportive comments) versus being passive about it. Active mediation is the most common strategy used by parents, though the key to success for any mediation strategy is consistency/reliability. When parents reinforce rules inconsistently, have no mediation strategy, or use highly restrictive strategies for monitoring their children's social media use, there is an observable increase in children's aggressive behaviours. When parents openly express that they are supportive of their child's autonomy and provide clear, consistent rules for media use, problematic usage and aggression decreases. Knowing that consistent, autonomy-supportive mediation has more positive outcomes than inconsistent, controlling mediation, parents can consciously foster more direct, involved, and genuine dialogue with their children. This can help prevent or reduce problematic social media use in children and teenagers.
Scales and measures
Problematic social media use has been a concern practically since social media's advent. Several scales have been developed and validated to help understand the issues surrounding problematic social media use. One of the first was an eight-item scale for Facebook use, the Facebook Intensity Scale (FBI), which was used multiple times and showed good reliability and validity. This scale covered only three areas of social media engagement, which left it lacking; although the FBI was a good measure, it lacked the needed component of purpose of use. The Multidimensional Facebook Intensity Scale (MFIS) investigated different dimensions of use, including overuse and reasons for use. The MFIS is composed of 13 items and has been used on several samples; it also had good reliability and validity, but it was directed toward the use of Facebook, and social media is far more than just one platform. The Social Networking Activity Intensity Scale (SNAIS) was created to look at the frequency of use of several platforms and investigated three facets of engagement with a 14-item survey. This scale looked at both the entertainment and social purposes of use, and the scale as a whole had acceptable reliability and validity. The Social Media Disorder Scale (SMD) is a nine-item scale created to investigate addiction to social media and get to the heart of the issue. It has been used in conjunction with multiple other scales, does measure social media addiction, and has been tested with good reliability and validity. This tool can be used by itself or with other measures for future research and appears to be a reliable scale. Many other scales have been created; however, there is no single scale used by all researchers.
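The scales above are repeatedly described as having "good reliability"; in psychometrics this is most often quantified with Cronbach's alpha, which compares the variance of the individual item scores with the variance of the total score. The following Python sketch is a minimal illustration under assumed data: the nine-item layout mirrors the SMD's length, but the responses and the helper function are hypothetical, not drawn from any of the studies cited here.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    # scores: one row per respondent, one column per scale item.
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses: five respondents, nine items.
responses = np.array([
    [4, 5, 4, 4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3, 2, 2, 2],
    [5, 5, 5, 4, 5, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 3, 2, 3, 3],
    [1, 2, 1, 1, 2, 1, 1, 1, 2],
])
print(round(cronbach_alpha(responses), 2))

By convention, alpha values of roughly 0.7 or above are read as acceptable internal consistency, which is the kind of evidence behind statements that a scale "has good reliability".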
History
Because technological advances are considered "progress," it becomes more challenging to admit and confront the negative effects associated with them.
Causality has not been established, despite associations between digital media use and mental health symptoms and diagnoses being observed. Nuances and caveats published by researchers are often misunderstood by the general public and misrepresented by the media. According to a review published in 2016, Internet addiction and social media addiction are not well-defined constructs. No gold standard diagnostic criteria or universally agreed upon theories on the interrelated constructs exist.
The proposed disorder is generally defined as present when "excessive use damages personal, family and/or professional life", as proposed by Griffiths. The most notable of these addictions are gambling disorder, gaming addiction, Internet addiction, sex addiction, and work addiction.
Several studies have shown that women are more likely to overuse social media, while men are more likely to overuse video games.
There have been studies linking extraversion to overuse of social media and other addictive tendencies. Along with extraversion, neuroticism has also been linked to an increased risk of developing social media addiction: people who are high in neuroticism have been shown to be more inclined to interact with others through a screen rather than face to face because they find it easier. This has led multiple experts cited by Hawi and colleagues to suggest that digital media overuse may not be a singular construct, with some calling to delineate proposed disorders based on the type of digital media used. A 2016 psychological review stated that "studies have also suggested a link between innate basic psychological needs and social network site addiction [...] Social network site users seek feedback, and they get it from hundreds of people—instantly. Alternatively, it could be argued that the platforms are designed to get users 'hooked'."
Implications
Research shows that increased social media use and exposure to social media platforms can lead to negative outcomes and bullying over time. While social media's main intention is to share information and communicate with friends and family, there is more evidence of negative factors than positive ones. Not only can social media expose people to bullying, it can also increase users' chances of depression and self-harm. Research suggests that those aged 13 to 15 struggle the most with these issues, but they can be seen in college students as well. According to the Centers for Disease Control and Prevention's 2019 Youth Risk Behavior Surveillance System, approximately 15% of high school students were electronically bullied in the 12 months prior to the survey. Bullying over social media has contributed to an immense rise in suicide rates within the last decade.
Molly Russell case
In November 2017, a fourteen-year-old British girl from Harrow, London, named Molly Russell, took her own life after viewing negative, graphic, and descriptive content, primarily on social media platforms such as Instagram and Pinterest. This news shocked her parents, who said she had never shown any previous signs of struggle and was doing very well in school. It was revealed in court that in the six months prior to Molly's death, she had engaged with a total of 16,300 pieces of negative content on Instagram on topics such as self-harm, depression, and suicide; of these, around twelve per day related to either self-harm or depression. Because both Instagram and Pinterest have algorithms whereby similar content appears more often once it has been interacted with, Molly was surrounded by this content daily. It was also noted that throughout Molly's experience on social media, there were never any warnings about the material she viewed on these platforms. Not only was this the first time an internet company has been legally blamed for the death of a teenager, it was also the first time senior executives were required to give evidence in an official court of law. Technology companies such as Pinterest and Meta (which owns Instagram) were considered to be at fault for the lack of policies and regulations in place prior to Molly's death. Between 2009 and 2019 suicide rates rose by 146%, which is alarming given the active role social media plays in teenagers' behaviour. Merry Varney, the lawyer who represented Molly's family, explained that the findings used in court "captured all of the elements of why this material is so harmful." Dr. Navin Venugopal was the child psychiatrist asked to speak on the case when Molly's cause of death was being determined. Dr. Venugopal disclosed that, after reviewing Molly's content on both Pinterest and Instagram, he judged that Molly had been highly at risk; he called the material "disturbing and distressing" and said he was unable to sleep well for weeks. The coroner in the case, Andrew Walker, concluded that Molly's death was "an act of self harm suffering from depression and the negative effects of online content". Molly's case has attracted attention not only across the UK but in the U.S. as well. It raises the question of whether policies and regulations will be put in place or changed to protect the safety of children on the Internet. Child safety campaigners hope that creating regulations will help shift the fundamentals of social media platforms such as Instagram and Pinterest.
Laws, policies, and regulations to minimize harm
Molly Russell's case sparked discussion in both the UK and the U.S. on how to protect individuals from harmful online content. In the UK, the Online Safety Bill was officially introduced into Parliament in March 2022; the bill covers a range of potentially dangerous content, such as revenge porn, grooming, hate speech, or anything related to suicide. Overall, the bill will not only protect children from harmful online content but also set out how platforms must deal with content that may be illegal. It also covers verification roles and advertising, all of which must be addressed on a social media platform's terms and conditions page. If social media platforms fail to comply with these new regulations, they face a $7,500 fine for each offense. In the U.S., recommendations have included designating an independent agency to implement a system of regulations similar to the UK's Online Safety Bill; another idea was a dedicated rule-making agency whose authority is strictly and solely focused on digital regulation, available 24/7. California has already passed the Age-Appropriate Design Code Act in August 2022, which aims to protect children under the age of eighteen, particularly regarding privacy on the Internet. The overall hope of these new laws, policies, and regulations is, first, to ensure that a case such as Molly's never happens again and, second, to protect individuals from harmful online content that can lead to mental health problems such as suicide, depression, and self-harm.
In 2022, a case was successfully litigated that implicated a social media platform in the suicide of a Canadian teenage girl, Amanda Todd, who died by hanging. This was the first time that any social media platform was held liable for a user's actions.
See also
References
Further reading
National Institute of Mental Health, Social Anxiety Disorder: More Than Just Shyness, U.S., retrieved from the National Institute of Mental Health (NIMH) website.
Walton, A. G. (2018, November 18).
External links
Digital media use and mental health
Social media
Social influence
Behavioral addiction
Hartman Personality Profile
The Color Code Personality Profile, also known as The Color Code or The People Code, is a personality test designed by Taylor Hartman. Despite being widely used in business and other fields, it is a pseudoscience.
Classifying the motive types
The Hartman Personality Profile is based on the notion that all people possess one of four driving "core motives". The Color Code identifies four types of personality by color: Red (motivated by power); Blue (motivated by intimacy); White (motivated by peace); and Yellow (motivated by fun). Although demographic groups vary, Hartman suggests that Reds comprise 25% of the population; Blues 35%; Whites 20%; and Yellows 20%. There is no scientific proof to support these claims.
Criticism
The Hartman Institute and its many subsidiaries offer "coaches" to businesses seeking to improve interpersonal relations, for career counselling, or for collecting data for use in hiring practices. The test informally passes most psychometric measures of reliability and face validity, but this may be attributed to its open predictability; the criteria are likely self-fulfilling to an extent. Although internal and small-sample corporate-sponsored data have been reported, no peer-reviewed studies of the psychometric value of the test exist.
See also
Table of similar systems of comparison of temperaments
References
Personality typologies
Biological specificity
Biological specificity is the tendency of a characteristic such as a behavior or a biochemical variation to occur in a particular species.
Biochemist Linus Pauling stated that "Biological specificity is the set of characteristics of living organisms or constituents of living organisms of being special or doing something special. Each animal or plant species is special. It differs in some way from all other species...biological specificity is the major problem about understanding life."
Biological specificity within Homo sapiens
Homo sapiens has many characteristics that show biological specificity in the form of behaviors and morphological traits.
Morphologically, humans have an enlarged cranial capacity and more gracile features in comparison to other hominins. The reduction of dentition is a feature that allows for the advantage of adaptability in diet and survival. As a species, humans are culture-dependent, and much of human survival relies on culture and social relationships. With the evolutionary reduction of the pelvis and enlargement of cranial capacity, events like childbirth depend on a safe, social setting: a birthing mother will seek out others when going into labor. This is a uniquely human experience, as other animals are able to give birth on their own and often choose to isolate themselves to do so to protect their young.
An example of a genetic adaptation unique to humans is the gene apolipoprotein E (APOE) on chromosome 19. While chimpanzees also have the APOE gene, the study "The apolipoprotein E (APOE) gene appears functionally monomorphic in chimpanzees" shows that the diversity of the APOE gene in humans is unique. The polymorphism in APOE occurs only in humans, as they carry the alleles APOE2, APOE3, and APOE4; APOE4, which allows humans to break down fatty protein and eat more protein than their ancestors, is also a genomic risk factor for Alzheimer's disease.
There are many behavioral characteristics that are specific to Homo sapiens in addition to childbirth. Specific and elaborate tool creation and use, as well as language, are other areas. Humans do not simply communicate; language is essential to their survival and complex culture. This culture must be learned, is variable, and is highly malleable to fit distinct social parameters. Humans do not simply communicate with a code or general understanding, but adhere to social standards, hierarchies, technologies, and complex systems of regulations, and must maintain many dimensions of relationships in order to survive. This complexity of language and the dependence on culture are uniquely human.
Intraspecific behaviors and variations exist within Homo sapiens, which adds to the complexity of culture and language. Intraspecific variations are differences in behavior or biology within a species. These variations and the complexity within our society lead to social constructs such as race, gender, and roles. These add to power dynamics and hierarchies within the already multifaceted society.
Subtopics
Characteristics may further be described as being interspecific, intraspecific, and conspecific.
Interspecific
Interspecificity (literally between/among species), or being interspecific, describes issues between individuals of separate species. These may include:
Interspecies communication, communication between different species of animals, plants, fungi or bacteria
Interspecific competition, when individuals of different species compete for the same resource in an ecosystem
Interspecific feeding, when adults of one species feed the young of another species
Interspecific hybridization, when two species within the same genus generate offspring. Offspring may develop into adults but may be sterile.
Interspecific interaction, the effects organisms in a community have on one another
Interspecific pregnancy, pregnancy involving an embryo or fetus belonging to another species than the carrier
Intraspecific
Intraspecificity (literally within species), or being intraspecific, describes behaviors, biochemical variations and other issues within individuals of a single species. These may include:
Intraspecific antagonism, when individuals of the same species are hostile to one another
Intraspecific competition, when individuals or groups of individuals from the same species compete for the same resource in an ecosystem
Intraspecific hybridization, hybridization between sub-species within a species.
Intraspecific mimicry
Conspecific
Two or more organisms, populations, or taxa are conspecific if they belong to the same species. Where different species can interbreed and their gametes compete, the conspecific gametes take precedence over heterospecific gametes. This is known as conspecific sperm precedence, or conspecific pollen precedence in plants.
Heterospecific
The antonym of conspecificity is the term heterospecificity: two individuals are heterospecific if they are considered to belong to different biological species.
Related concepts
Congeners are organisms within the same genus.
See also
Evolutionary biology
References
External links
Evolutionary biology concepts
Cross-cultural psychology
Cross-cultural psychology is the scientific study of human behavior and mental processes, including both their variability and invariance, under diverse cultural conditions. By expanding research methodologies to recognize cultural variance in behavior, language, and meaning, it seeks to extend and develop psychology. Since psychology as an academic discipline was developed largely in North America and Europe, some psychologists became concerned that constructs and phenomena accepted as universal were not as invariant as previously assumed, especially since many attempts to replicate notable experiments in other cultures had varying success. Since there are questions as to whether theories dealing with central themes such as affect, cognition, and conceptions of the self, and issues such as psychopathology, anxiety, and depression, may lack external validity when "exported" to other cultural contexts, cross-cultural psychology re-examines them using methodologies designed to factor in cultural differences so as to account for cultural variance. Some critics have pointed to methodological flaws in cross-cultural psychological research and claim that serious shortcomings in the theoretical and methodological bases used impede, rather than help, the scientific search for universal principles in psychology. Cross-cultural psychologists are turning more to the study of how differences (variance) occur, rather than searching for universals in the style of physics or chemistry.
While cross-cultural psychology represented only a minor area of psychology prior to WWII, it began to grow in importance during the 1960s. In 1971, the interdisciplinary Society for Cross-Cultural Research (SCCR) was founded, and in 1972 the International Association for Cross-Cultural Psychology (IACCP) was established. Since then, this branch of psychology has continued to expand, as incorporating culture and diversity into studies of numerous psychological phenomena has become increasingly popular.
Cross-cultural psychology is differentiated from (but influences and is influenced by), cultural psychology, which refers to the branch of psychology that holds that human behavior is strongly influenced by cultural differences, meaning that psychological phenomena can only be compared with each other across cultures to a limited extent. In contrast, cross-cultural psychology includes a search for possible universals in behavior and mental processes. Cross-cultural psychology "can be thought of as a type [of] research methodology, rather than an entirely separate field within psychology". In addition, cross-cultural psychology can be distinguished from international psychology, with the latter centering around the global expansion of psychology, especially during recent decades. Nevertheless, cross-cultural psychology, cultural psychology, and international psychology are united by a common concern for expanding psychology into a universal discipline capable of understanding psychological phenomena across cultures and in a global context.
Definitions and early work
Two definitions of the field include: "the scientific study of human behavior and its transmission, taking into account the ways in which behaviors are shaped and influenced by social and cultural forces" and "the empirical study of members of various cultural groups who have had different experiences that lead to predictable and significant differences in behavior". Culture, as a whole, may also be defined as "the shared way of life of a group of people." In contrast to sociologists, most cross-cultural psychologists do not draw a clear dividing line between social structure and cultural belief systems.
Early work in cross-cultural psychology was suggested in Lazarus and Steinthal's journal Zeitschrift für Völkerpsychologie und Sprachwissenschaft [Journal of Folk Psychology and Language Science], which began to be published in 1860. More empirically oriented research was subsequently conducted by William H. R. Rivers (1864–1922), who attempted to measure the intelligence and sensory acuity of indigenous people residing in the Torres Straits area, located between Australia and New Guinea. The father of modern psychology, Wilhelm Wundt, published ten volumes on Völkerpsychologie (a kind of historically oriented cultural psychology), but these volumes have had only limited influence in the English-speaking world. Wundt's student Franz Boas, an anthropologist at Columbia University, challenged several of his students, such as Ruth Benedict and Margaret Mead, to study psychological phenomena in nonwestern cultures such as Japan, Samoa, and New Guinea. They emphasized the enormous cultural variability of many psychological phenomena, thereby challenging psychologists to prove the cross-cultural validity of their favorite theories.
Etic v. emic perspectives
Other fields of psychology focus on how personal relationships impact human behavior; however, they fail to take into account the significant impact that culture may have on human behavior. The Malinowskian dictum focuses on the idea that it is necessary to understand the culture of a society in its own terms, instead of the common search for universal laws that apply to all human behavior. Cross-cultural psychologists have used the emic/etic distinction for some time. The emic approach studies behavior from within the culture and is mostly based on one culture; the etic approach studies behavior from outside the culture system and is based on many cultures. Currently, many psychologists conducting cross-cultural research are said to use what is called a pseudoetic approach. This pseudoetic approach is actually an emic approach developed in a Western culture while being designed to work as an etic approach. Irvine and Carroll brought an intelligence test to another culture without checking whether the test was measuring what it was intended to measure. This can be considered pseudoetic work because various cultures have their own concepts of intelligence.
Research and applications
Self-concept and biculturalism
Research on how the self is construed (whether, for example, in individualistic or collectivistic terms) has been a major topic in cross-cultural psychology for decades. Some psychologists have employed cultural priming to understand how people living with multiple cultures interpret events. For example, Hung and his associates displayed different sets of culture-related images to study participants, such as the U.S. White House and a Chinese temple, and then showed a clip of an individual fish swimming ahead of a group of fish. When primed with the Chinese images, Hong Kong participants were more likely to reason in a collectivistic way. In contrast, participants who viewed the Western images were more likely to give the reverse response and focus more on the individual fish. People from bicultural societies, when primed with different cultural icons, are inclined to make culturally activated attributions. The pronoun-circling task is another cultural priming task, in which participants consciously circle pronouns such as "we", "us", "I", and "me" while reading a paragraph.
Geert Hofstede and the dimensions of culture
The Dutch psychologist Geert Hofstede revolutionized the field by conducting worldwide research on values for IBM in the 1970s. Hofstede's cultural dimensions theory is not only the springboard for one of the most active research traditions in cross-cultural psychology, but is also cited extensively in the management literature. His initial work found that cultures differ on four dimensions: power distance, uncertainty avoidance, masculinity-femininity, and individualism-collectivism. Later, after The Chinese Culture Connection extended his research using indigenous Chinese materials, he added a fifth dimension - long-term orientation (originally called Confucian dynamism) - which can be found in other cultures besides China. Still later, after work with Michael Minkov using data from the World Values Survey, he added a sixth dimension - indulgence versus restraint.
Despite its popularity, Hofstede's work has been seriously questioned by McSweeney (2002). Furthermore, Berry et al. challenge some of the work of Hofstede, proposing alternative measures to assess individualism and collectivism. Indeed, the individualism-collectivism debate has itself proven to be problematic, with Sinha and Tripathi (1994) arguing that strong individualistic and collectivistic orientations may coexist in the same culture (they discuss India in this connection). This has proven to be a problem with many of the various linear dimensions that are, by nature, dichotomous. Cultures are much more complex and contextually based than is represented in these inflexible dimensional representations.
Counseling and clinical psychology
Cross-cultural clinical psychologists (e.g., Jefferson Fish) and counseling psychologists (e.g., Lawrence H. Gerstein, Roy Moodley, and Paul Pedersen) have applied principles of cross-cultural psychology to psychotherapy and counseling. Additionally, the book by Uwe P. Gielen, Juris G. Draguns, and Jefferson M. Fish titled "Principles of Multicultural Counseling and Therapy" contains numerous chapters on the application of culture in counseling. Joan D. Koss-Chioino, Louise Baca, and Luis A. Varrga are all listed in this book (in the chapter titled "Group Therapy with Mexican American and Mexican Adolescents: Focus on Culture") as working with Latinos in a form of therapy known to be culturally sensitive. For example, in their therapy they create a "fourth life space" that allows children and adolescents to reflect on difficulties they may be facing. Furthermore, the book states that various countries are now starting to incorporate multicultural interventions into their counseling practices. The countries listed included Malaysia, Kuwait, China, Israel, Australia, and Serbia. Lastly, in the chapter titled "Multiculturalism and School Counseling: Creating Relevant Comprehensive Guidance and Counseling Programs," Hardin L. K. Coleman and Jennifer J. Lindwall propose a way to incorporate cultural components into school counseling programs. Specifically, they emphasize the necessity of the counselor's having multicultural competence and the ability to apply this knowledge when working with persons of varying ethnic backgrounds. In addition, several recent volumes have reviewed the state of counseling psychology and psychotherapy around the world while discussing cross-cultural similarities and differences in counseling practices.
Five-factor model of personality
Can the traits defined by American psychologists be generalized across people from different countries? In response to this question, cross-cultural psychologists have often questioned how to compare traits across cultures. To examine this question, lexical studies measuring personality factors using trait adjectives from various languages have been conducted. Over time these studies have concluded that the factors of Extraversion, Agreeableness, and Conscientiousness almost always appear, yet Neuroticism and Openness to Experience sometimes do not. Therefore, it is difficult to determine whether these traits are nonexistent in certain cultures or whether different sets of adjectives must be used to measure them. Nevertheless, many researchers believe that the FFM is a universal structure that can be used within cross-cultural research and research studies in general. However, other cultures may include significant traits that go beyond the traits included in the FFM.
Emotion judgments
Researchers have often wondered whether people across various cultures interpret emotions in similar ways. In the field of cross-cultural psychology, Paul Ekman has conducted research examining judgments in facial expression cross-culturally. One of his studies included participants from ten different cultures who were required to indicate emotions, and the intensity of each emotion, based upon pictures of persons expressing various emotions. The results of the study showed that there was agreement across cultures as to which emotions were the most and second most intense. These findings provide support for the view that there are at least some universal facial expressions of emotion. Nevertheless, it is also important to note that in the study there were differences in the way in which participants across cultures rated emotion intensity.
While there are said to be universally recognized facial expressions, Yueqin Huang and his colleagues performed research that looked at how a culture may apply different labels to certain expressions of emotions. Huang et al. (2001) in particular compared Chinese and American perceptions of facial emotion expressions. They found that the Chinese participants were not as skilled as the American participants at perceiving the universal emotional expressions of people coming from a culture different from their own. These findings show support for the notion that cross-cultural differences exist in emotion judgments. Huang et al. (2001) suggest that Asians may use different cues on the face to interpret emotional expressions. Also, because every culture has different values and norms, it is important to analyze those differences in order to gain a better understanding as to why certain emotions are interpreted differently or not at all. For example, as Huang et al. (2001) point out, it is common for 'negative emotions' not to be welcomed in many Asian cultures. This information may be critical in recognizing the cross-cultural difference between Asian and American judgments of the universal emotional expressions.
Differences in subjective well-being
The term "subjective well-being" is frequently used throughout psychology research and is made up of three main parts: 1) life satisfaction (a cognitive evaluation of one's overall life), 2) the presence of positive emotional experiences, and 3) the absence of negative emotional experiences. Across cultures people may have differing opinions on the "ideal" level of subjective well-being. For example, Brazilians have been shown in studies to find positive emotions very desirable whereas the Chinese did not score as highly on the desire for positive emotions. Consequently, when comparing subjective well-being cross-culturally it appears important to take into account how the individuals in one culture may rate one aspect differently from individuals from another culture. It is difficult to identify a universal indicator as to how much subjective well-being individuals in different societies experience over a period of time. One important topic is whether individuals from individualistic or collectivistic countries are happier and rate higher on subjective well-being. Diener, Diener, and Diener, 1995, noted that individualist cultural members are found to be happier than collectivist cultural members. It is also important to note that happier nations may not always be the wealthier nations. While there are strong associations between cultural average income and subjective well-being, the "richer=happier" argument is still a topic of hot debate. One factor that may contribute to this debate is that nations that are economically stable may also contain various non-materialistic features such as a more stable democratic government, better enforcement of human rights, etc. that could overall contribute to a higher subjective well-being. Therefore, it has yet to be determined whether a higher level of subjective well-being is linked to material affluence or whether it is shaped by other features that wealthy societies often possess and that may serve as intermediate links between affluence and well-being.
How different cultures resolve conflict
Grossmann et al. use evidence to show how cultures differ in the ways they approach social conflict and how culture continues to be an important factor in human development even into old age. Specifically, the paper examines aging-related differences in wise reasoning among the American and Japanese cultures. Participants' responses revealed that wisdom (e.g., recognition of multiple perspectives, one's limits of personal knowledge, and the importance of compromise) increased with age among Americans, but older age was not directly associated with wiser responses among the Japanese participants. Furthermore, younger and middle-aged Japanese participants illustrated higher scores than Americans for resolving group conflicts. Grossmann et al. found that Americans tend to emphasize individuality and solve conflict in a direct manner, while the Japanese place an emphasis on social cohesion and settle conflict more indirectly. The Japanese are motivated to maintain interpersonal harmony and avoid conflict, resolve conflict better, and are wiser earlier in their lives. Americans experience conflict gradually, which results in continuous learning about how to solve conflict and increased wisdom in their later years. This study supported the concept that varying cultures use different methods to resolve conflict.
Differences in conflict resolution across cultures can also be seen when a third party becomes involved and provides a solution to the conflict. Asian and American cultural practices play a role in the way the members of the two cultures handle conflict. A technique used by Korean-Americans may reflect Confucian values, while the American technique is consistent with individualistic and capitalistic views. Americans will have more structure in their processes, which provides standards for similar situations in the future. Contrary to American ways, Korean-Americans will have less structure in resolving their conflicts, but more flexibility while solving a problem. For Korean-Americans, the correct way may not always be set, but can usually be narrowed down to a few possible solutions.
Gender-role and gender-identity differences and similarities
Williams and Best (1990) have looked at different societies in terms of prevailing gender stereotypes, gender-linked self-perceptions, and gender roles. The authors found both universal similarities as well as differences between and within more than 30 nations. The Handbook of Cross-Cultural Psychology also contains a review on the topic of sex, gender, and culture. One of the main findings overall was that under the topic of sex and gender, pan-cultural similarities were shown to be greater than cultural differences. Furthermore, across cultures the way in which men and women relate to one another in social groups has been shown to be fairly similar. Further calls have been made to examine theories of gender development as well as how culture influences the behavior of both males and females.
Cross-cultural human development
This topic represents a specialized area of cross-cultural psychology and can be viewed as the study of cultural similarities and differences in developmental processes and their outcomes as expressed by behavior and mental processes in individuals and groups. As presented by Bornstein (2010), Gielen and Roopnarine (2016) and Gardiner and Kosmitzki (2010), researchers in this area have examined various topics and domains of psychology (e.g., theories and methodology, socialization, families, gender roles and gender differences, the effects of immigration on identity), human development across the human life cycle in various parts of the world, children in difficult circumstances such as street children and war-traumatized adolescents, and global comparisons between, and influences on, children and adults. Because only 3.4% of the world's children live in the United States, such research is urgently needed to correct the ethnocentric presentations that can be found in many American textbooks (Gielen, 2016).
Berry et al. refer to evidence that a number of different dimensions have been found in cross-cultural comparisons of childrearing practices, including differences on the dimensions of obedience training, responsibility training, nurturance training (the degree to which a sibling will care for other siblings or for older people), achievement training, self-reliance, and autonomy. Moreover, the Handbook of Cross-Cultural Psychology Volume 2 contains an extensive chapter (The Cultural Structuring of Child Development by Charles M. Super and Sara Harkness) on cross-cultural influences on child development. They stated that three recurring topics consistently came up during their review: "how best to conceptualize variability within and across cultural settings, to characterize activities of the child's mind, and to improve methodological research in culture and development."
Behavior and motivation
Behavior and motivation are broad concepts that have sparked much debate on the etic versus emic perspective. One of the current major motivational theories is self-determination theory, which has been claimed to be an etic universal; in other words, the validity of this theory can be empirically identified across cultures. Most studies supporting this claim of cross-cultural generalizability have focused on comparing industrialized countries in high- and middle-income regions, and so could not establish the global validity of this theory. Recently, some studies have attempted to bridge this gap. For example, a study compared self-determination theory models predicting physical activity in diabetes patients across diverse disadvantaged populations (i.e., immigrant populations in urban Sweden and South Africa, and a population in rural Uganda) using multigroup structural equation modelling. The motivational process models did not correspond entirely, and the study could not provide sufficient evidence for the etic validity of self-determination theory. The authors concluded that more research is needed, which is in alignment with the lack of evidence in many other psychosocial domains for which research has been limited to western, industrialized, and high-income countries.
Future developments
The rise of cross-cultural psychology reflects a general process of globalization in the social sciences that seeks to purify specific areas of research that have western biases. In this way, cross-cultural psychology (together with international psychology) aims to make psychology less ethnocentric than it has been in the past. Cross-cultural psychology is now taught at numerous universities around the world, both as a specific content area and as a methodological approach designed to broaden the field of psychology.
See also
Cross-cultural competence
Cross-cultural psychiatry
Cross-cultural leadership
Cross-cultural studies
Cultural agility
Cultural neuroscience
Cultural psychology
International Association for Cross-Cultural Psychology
International psychology
Journal of Cross-Cultural Psychology
Social Axioms Survey
World Values Survey
Notes
References
Berry, J. W., Poortinga, Y. H., Breugelmans, S. M., Chasiotis, A. & Sam, D. L. (2011). Cross-cultural psychology: Research and applications (3rd ed.). New York, NY: Cambridge University Press.
Further reading
Journal of Cross-Cultural Psychology (JCCP)
Cross-Cultural Research (SCCR)
Robert T. Carter (Ed.). (2005). Handbook of Racial-Cultural Psychology and Counseling. Vols. 1–2. New Jersey: John Wiley & Sons. Volume 1: Theory and Research; Volume 2: Training and Practice.
Pandey, J., Sinha, D., & Bhawuk, D. P. S. (1996). Asian contributions to cross-cultural psychology. London, UK: Sage.
Shiraev, E., & Levy, D. (2013). Cross-cultural psychology: Critical thinking and contemporary applications (5th ed.). Boston: Allyn & Bacon.
Smith, P. K., Fischer, R., Vignoles, V. L., & Bond, M. H. (2013). Understanding social psychology across cultures: Engaging with others in a changing world (2nd ed.). Thousand Oaks, CA: Sage.
Singh, R. & Dutta, S. (2010). "Race" and culture: Tools, techniques and trainings. A manual for professionals. London: Karnac Systemic Thinking and Practice Series.
Major reviews of literature in cross-cultural psychology, from:
Five chapters in the Lindzey and Aronson Handbook of Social Psychology: Whiting 1968 on the methodology of one kind of cross-cultural research, Tajfel 1969 on perception, DeVos and Hippler 1969 on cultural psychology, Inkeles and Levinson 1969 on national character, and Etzioni 1969 on international relations
Child's (1968) review of the culture and personality area in the Borgatta and Lambert Handbook of Personality Theory and Research
Honigmann's (1967) book on personality and culture
External links
Culture readings online
Behavioralism
Behavioralism is an approach in the philosophy of science, describing the scope of the fields now collectively called the behavioral sciences; this approach dominated the field until the late 20th century. Behavioralism attempts to explain human behavior from an unbiased, neutral point of view, focusing only on what can be verified by direct observation, preferably using statistical and quantitative methods. In doing so, it rejects attempts to study internal human phenomena such as thoughts, subjective experiences, or human well-being. The rejection of this paradigm as overly restrictive led to the rise of cognitive approaches in the late 20th and early 21st centuries.
Origins
From 1942 through the 1970s, behavioralism gained support. It was probably Dwight Waldo who coined the term, in a book called "Political Science in the United States" released in 1956; it was David Easton, however, who popularized it. Behavioralism was the site of discussion between traditionalist and newly emerging approaches to political science. The origins of behavioralism are often attributed to the work of University of Chicago professor Charles Merriam, who in the 1920s and 1930s emphasized the importance of examining the political behavior of individuals and groups rather than only considering how they abide by legal or formal rules.
As a political approach
Prior to the "behavioralist revolution", political science being a science at all was disputed. Critics saw the study of politics as being primarily qualitative and normative, and claimed that it lacked a scientific method necessary to be deemed a science. Behavioralists used strict methodology and empirical research to validate their study as a social science. The behavioralist approach was innovative because it changed the attitude of the purpose of inquiry. It moved toward research that was supported by verifiable facts. In the period of 1954-63, Gabriel Almond spread behavioralism to comparative politics by creation of a committee in SSRC. During its rise in popularity in the 1960s and '70s, behavioralism challenged the realist and liberal approaches, which the behavioralists called "traditionalism", and other studies of political behavior that was not based on fact.
To understand political behavior, behavioralism uses the following methods: sampling, interviewing, scoring and scaling, and statistical analysis.
Behavioralism examines how individuals actually behave in group positions, rather than how they should behave. For example, a study of the United States Congress might include a consideration of how members of Congress behave in their positions. The subject of interest is how Congress becomes an 'arena of actions' and the surrounding formal and informal spheres of power.
Meaning of the term
David Easton was the first to differentiate behavioralism from behaviorism in the 1950s (behaviorism is the term mostly associated with psychology). In the early 1940s, behaviorism itself was referred to as a behavioral science and later referred to as behaviorism. However, Easton sought to differentiate between the two disciplines:
Behavioralism was not a clearly defined movement for those who were thought to be behavioralists. It was more clearly definable by those who were opposed to it, because they were describing it in terms of the things within the newer trends that they found objectionable. So some would define behavioralism as an attempt to apply the methods of natural sciences to human behavior. Others would define it as an excessive emphasis upon quantification. Others as individualistic reductionism. From the inside, the practitioners were of different minds as what it was that constituted behavioralism. [...] And few of us were in agreement.
With this in mind, behavioralism resisted a single definition. Dwight Waldo emphasized that behavioralism itself is unclear, calling it "complicated" and "obscure." Easton agreed, stating, "every man puts his own emphasis and thereby becomes his own behavioralist" and attempts to completely define behavioralism are fruitless. From the beginning, behavioralism was a political, not a scientific concept. Moreover, since behavioralism is not a research tradition, but a political movement, definitions of behavioralism follow what behavioralists wanted. Therefore, most introductions to the subject emphasize value-free research. This is evidenced by Easton's eight "intellectual foundation stones" of behavioralism:
Regularities – The generalization and explanation of regularities.
Commitment to Verification – The ability to verify one's generalizations.
Techniques – An experimental attitude toward techniques.
Quantification – Express results as numbers where possible or meaningful.
Values – Keeping ethical assessment and empirical explanations distinct.
Systemization – Considering the importance of theory in research.
Pure Science – Deferring to pure science rather than applied science.
Integration – Integrating social sciences and value.
Objectivity and value-neutrality
According to David Easton, behavioralism sought to be "analytic, not substantive, general rather than particular, and explanatory rather than ethical." In this, the theory seeks to evaluate political behavior without "introducing any ethical evaluations." Rodger Beehler cites this as "their insistence on distinguishing between facts and values."
Criticism
The approach has come under fire from both conservatives and radicals for the purported value-neutrality. Conservatives see the distinction between values and facts as a way of undermining the possibility of political philosophy. Neal Riemer believes behavioralism dismisses "the task of ethical recommendation" because behavioralists believe "truth or falsity of values (democracy, equality, and freedom, etc.) cannot be established scientifically and are beyond the scope of legitimate inquiry."
Christian Bay believed behavioralism was a pseudopolitical science and that it did not represent "genuine" political research. Bay objected to empirical consideration taking precedence over normative and moral examination of politics.
Behavioralism initially represented a movement away from "naive empiricism", but as an approach has been criticized for "naive scientism". Additionally, radical critics believe that the separation of fact from value makes the empirical study of politics impossible.
Crick's critique
British scholar Bernard Crick, in The American Science of Politics (1959), attacked the behavioral approach to politics, which was dominant in the United States but little known in Britain. He identified and rejected six basic premises, and in each case argued the traditional approach was superior to behavioralism:
research can discover uniformities in human behavior,
these uniformities could be confirmed by empirical tests and measurements,
quantitative data was of the highest quality, and should be analyzed statistically,
political science should be empirical and predictive, downplaying the philosophical and historical dimensions,
value-free research was the ideal, and
social scientists should search for a macro theory covering all the social sciences, as opposed to applied issues of practical reform.
See also
Behaviorism
Postpositivism
Post-behavioralism
Notes
References
External links
Brooks, David (2008-10-27). "The Behavioral Revolution". The New York Times.
Comparative politics
Subfields of political science
Political theories
FOP
Fop is a pejorative term for a foolish man.
FOP or fop may also refer to:
Science and technology
Feature-oriented positioning, in scanning microscopy
Feature-oriented programming, in computer science, software product lines
Fibrodysplasia ossificans progressiva, a connective tissue disease which can result in muscles fusing into bone
Formatting Objects Processor, a Java application
fop herbicides, the aryloxyphenoxypropionate subtype of ACCase inhibitors
Other uses
The Fairly OddParents, an American television series
Fellowship of Presbyterians, now The Fellowship Community, a Christian movement in the United States
Festival of Praise, a music festival in Singapore
Flowery orange pekoe, a grade of tea leaf
Fop Smit (1777–1866), Dutch naval architect and shipbuilder
Fraternal Order of Police, an American police organization
Fred. Olsen Production, a Norwegian gas and oil company
Freedom of panorama, a concept in copyright law
Morris Army Airfield, in Georgia, United States
See also
FOP grade tea
Federation of Planets, from Star Trek
Psychological behaviorism
Psychological behaviorism is a form of behaviorism—a major theory within psychology which holds that human behaviors are generally learned—proposed by Arthur W. Staats. The theory is constructed to advance from basic animal learning principles to deal with all types of human behavior, including personality, culture, and human evolution. Behaviorism was first developed by John B. Watson (1912), who coined the term "behaviorism", and then by B. F. Skinner, who developed what is known as "radical behaviorism". Watson and Skinner rejected the idea that psychological data could be obtained through introspection or by an attempt to describe consciousness; all psychological data, in their view, were to be derived from the observation of outward behavior. The strategy of these behaviorists was that animal learning principles should be used to explain human behavior. Thus, their behaviorisms were based upon research with animals.
Staats' program takes the animal learning principles, in the form in which he presents them, to be basic but, on the basis of his study of human behaviors, adds human learning principles. These principles are unique to humans, not evident in any other species. Holth has also critically reviewed psychological behaviorism as a "path to the grand reunification of psychology and behavior analysis".
Basic principles
The preceding behaviorisms of Ivan P. Pavlov, Edward L. Thorndike, John B. Watson, B. F. Skinner, and Clark L. Hull studied the basic principles of conditioning with animals; these behaviorists were animal researchers. Their basic approach was that those basic animal principles were to be applied to the explanation of human behavior. They did not have programs for studying human behavior broadly and deeply.
Staats was the first to base his behaviorism on research with human subjects. His study ranged from research on basic principles to research and theory analysis of a wide variety of real-life human behaviors. That is why Warren Tryon (2004) suggested that Staats change the name of his approach to psychological behaviorism: Staats' behaviorism is based upon human research and unifies aspects of traditional study with behaviorism.
That includes his study of the basic principles. The original behaviorists treated the two types of conditioning in different ways. The most generally used treatment, that of B. F. Skinner, considered classical conditioning and operant conditioning to be separate and independent principles. In classical conditioning, if a piece of food is provided to a dog shortly after a buzzer is sounded, a number of times, the buzzer will come to elicit salivation, part of an emotional response. In operant conditioning, if a piece of food is presented to a dog after the dog makes a particular motor response, the dog will come to make that motor response more frequently.
For Staats, these two types of conditioning are not separate, they interact. A piece of food elicits an emotional response. A piece of food presented after the dog has made a motor response will have the effect of strengthening that motor response so that it occurs more frequently in the future.
Staats sees the piece of food as having two functions: one function is that of eliciting an emotional response, the other is that of strengthening the motor behavior that precedes the presentation of food. So classical conditioning and operant conditioning are very much related.
Positive emotion stimuli will serve as positive reinforcers, and negative emotion stimuli will serve as punishers. As a consequence of humans' inevitable learning, positive emotion stimuli will also serve as positive discriminative stimuli (incentives), and negative emotion stimuli as negative discriminative stimuli (disincentives). Therefore, emotion stimuli have reinforcing value and discriminative stimulus value. Unlike in Skinner's basic principles, emotion and classical conditioning are central causes of behavior.
Principles of human learning
Unlike the other behaviorisms, Staats' considers human learning principles. He states that humans learn complex repertoires of behavior like language, values, and athletic skills—that is, cognitive, emotional, and sensory-motor repertoires. Once such repertoires have been learned, they change the individual's learning ability. A child who has learned language, a basic repertoire, can learn to read. A person who has learned a value system, such as a system of beliefs in human freedom, can learn to value different forms of government. An individual who has learned to be a track athlete can learn to move more quickly as a football player. This introduces a basic principle of psychological behaviorism: that human behavior is learned cumulatively. Learning one repertoire enables the individual to learn other repertoires that enable the individual to learn additional repertoires, and on and on. Cumulative learning is a unique human characteristic. It has taken humans from chipping hand axes to flying to the moon, learned repertoires enabling the learning of new repertoires in an endless fashion of achievement.
That theory development enables psychological behaviorism to deal with types of human behavior that are out of the reach of radical behaviorism, for example, personality.
Foundations of personality theory
Description
Staats proposes that radical behaviorism is insufficient, because in his view psychology needs to unify traditional knowledge of human behavior with behaviorism. He has called that behaviorizing psychology in a way that enables psychological behaviorism to deal with topics not usually dealt with in behaviorism, such as personality. According to this theory, personality consists of three huge and complex behavioral repertoires:
sensory-motor repertoire, including basic sensory-motor abilities, as well as attentional and social skills;
language-cognitive repertoire, including receptive language, expressive language, and receptive-expressive language;
emotional-motivational repertoire, including positive and negative patterns of emotional reaction directing the whole behavior of the person.
The infant begins life without the basic behavioral repertoires. They are acquired through complex learning, and as this occurs, the child becomes able to respond appropriately to various situations.
Whereas at the beginning learning involves only basic conditioning, as repertoires are acquired the child's learning improves, aided by the repertoires that are already functional. The way a person experiences the world depends on his/her repertoires. The individual's environment up to the present results in the learning of basic behavioral repertoires (BBRs). The individual's behavior is a function of the life situation and the individual's BBRs. The BBRs are both a dependent and an independent variable, as they result from learning and cause behavior, constituting the individual's personality. According to this theory, biological conditions of learning are essential. Biology provides the mechanisms for learning and performance of behavior. For example, a severely brain-damaged child will not learn BBRs in a normal manner.
According to Staats, the biological organism is the mechanism by which the environment produces learning that results in basic behavioral repertoires, which constitute personality. In turn, these repertoires, once acquired, modify the brain's biology through the creation of new neural connections. Organic conditions affect behavior through affecting learning, basic repertoires, and sensory processes. The effect of environment on behavior can be proximal, here-and-now, or distal, through memory and personality. Thus, biology provides the mechanism; learning and environment provide the content of behavior and personality. Creative behavior is explained by novel combinations of behaviors elicited by new, complex environmental situations. The self is the individual's perception of his/her behavior, situation, and organism. Personality, situation, and the interaction between them are the three main forces explaining behavior. The world acts upon the person, but the person also acts both on the world and on him/herself.
Methods
The methodology of psychological behavioral theory contains techniques of assessment and therapy specially designed for the three behavioral repertoires:
classical sensorimotor techniques;
language-cognitive techniques (verbal association, verbal imitation, and verbal-writing);
emotional-motivational techniques (the time-out technique).
Paradigm
Psychology and behaviorism
Watson named the approach behaviorism as a form of revolution against the then prevalent use of introspection to study the mind. Introspection was subjective and variable, not a source of objective evidence, and the mind consisted of an inferred entity that could never be observed. He insisted psychology had to be based on objective observation of behavior and the objective observation of the environmental events that cause behavior. Skinner's radical behaviorism also has not established a systematic relationship to traditional psychology knowledge.
Psychological behaviorism—while bolstering Watson's rejection of inferring the existence of internal entities such as mind, personality, maturation stages, and free will—considers important knowledge produced by non-behavioral psychology that can be objectified by analysis in learning-behavioral terms. As one example, the concept of intelligence is inferred, not observed, and thus intelligence and intelligence tests are not considered systematically in behaviorism. However, PB considers that IQ tests measure important behaviors that predict later school performance and that intelligence is composed of learned repertoires of such behaviors. Joining the knowledge of behaviorism and intelligence testing yields concepts and research concerning what intelligence is behaviorally, what causes intelligence, and how intelligence can be increased. PB is thus a behaviorism that systematically incorporates and explains, behaviorally, empirical parts of psychology.
Basic principles
The different behaviorisms also differ with respect to basic principles. Skinner contributed greatly in separating Pavlov's classical conditioning of emotional responses and the operant conditioning of motor behaviors. Staats, however, notes that Pavlov used food to elicit a positive emotional response in his classical conditioning, and Edward Thorndike used food as the reward (reinforcer) that strengthened a motor response in what came to be called operant conditioning; thus emotion-eliciting stimuli are also reinforcing stimuli. Watson, although the father of behaviorism, did not develop and research a basic theory of the principles of conditioning. The behaviorists whose work centered on that development treated the relationship of the two types of conditioning differently. Skinner's basic theory was advanced in recognizing two different types of conditioning, but he did not recognize their interrelatedness, or the importance of classical conditioning, both very central for explaining human behavior and human nature.
Staats' basic theory specifies the two types of conditioning and the principles of their relationship. Since Pavlov used a food stimulus to elicit an emotional response and Thorndike used food as a reward (reinforcer) to strengthen a particular motor response, whenever food is used both types of conditioning thus take place. That means that food both elicits a positive emotion and food will serve as a positive reinforcer (reward). It also means that any stimulus that is paired with food will come to have those two functions. Psychological behaviorism and Skinner's behaviorism both consider operant conditioning a central explanation of human behavior, but PB additionally concerns emotion and classical conditioning.
Language
This difference between the two behaviorisms can be seen clearly in their theories of language. Staats, extending prior theory, indicates that a large number of words elicit either a positive or negative emotional response because of prior classical conditioning. As such they should transfer their emotional response to anything with which they are paired. PB provides evidence this is the case. PB's basic learning theory also states that emotional words have two additional functions: they serve as rewards and punishments in learning other behaviors, and they serve to elicit either approach or avoidance behavior. Thus, (1) hearing that people of an ethnic group are dishonest will condition a negative emotion to the name of that group as well as to members of that group, (2) complimenting (saying positive emotional words to) a person for a performance will increase the likelihood the person will perform that action later on, and (3) seeing the sign RESTAURANT will elicit a positive emotion in a hungry driver and thus instigate turning into the restaurant's parking lot. Each case depends upon words eliciting an emotional response.
PB treats various aspects of language, from its original development in children to its role in intelligence and in abnormal behavior, and backs this up with basic and applied study. Staats' theory paper in the journal Behavior Therapy helped introduce cognitive (language) behavior therapy to the behavioral field.
Child development
Much of the research on which PB is based has concerned children's learning. For example, there is a series of studies of preschoolers' first learning of reading, and also a series studying and training dyslexic adolescents. The psychological behaviorism (PB) position became that the norms of child development—the ages when important behaviors appear—are due to learning, not biological maturation.
Staats began studies to analyze cases of important human behaviors in basic and applied ways in 1954. In 1958 he analyzed dyslexia and introduced his token reinforcer system (later called the token economy), along with his teaching method and materials for treating the disorder. When his daughter Jenny was born in 1960, he began to study and to produce her language, emotional, and sensory-motor development. When she was a year and a half old, he began teaching her number concepts, and then reading six months later, using his token reinforcer system, as he recorded on audiotape. The first of three Arthur Staats YouTube videos contains films made in 1966 of Staats being interviewed about his conception of how variations in children's home learning prepare them differently for school. The second Staats YouTube video records him beginning to teach his three-year-old son with the reading (and counting) method he developed in 1962 with his daughter. This film also shows a graduate assistant working with a culturally deprived four-year-old who, participating voluntarily, learns reading, writing numbers, and counting. The third Staats YouTube video has additional cases of these usually delayed children voluntarily learning, well ahead of the usual age, the cognitive repertoires that prepare them for school. This group of 11 children gained an average of 11 points in IQ and advanced significantly on a child development measure, as they also learned to like the learning situation. Staats published the first study in this series in 1962 and describes his later studies and his more general conception in his 1963 book. This research, which included work with his own children from birth on, was the basis for Staats' books specifying the importance of the parents' early training of the child in language and other cognitive repertoires. He shows they are the foundations for being intelligent and doing well on entering school. There are newer studies showing that parents who talk to their children more have children with advanced language development, school success, and intelligence measures. These statistical studies should be joined with Staats' work with individual children, which shows the specifics of the learning involved and how best to produce it. The two together show powerfully the importance of early child learning.
Staats also applied his approach in fathering his own children and employed his findings in constructing his conception of human behavior and human nature. He deals with many aspects of child development, from babbling to walking to discipline and time-out, and he considers parents one of his audiences. In the last of his books he summarizes his theory of child development. His position is that children are the young of a human species whose body can make an infinity of different behaviors and whose nervous system and brain of 100 billion neurons can learn in marvelous complexity. The child's development consists of the learning of extraordinarily complex repertoires, like a language-cognitive repertoire, an emotional-motivational repertoire, and a sensory-motor repertoire, each including sub-repertoires of various kinds. The child's behavior in the various life situations encountered depends upon the repertoires that have been learned. The child's ability to learn in the variety of situations encountered also depends on the repertoires that have been learned. This conception makes parenting central in the child's development, supported by many studies in behavior analysis, and offers knowledge to parents in raising their children.
Personality
Staats describes humans' great variability in behavior across different people. Those individual differences are consistent in different life situations and typify people; they also tend to run in families. Such phenomena have led to the concept of personality as some inherited internal trait that strongly determines individuals' characteristic ways of behaving. Personality conceived in that way remains an inference, based on how people behave, but with no evidence of what personality is.
More successful has been the measurement of personality. There are tests of intelligence, for example. No internal organ of intelligence has been found, and no genes either, but intelligence tests have been constructed that predict (helpfully but not perfectly) the performance of children in school. Children who have the behaviors measured on the tests display better learning behaviors in the classroom. Although such tests have been widely applied, radical behaviorism has not invested in the study of personality or personality testing.
Psychological behaviorism, however, considers it important to study what personality is, how personality determines behavior, what causes personality, and what personality tests measure. Tests (including intelligence tests) are considered to measure different repertoires of behavior that individuals have learned. The individual in life situations also displays behaviors that have been learned. That is why personality tests can predict how people will behave. It also means that tests can be used to identify important human behaviors, and the learning that produces those behaviors can be studied. Gaining that knowledge will make it possible to develop environmental experiences that produce or prevent types of personality from developing. A study has shown, for example, that in learning to write letters of the alphabet children learn repertoires that make them more intelligent.
Abnormal personality
Psychological behaviorism's theory of abnormal personality rejects the concept of mental illness. Rather, behavior disorders are composed of learned repertoires of abnormal behavior. Behavior disorders also involve not having learned basic repertoires that are needed in adjusting to life's demands. Severe autism can involve not having learned a language repertoire as well as having learned tantrums and other abnormal repertoires.
PB's theories of various behavior disorders employ the Diagnostic and Statistical Manual of Mental Disorders (DSM) descriptions of both abnormal repertoires and the absence of normal repertoires. Psychological behaviorism provides the framework for an approach to clinical treatment of behavior disorders, as shown in the field of behavior analysis. PB theory also indicates how behavior disorders can be prevented by preventing the abnormal learning conditions that produce them.
Education
The PB theory is that child development, besides physical growth, consists of the learning of repertoires, some of which are basic in the sense that they provide the behaviors for many life situations and determine what, and how well, the individual can learn. That theory states that humans are unique in having a building type of learning, cumulative learning, in which basic repertoires enable the child to learn other repertoires that enable the learning of still other repertoires. Learning language, for example, enables the child to learn various other repertoires, like reading, number concepts, and grammar. Those repertoires provide the bases for learning other repertoires. For example, reading ability opens possibilities for an individual to do things and learn things that a non-reader cannot.
With that theory, and with its empirical methodology, PB applies to education. For example, it has a theory of reading that explains children's differences, from dyslexia to advanced reading ability. PB also suggests how to treat dyslexic children and those with other learning disabilities. Psychological behaviorism's approach has been supported and advanced in the field of behavior analysis.
Human evolution
Human origin is generally explained by Darwin's natural selection. However, while Darwin gathered imposing evidence showing the evolution of the physical characteristics of species, his view that behavioral characteristics (such as human intelligence) also evolved was pure assumption with no evidentiary support. PB presents a different theory: that the cumulative learning of pre-human hominins drove human evolution. That explains the consistent increase in brain size over the course of human evolution. The increase occurred because the members of the evolving hominin species were continually learning new language, emotion-motivation, and sensory-motor repertoires, which meant new generations had to learn those ever more complex repertoires. It was cumulative learning that consistently created the selection device favoring the members of those generations that had the larger brains and were the better learners.
That theory makes learning ability central to human origin, selecting who would survive and reproduce, until the advent of Homo sapiens, in which all individuals (barring damage) have full brains and full learning ability.
Theory levels
Psychological behaviorism is set forth as an overarching theory, constructed of multiple theories in various areas. Staats considers it a unified theory: the areas are related, their principles are consistent, and they are advanced consistently, composing levels from basic to increasingly advanced. Its most basic level calls for a systematic study of the biology of the learning "organs" and their evolutionary development, from species like the amoeba, which have no learning ability, to humans, which have the most. The basic learning principles constitute another level of theory, as do the human learning principles that specify cumulative learning. How the principles work (in areas like child development, personality, abnormal personality, clinical treatment, education, and human evolution) composes additional levels of study. Staats sees the overarching theory of PB as basic to additional levels that compose the social sciences of sociology, linguistics, political science, anthropology, and paleoanthropology. He criticizes the disunification of the sciences that study human behavior and human nature: because they are disconnected, they do not build the related, simpler, and more understandable conception and scientific endeavor that, for example, the biological sciences do. This philosophy of science of unification is at one with Staats' attempt to construct his unified psychological behaviorism.
Projections
Psychological behaviorism's works project new basic and applied science at its various theory levels. The basic-principles level, as one example, needs systematic study of the relationship between the classical conditioning of emotional responses and the operant conditioning of motor responses. As another projection, the field of child development should focus on the study of the learning of the basic repertoires. One essential is the systematic, detailed study of the learning experiences of children in the home from birth on; Staats says such research could be accomplished by installing cameras in the homes of volunteering, remunerated families. This research should also be done to discover how such learning produces both normal and abnormal personality development. As another example, PB calls for educational research into how school learning could be advanced using its methods and theories. Staats' theory of human evolution likewise calls for further research and theoretical development.
See also
Cognitive ethology
Time-out (parenting)
Tree of knowledge system
Unified science
References
Behaviorism
Post-traumatic growth
In psychology, posttraumatic growth (PTG) is positive psychological change experienced as a result of struggling with highly challenging, highly stressful life circumstances. These circumstances represent significant challenges to the adaptive resources of the individual, and pose significant challenges to the individual's way of understanding the world and their place in it. Posttraumatic growth involves "life-changing" psychological shifts in thinking and relating to the world and the self that contribute to a deeply meaningful personal process of change.
People who have experienced post-traumatic growth often report changes within the following five factors: appreciation of life; relating to others; personal strength; new possibilities; and spiritual, existential or philosophical change.
Global Context & History
The general understanding that suffering and distress can potentially yield positive change is thousands of years old. For example, some of the early ideas and writing of the ancient Hebrews, Greeks, and early Christians, as well as some of the teachings of Hinduism, Buddhism, Islam and the Baháʼí Faith contain elements of the potentially transformative power of suffering. Attempts to understand and discover the meaning of human suffering represent a central theme of much philosophical inquiry and appear in the works of novelists, dramatists and poets.
Traditional psychology's equivalent to thriving is resilience, which is reaching the previous level of functioning before a trauma, stressor, or challenge. The difference between resilience and thriving is the recovery point – thriving goes above and beyond resilience, and involves finding benefits within challenges.
The term "posttraumatic growth" was coined by psychologists Richard Tedeschi and Lawrence Calhoun at the University of North Carolina at Charlotte. According to Tedeschi, as many as 89% of survivors report at least one aspect of posttraumatic growth, such as a renewed appreciation for life.
Variants of the idea have included Crystal Park's proposed stress-related growth model, which highlighted the derived sense of meaning in the context of adjusting to challenging and stressful situations, and Joseph and Linley's proposed adversarial growth model, which linked growth with psychological wellbeing. According to the adversarial growth model, whenever an individual experiences a challenging situation, they can either integrate the traumatic experience into their current belief system and worldviews or modify their beliefs based on their current experiences. If the individual positively accommodates the trauma-related information, modifying prior beliefs to incorporate it, psychological growth can occur following adversity.
The Development of Post-Traumatic Growth
The Relationship Between Trauma, PTG, and Other Outcomes
Psychological trauma is an emotional response caused by severe distressing events that are outside the normal range of human experiences. While the idea that positive change may occur following trauma may seem paradoxical, it is common and well documented. However, not everyone who experiences a traumatic event will necessarily develop post-traumatic growth. This is because growth does not occur as a direct result of trauma; rather, it is the individual's struggle with the new reality in the aftermath of trauma that is crucial in determining the extent to which post-traumatic growth occurs.
While PTG often leads individuals to live in ways that are fulfilling and meaningful, the presence of PTG and distress are not mutually exclusive. Experiencing trauma is typically associated with distress and loss, and PTG does not change this. PTG and negative trauma related outcomes (e.g. PTSD) often coexist. Encouragingly, reports of growth experiences in the aftermath of traumatic events far outnumber reports of psychiatric disorders.
Creating Post Traumatic Growth
Posttraumatic growth occurs through attempts to adapt to highly negative sets of circumstances, such as major life crises, that typically engender high levels of psychological distress and unpleasant psychological reactions. Such experiences often alter or renew one's core relationships or concepts, leading to PTG.
A Model of PTG
Calhoun and Tedeschi (2006) outline their updated model of posttraumatic growth in Handbook of Post-traumatic Growth: Research and Practice. Most importantly, this model includes:
Characteristics of the Person and of the Challenging Circumstances
Management of Emotional Distress
Rumination
Self-Disclosure
Sociocultural Influences
Narrative Development
Life Wisdom
Promotive Factors
Various factors have been identified as associated with the development of PTG. In 2011, Iversen, Christiansen, and Elklit suggested that predictors of growth have different effects on PTG at the micro-, meso-, and macro-levels, and that a positive predictor of growth on one level can be a negative predictor on another. This might explain some of the inconsistent research results within the area.
Trauma Types: Characteristics of the traumatic event may contribute to the development or inhibition of PTG. For example, for PTG to come about, the severity of the traumatic experience must be sufficient to threaten one's preexisting understanding of the world or one's personal narrative. However, extremely severe trauma exposure may overwhelm one's ability to comprehend and grow from the experience. Experiencing multiple sources of trauma is also considered promotive of PTG. While gender roles did not reliably predict PTG, they are indicative of the type of trauma that an individual experiences: women tend to experience victimization on a more individual and interpersonal level (e.g. sexual victimization), while men tend to experience more systemic and collective traumas (e.g. military and combat). Given that group dynamics appear to play a predictive role in post-traumatic growth, it can be argued that the type of exposure may indirectly predict growth in men (Lilly 2012).
Responding to the Traumatic Experience: The different ways in which a person may process or engage with a traumatic experience after it occurs may influence whether PTG comes about. Rumination, sharing negative emotions, positive coping strategies (e.g. spirituality), event centrality, resilience, and growth actions are all associated with increased PTG.
Many individuals ruminate extensively about a traumatic experience after it has occurred. In this context, rumination is not necessarily negative and can mean the same thing as cognitive engagement: the individual invests mental resources in understanding and making sense of their experience. People typically engage in this way to comprehend and explain their experience (Why? How?) and to discover how their experience factors into their perceptions and plans (What does this mean? What now?). While neither form is entirely bad, deliberate rather than intrusive rumination tends to be the most effective at producing growth.
The use of different coping strategies to adjust to a stressor may also influence the development of PTG. As Richard G. Tedeschi and other post-traumatic growth researchers have found, the ability to accept situations that cannot be changed is crucial for adapting to traumatic life events. They call it "acceptance coping" and have determined that coming to terms with reality is a significant predictor of post-traumatic growth. It has also been suggested, though this remains under investigation, that opportunity for emotional disclosure can lead to post-traumatic growth, even though it did not significantly reduce post-traumatic stress symptomatology.
The Individual's Characteristics: Some personality traits have been found to be associated with increased PTG. These include openness, agreeableness, altruistic behaviors, extraversion, conscientiousness, sense of coherence (SOC), sense of purpose, hopefulness, and low neuroticism. Despite being otherwise undesirable, narcissism is also associated with PTG. These traits may increase an individual's capacity to adapt to traumas, leading to growth.
Social Support: Social support has been found to be a mediator of PTG. Not only are high levels of pre-exposure social support associated with growth, but there is some neurobiological evidence that support can modulate a pathological response to stress in the hypothalamic-pituitary-adrenocortical (HPA) pathway in the brain (Ozbay 2007). It also benefits a person to have supportive others who can aid in posttraumatic growth by providing a way to craft narratives about the changes that have occurred, and by offering perspectives that can be integrated into schema change. These relationships help develop narratives, and narratives of trauma and survival are always important in posttraumatic growth because they force survivors to confront questions of meaning and how answers to those questions can be reconstructed.
Religion and Spirituality: Spirituality has been shown to highly correlate with post-traumatic growth and in fact, many of the most deeply spiritual beliefs are a result of trauma exposure.
Other Variables:
Age: Post-traumatic growth has been studied in children to a lesser extent. A review by Meyerson and colleagues found various relations between social and psychological factors and posttraumatic growth in children and adolescents, but concluded that fundamental questions about its value and function remain.
Interdisciplinary Connections
Personality Psychology & PTG
Historically, personality traits were depicted as stable after the age of 30. Since 1994, however, research findings have suggested that personality traits can change in response to life transition events during middle and late adulthood. Life transition events may be related to work, relationships, or health. Moderate amounts of stress were associated with improvements in the traits of mastery and toughness: individuals experiencing moderate amounts of stress were found to be more confident about their abilities and had a better sense of control over their lives. Moderate amounts of stress were also associated with better resilience, defined as successful recovery to baseline following stress. An individual who experienced moderate amounts of stressful events was more likely to develop coping skills, seek support from their environment, and experience more confidence in their ability to overcome adversity.
Post-Traumatic Growth & Personality Psychology
Experiencing a traumatic event can have a transformational role in personality among certain individuals and facilitate growth. For example, individuals who have experienced trauma have been shown to exhibit greater optimism, positive affect, and satisfaction with social support, as well as increases in the number of social support resources. Similarly, research reveals personality changes among spouses of terminal cancer patients, suggesting that such traumatic life transitions facilitated increases in interpersonal orientation, prosocial behaviors, and dependability scores.
The outcome of traumatic events can be negatively impacted by factors occurring during and after the trauma, potentially increasing the risk of developing posttraumatic stress disorder, or other mental health difficulties.
Further, characteristics of the trauma and the personality dynamics of the individual experiencing it each independently contribute to posttraumatic growth. If the amount of stress is too low or too overwhelming, a person cannot cope with the situation. Personality dynamics can either facilitate or impede posttraumatic growth, regardless of the impact of the traumatic events.
Mixed Findings in Personality Psychology
Research on posttraumatic growth is emerging in the field of personality psychology, with mixed findings. Several researchers have examined posttraumatic growth and its associations with the big five personality model. Posttraumatic growth was found to be associated with greater agreeableness, openness, and extraversion. Agreeableness relates to interpersonal behaviors that include trust, altruism, compliance, honesty, and modesty. Individuals who are agreeable are more likely to seek support when needed and to receive it from others. Higher scores on the agreeableness trait can thus facilitate the development of posttraumatic growth.
Individuals who score high on openness scales are more likely to be curious, open to new experiences, and emotionally responsive to their surroundings. It is hypothesized that following a traumatic event, individuals who score high on openness would more readily reconsider their beliefs and values that may have been altered. Openness to experiences is thus key for facilitating posttraumatic growth. Individuals who score high on extraversion were more likely to adopt more problem-solving strategies, cognitive restructuring, and seek more support from others. Individuals who score high on extraversion use coping strategies that enable posttraumatic growth. Research among veterans and among children of prisoners of war suggested that openness and extraversion contributed to posttraumatic growth.
Research among community samples suggested that openness, agreeableness, and conscientiousness contributed to posttraumatic growth. Individuals who score high on conscientiousness tend to be better at self-regulating their internal experience, have better impulse control, and are more likely to seek achievements across various domains. The conscientiousness trait has been associated with better problem-solving and cognitive restructuring. As such, individuals who are conscientious are more likely to better adjust to stressors and exhibit posttraumatic growth.
Other research among bereaved caregivers and among undergraduates indicated that posttraumatic growth was associated with extraversion, agreeableness, and conscientiousness. As such, the findings linking the big five personality traits with posttraumatic growth are mixed.
Personality Dynamics & Trauma Types
Recent research is examining the influence of trauma types and personality dynamics on posttraumatic growth. Individuals who aspire to standards and orderliness are more likely to develop posttraumatic growth and better overall mental health. It is hypothesized that such individuals can better process the meaning of hardships when they experience moderate amounts of stress, a tendency that can facilitate positive personal growth. On the other hand, individuals who have trouble regulating themselves were found to be less likely to develop posttraumatic growth and more likely to develop trauma-spectrum disorders and mood disorders. This is in line with past research suggesting that individuals who scored higher on self-discrepancy were more likely to score higher on neuroticism and exhibit poor coping. Neuroticism relates to an individual's tendency to respond with negative emotions to threat, frustration, or loss. As such, individuals with high neuroticism and self-discrepancy are less likely to develop posttraumatic growth. Research has also highlighted the important role that collective processing of emotional experiences plays in posttraumatic growth: those who are more capable of engaging with the emotional experiences arising from crisis and trauma, and of making meaning of them, are more likely to increase in resilience and community engagement following the disaster. Furthermore, collective processing of these emotional experiences leads to greater individual growth and collective solidarity and belongingness.
Personality Characteristics
Two personality characteristics that may affect the likelihood that people can make positive use of the aftermath of traumatic events that befall them include extraversion and openness to experience. Also, optimists may be better able to focus attention and resources on the most important matters, and disengage from uncontrollable or unsolvable problems. The ability to grieve and gradually accept trauma could also increase the likelihood of growth.
Individual differences in coping strategies set some people on a maladaptive spiral, whereas others proceed on an adaptive spiral. With this in mind, some early success in coping could be a precursor to posttraumatic growth. A person's level of confidence could also play a role in whether they persist toward growth or, lacking confidence, give up.
Positive Psychology & PTG
Posttraumatic growth can be seen as a form of positive psychology. In the 1990s, the field of psychology began a movement toward understanding positive psychological outcomes after trauma. Researchers initially referred to this phenomenon in a number of different ways: "positive life changes", "growing in the aftermath of suffering", and "positive adaptation to trauma". The term posttraumatic growth (PTG) itself was not coined until Tedeschi and Calhoun created the Posttraumatic Growth Inventory (PTGI) in 1996. Around the same time, a new area of strengths-based psychology emerged.
Positive psychology involves studying positive mental processes aimed at understanding positive psychological outcomes and "healthy" individuals. This framework was intended to serve as an answer to "mental illness"-focused psychology. The core ideals of positive psychology include, but are not limited to:
Positive personality traits (optimism, subjective well-being, happiness, self-determination)
Authenticity
Finding meaning and purpose (self-actualization)
Spirituality
Healthy interpersonal relationships
Satisfaction with life
Gratitude
The concept of PTG has been described as a part of the positive psychology movement. Since PTG describes positive rather than negative outcomes post-trauma, it falls under the category of positive psychological changes. Positive psychology intends to lay claim to all capacities of positive mental functioning. So, even though PTG (as a defined concept) was not initially described within the positive psychology framework, it is presently included in positive psychological theories. This is reinforced by the parallels between the core concepts of positive psychology and PTG, observable by comparing the five domains of the PTGI with the core ideals of positive psychology.
Positive Psychology & Domains of the PTGI
Positive psychological changes and outcomes are defined as a part of positive psychology. PTG is specifically the positive psychological changes post-trauma. The domains of PTG are defined as the different areas of positive psychological changes that are possible post-trauma. The PTGI, a measure designed by Tedeschi and Calhoun in 1996, measures PTG across the following areas or domains:
New Possibilities: The positive psychological changes described by the domain of "New Possibilities" are developing new interests, establishing a new path in life, doing better things with one's life, new opportunities, and an increased likelihood to change what is needed. This can be compared to the "finding meaning and purpose" core ideal of positive psychology.
Relating to Others: The positive psychological changes described by the domain "Relating to Others" are increased reliance on others in times of trouble, a greater sense of closeness with others, willingness to express emotions to others, increased compassion for others, increased effort in relationships, greater appreciation of how wonderful people are, and increased acceptance of needing others. This can be compared to the "healthy interpersonal relationships" core ideal of positive psychology.
Personal Strength: The positive psychological changes described by the domain "Personal Strength" are a greater feeling of self-reliance, increased ability to handle difficulties, improved acceptance of life outcomes and new discovery of mental strength. This can be compared to the "positive personality traits (self-determination, optimism)" core ideals of positive psychology.
Spiritual Change: The positive psychological changes described by the domain "Spiritual Change" are a better understanding of spiritual matters and a stronger religious (or spiritual) faith. This can be compared to the "spirituality and authenticity" core ideal of positive psychology.
Appreciation of Life: The positive psychological changes described by the domain "Appreciation of Life" are changed priorities regarding what is important in life, a greater appreciation of the value of one's own life, and increased appreciation of each day. This can be compared to the "satisfaction with life" core ideal of positive psychology.
In 2004, Tedeschi and Calhoun released an updated framework of PTG. The overlaps between positive psychology and posttraumatic growth demonstrate a strong association between these frameworks. However, Tedeschi and Calhoun note that even though these domains describe positive psychological changes post-trauma, the presence of PTG does not necessarily rule out the simultaneous occurrence of negative post-trauma mental processes or negative outcomes (such as psychological distress).
Positive Psychology & Clinical Applications
In a clinical setting, PTG is often included as a part of positive psychology in terms of methodology and treatment goals. Positive psychology interventions (PPI) generally involve a multidimensional therapeutic approach in which psychological tests are used as measurements to track progress. For clinical PPI involving recovery from trauma, there is usually at least one measure of PTG. Most trauma research and clinical intervention focuses on evaluating the negative outcomes post-trauma, but from a positive psychological perspective, a strengths-based approach may be more relevant for clinical intervention aimed at recovery. While PTG has been effectively measured in a number of relevant areas of psychology, it has been especially successful in health psychology.
In the exploration of PTG in health psychology settings (hospitals, long-term care clinics, etc.), well-being (a core ideal of positive psychology) was linked to increased PTG in patients. PTG is seen more often in health psychology settings when PPI are utilized. While the focus in health psychology settings is to foster resilience, new research indicates that health psychology practitioners, doctors, and nurses should also aim to increase positive psychological outcomes (such as PTG) as part of their recovery goals. Resilience is also central to positive psychology and is involved with PTG: resilience has been distinguished as a pathway to PTG, although its exact relationship to PTG is still being explored. Both are positive psychological processes with strong ties to positive psychology.
The use of PPI post-trauma is not only effective in increasing PTG, but has also been shown to reduce negative posttraumatic symptoms. These reductions in posttraumatic stress symptoms and increases in PTG have been demonstrated to be long-lasting: when participants were followed up at 12 months post-PPI, not only was the PTG still present, it had actually increased over time. PPI targeted at reducing stress have demonstrated promising results across a large number of studies.
Conclusion
Over the last 25 years, PTG has demonstrated its place in the framework of positive psychology in theory and in practice. The theoretical frameworks put forth by Seligman and Csikszentmihalyi and by Tedeschi and Calhoun have substantial overlap, and both cite "positive psychological changes". While positive psychology speaks to a general focus on positive aspects of human psychology, PTG speaks specifically to positive psychological change after trauma, which would inherently make PTG a sub-category of positive psychology. PTG has also been referred to in the literature as perceived benefits, positive changes, stress-related growth, and adversarial growth. Regardless of the terminology, however, it is based on positive mental changes, which is the essence of positive psychology.
Psycho-Oncology & PTG
The study of those who have experienced cancer has contributed significantly to the understanding of PTG. While more research is needed to establish the prevalence of cancer-related PTG, there is mounting evidence that a high proportion of patients experience some form of positive growth.
Trauma Exposure in Psycho-Oncology
Individuals diagnosed with cancer may encounter a diverse range of stressors across the stages of the experience. Further, what is traumatic differs from person to person. For example, feelings of uncertainty or fear of death are common following a diagnosis. Distress may also arise from physical symptoms from the illness itself or from cancer treatments. The process of contending with cancer often brings about significant life changes such as economic strain or social role reversals. Among survivors, fear of recurrence is common. The loved ones and caregivers of patients may also experience severe stressors which may lead to PTG.
The impact of trauma on this population is evident in both negative and growth outcomes. PTSD is more common among individuals who have been diagnosed with cancer than among those who have not, and rates of PTSD are higher in those who experience some cancer types (e.g. brain cancer) and treatment types (e.g. chemotherapy) than in others. Cancer type also matters for PTG, as more advanced forms are more strongly associated with growth. Studying cancer patients has shed light on the relationship between PTSD and PTG: while some studies have found a correlation between PTSD and PTG among cancer patients, others conclude that they are independent constructs.
Promotive Factors in Psycho-Oncology
There are many variables associated with the development of PTG in oncology patients, such as social support, subjective appraisal of the threat, and positive coping strategies. In cancer patients, hope, optimism, spirituality, and positive coping styles are associated with PTG outcomes.
Limited research has investigated whether psychosocial interventions can support the development of PTG. A recent meta-analysis of randomized controlled trials found that psychosocial interventions for cancer patients, especially mindfulness-based interventions, show promise in facilitating PTG. More research is needed in this area to understand how interventions can impact PTG in oncology populations.
Characterizing PTG Outcomes in Psycho-Oncology
Post-traumatic growth takes on many forms in the lives of cancer patients and survivors. For patients, PTG is often described in three categories. 1) They may identify themselves as having strengths or skills that made them competent in the difficult situation. 2) After emotional growth, they may find changes in their personal relationships such as increased closeness or appreciation. 3) Their experience may lead to a greater appreciation of life or strengthen their spirituality.
Jimmie Holland, a founder of the field of psycho-oncology, provides examples of growth following cancer in her book The Human Side of Cancer. Holland tells the story of one patient, Jim, whose experience with PTG altered both his perspective on life and his interpersonal relationships: after undergoing radiation for cancer of the vocal cord, Jim found a new appreciation for health and used his experience to motivate his sons never to start smoking. Further, survivors of cancer often discover a new sense of compassion and find new purpose in giving back to others. After surviving osteogenic sarcoma, which resulted in the amputation of her leg, Sheila Kussner began giving back by visiting other amputees in hospitals to share support. She later went on to raise millions of dollars for cancer research and to establish the Hope and Cope program at the Montreal Jewish General Hospital, which provides psychological support to thousands of patients. These examples may fit within the realm of PTG.
Related Theories and Constructs: Resilience, Thriving, Positive Disintegration, etc.
Resilience
In general, research in psychology shows that people are resilient overall. For example, Southwick and Charney, in a study of 250 prisoners of war from Vietnam, showed that participants developed much lower rates of depression and PTSD symptoms than expected. Donald Meichenbaum estimated that 60% of North Americans will experience trauma in their lifetime; of these, while no one is unscathed, some 70% show resilience and 30% show harmful effects. Similarly, 68 million of the 150 million women in America will be victimized over their lifetime, but roughly 10% will suffer to the extent that they must seek help from mental health professionals.
In general, traditional psychology's approach to resiliency, as exhibited in the studies above, is a problem-oriented one, assuming that PTSD is the problem and that resiliency simply means avoiding or fixing that problem in order to maintain baseline well-being. This type of approach fails to acknowledge any growth that might occur beyond the previously set baseline; positive psychology's idea of thriving attempts to reconcile that failure. A meta-analysis by Shakespeare-Finch and Lurie-Beck in this area indicates that there is in fact an association between PTSD symptoms and posttraumatic growth: the null hypothesis of no relationship between the two was rejected. The correlation between the two was significant and was found to depend upon the nature of the event and the person's age. For example, survivors of sexual assault show less posttraumatic growth than survivors of natural disaster. Ultimately, however, the meta-analysis serves to show that PTSD and posttraumatic growth are not mutually exclusive ends of a recovery spectrum and that they may actually co-occur during a successful process of thriving.
It is important to note that while aspects of resilience and growth aid an individual's psychological well-being, they are not the same thing. Dr. Richard Tedeschi and Dr. Erika Felix specifically note that resilience suggests bouncing back and returning to one's previous state of being, whereas post-traumatic growth fosters a transformed way of being or understanding for an individual. Often, traumatic or challenging experiences force an individual to re-evaluate core beliefs, values, or behaviors on both cognitive and emotional levels; the idea of post-traumatic growth is therefore rooted in the notion that these beliefs, values, or behaviors come with a new perspective and expectation after the event. Thus, post-traumatic growth centers around the concept of change, whereas resilience suggests the return to previous beliefs, values, or lifestyles.
Thriving
To understand the significance of thriving in the human experience, it is important to understand its role within the context of trauma and its separation from traditional psychology's idea of resilience. Implicit in the ideas of both thriving and resilience is the presence of adversity. O'Leary and Ickovics created a four-part diagram of the spectrum of human response to adversity, the possibilities of which include: succumbing to adversity, surviving with diminished quality of life, resiliency (returning to baseline quality of life), and thriving. Thriving includes not only resiliency, but a further improvement over the quality of life prior to the adverse event.
Thriving in positive psychology aims to promote growth beyond survival, but it is important to note that some of the theories surrounding its causes and effects are more ambiguous. Literature by Carver indicates that the concept of thriving is a difficult one to define objectively. He makes the distinction between physical and psychological thriving, implying that while physical thriving has obvious measurable results, psychological thriving does not; this is the origin of much of the ambiguity surrounding the concept. Carver lists several self-reportable indicators of thriving: greater acceptance of self, change in philosophy, and a change in priorities. These are factors that generally lead a person to feel that they have grown, but they are difficult to measure quantitatively.
The dynamic systems approach to thriving attempts to resolve some of the ambiguity in the quantitative definition of thriving, citing thriving as an improvement in adaptability to future trauma, based on a model of attractors and attractor basins. This approach suggests that a reorganization of behaviors is required to make positive adaptive behavior a more significant attractor basin, that is, a state the system tends toward.
In general, as pointed out by Carver, the idea of thriving seems to be one that is hard to remove from subjective experience. However, the work done by Tedeschi and Calhoun to create the Posttraumatic Growth Inventory helps to set forth a more measurable map of thriving. The five fields of posttraumatic growth that the inventory outlines are: relating to others, new possibilities, personal strength, spiritual change, and appreciation for life. Though literature that addresses "thriving" specifically is sparse, there is much research in these five areas, all of which supports the idea that growth after adversity is a viable and significant possibility for human well-being.
Positive disintegration
The theory of positive disintegration by Kazimierz Dąbrowski postulates that symptoms such as psychological tension and anxiety can be signs that a person is undergoing positive disintegration. The theory proposes that this can happen when an individual rejects previously adopted values (relating to their physical survival and their place in society) and adopts new values based on the highest possible version of who they can be. Rather than seeing disintegration as a negative state, the theory proposes that it is a transient state which allows an individual to grow towards their personality ideal. The theory stipulates that individuals who have high developmental potential (i.e. those with overexcitabilities) have a higher chance of re-integrating at a higher level of development after disintegration. Scholarly work is needed to ascertain whether disintegrative processes, as specified by the theory, are traumatic, and whether reaching higher integration, e.g. Level IV (directed multilevel disintegration) or V (secondary integration), can be equated to posttraumatic growth.
Aspects
Another attempt at quantitatively charting the concept of thriving is via the Posttraumatic Growth Inventory. The inventory has 21 items and is designed to measure the extent to which one experiences personal growth after adversity. The inventory includes elements from five key areas: relating to others, new possibilities, personal strength, spiritual change, and appreciation for life. These five categories are reminiscent of the subjective experiences Carver struggled to quantify in his own literature on thriving, but are imposed onto scales to maintain measurability. When considering the idea of thriving from this five-point approach, it is easier to place more research from psychology within the context of thriving. Additionally, a short-form version of the Posttraumatic Growth Inventory has been created with only 10 items, selecting two questions for each of the five subscales. Studies have been conducted to better understand the validity of this scale, and some have found that self-reported measures of posttraumatic growth are unreliable. Frazier et al. (2009) reported that further improvement could be made to this inventory to better capture actual change.
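Because the inventory is organized into five subscales with a fixed number of items each, subscale scoring reduces to averaging the item responses assigned to each subscale. The sketch below is purely illustrative: the item-to-subscale mapping, the function name, and the 0-5 response range are assumptions for demonstration, not the published scoring key.

```python
# Minimal sketch of scoring a five-subscale instrument such as the 10-item
# short-form PTGI described above. The item-to-subscale mapping and the
# 0-5 response scale are illustrative assumptions, not the published key.

# Hypothetical mapping: two item indices per subscale, as in the short form.
SUBSCALES = {
    "relating_to_others":   [0, 1],
    "new_possibilities":    [2, 3],
    "personal_strength":    [4, 5],
    "spiritual_change":     [6, 7],
    "appreciation_of_life": [8, 9],
}

def score_ptgi_short_form(responses):
    """Return the mean rating per subscale for 10 item responses rated 0-5."""
    if len(responses) != 10:
        raise ValueError("expected 10 item responses")
    if any(not 0 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 0-5 scale")
    return {name: sum(responses[i] for i in items) / len(items)
            for name, items in SUBSCALES.items()}

# Example: a respondent reporting moderate growth on most subscales.
print(score_ptgi_short_form([3, 4, 2, 2, 5, 4, 1, 0, 5, 5]))
```

Averaging per subscale rather than summing one total is what lets a profile of growth emerge across the five areas, which is how the domain-level comparisons discussed below are made.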
One of the key facets of posttraumatic growth set forth in the inventory is relating to others. Accordingly, much work has been done to indicate that social support resources are extremely important to the facilitation of thriving. House, Cohen, and their colleagues indicate that perception of adequate social support is associated with improved adaptive tendency. This idea of better adaptive tendency is central to thriving in that it results in an improved approach to future adversity. Similarly, Hazan and Shaver reason that social support provides a solid base of security for human endeavor. The idea of human endeavor here is echoed in another of the inventory's facets of posttraumatic growth, new possibilities, the idea being that a person's confidence to "endeavor" in the face of novelty is a sign of thriving.
Consistent with a third facet of posttraumatic growth, personal strength, a meta-analysis of six qualitative studies by Finfgeld focuses on courage as a path to thriving. Evidence from the analysis indicates that the ability to be courageous includes acceptance of reality, problem-solving, and determination. This not only directly supports the significance of personal strength in thriving, but can also be connected to the "new possibilities" facet through the idea that determination and adaptive problem-solving aid in constructively confronting new possibilities. Besides this, Finfgeld's study found that courage is promoted and sustained by intra- and interpersonal forces, further supporting the "relating to others" facet and its effect on thriving.
On the appreciation-for-life facet, research by Tyson on a sample of people two to five years into the grieving process reveals the importance of creating meaning. The studies show that coping optimally with bereavement involves not just "getting over it and moving on" but also creating meaning to facilitate the best recovery; stories and creative forms of expression were shown to increase growth following bereavement. This evidence is strongly supported by work done by Michael and Cooper focused on the facets of bereavement that facilitate growth, including "the age of the bereaved", "social support", "time since death", "religion", and "active cognitive coping strategies". The idea of coping strategies is echoed in the importance thriving places on improving adaptability. The significance of social support to growth found by Michael and Cooper clearly supports the "relating to others" facet, and the significance of religion echoes the "spiritual change" facet of posttraumatic growth.
Comparison-based thinking has been shown to aid in the development of posttraumatic growth, in which a person considers the positive differences between their current lives and their life during a traumatic event. Increases in empathy and desire to help others have been observed in trauma survivors as a form of posttraumatic growth. Storytelling with fellow community members, particularly those who have been through similar trauma, can help form a sense of community and encourage self-reflection.
Criticisms, Concerns, and Objective Evidence of PTG
While posttraumatic growth is commonly self-reported by people from different cultures across the world, concerns have been raised because objectively measurable evidence of posttraumatic growth is limited, leading some to question whether posttraumatic growth is real or illusory. The concept that posttraumatic growth can be illusory was originally posed by Andreas Maercker and Tanja Zoellner, who suggested that perceptions of PTG manifest in two forms: a transformative, constructive side and an illusory, self-deceptive side. The self-deceptive side serves as a mechanism for coping with, or making sense of, a traumatic event in one's life, rather than as proof of an improved psychological state. Additionally, Adriel Boals suggests a third branch of PTG: perceived PTG, under which illusory and "genuine" PTG fall. Boals asserts that those with perceived PTG often misreport genuine PTG during self-reports, as they are instead experiencing illusory PTG; indeed, Boals claims that illusory PTG is more common in individuals with perceived PTG than is genuine PTG. Furthermore, while a meta-analysis by Shakespeare-Finch and Lurie-Beck found that PTG has a strong curvilinear relationship with PTSD (indicating PTG is highest when PTSD is moderate), numerous studies have shown that PTG is positively associated with posttraumatic stress, which authors such as Boals suggest contradicts the original definition of PTG.
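A curvilinear (inverted-U) relationship of the kind reported by Shakespeare-Finch and Lurie-Beck is conventionally tested by adding a quadratic term to a regression model. The following generic specification is a standard illustration of that technique, not the specific model used in the meta-analysis:

$$\text{PTG} = \beta_0 + \beta_1\,\text{PTSD} + \beta_2\,\text{PTSD}^{2} + \varepsilon, \qquad \beta_1 > 0,\ \beta_2 < 0$$

A significant negative quadratic coefficient implies that PTG rises with posttraumatic stress up to a turning point at $-\beta_1/(2\beta_2)$ and declines thereafter, matching the finding that PTG is highest at moderate levels of PTSD.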
More recently, evidence of the objectively measurable existence of PTG has begun to emerge. A range of biological research is finding real differences between individuals with and without PTG at the level of gene expression and brain activity.
See also
Post-traumatic stress disorder
Positive disintegration
Psychological trauma
Psychological resilience
References
Bibliography
Personal development
Psychological concepts
Attention
Attention, or focus, is the concentration of awareness on some phenomenon to the exclusion of other stimuli. It is the selective concentration on discrete information, either subjectively or objectively. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence." Attention has also been described as the allocation of limited cognitive processing resources. Attention is manifested by an attentional bottleneck in the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data stream of 1 MByte/s can enter the bottleneck, leading to inattentional blindness.
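Taking the cited figures at face value, the implied throughput of the bottleneck is straightforward to compute (a back-of-the-envelope illustration of the estimate above, not an independently measured value):

$$0.01 \times 10^{6}\ \text{bytes/s} = 10^{4}\ \text{bytes/s} \approx 10\ \text{KB/s}$$

That is, only on the order of tens of kilobytes per second of the roughly 1 MB/s visual stream survives the bottleneck.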
Attention remains a crucial area of investigation within education, psychology, neuroscience, cognitive neuroscience, and neuropsychology. Areas of active investigation involve determining the source of the sensory cues and signals that generate attention, the effects of these sensory cues and signals on the tuning properties of sensory neurons, and the relationship between attention and other behavioral and cognitive processes, which may include working memory and psychological vigilance. A relatively new body of research, which expands upon earlier research within psychopathology, is investigating the diagnostic symptoms associated with traumatic brain injury and its effects on attention. Attention also varies across cultures.
The relationships between attention and consciousness are complex enough that they have warranted philosophical exploration. Such exploration is both ancient and continually relevant, as it can have effects in fields ranging from mental health and the study of disorders of consciousness to artificial intelligence and its domains of research.
Contemporary definition and research
Prior to the founding of psychology as a scientific discipline, attention was studied in the field of philosophy. Thus, many of the discoveries in the field of attention were made by philosophers. Psychologist John B. Watson calls Juan Luis Vives the father of modern psychology because, in his book De Anima et Vita (The Soul and Life), he was the first to recognize the importance of empirical investigation. In his work on memory, Vives found that the more closely one attends to stimuli, the better they will be retained.
By the 1990s, psychologists began using positron emission tomography (PET) and later functional magnetic resonance imaging (fMRI) to image the brain while monitoring tasks involving attention. Because this expensive equipment was generally available only in hospitals, psychologists sought cooperation with neurologists. Psychologist Michael Posner (then already renowned for his influential work on visual selective attention) and neurologist Marcus Raichle pioneered brain imaging studies of selective attention. Their results soon sparked interest from the neuroscience community, which until then had been focused on monkey brains. With the development of these technological innovations, neuroscientists became interested in this type of research that combines sophisticated experimental paradigms from cognitive psychology with these new brain imaging techniques. Although the older technique of electroencephalography (EEG) had long been used by cognitive psychophysiologists to study the brain activity underlying selective attention, the ability of the newer techniques to measure precisely localized activity inside the brain generated renewed interest among a wider community of researchers. A growing body of such neuroimaging research has identified a frontoparietal attention network which appears to be responsible for the control of attention.
A definition of a psychological construct forms a research approach to its study. In scientific works, attention often coincides with and substitutes for the notion of intentionality, owing to the degree of semantic uncertainty in the linguistic explanations of these notions' definitions. Intentionality has in turn been defined as "the power of minds to be about something: to represent or to stand for things, properties and states of affairs". Although these two psychological constructs (attention and intentionality) appear to be defined by similar terms, they are different notions.

To clarify the definition of attention, it is useful to consider the origin of the notion and the meaning given to the term when the experimental study of attention was initiated. The experimental approach is thought to have begun with famous experiments using a 4 x 4 matrix of sixteen randomly chosen letters – the experimental paradigm that informed Wundt's theory of attention. Wundt interpreted the experimental outcome by introducing the meaning of attention as "that psychical process, which is operative in the clear perception of the narrow region of the content of consciousness." These experiments showed the physical limits of the attention threshold, which were 3–6 letters during an exposure of the matrix lasting 1/10 s. "We shall call the entrance into the large region of consciousness - apprehension, and the elevation into the focus of attention - apperception."

Wundt's theory of attention postulated one of the main features of this notion: that attention is an active, voluntary process realized over a certain time. In contrast, neuroscience research shows that intentionality may emerge instantly, even unconsciously; research has reported neuronal correlates of an intentional act that preceded the conscious act (also see shared intentionality). Therefore, while intentionality is a mental state ("the power of the mind to be about something", arising even unconsciously), the construct of attention should be understood in the dynamical sense as the ability to elevate the clear perception of the narrow region of the content of consciousness and to keep this state in mind for a time. The attention threshold would be the period of minimum time needed for employing perception to clearly apprehend the scope of intention. From this perspective, a scientific approach to attention is relevant when it considers the difference between these two concepts (first of all, between their statical and dynamical statuses).
The growing body of literature shows empirical evidence that attention is conditioned by the number of elements and the duration of exposure. Decades of research on subitizing have supported Wundt's findings about the limits of the human ability to concentrate awareness on a task. Latvian professors Sandra Mihailova and Igor Val Danilov drew an essential conclusion from the Wundtian approach to the study of attention: the scope of attention is related to cognitive development. As the mind grasps more details about an event, it also increases the number of reasonable combinations within that event, enhancing the probability of better understanding its features and particularities. For example, three items in the focal point of consciousness allow six possible combinations (3 factorial), and four items allow 24 (4 factorial). The number of combinations becomes significantly prominent for a focal point with six items, with 720 possible combinations (6 factorial). Empirical evidence suggests that the scope of attention in young children develops from two items in the focal point at up to six months of age to five or more items at about five years of age. As follows from the most recent studies in relation to teaching activities in school, "attention" should be understood as "the state of concentration of an individual's consciousness on the process of selecting by his own psyche the information he requires and on the process of choosing an algorithm for response actions, which involves the intensification of sensory and intellectual activities".
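The combinatorial growth behind this argument is the factorial function: the number of ordered arrangements of n items held in the focal point is

$$n! = n \times (n-1) \times \dots \times 1, \qquad 2! = 2,\quad 3! = 6,\quad 4! = 24,\quad 5! = 120,\quad 6! = 720$$

so each additional item in the scope of attention multiplies the space of possible combinations, which is the quantitative sense in which a wider attention scope is argued to support richer cognition.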
Selective and visual
In cognitive psychology there are at least two models which describe how visual attention operates. These models may be considered metaphors which are used to describe internal processes and to generate hypotheses that are falsifiable. Generally speaking, visual attention is thought to operate as a two-stage process. In the first stage, attention is distributed uniformly over the external visual scene and processing of information is performed in parallel. In the second stage, attention is concentrated to a specific area of the visual scene (i.e., it is focused), and processing is performed in a serial fashion.
The first of these models to appear in the literature is the spotlight model. The term "spotlight" was inspired by the work of William James, who described attention as having a focus, a margin, and a fringe. The focus is an area that extracts information from the visual scene at high resolution, the geometric center of which is where visual attention is directed. Surrounding the focus is the fringe of attention, which extracts information in a much cruder fashion (i.e., at low resolution). This fringe extends out to a specified area, and the cut-off is called the margin.
The second model is called the zoom-lens model and was first introduced in 1986. This model inherits all properties of the spotlight model (i.e., the focus, the fringe, and the margin), but it has the added property of changing in size. This size-change mechanism was inspired by the zoom lens one might find on a camera, and any change in size can be described by a trade-off in the efficiency of processing. The zoom-lens of attention can be described in terms of an inverse trade-off between the size of focus and the efficiency of processing: because attentional resources are assumed to be fixed, it follows that the larger the focus is, the slower the processing of that region of the visual scene will be, since the fixed resource is distributed over a larger area. It is thought that the focus of attention can subtend a minimum of 1° of visual angle; however, the maximum size has not yet been determined.
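The fixed-resource assumption of the zoom-lens model can be written as a simple toy formula. This formalization is illustrative only and is not taken from the model's authors: if a fixed pool of attentional resources R is spread over an attended region of area A, the processing efficiency per unit area is

$$E(A) = \frac{R}{A}, \qquad A \propto \theta^{2}$$

where θ is the visual angle subtended by the focus. Doubling the focus angle quadruples the attended area and quarters the per-unit-area efficiency, which captures the inverse trade-off described above.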
A significant debate emerged in the last decade of the 20th century, in which Treisman's 1993 Feature Integration Theory (FIT) was compared to Duncan and Humphreys' 1989 attentional engagement theory (AET). FIT posits that "objects are retrieved from scenes by means of selective spatial attention that picks out objects' features, forms feature maps, and integrates those features that are found at the same location into forming objects." Treisman's theory is based on a two-stage process to help solve the binding problem of attention. These two stages are the preattentive stage and the focused attention stage.
Preattentive Stage: The unconscious detection and separation of features of an item (color, shape, size). Treisman suggests that this happens early in cognitive processing and that individuals are not aware of it occurring, because separating a whole into its parts is counterintuitive. Evidence for the preattentive stage comes from illusory conjunctions, in which features are incorrectly combined when focused attention is absent.
Focused Attention Stage: The combining of all feature identifiers to perceive all parts as one whole. This is possible through prior knowledge and cognitive mapping. When an item is seen within a known location and has features that people have knowledge of, prior knowledge helps bring the features together to make sense of what is perceived. The case of R.M., whose parietal lobe damage resulted in Balint's syndrome, illustrates the role of focused attention in the combination of features.
Through sequencing these steps, parallel and serial search is better exhibited through the formation of conjunctions of objects. Conjunctive searches, according to Treisman, are done through both stages in order to create selective and focused attention on an object, though Duncan and Humphreys would disagree. Duncan and Humphreys' AET understanding of attention maintained that "there is an initial pre-attentive parallel phase of perceptual segmentation and analysis that encompasses all of the visual items present in a scene. At this phase, descriptions of the objects in a visual scene are generated into structural units; the outcome of this parallel phase is a multiple-spatial-scale structured representation. Selective attention intervenes after this stage to select information that will be entered into visual short-term memory." The contrast of the two theories placed a new emphasis on the separation of visual attention tasks alone and those mediated by supplementary cognitive processes. As Raftopoulos summarizes the debate: "Against Treisman's FIT, which posits spatial attention as a necessary condition for detection of objects, Humphreys argues that visual elements are encoded and bound together in an initial parallel phase without focal attention, and that attention serves to select among the objects that result from this initial grouping."
Neuropsychological model
In the twentieth century, the pioneering research of Lev Vygotsky and Alexander Luria led to the three-part model of neuropsychology defining the working brain as being represented by three co-active processes: Attention, Memory, and Activation. A.R. Luria published his well-known book The Working Brain in 1973 as a concise adjunct volume to his previous 1962 book Higher Cortical Functions in Man. In this volume, Luria summarized his three-part global theory of the working brain as being composed of three constantly co-active processes, which he described as the (1) attention system, (2) mnestic (memory) system, and (3) cortical activation system. The two books together are considered by Homskaya's account as "among Luria's major works in neuropsychology, most fully reflecting all the aspects (theoretical, clinical, experimental) of this new discipline." The product of the combined research of Vygotsky and Luria has determined a large part of the contemporary understanding and definition of attention as it is understood at the start of the 21st century.
Multitasking and divided attention
Multitasking can be defined as the attempt to perform two or more tasks simultaneously; however, research shows that when multitasking, people make more mistakes or perform their tasks more slowly. Attention must be divided among all of the component tasks to perform them. In divided attention, individuals attend or give attention to multiple sources of information at once or perform more than one task at the same time.
Older research involved looking at the limits of people performing simultaneous tasks, such as reading a story while listening to and writing something else, or listening to two separate messages through different ears (i.e., dichotic listening). Generally, classical research into attention investigated the ability of people to learn new information when there were multiple tasks to be performed, or to probe the limits of our perception (cf. Donald Broadbent). There is also older literature on people's performance on multiple tasks performed simultaneously, such as driving a car while tuning a radio, or driving while talking on the phone.
The vast majority of current research on human multitasking is based on performance of two tasks simultaneously, usually involving driving while performing another task, such as texting, eating, speaking to passengers in the vehicle, or talking with a friend over a cellphone. This research reveals that the human attentional system has limits for what it can process: driving performance is worse while engaged in other tasks; drivers make more mistakes, brake harder and later, get into more accidents, veer into other lanes, and/or are less aware of their surroundings when engaged in the previously discussed tasks.
There has been little difference found between speaking on a hands-free cell phone or a hand-held cell phone, which suggests that it is the strain on the attentional system that causes problems, rather than what the driver is doing with his or her hands. While speaking with a passenger is as cognitively demanding as speaking with a friend over the phone, passengers are able to change the conversation based upon the needs of the driver. For example, if traffic intensifies, a passenger may stop talking to allow the driver to navigate the increasingly difficult roadway; a conversation partner over a phone would not be aware of the change in environment.
There have been multiple theories regarding divided attention. One, conceived by cognitive scientist Daniel Kahneman, holds that there is a single pool of attentional resources that can be freely divided among multiple tasks. This model seems oversimplified, however, given the different modalities (e.g., visual, auditory, verbal) that are perceived. When two simultaneous tasks use the same modality, such as listening to a radio station and writing a paper, it is much more difficult to concentrate on both, because the tasks are likely to interfere with each other. The specific modality model was theorized by cognitive psychologists David Navon and Daniel Gopher in 1979. However, more recent research using well-controlled dual-task paradigms points to the importance of task demands.
As an alternative, resource theory has been proposed as a more accurate metaphor for explaining divided attention on complex tasks. Resource theory states that as each complex task is automatized, performing that task requires less of the individual's limited-capacity attentional resources. Other variables play a part in our ability to pay attention to and concentrate on many tasks at once. These include, but are not limited to, anxiety, arousal, task difficulty, and skills.
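Kahneman's single-pool idea lends itself to a minimal sketch. The capacity value and the proportional degradation rule below are assumptions for illustration, not empirical parameters.

```python
# Toy single-pool allocation in the spirit of Kahneman's model: one
# fixed capacity divided among concurrent tasks, with every task
# degrading once the combined demand exceeds capacity. The capacity
# units and the linear degradation rule are illustrative assumptions.
CAPACITY = 100.0

def allocate(demands: dict[str, float]) -> dict[str, float]:
    """Return the fraction of each task's demand that is actually met
    (1.0 = full performance)."""
    total = sum(demands.values())
    scale = min(1.0, CAPACITY / total)  # no penalty until capacity is exceeded
    return {task: scale for task in demands}

print(allocate({"driving": 60, "radio": 20}))       # both fully served
print(allocate({"driving": 60, "phone call": 70}))  # both degraded
```

The second call mirrors the driving findings above: once combined demand exceeds the pool, performance on every concurrent task suffers.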
Simultaneous
Simultaneous attention is a type of attention in which one attends to multiple events at the same time. Simultaneous attention is demonstrated by children in Indigenous communities, who learn through this type of attention to their surroundings. Simultaneous attention is present in the ways in which children of Indigenous backgrounds interact both with their surroundings and with other individuals. Simultaneous attention requires focus on multiple simultaneous activities or occurrences. This differs from multitasking, which is characterized by alternating attention and focus between multiple activities, or halting one activity before switching to the next.
Simultaneous attention involves uninterrupted attention to several activities occurring at the same time. Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. Indigenous-heritage toddlers and caregivers in San Pedro were observed to frequently coordinate their activities with other members of a group in ways parallel to a model of simultaneous attention, whereas middle-class European-descent families in the U.S. would move back and forth between events. Research concludes that children with close ties to Indigenous American roots have a strong tendency to be especially keen, broad observers. This points to a strong cultural difference in attention management.
Alternative topics and discussions
Overt and covert orienting
Attention may be differentiated into "overt" versus "covert" orienting.
Overt orienting is the act of selectively attending to an item or location over others by moving the eyes to point in that direction. Overt orienting can be directly observed in the form of eye movements. Although overt eye movements are quite common, a distinction can be made between two types: reflexive and controlled. Reflexive movements are commanded by the superior colliculus of the midbrain. These movements are fast and are activated by the sudden appearance of stimuli. In contrast, controlled eye movements are commanded by areas in the frontal lobe. These movements are slow and voluntary.
Covert orienting is the act of mentally shifting one's focus without moving one's eyes. Simply, it is a change in attention that is not attributable to overt eye movements. Covert orienting has the potential to affect the output of perceptual processes by governing attention to particular items or locations (for example, the activity of a V4 neuron whose receptive field lies on an attended stimulus will be enhanced by covert attention) but does not influence the information that is processed by the senses. Researchers often use "filtering" tasks to study the role of covert attention in selecting information. These tasks often require participants to observe a number of stimuli, but attend to only one. The current view is that visual covert attention is a mechanism for quickly scanning the field of view for interesting locations. This shift in covert attention is linked to eye movement circuitry that sets up a slower saccade to that location.
There are studies that suggest the mechanisms of overt and covert orienting may not be controlled as separately and independently as previously believed. Central mechanisms that may control covert orienting, such as the parietal lobe, also receive input from subcortical centres involved in overt orienting. In support of this, general theories of attention actively assume that bottom-up (reflexive) processes and top-down (voluntary) processes converge on a common neural architecture, in that they control both covert and overt attentional systems. For example, if individuals attend to the right-hand corner of the visual field, movement of the eyes in that direction may have to be actively suppressed.
Covert attention has been argued to reflect the existence of processes "programming explicit ocular movement". However, this has been questioned on the grounds that N2, "a neural measure of covert attentional allocation—does not always precede eye movements". However, the researchers acknowledge, "it may be impossible to definitively rule out the possibility that some kind of shift of covert attention precedes every shift of overt attention".
Exogenous and endogenous orienting
Orienting attention is vital and can be controlled through external (exogenous) or internal (endogenous) processes. However, comparing these two processes is challenging because external signals do not operate completely exogenously, but will only summon attention and eye movements if they are important to the subject.
Exogenous (from Greek exo, meaning "outside", and genein, meaning "to produce") orienting is frequently described as being under control of a stimulus. Exogenous orienting is considered to be reflexive and automatic and is caused by a sudden change in the periphery. This often results in a reflexive saccade. Since exogenous cues are typically presented in the periphery, they are referred to as peripheral cues. Exogenous orienting can even be observed when individuals are aware that the cue will not relay reliable, accurate information about where a target is going to occur. This means that the mere presence of an exogenous cue will affect the response to other stimuli that are subsequently presented in the cue's previous location.
Several studies have investigated the influence of valid and invalid cues. They concluded that valid peripheral cues benefit performance, for instance when the peripheral cues are brief flashes at the relevant location before the onset of a visual stimulus. Psychologists Michael Posner and Yoav Cohen (1984) noted a reversal of this benefit takes place when the interval between the onset of the cue and the onset of the target is longer than about 300 ms. The phenomenon of valid cues producing longer reaction times than invalid cues is called inhibition of return.
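The cueing benefit and its reversal can be summarized as a reaction-time function of the cue–target interval (SOA). A minimal sketch follows; all millisecond values are invented placeholders, not data from Posner and Cohen.

```python
# Illustrative reaction-time pattern in a Posner cueing task. At short
# cue-target intervals (SOA), a valid peripheral cue speeds responses;
# beyond roughly 300 ms the effect reverses (inhibition of return) and
# the cued location becomes slower. All millisecond values here are
# made-up placeholders for illustration.
def reaction_time(valid_cue: bool, soa_ms: float) -> float:
    base_rt = 350.0  # hypothetical baseline RT in milliseconds
    if soa_ms < 300:
        return base_rt - 30 if valid_cue else base_rt  # early cueing benefit
    # After ~300 ms, responses to the cued location are inhibited.
    return base_rt + 25 if valid_cue else base_rt

for soa in (100, 500):
    print(f"SOA {soa} ms: valid={reaction_time(True, soa)}, "
          f"invalid={reaction_time(False, soa)}")
```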
Endogenous (from Greek endo, meaning "within" or "internally") orienting is the intentional allocation of attentional resources to a predetermined location or space. Simply stated, endogenous orienting occurs when attention is oriented according to an observer's goals or desires, allowing the focus of attention to be manipulated by the demands of a task. In order to have an effect, endogenous cues must be processed by the observer and acted upon purposefully. These cues are frequently referred to as central cues. This is because they are typically presented at the center of a display, where an observer's eyes are likely to be fixated. Central cues, such as an arrow or digit presented at fixation, tell observers to attend to a specific location.
When examining differences between exogenous and endogenous orienting, some researchers suggest that there are four differences between the two kinds of cues:
exogenous orienting is less affected by cognitive load than endogenous orienting;
observers are able to ignore endogenous cues but not exogenous cues;
exogenous cues have bigger effects than endogenous cues; and
expectancies about cue validity and predictive value affect endogenous orienting more than exogenous orienting.
There exist both overlaps and differences in the areas of the brain that are responsible for endogenous and exogenous orienting. Another approach to this discussion has been covered under the topic heading of "bottom-up" versus "top-down" orientations to attention. Researchers of this school have described two different aspects of how the mind focuses attention on items present in the environment. The first aspect is called bottom-up processing, also known as stimulus-driven attention or exogenous attention. This describes attentional processing driven by the properties of the objects themselves. Some processes, such as motion or a sudden loud noise, can attract our attention in a pre-conscious, or non-volitional, way. We attend to them whether we want to or not. These aspects of attention are thought to involve parietal and temporal cortices, as well as the brainstem. More recent experimental evidence supports the idea that the primary visual cortex creates a bottom-up saliency map, which is received by the superior colliculus in the midbrain area to guide attention or gaze shifts.
The second aspect is called top-down processing, also known as goal-driven, endogenous attention, attentional control or executive attention. This aspect of our attentional orienting is under the control of the person who is attending. It is mediated primarily by the frontal cortex and basal ganglia as one of the executive functions. Research has shown that it is related to other aspects of the executive functions, such as working memory, and conflict resolution and inhibition.
Influence of processing load
A "hugely influential" theory regarding selective attention is the perceptual load theory, which states that there are two mechanisms that affect attention: cognitive and perceptual. The perceptual mechanism considers the subject's ability to perceive or ignore stimuli, both task-related and non task-related. Studies show that if there are many stimuli present (especially if they are task-related), it is much easier to ignore the non-task related stimuli, but if there are few stimuli the mind will perceive the irrelevant stimuli as well as the relevant. The cognitive mechanism refers to the actual processing of the stimuli. Studies regarding this showed that the ability to process stimuli decreased with age, meaning that younger people were able to perceive more stimuli and fully process them, but were likely to process both relevant and irrelevant information, while older people could process fewer stimuli, but usually processed only relevant information.
Some people can process multiple stimuli; e.g., trained Morse code operators have been able to copy 100% of a message while carrying on a meaningful conversation. This relies on the reflexive response due to "overlearning" the skill of Morse code reception/detection/transcription, so that it becomes an autonomous function requiring no specific attention to perform. This overtraining of the brain comes as the "practice of a skill [surpasses] 100% accuracy," allowing the activity to become autonomic, while the mind has room to process other actions simultaneously.
Perceptual load theory assumes that attentional resources have a limited capacity and that all of those resources are allocated. A limitation of this account, however, is that performance is typically measured only through accuracy and reaction time (RT). Because the measures reported in the literature are restricted to such outcome scores, they say little about the temporal and spatial distribution of attention; analyzing only how effectively and how quickly a task is completed gives a limited picture of the overall cognition involved in processing multiple stimuli through perception.
Clinical model
Attention is best described as the sustained focus of cognitive resources on information while filtering or ignoring extraneous information. Attention is a very basic function that is often a precursor to all other neurological/cognitive functions. As is frequently the case, clinical models of attention differ from investigation models. One of the most widely used models for the evaluation of attention in patients with very different neurologic pathologies is the model of Sohlberg and Mateer. This hierarchical model is based on the recovery of attentional processes in brain-damaged patients after coma. Five kinds of activities of increasing difficulty are described in the model, connected with the activities patients are able to do as their recovery process advances.
Focused attention: The ability to respond discretely to specific sensory stimuli.
Sustained attention (vigilance and concentration): The ability to maintain a consistent behavioral response during continuous and repetitive activity.
Selective attention: The ability to maintain a behavioral or cognitive set in the face of distracting or competing stimuli. Therefore, it incorporates the notion of "freedom from distractibility."
Alternating attention: The ability of mental flexibility that allows individuals to shift their focus of attention and move between tasks having different cognitive requirements.
Divided attention: This refers to the ability to respond simultaneously to multiple tasks or multiple task demands.
This model has been shown to be very useful in evaluating attention in very different pathologies, correlates strongly with daily difficulties, and is especially helpful in designing stimulation programs such as attention process training, a rehabilitation program for neurological patients by the same authors.
Other descriptors for types of attention
Mindfulness: Mindfulness has been conceptualized as a clinical model of attention. Mindfulness practices are clinical interventions that emphasize training attention functions.
Vigilant attention: Remaining focused on a non-arousing stimulus or uninteresting task for a sustained period is far more difficult than attending to arousing stimuli and interesting tasks, and requires a specific type of attention called 'vigilant attention'. Vigilant attention is thus the ability to give sustained attention to a stimulus or task that might ordinarily be insufficiently engaging to prevent our attention from being distracted by other stimuli or tasks.
Neural correlates
Most experiments show that one neural correlate of attention is enhanced firing. If a neuron has a certain response to a stimulus when an animal is not attending to that stimulus, then when the animal does attend to the stimulus, the neuron's response is enhanced, even if the physical characteristics of the stimulus remain the same.
In a 2007 review, Professor Eric Knudsen describes a more general model which identifies four core processes of attention, with working memory at the center:
Working memory temporarily stores information for detailed analysis.
Competitive selection is the process that determines which information gains access to working memory.
Through top-down sensitivity control, higher cognitive processes can regulate signal intensity in information channels that compete for access to working memory, and thus give them an advantage in the process of competitive selection. Through top-down sensitivity control, the momentary content of working memory can influence the selection of new information, and thus mediate voluntary control of attention in a recurrent loop (endogenous attention).
Bottom-up saliency filters automatically enhance the response to infrequent stimuli, or stimuli of instinctive or learned biological relevance (exogenous attention).
Neurally, at different hierarchical levels spatial maps can enhance or inhibit activity in sensory areas, and induce orienting behaviors like eye movement.
At the top of the hierarchy, the frontal eye fields (FEF) and the dorsolateral prefrontal cortex contain a retinocentric spatial map. Microstimulation in the FEF induces monkeys to make a saccade to the relevant location. Stimulation at levels too low to induce a saccade will nonetheless enhance cortical responses to stimuli located in the relevant area.
At the next lower level, a variety of spatial maps are found in the parietal cortex. In particular, the lateral intraparietal area (LIP) contains a saliency map and is interconnected both with the FEF and with sensory areas.
Exogenous attentional guidance in humans and monkeys is mediated by a bottom-up saliency map in the primary visual cortex. In lower vertebrates, this saliency map is more likely located in the superior colliculus (optic tectum).
Certain automatic responses that influence attention, like orienting to a highly salient stimulus, are mediated subcortically by the superior colliculi.
At the neural network level, it is thought that processes like lateral inhibition mediate the process of competitive selection.
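A winner-take-all loop with lateral inhibition gives a rough sketch of how competitive selection might work at the network level. The update rule and parameter values below are illustrative assumptions, not a model from Knudsen's review.

```python
# Sketch of competitive selection via lateral inhibition: each channel
# is driven by its input and suppressed by the strongest of the other
# channels; iterating drives a winner-take-all outcome. The inhibition
# strength and input values are illustrative assumptions.
inputs = [0.5, 0.9, 0.6]   # bottom-up signal strength per channel
activity = inputs[:]
INHIBITION = 0.8           # strength of mutual suppression

for _ in range(20):
    activity = [
        max(0.0, inputs[i]
            - INHIBITION * max(a for j, a in enumerate(activity) if j != i))
        for i in range(len(activity))
    ]

print([round(a, 2) for a in activity])  # -> [0.0, 0.9, 0.0]: strongest wins
```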
In many cases attention produces changes in the EEG. Many animals, including humans, produce gamma waves (40–60 Hz) when focusing attention on a particular object or activity.
Another commonly used model for the attention system has been put forth by researchers such as Michael Posner. He divides attention into three functional components: alerting, orienting, and executive attention that can also interact and influence each other.
Alerting is the process involved in becoming and staying attentive toward the surroundings. It appears to exist in the frontal and parietal lobes of the right hemisphere, and is modulated by norepinephrine.
Orienting is the directing of attention to a specific stimulus.
Executive attention is used when there is a conflict between multiple attention cues. It is essentially the same as the central executive in Baddeley's model of working memory. The Eriksen flanker task has shown that the executive control of attention may take place in the anterior cingulate cortex.
Cultural variation
Children appear to develop patterns of attention related to the cultural practices of their families, communities, and the institutions in which they participate.
In 1955, Jules Henry suggested that there are societal differences in sensitivity to signals from many ongoing sources that call for the awareness of several levels of attention simultaneously. He tied his speculation to ethnographic observations of communities in which children are involved in a complex social community with multiple relationships.
Many Indigenous children in the Americas predominantly learn by observing and pitching in. Several studies support the finding that the use of keen attention towards learning is much more common in Indigenous communities of North and Central America than in middle-class European-American settings. This is a direct result of the Learning by Observing and Pitching In model.
Keen attention is both a requirement and result of learning by observing and pitching-in. Incorporating the children in the community gives them the opportunity to keenly observe and contribute to activities that were not directed towards them. It can be seen from different Indigenous communities and cultures, such as the Mayans of San Pedro, that children can simultaneously attend to multiple events. Most Maya children have learned to pay attention to several events at once in order to make useful observations.
One example is simultaneous attention which involves uninterrupted attention to several activities occurring at the same time. Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. San Pedro toddlers and caregivers frequently coordinated their activities with other members of a group in multiway engagements rather than in a dyadic fashion. Research concludes that children with close ties to Indigenous American roots have a high tendency to be especially keen observers.
This learning by observing and pitching-in model requires active levels of attention management. The child is present while caretakers engage in daily activities and responsibilities such as weaving, farming, and other skills necessary for survival. Being present allows the child to focus their attention on the actions being performed by their parents, elders, and/or older siblings. In order to learn in this way, keen attention and focus are required. Eventually the child is expected to be able to perform these skills themselves.
Modelling
In the domain of computer vision, efforts have been made to model the mechanism of human attention, especially the bottom-up attentional mechanism and its semantic significance in the classification of video content. Both spatial attention and temporal attention have been incorporated in such classification efforts.
Generally speaking, there are two kinds of models to mimic the bottom-up salience mechanism in static images. One is based on spatial contrast analysis: for example, a center–surround mechanism has been used to define salience across scales, inspired by the putative neural mechanism. It has also been hypothesized that some visual inputs are intrinsically salient in certain background contexts and that these are actually task-independent. This model has established itself as the exemplar for salience detection and is consistently used for comparison in the literature. The other kind of model is based on frequency-domain analysis. This approach was first proposed by Hou et al. and is called SR (spectral residual); the PQFT method was introduced later. Both SR and PQFT only use the phase information. In 2012, the HFT method was introduced, which makes use of both the amplitude and the phase information. The Neural Abstraction Pyramid is a hierarchical recurrent convolutional model, which incorporates bottom-up and top-down flow of information to iteratively interpret images.
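The SR method is compact enough to sketch. The version below follows the usual recipe (the log-amplitude spectrum minus its local average, recombined with the original phase); the filter sizes are typical choices and should be read as assumptions rather than values mandated by the paper.

```python
# Sketch of the spectral residual (SR) saliency method of Hou et al.:
# the "residual" of the log-amplitude spectrum, recombined with the
# original phase, highlights salient regions. Filter sizes here are
# common choices, assumed rather than taken from the paper.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(image: np.ndarray) -> np.ndarray:
    spectrum = np.fft.fft2(image)
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    # Residual = log amplitude minus its local average.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(saliency, sigma=2.5)

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0  # a lone bright square should pop out
sal = spectral_residual_saliency(img)
print(np.unravel_index(sal.argmax(), sal.shape))  # peak lies at the square
```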
Hemispatial neglect
Hemispatial neglect, also called unilateral neglect, often occurs when people have damage to the right hemisphere of their brain. This damage often leads to a tendency to ignore the left side of one's body or even the left side of an object that can be seen. Damage to the left side of the brain (the left hemisphere) rarely yields significant neglect of the right side of the body or object in the person's local environments.
The effects of spatial neglect, however, may vary and differ depending on what area of the brain was damaged. Damage to different neural substrates can result in different types of neglect. Attention disorders (lateralized and nonlaterized) may also contribute to the symptoms and effects. Much research has asserted that damage to gray matter within the brain results in spatial neglect.
New technology has yielded more information, such that there is a large, distributed network of frontal, parietal, temporal, and subcortical brain areas that have been tied to neglect. This network can be related to other research as well; the dorsal attention network is tied to spatial orienting. Damage to this network may result in patients neglecting their left side when distracted by their right side or by an object on their right side.
Attention in social contexts
Social attention is one special form of attention that involves the allocation of limited processing resources in a social context. Previous studies on social attention often regarded how attention is directed toward socially relevant stimuli such as faces and the gaze directions of other individuals. In contrast to attending-to-others, a different line of research has shown that self-related information such as one's own face and name automatically captures attention and is preferentially processed compared to other-related information. These contrasting effects between attending-to-others and attending-to-self prompted a synthetic view in a recent Opinion article proposing that social attention operates at two polarizing states: at one extreme, the individual tends to attend to the self and prioritize self-related information over others', and, at the other extreme, attention is allocated to other individuals to infer their intentions and desires. Attending-to-self and attending-to-others mark the two ends of an otherwise continuous spectrum of social attention. For a given behavioral context, the mechanisms underlying these two polarities might interact and compete with each other in order to determine a saliency map of social attention that guides our behaviors. An imbalanced competition between these two behavioral and cognitive processes has been linked to cognitive disorders and neurological symptoms such as autism spectrum disorders and Williams syndrome.
Distracting factors
According to Daniel Goleman's book, Focus: The Hidden Driver of Excellence, there are two types of distracting factors affecting focus – sensory and emotional.
A sensory distracting factor would be, for example, the white field surrounding this text, which a person neglects while reading the article.
An emotional distracting factor would be when someone is focused on answering an email and somebody shouts their name. It would be almost impossible to ignore the voice speaking it: attention is immediately directed toward the source. Positive emotions have also been found to affect attention. Induction of happiness has led to increased response times and an increase in inaccurate responses in the face of irrelevant stimuli. There are two possible theories as to why emotions might make one more susceptible to distracting stimuli. One is that emotions take up too much of one's cognitive resources, making it harder to control the focus of attention. The other is that emotions make it harder to filter out distractions, particularly in the case of positive emotions, due to a feeling of security.
Another factor that distracts attention processes is insufficient sleep. Sleep deprivation has been found to impair cognition, specifically performance in divided attention. Divided attention is possibly linked with circadian processes.
Failure to attend
Inattentional blindness was first introduced in 1998 by Arien Mack and Irvin Rock. Their studies show that when people are focused on specific stimuli, they often miss other stimuli that are clearly present. Though actual blindness is not occurring here, the blindness that happens is due to the perceptual load of what is being attended to. Building on the experiment performed by Mack and Rock, Ula Cartwright-Finch and Nilli Lavie tested participants with a perceptual task. They presented subjects with a cross, one arm being longer than the other, for five trials. On the sixth trial, a white square was added to the top left of the screen. The results showed that out of 10 participants, only 2 (20%) actually saw the square. This suggests that the more intently focus is directed at judging the length of the cross's arms, the more likely someone is to altogether miss an object that is in plain sight.
Change blindness was first tested by Rensink and coworkers in 1997. Their studies show that people have difficulty detecting changes from scene to scene, due to intense focus on one thing or a lack of attention overall. Rensink tested this by presenting a picture, then a blank field, and then the same picture with an item missing. The results showed that the pictures had to be alternated back and forth a good number of times for participants to notice the difference. This idea is well illustrated by films that contain continuity errors: many people do not pick up on the differences, when in reality the changes tend to be significant.
History of the study
Philosophical period
Psychologist Daniel E. Berlyne credits the first extended treatment of attention to philosopher Nicolas Malebranche in his work "The Search After Truth". "Malebranche held that we have access to ideas, or mental representations of the external world, but not direct access to the world itself." Thus in order to keep these ideas organized, attention is necessary. Otherwise we will confuse these ideas. Malebranche writes in "The Search After Truth", "because it often happens that the understanding has only confused and imperfect perceptions of things, it is truly a cause of our errors.... It is therefore necessary to look for means to keep our perceptions from being confused and imperfect. And, because, as everyone knows, there is nothing that makes them clearer and more distinct than attentiveness, we must try to find the means to become more attentive than we are". According to Malebranche, attention is crucial to understanding and keeping thoughts organized.
Philosopher Gottfried Wilhelm Leibniz introduced the concept of apperception to this philosophical approach to attention. Apperception refers to "the process by which new experience is assimilated to and transformed by the residuum of past experience of an individual to form a new whole." Apperception is required for a perceived event to become a conscious event. Leibniz emphasized a reflexive, involuntary view of attention known as exogenous orienting. However, there is also endogenous orienting, which is voluntary and directed attention. Philosopher Johann Friedrich Herbart agreed with Leibniz's view of apperception; however, he expounded on it by saying that new experiences had to be tied to ones already existing in the mind. Herbart was also the first person to stress the importance of applying mathematical modeling to the study of psychology.
Throughout the philosophical era, various thinkers made significant contributions to the field of attention studies, beginning with research on the extent of attention and how attention is directed. In the beginning of the 19th century, it was thought that people were not able to attend to more than one stimulus at a time. However, with the research contributions of Sir William Hamilton, 9th Baronet, this view changed. Hamilton proposed a view of attention that likened its capacity to holding marbles: one can only hold a certain number of marbles at a time before they start to spill over. His view states that we can attend to more than one stimulus at once. William Stanley Jevons later expanded this view and stated that we can attend to up to four items at a time.
1860–1909
This period of attention research took the focus from conceptual findings to experimental testing. It also involved psychophysical methods that allowed measurement of the relation between physical stimulus properties and the psychological perceptions of them. This period covers the development of attentional research from the founding of psychology to 1909.
Wilhelm Wundt introduced the study of attention to the field of psychology. Wundt measured mental processing speed by likening it to differences in stargazing measurements. Astronomers of the time measured the time it took for stars to travel across the field of a telescope. Among these measurements there were personal differences: the recorded times differed systematically from one astronomer to another, resulting in different reports from each. To correct for this, a personal equation was developed. Wundt applied this to mental processing speed. Wundt realized that the time it takes to see the stimulus of the star and write down the time was being called an "observation error" but actually was the time it takes to switch voluntarily one's attention from one stimulus to another. Wundt called his school of psychology voluntarism. It was his belief that psychological processes can only be understood in terms of goals and consequences.
Franciscus Donders used mental chronometry to study attention and it was considered a major field of intellectual inquiry by authors such as Sigmund Freud. Donders and his students conducted the first detailed investigations of the speed of mental processes. Donders measured the time required to identify a stimulus and to select a motor response. This was the time difference between stimulus discrimination and response initiation. Donders also formalized the subtractive method which states that the time for a particular process can be estimated by adding that process to a task and taking the difference in reaction time between the two tasks. He also differentiated between three types of reactions: simple reaction, choice reaction, and go/no-go reaction.
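The subtractive method amounts to simple arithmetic over mean reaction times. In the sketch below, the millisecond values are invented for illustration.

```python
# Donders' subtractive method: estimate the duration of a processing
# stage as the difference in reaction time between a task that includes
# the stage and one that does not. All RT values are illustrative.
simple_rt = 220    # ms: respond to any stimulus (detection only)
go_no_go_rt = 300  # ms: adds stimulus discrimination
choice_rt = 380    # ms: adds response selection as well

discrimination_time = go_no_go_rt - simple_rt  # estimated discrimination stage
selection_time = choice_rt - go_no_go_rt       # estimated selection stage
print(discrimination_time, "ms discrimination,", selection_time, "ms selection")
```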
Hermann von Helmholtz also contributed to the field of attention relating to the extent of attention. Von Helmholtz stated that it is possible to focus on one stimulus and still perceive or ignore others. An example of this is being able to focus on the letter u in the word house and still perceiving the letters h, o, s, and e.
One major debate in this period was whether it was possible to attend to two things at once (split attention). Walter Benjamin described this experience as "reception in a state of distraction." This disagreement could only be resolved through experimentation.
In 1890, William James, in his textbook The Principles of Psychology, remarked: "Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German."
James differentiated between sensorial attention and intellectual attention. Sensorial attention is when attention is directed to objects of sense, stimuli that are physically present. Intellectual attention is attention directed to ideal or represented objects, stimuli that are not physically present. James also distinguished between immediate and derived attention: attention to the present versus attention to something not physically present. According to James, attention has five major effects: it works to make us perceive, conceive, distinguish, and remember, and it shortens reaction time.
1910–1949
During this period, research in attention waned and interest in behaviorism flourished, leading some, like Ulric Neisser, to believe that in this period "There was no research on attention". However, Jersild published very important work on "Mental Set and Shift" in 1927. He stated, "The fact of mental set is primary in all conscious activity. The same stimulus may evoke any one of a large number of responses depending upon the contextual setting in which it is placed". This research found that the time to complete a list was longer for mixed lists than for pure lists. For example, a pure list might consist only of names of animals, while a mixed list of the same size might contain names of animals, books, makes and models of cars, and types of fruit; the mixed list takes longer to process. This is task switching.
In 1931, Telford discovered the psychological refractory period. The stimulation of neurons is followed by a refractory phase during which neurons are less sensitive to stimulation. In 1935 John Ridley Stroop developed the Stroop Task which elicited the Stroop Effect. Stroop's task showed that irrelevant stimulus information can have a major impact on performance. In this task, subjects looked at a list of color words, with each word printed in an ink color different from the color it names. For example, the word Blue would be printed in orange ink, Pink in black, and so on.
Example: Blue Purple Red Green Purple Green
Subjects were then instructed to say the name of the ink color and ignore the text. It took 110 seconds to complete a list of this type compared to 63 seconds to name the colors when presented in the form of solid squares. The naming time nearly doubled in the presence of conflicting color words, an effect known as the Stroop Effect.
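Stroop's timing contrast can be restated as a per-item cost. A minimal sketch follows; the 100-item list length is the commonly reported size of Stroop's lists and is treated here as an assumption.

```python
# Per-item reading of Stroop's timing data. The 100-item list length
# is the commonly reported size of Stroop's materials; treat it as an
# assumption here rather than a figure from the text above.
items = 100
incongruent_total_s = 110.0  # naming ink colors of conflicting words
control_total_s = 63.0       # naming colors of solid squares

per_item_cost = (incongruent_total_s - control_total_s) / items
slowdown = incongruent_total_s / control_total_s - 1
print(f"interference: {per_item_cost * 1000:.0f} ms per item "
      f"({slowdown:.0%} slower)")
```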
1950–1974
In the 1950s, research psychologists renewed their interest in attention when the dominant epistemology shifted from positivism (i.e., behaviorism) to realism during what has come to be known as the "cognitive revolution". The cognitive revolution admitted unobservable cognitive processes like attention as legitimate objects of scientific study.
Modern research on attention began with the analysis of the "cocktail party problem" by Colin Cherry in 1953. At a cocktail party how do people select the conversation that they are listening to and ignore the rest? This problem is at times called "focused attention", as opposed to "divided attention". Cherry performed a number of experiments which became known as dichotic listening and were extended by Donald Broadbent and others. In a typical experiment, subjects would use a set of headphones to listen to two streams of words in different ears and selectively attend to one stream. After the task, the experimenter would question the subjects about the content of the unattended stream.
Broadbent's Filter Model of Attention states that information is held in a pre-attentive temporary store, and only sensory events that have some physical feature in common are selected to pass into the limited capacity processing system. This implies that the meaning of unattended messages is not identified. Also, a significant amount of time is required to shift the filter from one channel to another. Experiments by Gray and Wedderburn and later Anne Treisman pointed out various problems in Broadbent's early model and eventually led to the Deutsch–Norman model in 1968. In this model, no signal is filtered out, but all are processed to the point of activating their stored representations in memory. The point at which attention becomes "selective" is when one of the memory representations is selected for further processing. At any time, only one can be selected, resulting in the attentional bottleneck.
This debate became known as the early-selection vs. late-selection debate. In the early-selection models (first proposed by Donald Broadbent), attention shuts down (in Broadbent's model) or attenuates (in Treisman's refinement) processing in the unattended ear before the mind can analyze its semantic content. In the late-selection models (first proposed by J. Anthony Deutsch and Diana Deutsch), the content in both ears is analyzed semantically, but the words in the unattended ear cannot access consciousness. Lavie's perceptual load theory, however, "provided [an] elegant solution to" what had once been a "heated debate".
See also
Alertness
Attention deficit hyperactivity disorder
Attention restoration theory
Attention seeking
Attention span
Attention theft
Attentional control
Attentional shift
Binding problem
Cognitive inhibition
Consciousness
Crossmodal attention
Flow (psychology)
Focusing (psychotherapy)
Informal learning
Joint attention
Immanuel Kant
Meditation
Mindfulness
Motivation
Nonverbal communication
Observational Learning
Ovsiankina effect
Perceptual learning#The role of attention
Philosophy
Salience (also called saliency)
Self
Split attention effect
Vigilance
Visual search
Visual spatial attention
Visual temporal attention
Working memory
References
Further reading
Attention deficit hyperactivity disorder
Behavioral concepts
Mental processes
Neuropsychological assessment
Unsolved problems in neuroscience
Philosophy of perception
Concepts in the philosophy of mind
McGuire's Motivations

McGuire's Psychological Motivations is a classification system that organizes theories of motives into 16 categories. The system helps marketers to isolate motives likely to be involved in various consumption situations.
Categories
McGuire first divided motivation into two main categories using two criteria:
Is the mode of motivation cognitive or affective?
Is the motive focused on preservation of the status quo or on growth?
Then, for each division in each category, he distinguished two further elements; the resulting grid of sixteen cells is sketched after the list of divisions below.
Is this behavior actively initiated or in response to the environment?
Does this behavior help the individual achieve a new internal or a new external relationship to the environment?
Divisions of categories
Cognitive Preservation Motives
a. Need for Consistency (active, internal)
b. Need for Attribution (active, external)
c. Need to categorize (passive, internal)
d. Need for objectification (passive, external)
Cognitive Growth Motives
a. Need for Autonomy (active, internal)
b. Need for Stimulation (active, external)
c. Teleological Need (passive, internal)
d. Utilitarian Need (passive, external)
Affective Preservation Motives
a. Need for Tension Reduction
b. Need for Expression (active, external)
c. Need for Ego Defense (passive, internal)
d. Need for Reinforcement (passive, external)
Affective Growth Motives
a. Need for Assertion (active, internal)
b. Need for Affiliation (active, external)
c. Need for Identification (passive, internal)
d. Need for Modeling (passive, external)
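The sixteen categories follow mechanically from the four binary criteria. The sketch below encodes only the grid itself; the motive names are those listed above.

```python
# McGuire's scheme as a data structure: 2 x 2 top-level categories
# (mode x orientation), each subdivided 2 x 2 (initiation x relation),
# giving 16 cells. Only the grid is encoded here; the motive names
# attached to each cell are listed in the text above.
from itertools import product

modes = ("cognitive", "affective")
orientations = ("preservation", "growth")
initiations = ("active", "passive")
relations = ("internal", "external")

cells = list(product(modes, orientations, initiations, relations))
assert len(cells) == 16
for cell in cells[:4]:  # first few cells of the grid
    print(cell)
```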
See also
Inoculation theory
References
Hawkins, D, Mothersbaugh, D, & Best, R (2007). Consumer Behaviour: Building Marketing Strategy. New York City: McGraw-Hill.
Loudon, D. L. & Della Bitta, A. J. 1993. Consumer Behavior: Concepts and applications. 4th ed. Singapore: McGraw-Hill. p 326-328.
Classification systems
Marketing
Motivational theories
Cybernetics: Or Control and Communication in the Animal and the Machine

Cybernetics: Or Control and Communication in the Animal and the Machine is a book written by Norbert Wiener and published in 1948. It is the first public usage of the term "cybernetics" to refer to self-regulating mechanisms. The book laid the theoretical foundation for servomechanisms (whether electrical, mechanical or hydraulic), automatic navigation, analog computing, artificial intelligence, neuroscience, and reliable communications.
A second edition with minor changes and two additional chapters was published in 1961.
Reception
The book aroused a considerable amount of public discussion and comment at the time of publication, unusual for a predominantly technical subject.
"[A] beautifully written book, lucid, direct, and, despite its complexity, as readable by the layman as the trained scientist, if the former is willing to forego attempts to understand mathematical formulas."
"One of the most influential books of the twentieth century, Cybernetics has been acclaimed as one of the 'seminal works' comparable in ultimate importance to Galileo or Malthus or Rousseau or Mill."
"Its scope and implications are breathtaking, and leaves the reviewer with the conviction that it is a major contribution to contemporary thought."
"Cybernetics... is worthwhile for its historical value alone. But it does much more by inspiring the contemporary roboticist to think broadly and be open to innovative applications."
The public interest aroused by this book inspired Wiener to address the sociological and political issues raised in a book targeted at the non-technical reader, resulting in the publication in 1950 of The Human Use of Human Beings.
Table of contents
Introduction
1. Newtonian and Bergsonian Time
2. Groups and Statistical Mechanics
3. Time Series, Information, and Communication
4. Feedback and Oscillation
5. Computing Machines and the Nervous System
6. Gestalt and Universals
7. Cybernetics and Psychopathology
8. Information, Language, and Society
Supplementary chapters in the second edition
9. On Learning and Self-Reproducing Machines
10. Brain Waves and Self-Organising Systems
Synopsis
Introduction
Wiener recounts that the origin of the ideas in this book is a ten-year-long series of meetings at the Harvard Medical School where medical scientists and physicians discussed scientific method with mathematicians, physicists and engineers. He details the interdisciplinary nature of his approach and refers to his work with Vannevar Bush and his differential analyzer (a primitive analog computer), as well as his early thoughts on the features and design principles of future digital calculating machines. He traces the origins of cybernetic analysis to the philosophy of Leibniz, citing his work on universal symbolism and a calculus of reasoning.
Newtonian and Bergsonian Time
The theme of this chapter is an exploration of the contrast between time-reversible processes governed by Newtonian mechanics and time-irreversible processes in accordance with the Second Law of Thermodynamics. In the opening section he contrasts the predictable nature of astronomy with the challenges posed in meteorology, anticipating future developments in Chaos theory. He points out that in fact, even in the case of astronomy, tidal forces between the planets introduce a degree of decay over cosmological time spans, and so strictly speaking Newtonian mechanics do not precisely apply.
Groups and Statistical Mechanics
This chapter opens with a review of the – entirely independent and apparently unrelated – work of two scientists in the early 20th century: Willard Gibbs and Henri Lebesgue. Gibbs was a physicist working on a statistical approach to Newtonian dynamics and thermodynamics, and Lebesgue was a pure mathematician working on the theory of trigonometric series. Wiener suggests that the questions asked by Gibbs find their answer in the work of Lebesgue. Wiener claims that the Lebesgue integral had unexpected but important implications in establishing the validity of Gibbs' work on the foundations of statistical mechanics. The notions of average and measure in the sense established by Lebesgue were urgently needed to provide a rigorous proof of Gibbs' ergodic hypothesis.
The concept of entropy in statistical mechanics is developed, and its relationship to the way the concept is used in thermodynamics. By an analysis of the thought experiment Maxwell's demon, he relates the concept of entropy to that of information.
Time Series, Information, and Communication
This is one of the more mathematically intensive chapters in the book. It deals with the transmission or recording of a varying analog signal as a sequence of numerical samples, and lays much of the groundwork for the development of digital audio and telemetry over the past six decades. It also examines the relationship between bandwidth, noise, and information capacity, as developed by Wiener in collaboration with Claude Shannon. This chapter and the next one form the core of the foundational principles for the developments of automation systems, digital communications and data processing which have taken place over the decades since the book was published.
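The relationship between bandwidth, noise, and information capacity is usually summarized by the Shannon–Hartley formula, C = B log2(1 + S/N). A worked example follows; the channel figures are illustrative, not taken from the book.

```python
# Channel capacity from the Shannon-Hartley theorem,
# C = B * log2(1 + S/N), relating bandwidth, signal-to-noise ratio,
# and achievable information rate. The line parameters are illustrative.
import math

bandwidth_hz = 3_000  # roughly a voice telephone channel
snr_db = 30.0         # signal-to-noise ratio in decibels

snr_linear = 10 ** (snr_db / 10)
capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"capacity ~ {capacity_bps / 1000:.1f} kbit/s")  # ~29.9 kbit/s
```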
Feedback and Oscillation
This chapter lays down the foundations for the mathematical treatment of negative feedback in automated control systems. The opening passage illustrates the effect of faulty feedback mechanisms by the example of patients with various forms of ataxia. He then discusses railway signalling, the operation of a thermostat, and a steam engine centrifugal governor. The rest of the chapter is mostly taken up with the development of a mathematical formulation of the operation of the principles underlying all of these processes. More complex systems are then discussed such as automated navigation, and the control of non-linear situations such as steering on an icy road. He concludes with a reference to the homeostatic processes in living organisms.
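The thermostat example lends itself to a short simulation of negative feedback. The room constants and controller gain below are invented for illustration; the steady-state offset in the result is a known property of purely proportional control, not a claim from the book.

```python
# Minimal negative-feedback loop in the spirit of the thermostat
# example: heater output is proportional to the error between the
# setpoint and the measured temperature. Room constants and gain are
# illustrative assumptions.
setpoint = 20.0  # desired temperature, deg C
temp = 10.0      # starting room temperature
GAIN = 0.5       # proportional controller gain
LEAK = 0.1       # heat loss per step toward the 5 deg C outside air

for step in range(20):
    error = setpoint - temp
    heating = GAIN * error              # feedback acts against the error
    temp += heating - LEAK * (temp - 5.0)

# A purely proportional controller settles below the setpoint
# (steady-state error): here the fixed point is 17.5 deg C.
print(f"temperature after 20 steps: {temp:.2f} deg C")
```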
Computing Machines and the Nervous System
This chapter opens with a discussion of the relative merits of analog computers and digital computers (which Wiener referred to as analogy machines and numerical machines), and maintains that digital machines will be more accurate, electronic implementations will be superior to mechanical or electro-mechanical ones, and that the binary system is preferable to other numerical scales. After discussing the need to store both the data to be processed and the algorithms which are employed for processing that data, and the challenges involved in implementing a suitable memory system, he goes on to draw the parallels between binary digital computers and the nerve structures in organisms.
Among the mechanisms that he proposed for implementing a computer memory system was "a large array of small condensers [i.e., capacitors in today's terminology] which could be rapidly charged or discharged", thus prefiguring the essential technology of modern dynamic random-access memory chips.
Virtually all of the principles which Wiener enumerated as being desirable characteristics of calculating and data processing machines have been adopted in the design of digital computers, from the early mainframes of the 1950s to the latest microchips.
Gestalt and Universals
This brief chapter is a philosophical enquiry into the relationship between the physical events in the central nervous system and the subjective experiences of the individual. It concentrates principally on the processes whereby nervous signals from the retina are transformed into a representation of the visual field. It also explores the various feedback loops involved in the operation of the eyes: the homeostatic operation of the iris to control light levels, the adjustment of the lens to bring objects into focus, and the complex set of reflex movements to bring an object of attention into the detailed vision area of the fovea.
The chapter concludes with an outline of the challenges presented by attempts to implement a reading machine for the blind.
Cybernetics and Psychopathology
Wiener opens this chapter with the disclaimers that he is neither a psychopathologist nor a psychiatrist, and that he is not asserting that mental problems are failings of the brain to operate as a computing machine. However, he suggests that there might be fruitful lines of enquiry opened by considering the parallels between the brain and a computer. (He employed the archaic-sounding phrase "computing machine", because at the time of writing the word "computer" referred to a person who is employed to perform routine calculations). He then discussed the concept of 'redundancy' in the sense of having two or three computing mechanisms operating simultaneously on the same problem, so that errors may be recognised and corrected.
Information, Language, and Society
Starting with an outline of the hierarchical nature of living organisms, and a discussion of the structure and organisation of colonies of symbiotic organisms, such as the Portuguese Man o' War, this chapter explores the parallels with the structure of human societies, and the challenges faced as the scale and complexity of society increase.
The chapter closes with speculation about the possibility of constructing a chess-playing machine, and concludes that it would be conceivable to build a machine capable of a standard of play better than most human players but not at expert level. Such a possibility seemed entirely fanciful to most commentators in the 1940s, bearing in mind the state of computing technology at the time, although events have turned out to vindicate the prediction – and even to exceed it.
On Learning and Self-Reproducing Machines
Starting with an examination of the learning process in organisms, Wiener expands the discussion to John von Neumann's theory of games, and the application to military situations. He then speculates about the manner in which a chess-playing computer could be programmed to analyse its past performances and improve its performance. This proceeds to a discussion of the evolution of conflict, as in the examples of matador and bull, or mongoose and cobra, or between opponents in a tennis game. He discusses various stories such as The Sorcerer's Apprentice, which illustrate the view that the literal-minded reliance on "magical" processes may turn out to be counter-productive or catastrophic. The context of this discussion was to draw attention to the need for caution in delegating to machines the responsibility for warfare strategy in an age of Nuclear weapons. The chapter concludes with a discussion of the possibility of self-replicating machines and the work of Professor Dennis Gabor in this area.
Brain Waves and Self-Organising Systems
This chapter opens with a discussion of the mechanism of evolution by natural selection, which he refers to as "phylogenetic learning", since it is driven by a feedback mechanism caused by the success or otherwise in surviving and reproducing; and modifications of behaviour over a lifetime in response to experience, which he calls "ontogenetic learning". He suggests that both processes involve non-linear feedback, and speculates that the learning process is correlated with changes in patterns of the rhythms of the waves of electrical activity that can be observed on an electroencephalograph. After a discussion of the technical limitations of earlier designs of such equipment, he suggests that the field will become more fruitful as more sensitive interfaces and higher performance amplifiers are developed and the readings are stored in digital form for numerical analysis, rather than recorded by pen galvanometers in real time, which was the only available technique at the time of writing. He then develops suggestions for a mathematical treatment of the waveforms by Fourier analysis, and draws a parallel with the processing of the results of the Michelson–Morley experiment which confirmed the constancy of the velocity of light, which in turn led Albert Einstein to develop the theory of Special Relativity. As with much of the other material in this book, these pointers have been both prophetic of future developments and suggestive of fruitful lines of research and enquiry.
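The numerical workflow Wiener anticipated (digitize the waveform, then analyze it in the frequency domain) is routine today. The sketch below applies a Fourier transform to a synthetic signal with a 10 Hz rhythm; the sampling rate and frequencies are illustrative assumptions.

```python
# Fourier analysis of a digitized waveform, the workflow Wiener
# anticipated for EEG records. The signal here is synthetic: a 10 Hz
# "alpha-like" rhythm buried in noise; all parameters are illustrative.
import numpy as np

fs = 250                     # samples per second
t = np.arange(0, 4, 1 / fs)  # 4 seconds of data
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"peak at {freqs[spectrum.argmax()]:.1f} Hz")  # ~10.0 Hz
```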
Influence
The book provided a foundation for research into electronic engineering, computing (both analog and digital), servomechanisms, automation, telecommunications and neuroscience. It also created widespread public debates on the technical, philosophical and sociological issues it discussed. And it inspired a wide range of books on various subjects peripherally related to its content.
The book introduced the word 'cybernetics' itself into public discourse.
Maxwell Maltz titled his pioneering self-development work "Psycho-Cybernetics" in reference to the process of steering oneself towards a pre-defined goal by making corrections to behaviour. Much of the personal development industry and the Human potential movement is said to be derived from Maltz's work.
Cybernetics became a surprise bestseller and was widely read beyond the technical audience that Wiener had expected. In response he wrote The Human Use of Human Beings in which he further explored the social and psychological implications in a format more suited to the non-technical reader.
In 1954, Marie Neurath produced a children's book, Machines which seem to Think, which introduced the concepts of cybernetics, control systems and negative feedback in an accessible format.
References
1948 non-fiction books
Management cybernetics